6541376b3282-5
name="simple_sequential")

Waiting for W&B process to finish... (success).
View run llm at: https://wandb.ai/harrison-chase/langchain_callback_demo/runs/e47j1914
Synced 5 W&B file(s), 2 media file(s), 5 artifact file(s) and 0 other file(s)
Find logs at: ./wandb/run-20230318_150408-e47j1914/logs
Tracking run with wandb version 0.14.0
Run data is saved locally in /Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150534-jyxma7hu
Syncing run simple_sequential to Weights & Biases (docs: https://wandb.me/run)
View project at https://wandb.ai/harrison-chase/langchain_callback_demo
View run at
https://python.langchain.com/docs/integrations/providers/wandb_tracking
6541376b3282-6
run at https://wandb.ai/harrison-chase/langchain_callback_demo/runs/jyxma7hu

from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# SCENARIO 2 - Chain
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)
test_prompts = [
    {
        "title": "documentary about good video games that push the boundary of game design"
    },
    {"title": "cocaine bear vs heroin wolf"},
    {"title": "the best in class mlops tooling"},
]
synopsis_chain.apply(test_prompts)
wandb_callback.flush_tracker(synopsis_chain, name="agent")

Waiting for W&B process to finish... (success).
View run simple_sequential at: https://wandb.ai/harrison-chase/langchain_callback_demo/runs/jyxma7hu
Synced 4 W&B file(s), 2 media file(s), 6 artifact file(s) and 0 other file(s)
Find logs at:
https://python.langchain.com/docs/integrations/providers/wandb_tracking
6541376b3282-7
file(s), 6 artifact file(s) and 0 other file(s)
Find logs at: ./wandb/run-20230318_150534-jyxma7hu/logs
Tracking run with wandb version 0.14.0
Run data is saved locally in /Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150550-wzy59zjq
Syncing run agent to Weights & Biases (docs: https://wandb.me/run)
View project at https://wandb.ai/harrison-chase/langchain_callback_demo
View run at https://wandb.ai/harrison-chase/langchain_callback_demo/runs/wzy59zjq

from langchain.agents import initialize_agent, load_tools
from langchain.agents import AgentType

# SCENARIO 3 - Agent with Tools
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(
    tools,
    llm,
https://python.langchain.com/docs/integrations/providers/wandb_tracking
6541376b3282-8
= initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
agent.run(
    "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?",
    callbacks=callbacks,
)
wandb_callback.flush_tracker(agent, reset=False, finish=True)

> Entering new AgentExecutor chain...
I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.
Action: Search
Action Input: "Leo DiCaprio girlfriend"
Observation: DiCaprio had a steady girlfriend in Camila Morrone. He had been with the model turned actress for nearly five years, as they were first said to be dating at the end of 2017. And the now 26-year-old Morrone is no stranger to Hollywood.
Thought: I need to calculate her age raised to the 0.43 power.
Action: Calculator
Action Input: 26^0.43
Observation: Answer: 4.059182145592686
Thought: I now know the final answer.
Final Answer: Leo DiCaprio's girlfriend is Camila Morrone and her current age raised to the 0.43 power is 4.059182145592686.
> Finished chain.

Waiting for W&B process to finish... (success).
View run agent at: https://wandb.ai/harrison-chase/langchain_callback_demo/runs/wzy59zjq
Synced 5 W&B file(s), 2 media file(s), 7 artifact file(s) and 0 other file(s)
Find logs at:
https://python.langchain.com/docs/integrations/providers/wandb_tracking
6541376b3282-9
file(s), 7 artifact file(s) and 0 other file(s)
Find logs at: ./wandb/run-20230318_150550-wzy59zjq/logs
Previous: Vespa | Next: Weather
Community: Discord, Twitter, GitHub | Python | JS/TS | More: Homepage, Blog
Copyright © 2023 LangChain, Inc.
https://python.langchain.com/docs/integrations/providers/wandb_tracking
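The W&B scenarios above all follow one pattern: a callback handler accumulates events while an LLM, chain, or agent runs, and flush_tracker() syncs the collected events under a run name and resets state for the next scenario. The following is a minimal pure-Python sketch of that pattern, not the actual WandbCallbackHandler implementation; the class and method names mirror the callback interface only for illustration.

```python
# Illustrative analog of the collect-then-flush callback pattern:
# events accumulate during a run; flush_tracker() "syncs" them under a
# session name and clears state for the next run.
class TrackerCallback:
    def __init__(self):
        self.events = []    # events collected for the current run
        self.sessions = {}  # flushed runs, keyed by name

    def on_llm_start(self, prompts):
        self.events.append(("llm_start", prompts))

    def on_chain_end(self, outputs):
        self.events.append(("chain_end", outputs))

    def flush_tracker(self, name):
        # Sync collected events under the session name, then reset.
        self.sessions[name] = list(self.events)
        self.events.clear()

cb = TrackerCallback()
cb.on_llm_start(["Tell me a joke"])
cb.on_chain_end({"text": "..."})
cb.flush_tracker(name="simple_sequential")
```

After the flush, the handler is empty again, which is why each scenario in the walkthrough can reuse the same callbacks list.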
8e2ec8817ada-0
C Transformers | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/providers/ctransformers
8e2ec8817ada-1
Skip to main content🦜🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerWandB TracingAI21 LabsAimAirbyteAirtableAleph AlphaAlibaba Cloud OpensearchAmazon API GatewayAnalyticDBAnnoyAnyscaleApifyArangoDBArgillaArthurArxivAtlasDBAwaDBAWS S3 DirectoryAZLyricsAzure Blob StorageAzure Cognitive SearchAzure OpenAIBananaBasetenBeamBedrockBiliBiliBlackboardBrave SearchCassandraCerebriumAIChaindeskChromaClarifaiClearMLCnosDBCohereCollege ConfidentialCometConfluenceC TransformersDatabricksDatadog TracingDatadog LogsDataForSEODeepInfraDeep LakeDiffbotDiscordDocugamiDuckDBElasticsearchEverNoteFacebook ChatFigmaFlyteForefrontAIGitGitBookGoldenGoogle BigQueryGoogle Cloud StorageGoogle DriveGoogle SearchGoogle SerperGooseAIGPT4AllGraphsignalGrobidGutenbergHacker NewsHazy ResearchHeliconeHologresHugging FaceiFixitIMSDbInfinoJinaLanceDBLangChain Decorators ✨Llama.cppMarqoMediaWikiDumpMetalMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordMilvusMLflow AI GatewayMLflowModalModelScopeModern TreasuryMomentoMotherduckMyScaleNLPCloudNotion DBObsidianOpenAIOpenLLMOpenSearchOpenWeatherMapPetalsPGVectorPineconePipelineAIPortkeyPredibasePrediction GuardPromptLayerPsychicQdrantRay ServeRebuffRedditRedisReplicateRoamRocksetRunhouseRWKV-4SageMaker EndpointSearxNG Search APISerpAPIShale
https://python.langchain.com/docs/integrations/providers/ctransformers
8e2ec8817ada-2
C Transformers

This page covers how to use the C Transformers library within LangChain.
https://python.langchain.com/docs/integrations/providers/ctransformers
8e2ec8817ada-3
It is broken into two parts: installation and setup, and then references to specific C Transformers wrappers.

Installation and Setup
- Install the Python package with pip install ctransformers
- Download a supported GGML model (see Supported Models)

Wrappers

LLM

There exists a CTransformers LLM wrapper, which you can access with:

from langchain.llms import CTransformers

It provides a unified interface for all models:

llm = CTransformers(model='/path/to/ggml-gpt-2.bin', model_type='gpt2')
print(llm('AI is going to'))

If you get an "illegal instruction" error, try using lib='avx' or lib='basic':

llm = CTransformers(model='/path/to/ggml-gpt-2.bin', model_type='gpt2', lib='avx')

It can be used with models hosted on the Hugging Face Hub:

llm = CTransformers(model='marella/gpt-2-ggml')

If a model repo has multiple model files (.bin files), specify a model file using:

llm = CTransformers(model='marella/gpt-2-ggml', model_file='ggml-model.bin')

Additional parameters can be passed using the config parameter:

config = {'max_new_tokens': 256, 'repetition_penalty': 1.1}
llm = CTransformers(model='marella/gpt-2-ggml', config=config)

See Documentation for a list of available parameters.
For a more detailed walkthrough of this, see this notebook.
https://python.langchain.com/docs/integrations/providers/ctransformers
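The config parameter above layers user-supplied generation settings over library defaults. Running CTransformers itself needs a downloaded GGML model, so here is a pure-Python sketch of that override behavior only; the DEFAULTS values are taken from the example above for illustration and are not claimed to be ctransformers' real defaults.

```python
# Illustrative sketch of config resolution: user settings override
# defaults, unspecified keys keep their default values.
DEFAULTS = {"max_new_tokens": 256, "repetition_penalty": 1.1}

def resolve_config(user_config=None):
    merged = dict(DEFAULTS)          # start from the defaults
    merged.update(user_config or {}) # user values win on conflict
    return merged

cfg = resolve_config({"max_new_tokens": 64})
```

With that call, max_new_tokens is overridden while repetition_penalty keeps its default.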
cd6a69821b35-0
Reddit | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/providers/reddit
cd6a69821b35-2
Reddit

Reddit is an American social news aggregation, content rating, and discussion website.

Installation and Setup

First, you need to install a python package.

pip install praw

Make a Reddit Application and initialize the loader with your Reddit API credentials.

Document Loader

See a usage example.

from langchain.document_loaders import RedditPostsLoader
https://python.langchain.com/docs/integrations/providers/reddit
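The loader is initialized with the credentials from your Reddit Application. This is a hypothetical sketch of that initialization step, checking that all credentials are present up front; the class and parameter names here are illustrative placeholders, not langchain's or praw's actual API.

```python
# Hypothetical sketch: a Reddit-style loader that validates its API
# credentials at construction time. Names are illustrative only.
class RedditLoaderSketch:
    def __init__(self, client_id, client_secret, user_agent):
        if not all([client_id, client_secret, user_agent]):
            raise ValueError("Reddit API credentials are required")
        self.client_id = client_id
        self.client_secret = client_secret
        self.user_agent = user_agent

loader = RedditLoaderSketch(
    client_id="my_client_id",
    client_secret="my_client_secret",
    user_agent="langchain-demo",
)
```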
b1b5ea4c0c10-0
Baseten | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/providers/baseten
b1b5ea4c0c10-2
Baseten

Learn how to use LangChain with models deployed on Baseten.

Installation and setup
- Create a Baseten account and API key.
- Install the Baseten Python client with pip install baseten
- Use your API key to authenticate with baseten login

Invoking a model

Baseten integrates with LangChain through the LLM module, which provides a standardized and interoperable interface for models that are deployed on your Baseten workspace.

You can deploy foundation models like WizardLM and Alpaca with one click from the Baseten model library, or if you have your own model, deploy it with this tutorial.

In this example, we'll work with WizardLM. Deploy WizardLM here and follow along with the deployed model's version ID.

from langchain.llms import Baseten

wizardlm = Baseten(model="MODEL_VERSION_ID", verbose=True)
wizardlm("What is the difference between a Wizard and a Sorcerer?")
https://python.langchain.com/docs/integrations/providers/baseten
2723ad032a8d-0
GooseAI | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/providers/gooseai
2723ad032a8d-2
GooseAI

This page covers how to use the GooseAI ecosystem within LangChain.
https://python.langchain.com/docs/integrations/providers/gooseai
2723ad032a8d-3
It is broken into two parts: installation and setup, and then references to specific GooseAI wrappers.

Installation and Setup
- Install the Python SDK with pip install openai
- Get your GooseAI api key from this link here.
- Set the environment variable (GOOSEAI_API_KEY).

import os
os.environ["GOOSEAI_API_KEY"] = "YOUR_API_KEY"

Wrappers

LLM

There exists a GooseAI LLM wrapper, which you can access with:

from langchain.llms import GooseAI
https://python.langchain.com/docs/integrations/providers/gooseai
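The page sets GOOSEAI_API_KEY through os.environ. A small helper that fails fast with a clear message when the variable is unset can save debugging time later; the require_api_key helper below is an illustrative addition, not part of the langchain or GooseAI API.

```python
import os

# Illustrative helper: read a required API key from the environment and
# raise a clear error if it is missing, instead of failing deep inside
# a client library.
def require_api_key(var="GOOSEAI_API_KEY"):
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set the {var} environment variable first.")
    return key

os.environ["GOOSEAI_API_KEY"] = "YOUR_API_KEY"
key = require_api_key()
```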
cc54da66899e-0
DataForSEO | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/providers/dataforseo
cc54da66899e-2
DataForSEO

This page provides instructions on how to use the DataForSEO search APIs within LangChain.

Installation and Setup

Get a DataForSEO API Access login and password, and set them as environment variables (DATAFORSEO_LOGIN and DATAFORSEO_PASSWORD respectively). You can find them in your dashboard.

Wrappers

Utility

The DataForSEO utility wraps the API. To import this utility, use:

from langchain.utilities import DataForSeoAPIWrapper

For a detailed walkthrough of this wrapper, see this notebook.

Tool

You can also load this wrapper as a Tool to use with an Agent:

from langchain.agents import load_tools
tools = load_tools(["dataforseo-api-search"])

Example usage

dataforseo = DataForSeoAPIWrapper(api_login="your_login", api_password="your_password")
result = dataforseo.run("Bill Gates")
print(result)

Environment Variables

You can store your DataForSEO API Access login and password as environment variables. The wrapper will automatically check for these environment variables if no values are provided:

import os
os.environ["DATAFORSEO_LOGIN"] = "your_login"
os.environ["DATAFORSEO_PASSWORD"] = "your_password"
dataforseo = DataForSeoAPIWrapper()
result = dataforseo.run("weather in Los Angeles")
print(result)
https://python.langchain.com/docs/integrations/providers/dataforseo
cc54da66899e-3
in Los Angeles")
print(result)
https://python.langchain.com/docs/integrations/providers/dataforseo
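The page describes a credential-resolution order: explicit constructor arguments are used when given, otherwise the wrapper falls back to the environment variables. This pure-Python sketch shows that precedence on its own; it is illustrative only and not DataForSeoAPIWrapper's actual code.

```python
import os

# Illustrative sketch of credential resolution: explicit arguments win,
# environment variables are the fallback, and missing credentials fail
# loudly.
def resolve_credentials(api_login=None, api_password=None):
    login = api_login or os.environ.get("DATAFORSEO_LOGIN")
    password = api_password or os.environ.get("DATAFORSEO_PASSWORD")
    if not (login and password):
        raise ValueError("DataForSEO credentials not provided")
    return login, password

os.environ["DATAFORSEO_LOGIN"] = "env_login"
os.environ["DATAFORSEO_PASSWORD"] = "env_password"
# Explicit login overrides the env var; password falls back to the env.
creds = resolve_credentials(api_login="explicit_login")
```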
7182e4b1b566-0
Zep | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/providers/zep
7182e4b1b566-2
Zep

Zep - A long-term memory store for LLM applications.

Zep stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs.

- Long-term memory persistence, with access to historical messages irrespective of your summarization strategy.
- Auto-summarization of memory messages based on a configurable message window. A series of summaries are stored, providing flexibility for future summarization strategies.
- Vector search over memories, with messages automatically embedded on creation.
- Auto-token counting of memories and summaries, allowing finer-grained control over prompt assembly.
- Python and JavaScript SDKs.

Zep project

Installation and Setup

pip install zep_python

Retriever

See a usage example.

from langchain.retrievers import ZepRetriever
https://python.langchain.com/docs/integrations/providers/zep
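The "vector search over memories" feature above can be sketched as a toy in-memory store: memories are "embedded" on creation and ranked against a query. This stand-in uses bag-of-words overlap instead of real embeddings and is purely illustrative of the retrieval idea, not Zep's implementation.

```python
# Toy analog of vector search over stored memories: each memory gets an
# "embedding" (here just its word set) when added, and search ranks
# memories by overlap with the query.
def embed(text):
    return set(text.lower().split())

class MemoryStore:
    def __init__(self):
        self.memories = []  # (original text, embedding) pairs

    def add(self, text):
        self.memories.append((text, embed(text)))

    def search(self, query, k=1):
        q = embed(query)
        ranked = sorted(self.memories, key=lambda m: len(q & m[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = MemoryStore()
store.add("The user lives in Paris")
store.add("The user likes green tea")
hits = store.search("user lives", k=1)
```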
f19b09dc5acc-0
Chat models | 🦜🔗 Langchain

Skip to main content🦜🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsAnthropicAzureGoogle Cloud Platform Vertex AI PaLMJinaChatLlama APIOpenAIPromptLayer ChatOpenAIDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by provider

Chat models

- Anthropic: This notebook covers how to get started with Anthropic chat models.
- Azure: This notebook goes over how to connect to an Azure hosted OpenAI endpoint.
- Google Cloud Platform Vertex AI PaLM: Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there.
- JinaChat: This notebook covers how to get started with JinaChat chat models.
- Llama API: This notebook shows how to use LangChain with LlamaAPI - a hosted version of Llama2 that adds in support for function calling.
- OpenAI: This notebook covers how to get started with OpenAI chat models.
- PromptLayer ChatOpenAI: This example showcases how to connect to PromptLayer to start recording your ChatOpenAI requests.
https://python.langchain.com/docs/integrations/chat/
64de26e0946b-0
PromptLayer ChatOpenAI | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/chat/promptlayer_chatopenai
64de26e0946b-1
PromptLayer ChatOpenAI

This example showcases how to connect to PromptLayer to start recording your ChatOpenAI requests.

Install PromptLayer

The promptlayer package is required to use PromptLayer with OpenAI. Install promptlayer using pip.

pip install promptlayer

Imports

import os
from langchain.chat_models import PromptLayerChatOpenAI
from langchain.schema import HumanMessage

Set the Environment API Key

You can create a PromptLayer API Key at www.promptlayer.com by clicking the settings cog in the navbar.
Set it as an environment variable called PROMPTLAYER_API_KEY.

os.environ["PROMPTLAYER_API_KEY"] = "**********"

Use the PromptLayerOpenAI LLM like normal

You can optionally pass in pl_tags to track your requests with PromptLayer's tagging feature.

chat = PromptLayerChatOpenAI(pl_tags=["langchain"])
chat([HumanMessage(content="I am a cat and I want")])

    AIMessage(content='to take a nap in a cozy spot. I search around for a suitable place and finally settle on a soft cushion on the window sill. I curl up into a ball and close my eyes, relishing the warmth of the sun on my fur. As I drift off to sleep, I can hear the birds chirping outside and feel the gentle breeze blowing through the window. This is the life of a contented cat.', additional_kwargs={})

The above request should
https://python.langchain.com/docs/integrations/chat/promptlayer_chatopenai
64de26e0946b-2
window. This is the life of a contented cat.', additional_kwargs={})

The above request should now appear on your PromptLayer dashboard.

Using PromptLayer Track

If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id.

chat = PromptLayerChatOpenAI(return_pl_id=True)
chat_results = chat.generate([[HumanMessage(content="I am a cat and I want")]])
for res in chat_results.generations:
    pl_request_id = res[0].generation_info["pl_request_id"]
    promptlayer.track.score(request_id=pl_request_id, score=100)

Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well.
https://python.langchain.com/docs/integrations/chat/promptlayer_chatopenai
64de26e0946b-3
Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard.
https://python.langchain.com/docs/integrations/chat/promptlayer_chatopenai
aa2d3183dea4-0
JinaChat | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/chat/jinachat
aa2d3183dea4-1
JinaChat

This notebook covers how to get started with JinaChat chat models.

from langchain.chat_models import JinaChat
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.schema import AIMessage, HumanMessage, SystemMessage

chat = JinaChat(temperature=0)

messages = [
    SystemMessage(
        content="You are a helpful assistant that translates English to French."
    ),
    HumanMessage(
        content="Translate this sentence from English to French. I love programming."
    ),
]
chat(messages)

    AIMessage(content="J'aime programmer.", additional_kwargs={}, example=False)

You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_prompt -- this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.

For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:

template = (
    "You are a helpful assistant that translates {input_language} to
https://python.langchain.com/docs/integrations/chat/jinachat
aa2d3183dea4-2
= (
    "You are a helpful assistant that translates {input_language} to {output_language}."
)
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages(
    [system_message_prompt, human_message_prompt]
)

# get a chat completion from the formatted messages
chat(
    chat_prompt.format_prompt(
        input_language="English", output_language="French", text="I love programming."
    ).to_messages()
)

    AIMessage(content="J'aime programmer.", additional_kwargs={}, example=False)
https://python.langchain.com/docs/integrations/chat/jinachat
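Under the hood, the MessagePromptTemplate flow reduces to string formatting plus role tagging: each template is filled with the supplied variables and paired with its role. This pure-Python sketch mirrors that behavior without langchain, so it runs anywhere; the role tuples are an illustrative stand-in for the Message objects.

```python
# Pure-Python analog of ChatPromptTemplate.format_prompt: fill each
# role's template with the supplied variables and return (role, text)
# pairs in order.
system_template = (
    "You are a helpful assistant that translates {input_language} to {output_language}."
)
human_template = "{text}"

def format_prompt(**kwargs):
    # str.format ignores keyword arguments a template does not use,
    # so both templates can share one kwargs dict.
    return [
        ("system", system_template.format(**kwargs)),
        ("human", human_template.format(**kwargs)),
    ]

messages = format_prompt(
    input_language="English", output_language="French", text="I love programming."
)
```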
b0b93f72ed2d-0
Anthropic | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/chat/anthropic
b0b93f72ed2d-1
AnthropicThis notebook covers how to get started with Anthropic chat models.from langchain.chat_models import ChatAnthropicfrom langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate,)from langchain.schema import AIMessage, HumanMessage, SystemMessagechat = ChatAnthropic()messages = [ HumanMessage( content="Translate this sentence from English to French. I love programming." )]chat(messages) AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False)ChatAnthropic also supports async and streaming functionality:from langchain.callbacks.manager import CallbackManagerfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerawait chat.agenerate([messages]) LLMResult(generations=[[ChatGeneration(text=" J'aime programmer.", generation_info=None, message=AIMessage(content=" J'aime programmer.", additional_kwargs={}, example=False))]], llm_output={}, run=[RunInfo(run_id=UUID('8cc8fb68-1c35-439c-96a0-695036a93652'))])chat = ChatAnthropic( streaming=True, verbose=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),)chat(messages)
https://python.langchain.com/docs/integrations/chat/anthropic
b0b93f72ed2d-2
J'aime la programmation. AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False)
https://python.langchain.com/docs/integrations/chat/anthropic
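The streaming setup above works by registering a callback whose on_llm_new_token hook fires once per generated token. Here is a stdlib-only sketch of that pattern, with a fake whitespace-tokenized stream standing in for the model; only the hook name mirrors LangChain's StreamingStdOutCallbackHandler:

```python
# Sketch of the streaming-callback pattern: the model invokes
# on_llm_new_token once per token as it generates. CollectingHandler
# and fake_stream are illustrative stand-ins, not LangChain classes.

class CollectingHandler:
    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token, **kwargs):
        # A real StreamingStdOutCallbackHandler would print here instead.
        self.tokens.append(token)

def fake_stream(text, handler):
    """Simulate a model emitting whitespace-delimited tokens."""
    for token in text.split():
        handler.on_llm_new_token(token)
    return " ".join(handler.tokens)

handler = CollectingHandler()
result = fake_stream("J'aime la programmation.", handler)
```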
9a48225e0809-0
Llama API | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/chat/llama_api
9a48225e0809-1
Llama APIThis notebook shows how to use LangChain with LlamaAPI - a hosted version of Llama2 that adds support for function calling.!pip install -U llamaapifrom llamaapi import LlamaAPI# Replace 'Your_API_Token' with your actual API tokenllama = LlamaAPI('Your_API_Token')from langchain_experimental.llms import ChatLlamaAPI /Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.12) is available. It's recommended that you update to the latest version using `pip install -U deeplake`. warnings.warn(model = ChatLlamaAPI(client=llama)from langchain.chains import create_tagging_chainschema = { "properties": { "sentiment": {"type": "string", 'description': 'the sentiment encountered in the passage'}, "aggressiveness": {"type": "integer", 'description': 'a 0-10 score of how aggressive the passage is'}, "language": {"type": "string", 'description': 'the language of the passage'}, }}chain =
https://python.langchain.com/docs/integrations/chat/llama_api
9a48225e0809-2
"string", 'description': 'the language of the passage'}, }}chain = create_tagging_chain(schema, model)chain.run("give me your money") {'sentiment': 'aggressive', 'aggressiveness': 8}PreviousJinaChatNextOpenAICommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
https://python.langchain.com/docs/integrations/chat/llama_api
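The tagging chain returns a plain dict keyed by the schema's properties. A hypothetical helper (validate_tags is ours, not part of LangChain) that checks such a result against the declared JSON-schema types:

```python
# Illustrative check of a tagging result against the schema used above.
# validate_tags is a hypothetical helper, not a LangChain function.

schema = {
    "properties": {
        "sentiment": {"type": "string"},
        "aggressiveness": {"type": "integer"},
        "language": {"type": "string"},
    }
}

# Map JSON-schema type names to the Python types they correspond to.
TYPE_MAP = {"string": str, "integer": int}

def validate_tags(result, schema):
    """Return True if every returned key is declared and matches its type."""
    props = schema["properties"]
    return all(
        key in props and isinstance(value, TYPE_MAP[props[key]["type"]])
        for key, value in result.items()
    )

ok = validate_tags({"sentiment": "aggressive", "aggressiveness": 8}, schema)
```

Note that, as in the chain's output above, keys the model omits (here, "language") simply go unchecked; this sketch only rejects undeclared keys or mistyped values.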
6fdc67bb6f08-0
Google Cloud Platform Vertex AI PaLM | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm
6fdc67bb6f08-1
Google Cloud Platform Vertex AI PaLMNote: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available there. PaLM API on Vertex AI is a Preview offering, subject to the Pre-GA Offerings Terms of the GCP Service Specific Terms. Pre-GA products and features may have limited support, and changes to pre-GA products and features may not be compatible with other pre-GA versions. For more information, see the launch stage descriptions. Further, by using PaLM API on Vertex AI, you agree to the Generative AI Preview terms and conditions (Preview Terms).For PaLM API on Vertex AI, you can process personal data as outlined in the Cloud Data Processing Addendum, subject to applicable restrictions and obligations in the Agreement (as defined in the Preview Terms).To use Vertex AI PaLM you must have the google-cloud-aiplatform Python package installed and either:Have credentials configured for your environment (gcloud, workload identity, etc...)Store the path to a service account JSON file as the GOOGLE_APPLICATION_CREDENTIALS environment variableThis codebase uses the google.auth library, which first looks for the application credentials variable mentioned above, and then looks for system-level auth.For more information, see: https://cloud.google.com/docs/authentication/application-default-credentials#GAChttps://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth#!pip
https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm
6fdc67bb6f08-2
install google-cloud-aiplatformfrom langchain.chat_models import ChatVertexAIfrom langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate,)from langchain.schema import HumanMessage, SystemMessagechat = ChatVertexAI()messages = [ SystemMessage( content="You are a helpful assistant that translates English to French." ), HumanMessage( content="Translate this sentence from English to French. I love programming." ),]chat(messages) AIMessage(content='Sure, here is the translation of the sentence "I love programming" from English to French:\n\nJ\'aime programmer.', additional_kwargs={}, example=False)You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_prompt -- this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:template = ( "You are a helpful assistant that translates {input_language} to {output_language}.")system_message_prompt = SystemMessagePromptTemplate.from_template(template)human_template = "{text}"human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)chat_prompt = ChatPromptTemplate.from_messages( [system_message_prompt, human_message_prompt])# get a chat completion from the formatted messageschat( chat_prompt.format_prompt( input_language="English", output_language="French", text="I love programming."
https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm
6fdc67bb6f08-3
input_language="English", output_language="French", text="I love programming." ).to_messages()) AIMessage(content='Sure, here is the translation of "I love programming" in French:\n\nJ\'aime programmer.', additional_kwargs={}, example=False)You can now leverage the Codey API for code chat within Vertex AI. The model name is:codechat-bison: for code assistancechat = ChatVertexAI(model_name="codechat-bison")messages = [ HumanMessage( content="How do I create a python function to identify all prime numbers?" )]chat(messages) AIMessage(content='The following Python function can be used to identify all prime numbers up to a given integer:\n\n```\ndef is_prime(n):\n """\n Determines whether the given integer is prime.\n\n Args:\n n: The integer to be tested for primality.\n\n Returns:\n True if n is prime, False otherwise.\n """\n\n # Check if n is divisible by 2.\n if n % 2 == 0:\n return False\n\n # Check if n is divisible by any integer from 3 to the square root', additional_kwargs={}, example=False)
https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm
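The model's answer above is cut off mid-function and, as far as it goes, only tests divisibility by 2. A complete version along the lines the model sketches (this is our completion, not the model's actual output):

```python
import math

# A complete prime test along the lines the Codey model began sketching:
# handle small cases, reject even numbers, then trial-divide by odd
# numbers up to the square root.
def is_prime(n: int) -> bool:
    """Return True if n is prime, False otherwise."""
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    # Check odd divisors from 3 up to the integer square root of n.
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

primes_to_30 = [n for n in range(2, 31) if is_prime(n)]
```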
3d441c66b30e-0
Azure | 🦜️🔗 Langchain AzureThis notebook goes over how to connect to an Azure-hosted OpenAI endpointfrom langchain.chat_models import AzureChatOpenAIfrom langchain.schema import HumanMessageBASE_URL = "https://${TODO}.openai.azure.com"API_KEY = "..."DEPLOYMENT_NAME = "chat"model = AzureChatOpenAI( openai_api_base=BASE_URL, openai_api_version="2023-05-15", deployment_name=DEPLOYMENT_NAME, openai_api_key=API_KEY, openai_api_type="azure",)model( [ HumanMessage( content="Translate this sentence from English to French. I love programming." ) ]) AIMessage(content="\n\nJ'aime programmer.", additional_kwargs={})
https://python.langchain.com/docs/integrations/chat/azure_chat_openai
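The snippet above hardcodes BASE_URL and API_KEY for brevity; in practice you would read them from the environment. A sketch under assumed variable names (AZURE_OPENAI_BASE and friends are illustrative conventions here, not names LangChain requires):

```python
import os

# Sketch: assemble AzureChatOpenAI keyword arguments from environment
# variables instead of hardcoding secrets. The env-var names are our
# own convention for this example.

def azure_settings(env=os.environ):
    """Return kwargs for AzureChatOpenAI, failing fast if settings are missing."""
    missing = [k for k in ("AZURE_OPENAI_BASE", "AZURE_OPENAI_KEY") if k not in env]
    if missing:
        raise KeyError(f"missing settings: {missing}")
    return {
        "openai_api_base": env["AZURE_OPENAI_BASE"],
        "openai_api_key": env["AZURE_OPENAI_KEY"],
        # Fall back to the deployment name used in the notebook above.
        "deployment_name": env.get("AZURE_OPENAI_DEPLOYMENT", "chat"),
    }

settings = azure_settings({"AZURE_OPENAI_BASE": "https://example.openai.azure.com",
                           "AZURE_OPENAI_KEY": "secret"})
```

The returned dict could then be splatted into the constructor, e.g. `AzureChatOpenAI(openai_api_version="2023-05-15", openai_api_type="azure", **settings)`.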
c93466c38ba4-0
OpenAI | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/chat/openai
c93466c38ba4-1
OpenAIThis notebook covers how to get started with OpenAI chat models.from langchain.chat_models import ChatOpenAIfrom langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate,)from langchain.schema import AIMessage, HumanMessage, SystemMessagechat = ChatOpenAI(temperature=0)The above cell assumes that your OpenAI API key is set in your environment variables. If you would rather manually specify your API key and/or organization ID, use the following code:chat = ChatOpenAI(temperature=0, openai_api_key="YOUR_API_KEY", openai_organization="YOUR_ORGANIZATION_ID")Remove the openai_organization parameter should it not apply to you.messages = [ SystemMessage( content="You are a helpful assistant that translates English to French." ), HumanMessage( content="Translate this sentence from English to French. I love programming." ),]chat(messages) AIMessage(content="J'adore la programmation.", additional_kwargs={}, example=False)You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_prompt -- this returns a PromptValue, which you can convert to a
https://python.langchain.com/docs/integrations/chat/openai
c93466c38ba4-2
use ChatPromptTemplate's format_prompt -- this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:template = ( "You are a helpful assistant that translates {input_language} to {output_language}.")system_message_prompt = SystemMessagePromptTemplate.from_template(template)human_template = "{text}"human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)chat_prompt = ChatPromptTemplate.from_messages( [system_message_prompt, human_message_prompt])# get a chat completion from the formatted messageschat( chat_prompt.format_prompt( input_language="English", output_language="French", text="I love programming." ).to_messages()) AIMessage(content="J'adore la programmation.", additional_kwargs={}, example=False)
https://python.langchain.com/docs/integrations/chat/openai
f0b1e063b7cd-0
Document transformers | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/document_transformers/
f0b1e063b7cd-1
Document transformers📄️ Doctran Extract PropertiesWe can extract useful features of documents using the Doctran library, which uses OpenAI's function calling feature to extract specific metadata.📄️ Doctran Interrogate DocumentsDocuments used in a vector store knowledge base are typically stored in narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents.📄️ Doctran Translate DocumentsComparing documents through embeddings has the benefit of working across multiple languages. "Harrison says hello" and "Harrison dice hola" will occupy similar positions in the vector space because they have the same meaning semantically.📄️ html2texthtml2text is a Python script that converts a page of HTML into clean, easy-to-read plain ASCII text.📄️ OpenAI Functions Metadata TaggerIt can often be useful to tag ingested documents with structured metadata, such as the title, tone, or length of a document, to allow for more targeted similarity search later. However, for large numbers of documents, performing this labelling process manually can be tedious.
https://python.langchain.com/docs/integrations/document_transformers/
f0b1e063b7cd-2
documents, performing this labelling process manually can be tedious.
https://python.langchain.com/docs/integrations/document_transformers/
8a30f5fa1157-0
html2text | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/document_transformers/html2text
8a30f5fa1157-1
html2texthtml2text is a Python script that converts a page of HTML into clean, easy-to-read plain ASCII text. The ASCII also happens to be valid Markdown (a text-to-HTML format).pip install html2textfrom langchain.document_loaders import AsyncHtmlLoaderurls = ["https://www.espn.com", "https://lilianweng.github.io/posts/2023-06-23-agent/"]loader = AsyncHtmlLoader(urls)docs = loader.load() Fetching pages: 100%|############| 2/2 [00:00<00:00, 10.75it/s]from langchain.document_transformers import Html2TextTransformerurls = ["https://www.espn.com", "https://lilianweng.github.io/posts/2023-06-23-agent/"]html2text = Html2TextTransformer()docs_transformed = html2text.transform_documents(docs)docs_transformed[0].page_content[1000:2000] " * ESPNFC\n\n * X Games\n\n * SEC Network\n\n## ESPN Apps\n\n * ESPN\n\n * ESPN Fantasy\n\n## Follow ESPN\n\n * Facebook\n\n * Twitter\n\n * Instagram\n\n * Snapchat\n\n * YouTube\n\n * The ESPN Daily Podcast\n\n2023 FIFA Women's World Cup\n\n## Follow live: Canada takes on
https://python.langchain.com/docs/integrations/document_transformers/html2text
8a30f5fa1157-2
Daily Podcast\n\n2023 FIFA Women's World Cup\n\n## Follow live: Canada takes on Nigeria in group stage of Women's World Cup\n\n2m\n\nEPA/Morgan Hancock\n\n## TOP HEADLINES\n\n * Snyder fined $60M over findings in investigation\n * NFL owners approve $6.05B sale of Commanders\n * Jags assistant comes out as gay in NFL milestone\n * O's alone atop East after topping slumping Rays\n * ACC's Phillips: Never condoned hazing at NU\n\n * Vikings WR Addison cited for driving 140 mph\n * 'Taking his time': Patient QB Rodgers wows Jets\n * Reyna got U.S. assurances after Berhalter rehire\n * NFL Future Power Rankings\n\n## USWNT AT THE WORLD CUP\n\n### USA VS. VIETNAM: 9 P.M. ET FRIDAY\n\n## How do you defend against Alex Morgan? Former opponents sound off\n\nThe U.S. forward is unstoppable at this level, scoring 121 goals and adding 49"docs_transformed[1].page_content[1000:2000] "t's brain,\ncomplemented by several key components:\n\n * **Planning**\n * Subgoal and decomposition: The agent breaks down large tasks into smaller, manageable subgoals, enabling efficient handling of complex tasks.\n * Reflection and refinement: The agent can do self-criticism and self-reflection over past actions, learn from mistakes and refine them for future steps, thereby improving the quality of final results.\n * **Memory**\n * Short-term memory: I would consider all the in-context learning (See Prompt Engineering) as utilizing short-term memory of the model to learn.\n * Long-term memory: This provides the agent with the
https://python.langchain.com/docs/integrations/document_transformers/html2text
8a30f5fa1157-3
the model to learn.\n * Long-term memory: This provides the agent with the capability to retain and recall (infinite) information over extended periods, often by leveraging an external vector store and fast retrieval.\n * **Tool use**\n * The agent learns to call external APIs for extra information that is missing from the model weights (often hard to change after pre-training), including current information, code execution c"
https://python.langchain.com/docs/integrations/document_transformers/html2text
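At its core, html2text walks the HTML and keeps only the readable text. Here is a stdlib-only toy version of that idea (real html2text also emits Markdown structure such as headings, links, and lists, which this sketch does not attempt):

```python
from html.parser import HTMLParser

# Minimal sketch of HTML-to-text conversion: keep text nodes, drop tags,
# and skip script/style contents entirely. This is an illustration of the
# idea behind html2text, not its actual implementation.

class TextExtractor(HTMLParser):
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Keep non-blank text that is not inside a skipped element.
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def html_to_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)

text = html_to_text("<h1>Title</h1><script>x()</script><p>Hello <b>world</b></p>")
```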
2e29b80767a8-0
Doctran Translate Documents | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/document_transformers/doctran_translate_document
2e29b80767a8-1
Doctran Translate DocumentsComparing documents through embeddings has the benefit of working across multiple languages. "Harrison says hello" and "Harrison dice hola" will occupy similar positions in the vector space because they have the same meaning semantically.However, it can still be useful to use an LLM to translate documents into other languages before vectorizing them. This is especially helpful when users are expected to query the knowledge base in different languages, or when state-of-the-art embedding models are not available for a given language.We can accomplish this using the Doctran library, which uses OpenAI's function calling feature to translate documents between languages.pip install doctranfrom langchain.schema import Documentfrom langchain.document_transformers import DoctranTextTranslatorfrom dotenv import load_dotenvload_dotenv() TrueInputThis is the document we'll translatesample_text = """[Generated with ChatGPT]Confidential Document - For Internal Use OnlyDate: July 1, 2023Subject: Updates and Discussions on Various TopicsDear Team,I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential.Security and Privacy MeasuresAs part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe
https://python.langchain.com/docs/integrations/document_transformers/doctran_translate_document
2e29b80767a8-2
data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: [email protected]) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at [email protected] Updates and Employee BenefitsRecently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: [email protected]).Marketing Initiatives and CampaignsOur marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company.Research and Development ProjectsIn our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: [email protected]) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. 
Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming
https://python.langchain.com/docs/integrations/document_transformers/doctran_translate_document
2e29b80767a8-3
to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th.Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly.Thank you for your attention, and let's continue to work together to achieve our goals.Best regards,Jason FanCofounder & [email protected]"""documents = [Document(page_content=sample_text)]qa_translator = DoctranTextTranslator(language="spanish")Output​After translating a document, the result will be returned as a new document with the page_content translated into the target languagetranslated_document = await qa_translator.atransform_documents(documents)print(translated_document[0].page_content) [Generado con ChatGPT] Documento confidencial - Solo para uso interno Fecha: 1 de julio de 2023 Asunto: Actualizaciones y discusiones sobre varios temas Estimado equipo, Espero que este correo electrónico les encuentre bien. En este documento, me gustaría proporcionarles algunas actualizaciones importantes y discutir varios temas que requieren nuestra atención. Por favor, traten la información contenida aquí como altamente confidencial. Medidas de seguridad y privacidad Como parte de nuestro compromiso continuo para garantizar la seguridad y privacidad de los datos de nuestros clientes, hemos implementado medidas robustas en todos nuestros sistemas. Nos
https://python.langchain.com/docs/integrations/document_transformers/doctran_translate_document
2e29b80767a8-4
de los datos de nuestros clientes, hemos implementado medidas robustas en todos nuestros sistemas. Nos gustaría elogiar a John Doe (correo electrónico: [email protected]) del departamento de TI por su diligente trabajo en mejorar nuestra seguridad de red. En adelante, recordamos amablemente a todos que se adhieran estrictamente a nuestras políticas y directrices de protección de datos. Además, si se encuentran con cualquier riesgo de seguridad o incidente potencial, por favor repórtelo inmediatamente a nuestro equipo dedicado en [email protected]. Actualizaciones de RRHH y beneficios para empleados Recientemente, dimos la bienvenida a varios nuevos miembros del equipo que han hecho contribuciones significativas a sus respectivos departamentos. Me gustaría reconocer a Jane Smith (SSN: 049-45-5928) por su sobresaliente rendimiento en el servicio al cliente. Jane ha recibido constantemente comentarios positivos de nuestros clientes. Además, recuerden que el período de inscripción abierta para nuestro programa de beneficios para empleados se acerca rápidamente. Si tienen alguna pregunta o necesitan asistencia, por favor contacten a nuestro representante de RRHH, Michael Johnson (teléfono: 418-492-3850, correo electrónico: [email protected]). Iniciativas y campañas de marketing Nuestro equipo de marketing ha estado trabajando activamente en el desarrollo de nuevas estrategias para aumentar la conciencia de marca y fomentar la participación del cliente.
https://python.langchain.com/docs/integrations/document_transformers/doctran_translate_document
2e29b80767a8-5
la conciencia de marca y fomentar la participación del cliente. Nos gustaría agradecer a Sarah Thompson (teléfono: 415-555-1234) por sus excepcionales esfuerzos en la gestión de nuestras plataformas de redes sociales. Sarah ha aumentado con éxito nuestra base de seguidores en un 20% solo en el último mes. Además, por favor marquen sus calendarios para el próximo evento de lanzamiento de producto el 15 de julio. Animamos a todos los miembros del equipo a asistir y apoyar este emocionante hito para nuestra empresa. Proyectos de investigación y desarrollo En nuestra búsqueda de la innovación, nuestro departamento de investigación y desarrollo ha estado trabajando incansablemente en varios proyectos. Me gustaría reconocer el excepcional trabajo de David Rodríguez (correo electrónico: [email protected]) en su papel de líder de proyecto. Las contribuciones de David al desarrollo de nuestra tecnología de vanguardia han sido fundamentales. Además, nos gustaría recordar a todos que compartan sus ideas y sugerencias para posibles nuevos proyectos durante nuestra sesión de lluvia de ideas de I+D mensual, programada para el 10 de julio. Por favor, traten la información de este documento con la máxima confidencialidad y asegúrense de que no se comparte con personas no autorizadas. Si tienen alguna pregunta o
https://python.langchain.com/docs/integrations/document_transformers/doctran_translate_document
2e29b80767a8-6
de que no se comparte con personas no autorizadas. Si tienen alguna pregunta o inquietud sobre los temas discutidos, no duden en ponerse en contacto conmigo directamente. Gracias por su atención, y sigamos trabajando juntos para alcanzar nuestros objetivos. Saludos cordiales, Jason Fan Cofundador y CEO Psychic [email protected]
https://python.langchain.com/docs/integrations/document_transformers/doctran_translate_document
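DoctranTextTranslator follows LangChain's document-transformer shape: Documents go in, new Documents come out with transformed page_content. A minimal sketch of that interface, with a trivial uppercasing transform standing in for the LLM-backed translation (the Document dataclass and UpperCaseTranslator below are simplified stand-ins, not LangChain classes):

```python
from dataclasses import dataclass, field

# Sketch of the document-transformer interface: take Documents in, return
# new Documents out, leaving the originals untouched. UpperCaseTranslator
# is a stand-in for DoctranTextTranslator; real translation needs an LLM.

@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)

class UpperCaseTranslator:
    def transform_documents(self, documents):
        """Return new Documents with transformed page_content."""
        return [Document(d.page_content.upper(), dict(d.metadata)) for d in documents]

docs = [Document("Harrison says hello")]
out = UpperCaseTranslator().transform_documents(docs)
```

Returning fresh Document objects (rather than mutating in place) is what lets transformers like the translator be chained safely in a pipeline.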
81fc529afa78-0
Doctran Interrogate Documents | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/document_transformers/doctran_interrogate_document
81fc529afa78-1
Doctran Interrogate DocumentsDocuments used in a vector store knowledge base are typically stored in narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents.We can accomplish this using the Doctran library, which uses OpenAI's function calling feature to "interrogate" documents.See this notebook for benchmarks on vector similarity scores for various queries based on raw documents versus interrogated documents.pip install doctranimport jsonfrom langchain.schema import Documentfrom langchain.document_transformers import DoctranQATransformerfrom dotenv import load_dotenvload_dotenv() TrueInputThis is the document we'll interrogatesample_text = """[Generated with ChatGPT]Confidential Document - For Internal Use OnlyDate: July 1, 2023Subject: Updates and Discussions on Various TopicsDear Team,I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential.Security and Privacy MeasuresAs part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: [email protected])
https://python.langchain.com/docs/integrations/document_transformers/doctran_interrogate_document
81fc529afa78-2
all our systems. We would like to commend John Doe (email: [email protected]) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at [email protected] Updates and Employee BenefitsRecently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: [email protected]).Marketing Initiatives and CampaignsOur marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company.Research and Development ProjectsIn our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: [email protected]) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. 
Furthermore, we would like to remind everyone to share their ideas and suggestions
for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th.Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly.Thank you for your attention, and let's continue to work together to achieve our goals.Best regards,Jason FanCofounder & [email protected]"""print(sample_text) [Generated with ChatGPT] Confidential Document - For Internal Use Only Date: July 1, 2023 Subject: Updates and Discussions on Various Topics Dear Team, I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential. Security and Privacy Measures As part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: [email protected]) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at [email protected]. HR Updates and Employee Benefits Recently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive
for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: [email protected]). Marketing Initiatives and Campaigns Our marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company. Research and Development Projects In our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: [email protected]) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th. Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly. Thank you for your attention, and let's continue to work together to achieve our goals. Best regards, Jason Fan
Best regards, Jason Fan Cofounder & CEO Psychic [email protected] documents = [Document(page_content=sample_text)]qa_transformer = DoctranQATransformer()transformed_document = await qa_transformer.atransform_documents(documents)Output​After interrogating a document, the result will be returned as a new document with questions and answers provided in the metadata.transformed_document = await qa_transformer.atransform_documents(documents)print(json.dumps(transformed_document[0].metadata, indent=2)) { "questions_and_answers": [ { "question": "What is the purpose of this document?", "answer": "The purpose of this document is to provide important updates and discuss various topics that require the team's attention." }, { "question": "Who is responsible for enhancing the network security?", "answer": "John Doe from the IT department is responsible for enhancing the network security." }, { "question": "Where should potential security risks or incidents be reported?", "answer": "Potential security risks or incidents should be reported to the dedicated team at [email protected]." }, { "question": "Who has been recognized for outstanding performance in customer service?", "answer":
recognized for outstanding performance in customer service?", "answer": "Jane Smith has been recognized for her outstanding performance in customer service." }, { "question": "When is the open enrollment period for the employee benefits program?", "answer": "The document does not specify the exact dates for the open enrollment period for the employee benefits program, but it mentions that it is fast approaching." }, { "question": "Who should be contacted for questions or assistance regarding the employee benefits program?", "answer": "For questions or assistance regarding the employee benefits program, the HR representative, Michael Johnson, should be contacted." }, { "question": "Who has been acknowledged for managing the company's social media platforms?", "answer": "Sarah Thompson has been acknowledged for managing the company's social media platforms." }, { "question": "When is the upcoming product launch event?", "answer": "The upcoming product launch event is on July 15th." }, { "question": "Who has been recognized for their contributions to the development of the company's technology?", "answer": "David Rodriguez has been recognized for his contributions to the development of the company's technology."
}, { "question": "When is the monthly R&D brainstorming session?", "answer": "The monthly R&D brainstorming session is scheduled for July 10th." }, { "question": "Who should be contacted for questions or concerns regarding the topics discussed in the document?", "answer": "For questions or concerns regarding the topics discussed in the document, Jason Fan, the Cofounder & CEO, should be contacted." } ] }
Copyright © 2023 LangChain, Inc.
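Each entry in the `questions_and_answers` metadata above is a self-contained Q&A pair, which makes it a natural unit for downstream indexing. As a minimal sketch (plain Python, no API calls; the `flatten_qa_metadata` helper and its field names are ours, not part of Doctran), the metadata might be flattened into one record per pair before wrapping the records in `Document` objects for a vector store:

```python
# Sketch: flatten the questions_and_answers metadata produced by
# DoctranQATransformer into one record per Q&A pair. The input shape
# mirrors the metadata printed above; the helper is illustrative only.
def flatten_qa_metadata(metadata: dict, source: str = "unknown") -> list:
    """Turn {'questions_and_answers': [...]} into flat, indexable records."""
    records = []
    for i, qa in enumerate(metadata.get("questions_and_answers", [])):
        records.append(
            {
                # One retrievable chunk per question/answer pair.
                "page_content": f"Q: {qa['question']}\nA: {qa['answer']}",
                "metadata": {"source": source, "qa_index": i},
            }
        )
    return records

sample_metadata = {
    "questions_and_answers": [
        {
            "question": "When is the upcoming product launch event?",
            "answer": "The upcoming product launch event is on July 15th.",
        },
        {
            "question": "When is the monthly R&D brainstorming session?",
            "answer": "The monthly R&D brainstorming session is scheduled for July 10th.",
        },
    ]
}

records = flatten_qa_metadata(sample_metadata, source="internal_memo")
print(records[0]["page_content"].splitlines()[0])
# Q: When is the upcoming product launch event?
```

Keeping `qa_index` and `source` in the per-record metadata preserves a link back to the original document after the pairs are split apart.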
Doctran Extract Properties | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/document_transformers/doctran_extract_properties
Doctran Extract PropertiesWe can extract useful features of documents using the Doctran library, which uses OpenAI's function calling feature to extract specific metadata.Extracting metadata from documents is helpful for a variety of tasks, including:Classification: classifying documents into different categoriesData mining: Extract structured data that can be used for data analysisStyle transfer: Change the way text is written to more closely match expected user input, improving vector search resultspip install doctranimport jsonfrom langchain.schema import Documentfrom langchain.document_transformers import DoctranPropertyExtractorfrom dotenv import load_dotenvload_dotenv() TrueInput​This is the document we'll extract properties from.sample_text = """[Generated with ChatGPT]Confidential Document - For Internal Use OnlyDate: July 1, 2023Subject: Updates and Discussions on Various TopicsDear Team,I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential.Security and Privacy MeasuresAs part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: [email protected]) from the IT department for his diligent work
in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at [email protected] Updates and Employee BenefitsRecently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: [email protected]).Marketing Initiatives and CampaignsOur marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company.Research and Development ProjectsIn our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: [email protected]) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. 
Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th.Please treat the information in this document with utmost
confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly.Thank you for your attention, and let's continue to work together to achieve our goals.Best regards,Jason FanCofounder & [email protected]"""print(sample_text) [Generated with ChatGPT] Confidential Document - For Internal Use Only Date: July 1, 2023 Subject: Updates and Discussions on Various Topics Dear Team, I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential. Security and Privacy Measures As part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: [email protected]) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at [email protected]. HR Updates and Employee Benefits Recently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions
remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: [email protected]). Marketing Initiatives and Campaigns Our marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company. Research and Development Projects In our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: [email protected]) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th. Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly. Thank you for your attention, and let's continue to work together to achieve our goals. Best regards, Jason Fan Cofounder & CEO Psychic [email protected] documents =
[Document(page_content=sample_text)]properties = [ { "name": "category", "description": "What type of email this is.", "type": "string", "enum": ["update", "action_item", "customer_feedback", "announcement", "other"], "required": True, }, { "name": "mentions", "description": "A list of all people mentioned in this email.", "type": "array", "items": { "name": "full_name", "description": "The full name of the person mentioned.", "type": "string", }, "required": True, }, { "name": "eli5", "description": "Explain this email to me like I'm 5 years old.", "type": "string", "required": True, },]property_extractor = DoctranPropertyExtractor(properties=properties)Output​After extracting properties from a document, the result will be returned as a new document with properties provided in the metadataextracted_document = await property_extractor.atransform_documents( documents,
properties=properties)print(json.dumps(extracted_document[0].metadata, indent=2)) { "extracted_properties": { "category": "update", "mentions": [ "John Doe", "Jane Smith", "Michael Johnson", "Sarah Thompson", "David Rodriguez", "Jason Fan" ], "eli5": "This is an email from the CEO, Jason Fan, giving updates about different areas in the company. He talks about new security measures and praises John Doe for his work. He also mentions new hires and praises Jane Smith for her work in customer service. The CEO reminds everyone about the upcoming benefits enrollment and says to contact Michael Johnson with any questions. He talks about the marketing team's work and praises Sarah Thompson for increasing their social media followers. There's also a product launch event on July 15th. Lastly, he talks about the research and development projects and praises David Rodriguez for his work. There's a brainstorming session on July 10th." } }
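Because the extraction is driven by an LLM, it can be worth checking the returned `extracted_properties` against the property schema you declared, for example that required keys are present and enum values are legal. A minimal sketch (plain Python, no API calls; the `validate_extracted` helper is ours, not part of Doctran):

```python
# Sketch: lightweight validation of extracted_properties against the
# property schema passed to DoctranPropertyExtractor. Checks only
# "required" presence and "enum" membership; helper is illustrative.
def validate_extracted(properties: list, extracted: dict) -> list:
    """Return a list of human-readable problems; an empty list means valid."""
    problems = []
    for prop in properties:
        name = prop["name"]
        if prop.get("required") and name not in extracted:
            problems.append(f"missing required property: {name}")
            continue
        allowed = prop.get("enum")
        if allowed and extracted.get(name) not in allowed:
            problems.append(f"{name}={extracted[name]!r} not in {allowed}")
    return problems

properties = [
    {"name": "category", "type": "string",
     "enum": ["update", "action_item", "customer_feedback", "announcement", "other"],
     "required": True},
    {"name": "mentions", "type": "array", "required": True},
]

good = {"category": "update", "mentions": ["John Doe", "Jane Smith"]}
bad = {"category": "spam"}  # illegal enum value, and "mentions" is missing

print(validate_extracted(properties, good))  # []
print(validate_extracted(properties, bad))
```

For production use, a full JSON Schema validator (such as the `jsonschema` package) would cover the nested `items` definitions as well.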
OpenAI Functions Metadata Tagger | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/document_transformers/openai_metadata_tagger
OpenAI Functions Metadata TaggerIt can often be useful to tag ingested documents with structured metadata, such as the title, tone, or length of a document, to allow for more targeted similarity search later. However, for large numbers of documents, performing this labelling process manually can be tedious.The OpenAIMetadataTagger document transformer automates this process by extracting metadata from each provided document according to a provided schema. It uses a configurable OpenAI Functions-powered chain under the hood, so if you pass a custom LLM instance, it must be an OpenAI model with functions support. Note: This document transformer works best with complete documents, so it's best to run it first with whole documents before doing any other splitting or processing!For example, let's say you wanted to index a set of movie reviews. You could initialize the document transformer with a valid JSON Schema object as follows:from langchain.schema import Documentfrom langchain.chat_models import ChatOpenAIfrom langchain.document_transformers.openai_functions import create_metadata_taggerschema = { "properties": { "movie_title": {"type": "string"}, "critic": {"type": "string"}, "tone": {"type": "string", "enum": ["positive", "negative"]},
"rating": { "type": "integer", "description": "The number of stars the critic rated the movie", }, }, "required": ["movie_title", "critic", "tone"],}# Must be an OpenAI model that supports functionsllm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")document_transformer = create_metadata_tagger(metadata_schema=schema, llm=llm)You can then simply pass the document transformer a list of documents, and it will extract metadata from the contents:original_documents = [ Document( page_content="Review of The Bee Movie\nBy Roger Ebert\n\nThis is the greatest movie ever made. 4 out of 5 stars." ), Document( page_content="Review of The Godfather\nBy Anonymous\n\nThis movie was super boring. 1 out of 5 stars.", metadata={"reliable": False}, ),]enhanced_documents = document_transformer.transform_documents(original_documents)import jsonprint( *[d.page_content + "\n\n" + json.dumps(d.metadata) for d in enhanced_documents], sep="\n\n---------------\n\n") Review of The Bee Movie By Roger Ebert This is the greatest movie ever made. 4 out of 5 stars. {"movie_title": "The Bee Movie", "critic": "Roger Ebert", "tone": "positive", "rating": 4}
--------------- Review of The Godfather By Anonymous This movie was super boring. 1 out of 5 stars. {"movie_title": "The Godfather", "critic": "Anonymous", "tone": "negative", "rating": 1, "reliable": false}The new documents can then be further processed by a text splitter before being loaded into a vector store. Extracted fields will not overwrite existing metadata.You can also initialize the document transformer with a Pydantic schema:from typing import Literalfrom pydantic import BaseModel, Fieldclass Properties(BaseModel): movie_title: str critic: str tone: Literal["positive", "negative"] rating: int = Field(description="Rating out of 5 stars")document_transformer = create_metadata_tagger(Properties, llm)enhanced_documents = document_transformer.transform_documents(original_documents)print( *[d.page_content + "\n\n" + json.dumps(d.metadata) for d in enhanced_documents], sep="\n\n---------------\n\n") Review of The Bee Movie By Roger Ebert This is the greatest movie ever made. 4 out of 5 stars. {"movie_title": "The Bee Movie", "critic": "Roger Ebert", "tone": "positive", "rating": 4} --------------- Review of The Godfather By Anonymous This movie was super boring.
1 out of 5 stars. {"movie_title": "The Godfather", "critic": "Anonymous", "tone": "negative", "rating": 1, "reliable": false}Customization​You can pass the underlying tagging chain the standard LLMChain arguments in the document transformer constructor. For example, if you wanted to ask the LLM to focus specific details in the input documents, or extract metadata in a certain style, you could pass in a custom prompt:from langchain.prompts import ChatPromptTemplateprompt = ChatPromptTemplate.from_template( """Extract relevant information from the following text.Anonymous critics are actually Roger Ebert.{input}""")document_transformer = create_metadata_tagger(schema, llm, prompt=prompt)enhanced_documents = document_transformer.transform_documents(original_documents)print( *[d.page_content + "\n\n" + json.dumps(d.metadata) for d in enhanced_documents], sep="\n\n---------------\n\n") Review of The Bee Movie By Roger Ebert This is the greatest movie ever made. 4 out of 5 stars. {"movie_title": "The Bee Movie", "critic": "Roger Ebert", "tone": "positive", "rating": 4} --------------- Review of The Godfather By Anonymous This movie was super boring. 1 out of 5 stars. {"movie_title": "The Godfather", "critic": "Roger Ebert", "tone": "negative", "rating": 1, "reliable":
false}
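The non-overwriting behavior noted above ("Extracted fields will not overwrite existing metadata") is visible in the Godfather example, where the pre-existing `reliable` key survives. A minimal sketch of that merge semantics (plain Python; the `merge_metadata` helper is ours, an illustration rather than the library's implementation):

```python
# Sketch: merge freshly extracted fields into a document's metadata
# without overwriting what was already there. Existing keys win, which
# mirrors how "reliable": false survives in the output above.
def merge_metadata(existing: dict, extracted: dict) -> dict:
    """Extracted fields fill gaps only; existing keys are preserved."""
    return {**extracted, **existing}

existing = {"reliable": False}
extracted = {"movie_title": "The Godfather", "tone": "negative", "reliable": True}

merged = merge_metadata(existing, extracted)
print(merged["reliable"])  # False -- the original value is kept
```

If you want the opposite precedence (extracted values overwrite), swap the unpacking order to `{**existing, **extracted}`.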
LLMs | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/
LLMs📄️ AI21AI21 Studio provides API access to Jurassic-2 large language models.📄️ Aleph AlphaThe Luminous series is a family of large language models.📄️ Amazon API GatewayAmazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.📄️ 
AnyscaleAnyscale is a fully-managed Ray platform, on which you can build, deploy, and manage scalable AI and Python applications📄️ Azure OpenAIThis notebook goes over how to use Langchain with Azure OpenAI.📄️ AzureML Online EndpointAzureML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.📄️ BananaBanana is focused on building the machine learning infrastructure.📄️ BasetenBaseten provides all the infrastructure you need to deploy and serve ML models performantly, scalably, and cost-efficiently.📄️ BeamCalls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of Beam Client ID and Client Secret. By calling the wrapper an instance of the model is created and run, with returned text relating to the prompt. Additional calls can then be made by directly calling the Beam API.📄️ BedrockAmazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case📄️ CerebriumAICerebrium is an AWS Sagemaker alternative. It also provides API access to several LLM models.📄️ 
ChatGLMChatGLM-6B is an open bilingual language model based on General Language Model (GLM) framework, with 6.2 billion parameters. With the quantization technique, users can deploy locally on consumer-grade graphics cards (only 6GB of GPU memory is required at the INT4 quantization level).📄️ ClarifaiClarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference.📄️ CohereCohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.📄️ C TransformersThe C Transformers library provides Python bindings for GGML models.📄️ DatabricksThe Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.📄️ DeepInfraDeepInfra provides several LLMs.📄️ ForefrontAIThe Forefront platform gives you the ability to fine-tune and use open source large language models.📄️ Google Cloud Platform Vertex AI PaLMNote: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there.📄️ GooseAIGooseAI is a fully managed NLP-as-a-Service, delivered via API. GooseAI provides access to these models.📄️ GPT4AllGitHub:nomic-ai/gpt4all an ecosystem of open-source chatbots trained on a massive collections of clean assistant data including code, stories and
dialogue.📄️ Hugging Face HubThe Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.📄️ Hugging Face Local PipelinesHugging Face models can be run locally through the HuggingFacePipeline class.📄️ Huggingface TextGen InferenceText Generation Inference is a Rust, Python and gRPC server for text generation inference. Used in production at HuggingFace to power LLMs api-inference widgets.📄️ JSONFormerJSONFormer is a library that wraps local HuggingFace pipeline models for structured decoding of a subset of the JSON Schema.📄️ KoboldAI APIKoboldAI is "a browser-based front-end for AI-assisted writing with multiple local & remote AI models...". It has a public and local API that is able to be used in langchain.📄️ Llama-cppllama-cpp is a Python binding for llama.cpp.📄️ Caching integrationsThis notebook covers how to cache results of individual LLM calls.📄️ ManifestThis notebook goes over how to use Manifest and LangChain.📄️ ModalThe Modal cloud platform provides convenient, on-demand access to serverless cloud compute from Python scripts on your local computer.📄️ MosaicMLMosaicML offers a managed inference service. You can either use a variety of open source models, or deploy your own.📄️ 
NLP CloudThe NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, paraphrasing, grammar and spelling correction, keywords and keyphrases extraction, chatbot, product description and ad generation, intent classification, text generation, image generation, blog post generation, code generation, question answering, automatic speech recognition, machine translation, language detection, semantic search, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.📄️ octoaiOctoAI Compute Service📄️ OpenAIOpenAI offers a spectrum of models with different levels of power suitable for different tasks.📄️ OpenLLM🦾 OpenLLM is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps.📄️ OpenLMOpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP.📄️ PetalsPetals runs 100B+ language models at home, BitTorrent-style.📄️ PipelineAIPipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models.📄️ PredibasePredibase allows you to train, finetune, and deploy any ML model—from linear regression to large language model.📄️ Prediction GuardBasic LLM usage📄️ 
track, manage, and share your GPT prompt engineering. PromptLayer acts as middleware between your code and OpenAI’s python library.
📄️ RELLM: RELLM is a library that wraps local Hugging Face pipeline models for structured decoding.
📄️ Replicate: Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale.
📄️ Runhouse: Runhouse allows remote compute and data across environments and users. See the Runhouse docs.
📄️ SageMakerEndpoint: Amazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.
📄️ StochasticAI: The Stochastic Acceleration Platform aims to simplify the life cycle of a Deep Learning model, from uploading and versioning the model, through training, compression and acceleration, to putting it into production.
📄️ TextGen: GitHub: oobabooga/text-generation-webui, a Gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.
📄️ Tongyi Qwen: Tongyi Qwen is a large-scale language model developed by Alibaba's Damo Academy. It is capable of understanding user intent through natural language understanding and semantic analysis, based on user input in natural language. It provides services and assistance to users in different domains and tasks. By providing clear and detailed instructions, you can obtain results that better align with your
expectations.
📄️ Writer: Writer is a platform to generate different language content.
Copyright © 2023 LangChain, Inc.
Databricks | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/databricks
Databricks
The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.
This example notebook shows how to wrap Databricks endpoints as LLMs in LangChain.
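As an illustration of the wrapping idea only (not LangChain's actual implementation), an endpoint-backed LLM is essentially a callable that forwards a prompt to an HTTP service and returns the generated string. In the hedged sketch below, `EndpointLLM` and `fake_post` are hypothetical names, and `fake_post` stands in for a real request to a serving endpoint:

```python
# Hypothetical sketch of wrapping an endpoint as an LLM-like callable.
# `post_fn` stands in for a real HTTP request to a Databricks endpoint;
# this is NOT LangChain's implementation, just the underlying idea.
class EndpointLLM:
    def __init__(self, post_fn):
        self._post = post_fn

    def __call__(self, prompt, stop=None):
        # The payload mirrors the documented model signature: prompt + stop words.
        payload = {"prompt": prompt, "stop": stop or []}
        return self._post(payload)

def fake_post(payload):
    # Stand-in for the serving endpoint; returns a canned completion.
    return f"(echo) {payload['prompt']}"

llm = EndpointLLM(fake_post)
print(llm("How are you?"))  # (echo) How are you?
```

The real `Databricks` class shown below handles authentication, request formatting, and response parsing on top of this basic pattern.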
It supports two endpoint types:
- Serving endpoint, recommended for production and development,
- Cluster driver proxy app, recommended for interactive development.

from langchain.llms import Databricks

Wrapping a serving endpoint

Prerequisites:
- An LLM was registered and deployed to a Databricks serving endpoint.
- You have "Can Query" permission to the endpoint.

The expected MLflow model signature is:
inputs: [{"name": "prompt", "type": "string"}, {"name": "stop", "type": "list[string]"}]
outputs: [{"type": "string"}]

If the model signature is incompatible or you want to insert extra configs, you can set transform_input_fn and transform_output_fn accordingly.

# If running a Databricks notebook attached to an interactive cluster in "single user"
# or "no isolation shared" mode, you only need to specify the endpoint name to create
# a `Databricks` instance to query a serving endpoint in the same workspace.
llm = Databricks(endpoint_name="dolly")
llm("How are you?")
'I am happy to hear that you are in good health and as always, you are appreciated.'

llm("How are you?", stop=["."])
'Good'

# Otherwise, you can manually specify the Databricks workspace hostname and personal access token
# or set `DATABRICKS_HOST` and `DATABRICKS_TOKEN` environment variables, respectively.
# See https://docs.databricks.com/dev-tools/auth.html#databricks-personal-access-tokens
# We strongly recommend not exposing the API token explicitly inside a notebook.
# You can use the Databricks secret manager to store your API token securely.
# See https://docs.databricks.com/dev-tools/databricks-utils.html#secrets-utility-dbutilssecrets
import os
os.environ["DATABRICKS_TOKEN"] = dbutils.secrets.get("myworkspace", "api_token")
llm = Databricks(host="myworkspace.cloud.databricks.com", endpoint_name="dolly")
llm("How are you?")
'I am fine. Thank you!'

# If the serving endpoint accepts extra parameters like `temperature`,
# you can set them in `model_kwargs`.
llm = Databricks(endpoint_name="dolly", model_kwargs={"temperature": 0.1})
llm("How are you?")
'I am fine.'

# Use `transform_input_fn` and `transform_output_fn` if the serving endpoint
# expects a different input schema and does not return a JSON string,
# respectively, or you want to apply a prompt template on top.
def transform_input(**request):
    full_prompt = f"""{request["prompt"]}
    Be Concise.
    """
    request["prompt"] = full_prompt
    return request

llm = Databricks(endpoint_name="dolly", transform_input_fn=transform_input)
llm("How are you?")
'I’m Excellent. You?'

Wrapping a cluster driver proxy app

Prerequisites:
- An LLM loaded on a Databricks interactive cluster in "single user" or "no isolation shared" mode.
- A local HTTP server running on the driver node to serve the model at "/" using HTTP POST with JSON input/output.
- It uses a port number between [3000, 8000] and listens to the driver IP address or simply 0.0.0.0 instead of localhost only.
- You have "Can Attach To" permission to the cluster.

The expected server schema (using JSON schema) is:
inputs: {"type": "object", "properties": {"prompt": {"type": "string"}, "stop": {"type": "array", "items": {"type":
"string"}}}, "required": ["prompt"]}
outputs: {"type": "string"}

If the server schema is incompatible or you want to insert extra configs, you can use transform_input_fn and transform_output_fn accordingly.

The following is a minimal example for running a driver proxy app to serve an LLM:

from flask import Flask, request, jsonify
import torch
from transformers import pipeline, AutoTokenizer, StoppingCriteria

model = "databricks/dolly-v2-3b"
tokenizer = AutoTokenizer.from_pretrained(model, padding_side="left")
dolly = pipeline(model=model, tokenizer=tokenizer, trust_remote_code=True, device_map="auto")
device = dolly.device

class CheckStop(StoppingCriteria):
    def __init__(self, stop=None):
        super().__init__()
        self.stop = stop or []
        self.matched = ""
        self.stop_ids = [tokenizer.encode(s, return_tensors='pt').to(device) for s in self.stop]

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs):
        for i, s in enumerate(self.stop_ids):
            if torch.all((s == input_ids[0][-s.shape[1]:])).item():
                self.matched = self.stop[i]
                return True
        return False

def llm(prompt, stop=None, **kwargs):
    check_stop = CheckStop(stop)
    result = dolly(prompt, stopping_criteria=[check_stop], **kwargs)
    return result[0]["generated_text"].rstrip(check_stop.matched)

app = Flask("dolly")

@app.route('/', methods=['POST'])
def serve_llm():
    resp = llm(**request.json)
    return jsonify(resp)

app.run(host="0.0.0.0", port="7777")

Once the server is running, you can create a Databricks instance to wrap it as an LLM.

# If running a Databricks notebook attached to the same cluster that runs the app,
# you only need to specify the driver port to create a `Databricks` instance.
llm = Databricks(cluster_driver_port="7777")
llm("How are you?")
'Hello, thank you for asking. It is wonderful to hear that you are well.'

# Otherwise, you can manually specify the cluster ID to use,
# as well as Databricks workspace hostname and personal access token.
llm = Databricks(cluster_id="0000-000000-xxxxxxxx", cluster_driver_port="7777")
llm("How are you?")
'I am well. You?'

# If the app accepts extra parameters like `temperature`,
# you can set them in `model_kwargs`.
llm = Databricks(cluster_driver_port="7777", model_kwargs={"temperature": 0.1})
llm("How are you?")
'I am very well. It is a pleasure to meet you.'

# Use `transform_input_fn` and `transform_output_fn` if the app
# expects a different input schema and does not return a JSON string,
# respectively, or you want to apply a prompt template on top.
def transform_input(**request):
    full_prompt = f"""{request["prompt"]}
    Be Concise.
    """
    request["prompt"] = full_prompt
    return request

def transform_output(response):
    return response.upper()

llm = Databricks(
    cluster_driver_port="7777",
    transform_input_fn=transform_input,
    transform_output_fn=transform_output,
)
llm("How are you?")
'I AM DOING GREAT THANK YOU.'
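Since `transform_input_fn` and `transform_output_fn` are plain Python functions, they can be checked in isolation before pointing them at an endpoint. A small sketch reusing the hooks from the example above, with no Databricks connection involved:

```python
# Exercising the transform hooks locally, without any Databricks endpoint.
def transform_input(**request):
    # Append an extra instruction to the prompt, as in the example above.
    full_prompt = f"""{request["prompt"]}
    Be Concise.
    """
    request["prompt"] = full_prompt
    return request

def transform_output(response):
    # Post-process the raw endpoint response.
    return response.upper()

req = transform_input(prompt="How are you?", stop=None)
print("Be Concise." in req["prompt"])  # True
print(transform_output("i am doing great thank you."))  # I AM DOING GREAT THANK YOU.
```

Verifying the hooks this way makes it easier to tell whether a bad completion comes from the prompt template or from the endpoint itself.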
Google Cloud Platform Vertex AI PaLM | 🦜️🔗 Langchain
https://python.langchain.com/docs/integrations/llms/google_vertex_ai_palm
Google Cloud Platform Vertex AI PaLM
Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. PaLM API on Vertex AI is a Preview offering, subject to the Pre-GA Offerings Terms of the GCP Service Specific Terms. Pre-GA products and features may have limited support, and changes to pre-GA products and features may not be compatible with other pre-GA versions. For more information, see the launch stage descriptions. Further, by using PaLM API on Vertex AI, you agree to the Generative AI Preview terms and conditions (Preview Terms). For PaLM API on Vertex AI, you can process personal data as outlined in the Cloud Data Processing Addendum, subject to applicable restrictions and obligations in the