Columns: id (string, 14-15 chars), text (string, 23-2.21k chars), source (string, 52-97 chars)
93870d4d9ad6-2
personal data as outlined in the Cloud Data Processing Addendum, subject to applicable restrictions and obligations in the Agreement (as defined in the Preview Terms).

To use Vertex AI PaLM you must have the google-cloud-aiplatform Python package installed and either:

- Have credentials configured for your environment (gcloud, workload identity, etc.)
- Store the path to a service account JSON file as the GOOGLE_APPLICATION_CREDENTIALS environment variable

This codebase uses the google.auth library, which first looks for the application credentials variable mentioned above and then looks for system-level auth.

For more information, see:
https://cloud.google.com/docs/authentication/application-default-credentials#GAC
https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth

#!pip install google-cloud-aiplatform

from langchain.llms import VertexAI
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])
llm = VertexAI()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)

    'Justin Bieber was born on March 1, 1994. The Super Bowl in 1994 was won by the San Francisco 49ers.\nThe final answer: San Francisco 49ers.'

You can now leverage the Codey API for code generation within Vertex AI. The model names are:

- code-bison: for code suggestion
- code-gecko: for code completion

llm = VertexAI(model_name="code-bison")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "Write a python function that identifies if the number is a prime number?"

llm_chain.run(question)
https://python.langchain.com/docs/integrations/llms/google_vertex_ai_palm
93870d4d9ad6-3
    '```python\ndef is_prime(n):\n    """\n    Determines if a number is prime.\n\n    Args:\n        n: The number to be tested.\n\n    Returns:\n        True if the number is prime, False otherwise.\n    """\n\n    # Check if the number is 1.\n    if n == 1:\n        return False\n\n    # Check if the number is 2.\n    if n == 2:\n        return True\n\n'
https://python.langchain.com/docs/integrations/llms/google_vertex_ai_palm
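A consolidated, runnable version of the Vertex AI snippets above, as a minimal sketch; it assumes Application Default Credentials are already configured and google-cloud-aiplatform is installed.

from langchain.llms import VertexAI
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

# Default text model
llm = VertexAI()
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run("What NFL team won the Super Bowl in the year Justin Bieber was born?"))

# Codey model for code generation
code_llm = VertexAI(model_name="code-bison")
code_chain = LLMChain(prompt=prompt, llm=code_llm)
print(code_chain.run("Write a python function that identifies if the number is a prime number?"))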
53b6767b7355-0
C Transformers | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/ctransformers
53b6767b7355-1
C Transformers

The C Transformers library provides Python bindings for GGML models.

This example goes over how to use LangChain to interact with C Transformers models.

Install

%pip install ctransformers

Load Model

from langchain.llms import CTransformers

llm = CTransformers(model="marella/gpt-2-ggml")

Generate Text

print(llm("AI is going to"))

Streaming

from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = CTransformers(
    model="marella/gpt-2-ggml", callbacks=[StreamingStdOutCallbackHandler()]
)
response = llm("AI is going to")

LLMChain

from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer:"""

prompt = PromptTemplate(template=template, input_variables=["question"])
https://python.langchain.com/docs/integrations/llms/ctransformers
53b6767b7355-2
llm_chain = LLMChain(prompt=prompt, llm=llm)
response = llm_chain.run("What is AI?")
https://python.langchain.com/docs/integrations/llms/ctransformers
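Putting the fragments above together, a minimal end-to-end sketch; it assumes the ctransformers package is installed and the marella/gpt-2-ggml weights can be downloaded.

from langchain import PromptTemplate, LLMChain
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import CTransformers

# Stream tokens to stdout as they are generated
llm = CTransformers(
    model="marella/gpt-2-ggml",
    callbacks=[StreamingStdOutCallbackHandler()],
)

template = """Question: {question}

Answer:"""
prompt = PromptTemplate(template=template, input_variables=["question"])

llm_chain = LLMChain(prompt=prompt, llm=llm)
response = llm_chain.run("What is AI?")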
d357652bbea9-0
Hugging Face Hub | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/huggingface_hub
d357652bbea9-1
Hugging Face Hub

The Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.

This example showcases how to connect to the Hugging Face Hub and use different models.

Installation and Setup

To use, you should have the huggingface_hub python package installed.

pip install huggingface_hub

# get a token: https://huggingface.co/docs/api-inference/quicktour#get-your-api-token

from getpass import getpass

HUGGINGFACEHUB_API_TOKEN = getpass()

     ········
https://python.langchain.com/docs/integrations/llms/huggingface_hub
d357652bbea9-2
import os

os.environ["HUGGINGFACEHUB_API_TOKEN"] = HUGGINGFACEHUB_API_TOKEN

Prepare Examples

from langchain import HuggingFaceHub
from langchain import PromptTemplate, LLMChain

question = "Who won the FIFA World Cup in the year 1994? "

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

Examples

Below are some examples of models you can access through the Hugging Face Hub integration.

Flan, by Google

repo_id = "google/flan-t5-xxl"  # See https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads for some other options

llm = HuggingFaceHub(
    repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64}
)
llm_chain = LLMChain(prompt=prompt, llm=llm)

print(llm_chain.run(question))

    The FIFA World Cup was held in the year 1994. West Germany won the FIFA World Cup in 1994

Dolly, by Databricks

See Databricks organization page for a list of available models.

repo_id = "databricks/dolly-v2-3b"

llm = HuggingFaceHub(
    repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64}
)
llm_chain = LLMChain(prompt=prompt, llm=llm)

print(llm_chain.run(question))

    First of all, the world cup was won by the Germany. Then the Argentina won the world cup in 2022. So, the Argentina won the world cup in 1994. Question: Who
https://python.langchain.com/docs/integrations/llms/huggingface_hub
d357652bbea9-3
Camel, by Writer

See Writer's organization page for a list of available models.

repo_id = "Writer/camel-5b-hf"  # See https://huggingface.co/Writer for other options

llm = HuggingFaceHub(
    repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64}
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))

XGen, by Salesforce

See more information.

repo_id = "Salesforce/xgen-7b-8k-base"

llm = HuggingFaceHub(
    repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64}
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))

Falcon, by Technology Innovation Institute (TII)

See more information.

repo_id = "tiiuae/falcon-40b"

llm = HuggingFaceHub(
    repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64}
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
https://python.langchain.com/docs/integrations/llms/huggingface_hub
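A minimal end-to-end sketch consolidating the setup and the Flan example above; it assumes you have a valid Hugging Face Hub API token.

import os
from getpass import getpass

from langchain import HuggingFaceHub, LLMChain, PromptTemplate

# Token from https://huggingface.co/docs/api-inference/quicktour#get-your-api-token
os.environ["HUGGINGFACEHUB_API_TOKEN"] = getpass("Hugging Face Hub token: ")

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

llm = HuggingFaceHub(
    repo_id="google/flan-t5-xxl",
    model_kwargs={"temperature": 0.5, "max_length": 64},
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run("Who won the FIFA World Cup in the year 1994?"))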
8bffd875f53b-0
KoboldAI API | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/koboldai
8bffd875f53b-1
KoboldAI API

KoboldAI is "a browser-based front-end for AI-assisted writing with multiple local & remote AI models...". It has a public and local API that can be used in LangChain.

This example goes over how to use LangChain with that API.

Documentation can be found in the browser by adding /api to the end of your endpoint (e.g. http://127.0.0.1:5000/api).

from langchain.llms import KoboldApiLLM

Replace the endpoint seen below with the one shown in the output after starting the webui with --api or --public-api.

Optionally, you can pass in parameters like temperature or max_length.

llm = KoboldApiLLM(endpoint="http://192.168.1.144:5000", max_length=80)
https://python.langchain.com/docs/integrations/llms/koboldai
8bffd875f53b-2
response = llm("### Instruction:\nWhat is the first book of the bible?\n### Response:")
https://python.langchain.com/docs/integrations/llms/koboldai
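Taken together, a minimal sketch; the endpoint address is only an example, so substitute the one your own KoboldAI webui prints when started with --api or --public-api.

from langchain.llms import KoboldApiLLM

# Example local endpoint; replace with the address your webui reports
llm = KoboldApiLLM(endpoint="http://192.168.1.144:5000", max_length=80)

response = llm("### Instruction:\nWhat is the first book of the bible?\n### Response:")
print(response)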
226a42d67377-0
Runhouse | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/runhouse
226a42d67377-1
Runhouse

Runhouse allows remote compute and data across environments and users. See the Runhouse docs.

This example goes over how to use LangChain and Runhouse to interact with models hosted on your own GPU, or on-demand GPUs on AWS, GCP, Azure, or Lambda.

Note: the code uses the SelfHosted class names rather than Runhouse.

pip install runhouse

from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
from langchain import PromptTemplate, LLMChain
import runhouse as rh

    INFO | 2023-04-17 16:47:36,173 | No auth token provided, so not using RNS API to save and load configs

# For an on-demand A100 with GCP, Azure, or Lambda
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)
https://python.langchain.com/docs/integrations/llms/runhouse
226a42d67377-2
# For an on-demand A10G with AWS (no single A100s on AWS)
# gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws')

# For an existing cluster
# gpu = rh.cluster(ips=['<ip of the cluster>'],
#                  ssh_creds={'ssh_user': '...', 'ssh_private_key':'<path_to_key>'},
#                  name='rh-a10x')

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

llm = SelfHostedHuggingFaceLLM(
    model_id="gpt2", hardware=gpu, model_reqs=["pip:./", "transformers", "torch"]
)

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"

llm_chain.run(question)

    INFO | 2023-02-17 05:42:23,537 | Running _generate_text via gRPC
    INFO | 2023-02-17 05:42:24,016 | Time to send message: 0.48 seconds

    "\n\nLet's say we're talking sports teams who won the Super Bowl in the year Justin Beiber"

You can also load more custom models through the SelfHostedHuggingFaceLLM interface:

llm = SelfHostedHuggingFaceLLM(
    model_id="google/flan-t5-small",
    task="text2text-generation",
    hardware=gpu,
)
https://python.langchain.com/docs/integrations/llms/runhouse
226a42d67377-3
model_id="google/flan-t5-small", task="text2text-generation", hardware=gpu,)llm("What is the capital of Germany?") INFO | 2023-02-17 05:54:21,681 | Running _generate_text via gRPC INFO | 2023-02-17 05:54:21,937 | Time to send message: 0.25 seconds 'berlin'Using a custom load function, we can load a custom pipeline directly on the remote hardware:def load_pipeline(): from transformers import ( AutoModelForCausalLM, AutoTokenizer, pipeline, ) # Need to be inside the fn in notebooks model_id = "gpt2" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10 ) return pipedef inference_fn(pipeline, prompt, stop=None): return pipeline(prompt)[0]["generated_text"][len(prompt) :]llm = SelfHostedHuggingFaceLLM( model_load_fn=load_pipeline, hardware=gpu, inference_fn=inference_fn)llm("Who is the current US president?") INFO | 2023-02-17 05:42:59,219 | Running _generate_text via gRPC INFO | 2023-02-17 05:42:59,522 | Time to send message: 0.3 seconds 'john w.
https://python.langchain.com/docs/integrations/llms/runhouse
226a42d67377-4
You can send your pipeline directly over the wire to your model, but this will only work for small models (<2 Gb), and will be pretty slow:

pipeline = load_pipeline()
llm = SelfHostedPipeline.from_pipeline(
    pipeline=pipeline, hardware=gpu, model_reqs=model_reqs
)

Instead, we can also send it to the hardware's filesystem, which will be much faster.

rh.blob(pickle.dumps(pipeline), path="models/pipeline.pkl").save().to(
    gpu, path="models"
)

llm = SelfHostedPipeline.from_pipeline(pipeline="models/pipeline.pkl", hardware=gpu)
https://python.langchain.com/docs/integrations/llms/runhouse
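A minimal sketch of the on-demand-cluster path above; it assumes runhouse is installed and cloud credentials for the chosen provider are configured.

import runhouse as rh
from langchain import LLMChain, PromptTemplate
from langchain.llms import SelfHostedHuggingFaceLLM

# On-demand A100 on GCP, Azure, or Lambda (see the commented AWS variant above)
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

llm = SelfHostedHuggingFaceLLM(
    model_id="gpt2", hardware=gpu, model_reqs=["pip:./", "transformers", "torch"]
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run("What NFL team won the Super Bowl in the year Justin Bieber was born?"))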
1071dcc2a60c-0
TextGen | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/textgen
1071dcc2a60c-1
TextGen

GitHub: oobabooga/text-generation-webui, a gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.

This example goes over how to use LangChain to interact with LLM models via the text-generation-webui API integration.

Please ensure that you have text-generation-webui configured and an LLM installed. Recommended installation is via the one-click installer appropriate for your OS.

Once text-generation-webui is installed and confirmed working via the web interface, please enable the api option either through the web model configuration tab, or by adding the run-time arg --api to your start command.

Set model_url and run the example

model_url = "http://localhost:5000"

import langchain
from langchain import PromptTemplate, LLMChain
from langchain.llms import TextGen
https://python.langchain.com/docs/integrations/llms/textgen
1071dcc2a60c-2
langchain.debug = True

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])
llm = TextGen(model_url=model_url)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
https://python.langchain.com/docs/integrations/llms/textgen
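Condensed into one runnable snippet; the model_url is an example value, so point it at wherever your text-generation-webui API is listening.

import langchain
from langchain import LLMChain, PromptTemplate
from langchain.llms import TextGen

langchain.debug = True  # log full prompts and responses while experimenting

model_url = "http://localhost:5000"  # example address for the webui API

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

llm = TextGen(model_url=model_url)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run("What NFL team won the Super Bowl in the year Justin Bieber was born?"))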
520f4b4df23d-0
AzureML Online Endpoint | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/azureml_endpoint_example
520f4b4df23d-1
AzureML Online Endpoint

AzureML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.

This notebook goes over how to use an LLM hosted on an AzureML online endpoint.

from langchain.llms.azureml_endpoint import AzureMLOnlineEndpoint

Set up

To use the wrapper, you must deploy a model on AzureML and obtain the following parameters:

- endpoint_api_key: The API key provided by the endpoint
- endpoint_url: The REST endpoint url provided by the endpoint
- deployment_name: The deployment name of the endpoint
https://python.langchain.com/docs/integrations/llms/azureml_endpoint_example
520f4b4df23d-2
Content Formatter

The content_formatter parameter is a handler class for transforming the request and response of an AzureML endpoint to match the required schema. Since there is a wide range of models in the model catalog, each of which may process data differently from one another, a ContentFormatterBase class is provided to allow users to transform data to their liking. Additionally, there are three content formatters already provided:

- OSSContentFormatter: Formats request and response data for models from the Open Source category in the Model Catalog. Note that not all models in the Open Source category may follow the same schema
- DollyContentFormatter: Formats request and response data for the dolly-v2-12b model
- HFContentFormatter: Formats request and response data for text-generation Hugging Face models

Below is an example using a summarization model from Hugging Face.

Custom Content Formatter

from typing import Dict
from langchain.llms.azureml_endpoint import AzureMLOnlineEndpoint, ContentFormatterBase
import os
import json

class CustomFormatter(ContentFormatterBase):
    content_type = "application/json"
    accepts = "application/json"

    def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:
        input_str = json.dumps(
            {
                "inputs": [prompt],
                "parameters": model_kwargs,
                "options": {"use_cache": False, "wait_for_model": True},
            }
        )
        return str.encode(input_str)

    def format_response_payload(self, output: bytes) -> str:
        response_json = json.loads(output)
        return response_json[0]["summary_text"]
https://python.langchain.com/docs/integrations/llms/azureml_endpoint_example
520f4b4df23d-3
content_formatter = CustomFormatter()

llm = AzureMLOnlineEndpoint(
    endpoint_api_key=os.getenv("BART_ENDPOINT_API_KEY"),
    endpoint_url=os.getenv("BART_ENDPOINT_URL"),
    deployment_name="linydub-bart-large-samsum-3",
    model_kwargs={"temperature": 0.8, "max_new_tokens": 400},
    content_formatter=content_formatter,
)

large_text = """On January 7, 2020, Blockberry Creative announced that HaSeul would not participate in the promotion for Loona's next album because of mental health concerns. She was said to be diagnosed with "intermittent anxiety symptoms" and would be taking time to focus on her health.[39] On February 5, 2020, Loona released their second EP titled [#] (read as hash), along with the title track "So What".[40] Although HaSeul did not appear in the title track, her vocals are featured on three other songs on the album, including "365". Once peaked at number 1 on the daily Gaon Retail Album Chart,[41] the EP then debuted at number 2 on the weekly Gaon Album Chart. On March 12, 2020, Loona won their first music show trophy with "So What" on Mnet's M Countdown.[42]

On October 19, 2020, Loona released their third EP titled [12:00] (read as midnight),[43] accompanied by its first single "Why Not?".
https://python.langchain.com/docs/integrations/llms/azureml_endpoint_example
520f4b4df23d-4
HaSeul was again not involved in the album, out of her own decision to focus on the recovery of her health.[44] The EP then became their first album to enter the Billboard 200, debuting at number 112.[45] On November 18, Loona released the music video for "Star", another song on [12:00].[46] Peaking at number 40, "Star" is Loona's first entry on the Billboard Mainstream Top 40, making them the second K-pop girl group to enter the chart.[47]

On June 1, 2021, Loona announced that they would be having a comeback on June 28, with their fourth EP, [&] (read as and).[48] The following day, on June 2, a teaser was posted to Loona's official social media accounts showing twelve sets of eyes, confirming the return of member HaSeul who had been on hiatus since early 2020.[49] On June 12, group members YeoJin, Kim Lip, Choerry, and Go Won released the song "Yum-Yum" as a collaboration with Cocomong.[50] On September 8, they released another collaboration song named "Yummy-Yummy".[51] On June 27, 2021, Loona announced at the end of their special clip that they are making their Japanese debut on September 15 under Universal Music Japan sublabel EMI Records.[52] On August 27, it was announced that Loona will release the double A-side single, "Hula Hoop / Star Seed" on September 15, with a physical CD release on October 20.[53] In December, Chuu filed an injunction to suspend her exclusive contract with Blockberry Creative.[54][55]"""

summarized_text = llm(large_text)
print(summarized_text)
https://python.langchain.com/docs/integrations/llms/azureml_endpoint_example
520f4b4df23d-5
    HaSeul won her first music show trophy with "So What" on Mnet's M Countdown. Loona released their second EP titled [#] (read as hash] on February 5, 2020. HaSeul did not take part in the promotion of the album because of mental health issues. On October 19, 2020, they released their third EP called [12:00]. It was their first album to enter the Billboard 200, debuting at number 112. On June 2, 2021, the group released their fourth EP called Yummy-Yummy. On August 27, it was announced that they are making their Japanese debut on September 15 under Universal Music Japan sublabel EMI Records.

Dolly with LLMChain

from langchain import PromptTemplate
from langchain.llms.azureml_endpoint import DollyContentFormatter
from langchain.chains import LLMChain

formatter_template = "Write a {word_count} word essay about {topic}."

prompt = PromptTemplate(
    input_variables=["word_count", "topic"], template=formatter_template
)

content_formatter = DollyContentFormatter()

llm = AzureMLOnlineEndpoint(
    endpoint_api_key=os.getenv("DOLLY_ENDPOINT_API_KEY"),
    endpoint_url=os.getenv("DOLLY_ENDPOINT_URL"),
    deployment_name="databricks-dolly-v2-12b-4",
    model_kwargs={"temperature": 0.8, "max_tokens": 300},
    content_formatter=content_formatter,
)

chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run({"word_count": 100, "topic": "how to make friends"}))

    Many people are willing to talk about themselves; it's others who seem to be stuck up. Try to understand others where they're coming from. Like minded people can build a tribe together.
https://python.langchain.com/docs/integrations/llms/azureml_endpoint_example
520f4b4df23d-6
Serializing an LLM

You can also save and load LLM configurations.

from langchain.llms.loading import load_llm
from langchain.llms.azureml_endpoint import AzureMLEndpointClient

save_llm = AzureMLOnlineEndpoint(
    deployment_name="databricks-dolly-v2-12b-4",
    model_kwargs={
        "temperature": 0.2,
        "max_tokens": 150,
        "top_p": 0.8,
        "frequency_penalty": 0.32,
        "presence_penalty": 72e-3,
    },
)
save_llm.save("azureml.json")

loaded_llm = load_llm("azureml.json")
print(loaded_llm)

    AzureMLOnlineEndpoint
    Params: {'deployment_name': 'databricks-dolly-v2-12b-4', 'model_kwargs': {'temperature': 0.2, 'max_tokens': 150, 'top_p': 0.8, 'frequency_penalty': 0.32, 'presence_penalty': 0.072}}
https://python.langchain.com/docs/integrations/llms/azureml_endpoint_example
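A compact sketch of the custom-formatter flow above. The class name SummarizationFormatter is illustrative (it plays the role of CustomFormatter), and BART_ENDPOINT_API_KEY / BART_ENDPOINT_URL are assumed environment variables pointing at your own deployed summarization endpoint.

import json
import os
from typing import Dict

from langchain.llms.azureml_endpoint import AzureMLOnlineEndpoint, ContentFormatterBase


class SummarizationFormatter(ContentFormatterBase):
    """Request/response formatter for a Hugging Face summarization model."""

    content_type = "application/json"
    accepts = "application/json"

    def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:
        return str.encode(json.dumps({"inputs": [prompt], "parameters": model_kwargs}))

    def format_response_payload(self, output: bytes) -> str:
        return json.loads(output)[0]["summary_text"]


llm = AzureMLOnlineEndpoint(
    endpoint_api_key=os.getenv("BART_ENDPOINT_API_KEY"),
    endpoint_url=os.getenv("BART_ENDPOINT_URL"),
    deployment_name="linydub-bart-large-samsum-3",
    model_kwargs={"temperature": 0.8, "max_new_tokens": 400},
    content_formatter=SummarizationFormatter(),
)
print(llm("Summarize: LangChain wraps many different LLM providers behind one interface."))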
e6dda29fe6b8-0
GPT4All | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/gpt4all
e6dda29fe6b8-1
GPT4All

GitHub: nomic-ai/gpt4all, an ecosystem of open-source chatbots trained on a massive collection of clean assistant data including code, stories and dialogue.

This example goes over how to use LangChain to interact with GPT4All models.

%pip install gpt4all > /dev/null

    Note: you may need to restart the kernel to use updated packages.

from langchain import PromptTemplate, LLMChain
from langchain.llms import GPT4All
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

Specify Model

To run locally, download a compatible ggml-formatted model.
https://python.langchain.com/docs/integrations/llms/gpt4all
e6dda29fe6b8-2
Download option 1: The gpt4all page has a useful Model Explorer section:

- Select a model of interest
- Download using the UI and move the .bin to the local_path (noted below)

For more info, visit https://github.com/nomic-ai/gpt4all.

Download option 2: Uncomment the below block to download a model. You may want to update url to a new version, which can be browsed using the gpt4all page.

local_path = (
    "./models/ggml-gpt4all-l13b-snoozy.bin"  # replace with your desired local file path
)

# import requests
# from pathlib import Path
# from tqdm import tqdm

# Path(local_path).parent.mkdir(parents=True, exist_ok=True)

# # Example model. Check https://github.com/nomic-ai/gpt4all for the latest models.
# url = 'http://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin'

# # send a GET request to the URL to download the file. Stream since it's large
# response = requests.get(url, stream=True)

# # open the file in binary mode and write the contents of the response to it in chunks
# # This is a large file, so be prepared to wait.
# with open(local_path, 'wb') as f:
#     for chunk in tqdm(response.iter_content(chunk_size=8192)):
#         if chunk:
#             f.write(chunk)

# Callbacks support token-wise streaming
callbacks = [StreamingStdOutCallbackHandler()]

# Verbose is required to pass to the callback manager
llm = GPT4All(model=local_path, callbacks=callbacks, verbose=True)
https://python.langchain.com/docs/integrations/llms/gpt4all
e6dda29fe6b8-3
# If you want to use a custom model add the backend parameter
# Check https://docs.gpt4all.io/gpt4all_python.html for supported backends
llm = GPT4All(model=local_path, backend="gptj", callbacks=callbacks, verbose=True)

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
https://python.langchain.com/docs/integrations/llms/gpt4all
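A minimal sketch of the pieces above in one place; it assumes a ggml model file has already been downloaded to local_path.

from langchain import LLMChain, PromptTemplate
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import GPT4All

local_path = "./models/ggml-gpt4all-l13b-snoozy.bin"  # path to your downloaded weights

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

# Stream tokens to stdout; verbose is required to pass to the callback manager
llm = GPT4All(model=local_path, callbacks=[StreamingStdOutCallbackHandler()], verbose=True)

llm_chain = LLMChain(prompt=prompt, llm=llm)
llm_chain.run("What NFL team won the Super Bowl in the year Justin Bieber was born?")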
41bc4ac4ec9d-0
Baseten | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/baseten
41bc4ac4ec9d-1
Baseten

Baseten provides all the infrastructure you need to deploy and serve ML models performantly, scalably, and cost-efficiently.

This example demonstrates using Langchain with models deployed on Baseten.

Setup

To run this notebook, you'll need a Baseten account and an API key.

You'll also need to install the Baseten Python package:

pip install baseten

import baseten

baseten.login("YOUR_API_KEY")

Single model call

First, you'll need to deploy a model to Baseten.

You can deploy foundation models like WizardLM and Alpaca with one click from the Baseten model library, or if you have your own model, deploy it with this tutorial.

In this example, we'll work with WizardLM. Deploy WizardLM here and follow along with the deployed model's version ID.
https://python.langchain.com/docs/integrations/llms/baseten
41bc4ac4ec9d-2
from langchain.llms import Baseten

# Load the model
wizardlm = Baseten(model="MODEL_VERSION_ID", verbose=True)

# Prompt the model
wizardlm("What is the difference between a Wizard and a Sorcerer?")

Chained model calls

We can chain together multiple calls to one or multiple models, which is the whole point of Langchain!

This example uses WizardLM to plan a meal with an entree, three sides, and an alcoholic and non-alcoholic beverage pairing.

from langchain.chains import SimpleSequentialChain
from langchain import PromptTemplate, LLMChain

# Build the first link in the chain
prompt = PromptTemplate(
    input_variables=["cuisine"],
    template="Name a complex entree for a {cuisine} dinner. Respond with just the name of a single dish.",
)
link_one = LLMChain(llm=wizardlm, prompt=prompt)

# Build the second link in the chain
prompt = PromptTemplate(
    input_variables=["entree"],
    template="What are three sides that would go with {entree}. Respond with only a list of the sides.",
)
link_two = LLMChain(llm=wizardlm, prompt=prompt)

# Build the third link in the chain
prompt = PromptTemplate(
    input_variables=["sides"],
    template="What is one alcoholic and one non-alcoholic beverage that would go well with this list of sides: {sides}. Respond with only the names of the beverages.",
)
link_three = LLMChain(llm=wizardlm, prompt=prompt)

# Run the full chain!
menu_maker = SimpleSequentialChain(
    chains=[link_one, link_two, link_three], verbose=True
)
menu_maker.run("South Indian")
https://python.langchain.com/docs/integrations/llms/baseten
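The single-model call, condensed; YOUR_API_KEY and MODEL_VERSION_ID are placeholders for your own Baseten key and deployed WizardLM version ID.

import baseten
from langchain.llms import Baseten

baseten.login("YOUR_API_KEY")  # placeholder API key

wizardlm = Baseten(model="MODEL_VERSION_ID", verbose=True)
print(wizardlm("What is the difference between a Wizard and a Sorcerer?"))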
8d8793de2fb5-0
Writer | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/writer
8d8793de2fb5-1
Writer

Writer is a platform to generate different language content.

This example goes over how to use LangChain to interact with Writer models.

You have to get the WRITER_API_KEY here.

from getpass import getpass

WRITER_API_KEY = getpass()

     ········

import os

os.environ["WRITER_API_KEY"] = WRITER_API_KEY

from langchain.llms import Writer
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

# If you get an error, you probably need to set the "base_url" parameter, which can be taken from the error log.
llm = Writer()
https://python.langchain.com/docs/integrations/llms/writer
8d8793de2fb5-2
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
https://python.langchain.com/docs/integrations/llms/writer
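Condensed into one snippet; it assumes a valid Writer API key, and the base_url override mentioned above may also be needed.

import os
from getpass import getpass

from langchain import LLMChain, PromptTemplate
from langchain.llms import Writer

os.environ["WRITER_API_KEY"] = getpass("Writer API key: ")

prompt = PromptTemplate(
    template="Question: {question}\n\nAnswer: Let's think step by step.",
    input_variables=["question"],
)
llm = Writer()
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run("What NFL team won the Super Bowl in the year Justin Bieber was born?"))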
46d34e9c5525-0
Clarifai | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/clarifai
46d34e9c5525-1
Clarifai

Clarifai is an AI platform that provides the full AI lifecycle, ranging from data exploration, data labeling, and model training to evaluation and inference.

This example goes over how to use LangChain to interact with Clarifai models. To use Clarifai, you must have an account and a Personal Access Token (PAT) key.
https://python.langchain.com/docs/integrations/llms/clarifai
46d34e9c5525-2
Check here to get or create a PAT.

Dependencies

# Install required dependencies
pip install clarifai

Imports

Here we will be setting the personal access token. You can find your PAT under settings/security in your Clarifai account.

# Please login and get your API key from https://clarifai.com/settings/security
from getpass import getpass

CLARIFAI_PAT = getpass()

     ········

# Import the required modules
from langchain.llms import Clarifai
from langchain import PromptTemplate, LLMChain

Input

Create a prompt template to be used with the LLM Chain:

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

Setup

Set up the user id and app id where the model resides. You can find a list of public models on https://clarifai.com/explore/models

You will also have to initialize the model id and, if needed, the model version id. Some models have many versions; you can choose the one appropriate for your task.

USER_ID = "openai"
APP_ID = "chat-completion"
MODEL_ID = "GPT-3_5-turbo"

# You can provide a specific model version as the model_version_id arg.
# MODEL_VERSION_ID = "MODEL_VERSION_ID"

# Initialize a Clarifai LLM
clarifai_llm = Clarifai(
    pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID
)

# Create LLM chain
llm_chain = LLMChain(prompt=prompt, llm=clarifai_llm)

Run Chain

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
https://python.langchain.com/docs/integrations/llms/clarifai
46d34e9c5525-3
    'Justin Bieber was born on March 1, 1994. So, we need to figure out the Super Bowl winner for the 1994 season. The NFL season spans two calendar years, so the Super Bowl for the 1994 season would have taken place in early 1995. \n\nThe Super Bowl in question is Super Bowl XXIX, which was played on January 29, 1995. The game was won by the San Francisco 49ers, who defeated the San Diego Chargers by a score of 49-26. Therefore, the San Francisco 49ers won the Super Bowl in the year Justin Bieber was born.'
https://python.langchain.com/docs/integrations/llms/clarifai
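Condensed into one snippet; the user/app/model IDs below are the public OpenAI chat-completion model referenced above, and you substitute your own PAT.

from getpass import getpass

from langchain import LLMChain, PromptTemplate
from langchain.llms import Clarifai

CLARIFAI_PAT = getpass("Clarifai PAT: ")

prompt = PromptTemplate(
    template="Question: {question}\n\nAnswer: Let's think step by step.",
    input_variables=["question"],
)

clarifai_llm = Clarifai(
    pat=CLARIFAI_PAT, user_id="openai", app_id="chat-completion", model_id="GPT-3_5-turbo"
)
llm_chain = LLMChain(prompt=prompt, llm=clarifai_llm)
print(llm_chain.run("What NFL team won the Super Bowl in the year Justin Bieber was born?"))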
49ca6b066a9e-0
Huggingface TextGen Inference | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/huggingface_textgen_inference
49ca6b066a9e-1
Huggingface TextGen Inference

Text Generation Inference is a Rust, Python and gRPC server for text generation inference. Used in production at HuggingFace to power LLMs api-inference widgets.

This notebook goes over how to use a self-hosted LLM using Text Generation Inference.

To use, you should have the text_generation python package installed.

# !pip3 install text_generation

from langchain.llms import HuggingFaceTextGenInference

llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8010/",
    max_new_tokens=512,
    top_k=10,
    top_p=0.95,
    typical_p=0.95,
    temperature=0.01,
    repetition_penalty=1.03,
)
llm("What did foo say about bar?")
https://python.langchain.com/docs/integrations/llms/huggingface_textgen_inference
49ca6b066a9e-2
Streaming

from langchain.llms import HuggingFaceTextGenInference
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8010/",
    max_new_tokens=512,
    top_k=10,
    top_p=0.95,
    typical_p=0.95,
    temperature=0.01,
    repetition_penalty=1.03,
    stream=True,
)
llm("What did foo say about bar?", callbacks=[StreamingStdOutCallbackHandler()])
https://python.langchain.com/docs/integrations/llms/huggingface_textgen_inference
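The same client drops into an LLMChain like any other LLM; a sketch, assuming a Text Generation Inference server is listening on the URL below.

from langchain import LLMChain, PromptTemplate
from langchain.llms import HuggingFaceTextGenInference

llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8010/",  # your TGI server
    max_new_tokens=512,
    temperature=0.01,
)

prompt = PromptTemplate(
    template="Question: {question}\n\nAnswer: Let's think step by step.",
    input_variables=["question"],
)
chain = LLMChain(prompt=prompt, llm=llm)
print(chain.run("What did foo say about bar?"))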
a57a5d854a80-0
CerebriumAI | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/cerebriumai_example
a57a5d854a80-1
CerebriumAI

Cerebrium is an AWS Sagemaker alternative. It also provides API access to several LLM models.

This notebook goes over how to use Langchain with CerebriumAI.

Install cerebrium

The cerebrium package is required to use the CerebriumAI API. Install cerebrium using pip3 install cerebrium.

# Install the package
pip3 install cerebrium

Imports

import os

from langchain.llms import CerebriumAI
from langchain import PromptTemplate, LLMChain

Set the Environment API Key

Make sure to get your API key from CerebriumAI. See here. You are given 1 hour of serverless GPU compute free to test different models.
https://python.langchain.com/docs/integrations/llms/cerebriumai_example
a57a5d854a80-2
os.environ["CEREBRIUMAI_API_KEY"] = "YOUR_KEY_HERE"

Create the CerebriumAI instance

You can specify different parameters such as the model endpoint url, max length, temperature, etc. You must provide an endpoint url.

llm = CerebriumAI(endpoint_url="YOUR ENDPOINT URL HERE")

Create a Prompt Template

We will create a prompt template for Question and Answer.

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

Initiate the LLMChain

llm_chain = LLMChain(prompt=prompt, llm=llm)

Run the LLMChain

Provide a question and run the LLMChain.

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
https://python.langchain.com/docs/integrations/llms/cerebriumai_example
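The whole walkthrough fits in a few lines; the API key and endpoint URL are placeholders for your own CerebriumAI deployment.

import os

from langchain import LLMChain, PromptTemplate
from langchain.llms import CerebriumAI

os.environ["CEREBRIUMAI_API_KEY"] = "YOUR_KEY_HERE"  # placeholder

llm = CerebriumAI(endpoint_url="YOUR ENDPOINT URL HERE")  # placeholder
prompt = PromptTemplate(
    template="Question: {question}\n\nAnswer: Let's think step by step.",
    input_variables=["question"],
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run("What NFL team won the Super Bowl in the year Justin Bieber was born?"))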
7efa063ce8d0-0
Replicate | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/replicate
7efa063ce8d0-1
Replicate

Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale.

This example goes over how to use LangChain to interact with Replicate models.

Setup

# magics to auto-reload external modules in case you are making changes to langchain while working on this notebook
%autoreload 2

To run this notebook, you'll need to create a replicate account and install the replicate python client.

poetry run pip install replicate

    Collecting replicate
      Using cached replicate-0.9.0-py3-none-any.whl (21 kB)
https://python.langchain.com/docs/integrations/llms/replicate
7efa063ce8d0-2
    (pip reports the replicate dependencies packaging, pydantic, requests, typing-extensions, charset-normalizer, idna, urllib3, and certifi as already satisfied)
https://python.langchain.com/docs/integrations/llms/replicate
7efa063ce8d0-3
    Installing collected packages: replicate
    Successfully installed replicate-0.9.0

# get a token: https://replicate.com/account
from getpass import getpass

REPLICATE_API_TOKEN = getpass()

import os

os.environ["REPLICATE_API_TOKEN"] = REPLICATE_API_TOKEN

from langchain.llms import Replicate
from langchain import PromptTemplate, LLMChain

Calling a model

Find a model on the replicate explore page, and then paste in the model name and version in this format: model_name/version.

For example, here is LLama-V2.

llm = Replicate(
    model="a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5",
    input={"temperature": 0.75, "max_length": 500, "top_p": 1},
)
prompt = """
User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?
Assistant:
"""
llm(prompt)

    "1. Dogs do not have the ability to operate complex machinery like cars.\n2. Dogs do not have the physical dexterity or coordination to manipulate the controls of a car.\n3. Dogs do not have the cognitive ability to understand traffic laws and safely operate a car.\n4. Therefore, no, a dog cannot drive a car.\nAssistant, please provide the reasoning step by step.\n\nAssistant:\n\n1. Dogs do not have the ability to operate complex machinery like cars.\n\t* This is because dogs do not possess the necessary cognitive abilities to understand how to operate a car.\n2. Dogs do not have the physical dexterity or coordination to manipulate the controls of a car.\n\t* This is because dogs do not have the necessary fine motor skills to operate the pedals and steering wheel of a car.\n3. Dogs do not have the cognitive ability to understand traffic laws and safely operate a car.\n\t* This is because dogs do not have the ability to comprehend and interpret traffic signals, road signs, and other drivers' behaviors.\n4. Therefore, no, a dog cannot drive a car."
https://python.langchain.com/docs/integrations/llms/replicate
7efa063ce8d0-4
As another example, for this dolly model, click on the API tab. The model name/version would be: replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5

Only the model param is required, but we can add other model params when initializing.

For example, if we were running stable diffusion and wanted to change the image dimensions:

Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions': '512x512'})

Note that only the first output of a model will be returned.

llm = Replicate(
    model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5"
)
https://python.langchain.com/docs/integrations/llms/replicate
7efa063ce8d0-5
= """Answer the following yes/no question by reasoning step by step. Can a dog drive a car?"""llm(prompt) 'No, dogs are not capable of driving cars since they do not have hands to operate a steering wheel nor feet to control a gas pedal. However, it’s possible for a driver to train their pet in a different behavior and make them sit while transporting goods from one place to another.\n\n'We can call any replicate model using this syntax. For example, we can call stable diffusion.text2image = Replicate( model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={"image_dimensions": "512x512"},)image_output = text2image("A cat riding a motorcycle by Picasso")image_output 'https://replicate.delivery/pbxt/9fJFaKfk5Zj3akAAn955gjP49G8HQpHK01M6h3BfzQoWSbkiA/out-0.png'The model spits out a URL. Let's render it.poetry run pip install Pillow Collecting Pillow Using cached Pillow-10.0.0-cp39-cp39-manylinux_2_28_x86_64.whl (3.4 MB) Installing collected packages: Pillow Successfully installed Pillow-10.0.0from PIL import Imageimport requestsfrom io import BytesIOresponse = requests.get(image_output)img = Image.open(BytesIO(response.content))img ![png](_replicate_files/output_18_0.png) Streaming Response​You can optionally stream the response as
https://python.langchain.com/docs/integrations/llms/replicate
7efa063ce8d0-6
Streaming Response

You can optionally stream the response as it is produced, which is helpful to show interactivity to users for time-consuming generations. See detailed docs on Streaming for more information.

from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = Replicate(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    model="a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5",
    input={"temperature": 0.75, "max_length": 500, "top_p": 1},
)
prompt = """
User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?
Assistant:
"""
_ = llm(prompt)

    1. Dogs do not have the ability to operate complex machinery like cars.
    2. Dogs do not have the physical dexterity to manipulate the controls of a car.
    3. Dogs do not have the cognitive ability to understand traffic laws and drive safely.
    Therefore, the answer is no, a dog cannot drive a car.

Stop Sequences

You can also specify stop sequences. If you have a definite stop sequence for the generation that you are going to parse with anyway, it is better (cheaper and faster!) to just cancel the generation once one or more stop sequences are reached, rather than letting the model ramble on till the specified max_length. Stop sequences work regardless of whether you are in streaming mode or not, and Replicate only charges you for the generation up until the stop sequence.

import time
https://python.langchain.com/docs/integrations/llms/replicate
7efa063ce8d0-7
llm = Replicate(
    model="a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5",
    input={"temperature": 0.01, "max_length": 500, "top_p": 1},
)
prompt = """
User: What is the best way to learn python?
Assistant:
"""
start_time = time.perf_counter()
raw_output = llm(prompt)  # raw output, no stop
end_time = time.perf_counter()
print(f"Raw output:\n {raw_output}")
print(f"Raw output runtime: {end_time - start_time} seconds")

start_time = time.perf_counter()
stopped_output = llm(prompt, stop=["\n\n"])  # stop on double newlines
end_time = time.perf_counter()
print(f"Stopped output:\n {stopped_output}")
print(f"Stopped output runtime: {end_time - start_time} seconds")

    Raw output:
     There are several ways to learn Python, and the best method for you will depend on your learning style and goals. Here are a few suggestions:
    1. Online tutorials and courses: Websites such as Codecademy, Coursera, and edX offer interactive coding lessons and courses on Python. These can be a great way to get started, especially if you prefer a self-paced approach.
    2. Books: There are many excellent books on Python that can provide a comprehensive introduction to the language. Some popular options include "Python Crash Course" by Eric Matthes, "Learning Python" by Mark Lutz, and "Automate the Boring Stuff with Python" by Al Sweigart.
https://python.langchain.com/docs/integrations/llms/replicate
7efa063ce8d0-8
and "Automate the Boring Stuff with Python" by Al Sweigart. 3. Online communities: Participating in online communities such as Reddit's r/learnpython community or Python communities on Discord can be a great way to get support and feedback as you learn. 4. Practice: The best way to learn Python is by doing. Start by writing simple programs and gradually work your way up to more complex projects. 5. Find a mentor: Having a mentor who is experienced in Python can be a great way to get guidance and feedback as you learn. 6. Join online meetups and events: Joining online meetups and events can be a great way to connect with other Python learners and get a sense of the community. 7. Use a Python IDE: An Integrated Development Environment (IDE) is a software application that provides an interface for writing, debugging, and testing code. Using a Python IDE such as PyCharm, VSCode, or Spyder can make writing and debugging Python code much easier. 8. Learn by building: One of the best ways to learn Python is by building projects. Start with small projects and gradually work your way up to more complex ones. 9. Learn from others: Look at other people's code, understand how it works and try to implement it in your own way. 10. Be patient: Learning a programming language takes time and practice, so be patient with yourself and don't get discouraged if you don't understand something at first. Please let me know if you have any other questions or if there is anything Raw output runtime: 32.74260359999607 seconds Stopped output: There are several ways to learn Python, and the best method for you will depend
https://python.langchain.com/docs/integrations/llms/replicate
7efa063ce8d0-9
There are several ways to learn Python, and the best method for you will depend on your learning style and goals. Here are a few suggestions: Stopped output runtime: 3.2350128999969456 secondsChaining Calls​The whole point of langchain is to... chain! Here's an example of how to do that.from langchain.chains import SimpleSequentialChainFirst, let's define the LLM as a Dolly v2 model, and text2image as a Stable Diffusion model.dolly_llm = Replicate( model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5")text2image = Replicate( model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf")First prompt in the chainprompt = PromptTemplate( input_variables=["product"], template="What is a good name for a company that makes {product}?",)chain = LLMChain(llm=dolly_llm, prompt=prompt)Second prompt to get the logo for the company descriptionsecond_prompt = PromptTemplate( input_variables=["company_name"], template="Write a description of a logo for this company: {company_name}",)chain_two = LLMChain(llm=dolly_llm, prompt=second_prompt)Third prompt, let's create the image based on the description output from prompt 2third_prompt = PromptTemplate( input_variables=["company_logo_description"], template="{company_logo_description}",)chain_three =
https://python.langchain.com/docs/integrations/llms/replicate
7efa063ce8d0-10
input_variables=["company_logo_description"], template="{company_logo_description}",)chain_three = LLMChain(llm=text2image, prompt=third_prompt)Now let's run it!# Run the chain specifying only the input variable for the first chain.overall_chain = SimpleSequentialChain( chains=[chain, chain_two, chain_three], verbose=True)catchphrase = overall_chain.run("colorful socks")print(catchphrase) > Entering new SimpleSequentialChain chain... Colorful socks could be named "Dazzle Socks" A logo featuring bright colorful socks could be named Dazzle Socks https://replicate.delivery/pbxt/682XgeUlFela7kmZgPOf39dDdGDDkwjsCIJ0aQ0AO5bTbbkiA/out-0.png > Finished chain. https://replicate.delivery/pbxt/682XgeUlFela7kmZgPOf39dDdGDDkwjsCIJ0aQ0AO5bTbbkiA/out-0.pngresponse = requests.get( "https://replicate.delivery/pbxt/682XgeUlFela7kmZgPOf39dDdGDDkwjsCIJ0aQ0AO5bTbbkiA/out-0.png")img = Image.open(BytesIO(response.content))img ![png](_replicate_files/output_35_0.png) PreviousRELLMNextRunhouseSetupCalling a modelStreaming ResponseChaining CallsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
https://python.langchain.com/docs/integrations/llms/replicate
1a782c459ffd-0
Manifest | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/manifest
1a782c459ffd-1
This notebook goes over how to use Manifest and LangChain.For more detailed information on manifest, and how to use it with local huggingface models like in this example, see https://github.com/HazyResearch/manifestAnother example of using Manifest with Langchain.pip install manifest-mlfrom manifest import Manifestfrom langchain.llms.manifest import ManifestWrappermanifest = Manifest( client_name="huggingface", client_connection="http://127.0.0.1:5000")print(manifest.client.get_model_params())llm = ManifestWrapper( client=manifest, llm_kwargs={"temperature": 0.001, "max_tokens": 256})# Map reduce examplefrom langchain import PromptTemplatefrom langchain.text_splitter import CharacterTextSplitterfrom langchain.chains.mapreduce import
https://python.langchain.com/docs/integrations/llms/manifest
1a782c459ffd-2
langchain.text_splitter import CharacterTextSplitterfrom langchain.chains.mapreduce import MapReduceChain_prompt = """Write a concise summary of the following:{text}CONCISE SUMMARY:"""prompt = PromptTemplate(template=_prompt, input_variables=["text"])text_splitter = CharacterTextSplitter()mp_chain = MapReduceChain.from_params(llm, prompt, text_splitter)with open("../../../state_of_the_union.txt") as f: state_of_the_union = f.read()mp_chain.run(state_of_the_union) 'President Obama delivered his annual State of the Union address on Tuesday night, laying out his priorities for the coming year. Obama said the government will provide free flu vaccines to all Americans, ending the government shutdown and allowing businesses to reopen. The president also said that the government will continue to send vaccines to 112 countries, more than any other nation. "We have lost so much to COVID-19," Trump said. "Time with one another. And worst of all, so much loss of life." He said the CDC is working on a vaccine for kids under 5, and that the government will be ready with plenty of vaccines when they are available. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government is launching a "Test to Treat" initiative that will allow people to get tested at a pharmacy and get antiviral pills on the spot at no cost. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government will continue to send vaccines to 112 countries, more than any other nation. "We are coming for your'Compare HF Models​from langchain.model_laboratory import ModelLaboratorymanifest1 = ManifestWrapper( client=Manifest( client_name="huggingface",
https://python.langchain.com/docs/integrations/llms/manifest
1a782c459ffd-3
client=Manifest( client_name="huggingface", client_connection="http://127.0.0.1:5000" ), llm_kwargs={"temperature": 0.01},)manifest2 = ManifestWrapper( client=Manifest( client_name="huggingface", client_connection="http://127.0.0.1:5001" ), llm_kwargs={"temperature": 0.01},)manifest3 = ManifestWrapper( client=Manifest( client_name="huggingface", client_connection="http://127.0.0.1:5002" ), llm_kwargs={"temperature": 0.01},)llms = [manifest1, manifest2, manifest3]model_lab = ModelLaboratory(llms)model_lab.compare("What color is a flamingo?") Input: What color is a flamingo? ManifestWrapper Params: {'model_name': 'bigscience/T0_3B', 'model_path': 'bigscience/T0_3B', 'temperature': 0.01} pink ManifestWrapper Params: {'model_name': 'EleutherAI/gpt-neo-125M', 'model_path': 'EleutherAI/gpt-neo-125M', 'temperature': 0.01} A flamingo is a small, round ManifestWrapper Params: {'model_name': 'google/flan-t5-xl', 'model_path': 'google/flan-t5-xl', 'temperature': 0.01} pink
https://python.langchain.com/docs/integrations/llms/manifest
1a782c459ffd-4
'temperature': 0.01} pink
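Since ManifestWrapper is a standard LangChain LLM, it also drops straight into an LLMChain. A minimal sketch, assuming the same local Manifest server on port 5000 used above:

```python
from manifest import Manifest
from langchain.llms.manifest import ManifestWrapper
from langchain import PromptTemplate, LLMChain

# Wrap a local Manifest deployment and use it like any other LangChain LLM.
manifest = Manifest(client_name="huggingface", client_connection="http://127.0.0.1:5000")
llm = ManifestWrapper(client=manifest, llm_kwargs={"temperature": 0.001, "max_tokens": 256})

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)

print(llm_chain.run("What color is a flamingo?"))
```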
https://python.langchain.com/docs/integrations/llms/manifest
75508466fe09-0
PromptLayer OpenAI | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/promptlayer_openai
75508466fe09-1
PromptLayer is the first platform that allows you to track, manage, and share your GPT prompt engineering. PromptLayer acts as middleware between your code and OpenAI’s python library.PromptLayer records all your OpenAI API requests, allowing you to search and explore request history in the PromptLayer dashboard.This example showcases how to connect to PromptLayer to start recording your OpenAI requests.Another example is here.Install PromptLayerThe promptlayer package is required to use PromptLayer with OpenAI. Install promptlayer using pip.pip install promptlayerImportsimport osfrom langchain.llms import PromptLayerOpenAIimport promptlayerSet the Environment API KeyYou can create a PromptLayer API Key at www.promptlayer.com by clicking the settings cog in the
https://python.langchain.com/docs/integrations/llms/promptlayer_openai
75508466fe09-2
can create a PromptLayer API Key at www.promptlayer.com by clicking the settings cog in the navbar.Set it as an environment variable called PROMPTLAYER_API_KEY.You also need an OpenAI Key, called OPENAI_API_KEY.from getpass import getpassPROMPTLAYER_API_KEY = getpass() ········os.environ["PROMPTLAYER_API_KEY"] = PROMPTLAYER_API_KEYfrom getpass import getpassOPENAI_API_KEY = getpass() ········os.environ["OPENAI_API_KEY"] = OPENAI_API_KEYUse the PromptLayerOpenAI LLM like normalYou can optionally pass in pl_tags to track your requests with PromptLayer's tagging feature.llm = PromptLayerOpenAI(pl_tags=["langchain"])llm("I am a cat and I want")The above request should now appear on your PromptLayer dashboard.Using PromptLayer TrackIf you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id. llm = PromptLayerOpenAI(return_pl_id=True)llm_results = llm.generate(["Tell me a joke"])for res in llm_results.generations: pl_request_id = res[0].generation_info["pl_request_id"] promptlayer.track.score(request_id=pl_request_id, score=100)Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well.
https://python.langchain.com/docs/integrations/llms/promptlayer_openai
75508466fe09-3
Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard.
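The tagging and tracking features above can be combined in a single call. A minimal sketch, using only the pl_tags, return_pl_id, and promptlayer.track.score pieces already shown on this page:

```python
from langchain.llms import PromptLayerOpenAI
import promptlayer

# Tag the request for filtering in the dashboard and return the request id
# so the response can be scored afterwards.
llm = PromptLayerOpenAI(pl_tags=["langchain", "jokes"], return_pl_id=True)

llm_results = llm.generate(["Tell me a joke"])
for res in llm_results.generations:
    pl_request_id = res[0].generation_info["pl_request_id"]
    # Score however makes sense for your app, e.g. from user feedback.
    promptlayer.track.score(request_id=pl_request_id, score=100)
```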
https://python.langchain.com/docs/integrations/llms/promptlayer_openai
d12ca5b395ef-0
SageMakerEndpoint | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/sagemaker
d12ca5b395ef-1
Amazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.This notebook goes over how to use an LLM hosted on a SageMaker endpoint.pip3 install langchain boto3Set upYou have to set up the following required parameters of the SagemakerEndpoint call:endpoint_name: The name of the endpoint from the deployed Sagemaker model. Must be unique within an AWS Region.credentials_profile_name: The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used.
https://python.langchain.com/docs/integrations/llms/sagemaker
d12ca5b395ef-2
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.htmlExamplefrom langchain.docstore.document import Documentexample_doc_1 = """Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital.Since she was diagnosed with a brain injury, the doctor told Peter to stay besides her until she gets well.Therefore, Peter stayed with her at the hospital for 3 days without leaving."""docs = [ Document( page_content=example_doc_1, )]from typing import Dictfrom langchain import PromptTemplate, SagemakerEndpointfrom langchain.llms.sagemaker_endpoint import LLMContentHandlerfrom langchain.chains.question_answering import load_qa_chainimport jsonquery = """How long was Elizabeth hospitalized?"""prompt_template = """Use the following pieces of context to answer the question at the end.{context}Question: {question}Answer:"""PROMPT = PromptTemplate( template=prompt_template, input_variables=["context", "question"])class ContentHandler(LLMContentHandler): content_type = "application/json" accepts = "application/json" def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes: # Adjust the payload keys to match your deployed model's input schema. input_str = json.dumps({"prompt": prompt, **model_kwargs}) return input_str.encode("utf-8") def transform_output(self, output: bytes) -> str: response_json = json.loads(output.read().decode("utf-8")) return response_json[0]["generated_text"]content_handler = ContentHandler()chain = load_qa_chain( llm=SagemakerEndpoint(
https://python.langchain.com/docs/integrations/llms/sagemaker
d12ca5b395ef-3
= load_qa_chain( llm=SagemakerEndpoint( endpoint_name="endpoint-name", credentials_profile_name="credentials-profile-name", region_name="us-west-2", model_kwargs={"temperature": 1e-10}, content_handler=content_handler, ), prompt=PROMPT,)chain({"input_documents": docs, "question": query}, return_only_outputs=True)
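The request and response layout in transform_input and transform_output depends entirely on how your model was deployed. As a hedged illustration, a handler for an endpoint that expects a HuggingFace-style payload might look like the sketch below; the {"inputs": ..., "parameters": ...} schema is an assumption, so check your endpoint's actual contract.

```python
import json
from typing import Dict

from langchain.llms.sagemaker_endpoint import LLMContentHandler


class HFTextGenContentHandler(LLMContentHandler):
    """Handler assuming a HuggingFace text-generation style payload (an assumption)."""

    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        # Generation parameters are nested under "parameters" in this assumed schema.
        body = {"inputs": prompt, "parameters": model_kwargs}
        return json.dumps(body).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json[0]["generated_text"]
```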
https://python.langchain.com/docs/integrations/llms/sagemaker
6aa02d5831fe-0
NLP Cloud | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/nlpcloud
6aa02d5831fe-1
NLP Cloud serves high-performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, paraphrasing, grammar and spelling correction, keywords and keyphrases extraction, chatbot, product description and ad generation, intent classification, text generation, image generation, blog post generation, code generation, question answering, automatic speech recognition, machine translation, language detection, semantic search, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.This example goes over how to use LangChain to interact with NLP Cloud models.pip install nlpcloud# get a token: https://docs.nlpcloud.com/#authenticationfrom getpass import getpassNLPCLOUD_API_KEY = getpass()
https://python.langchain.com/docs/integrations/llms/nlpcloud
6aa02d5831fe-2
getpass import getpassNLPCLOUD_API_KEY = getpass() ········import osos.environ["NLPCLOUD_API_KEY"] = NLPCLOUD_API_KEYfrom langchain.llms import NLPCloudfrom langchain import PromptTemplate, LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm = NLPCloud()llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question) ' Justin Bieber was born in 1994, so the team that won the Super Bowl that year was the San Francisco 49ers.'
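If your NLP Cloud plan includes other models, the LangChain wrapper can be pointed at one of them when it is constructed. The parameter name and the model id below are assumptions for illustration; check the LangChain API reference and the NLP Cloud docs for the exact values available to you.

```python
from langchain.llms import NLPCloud

# model_name and the model id are illustrative assumptions; verify them against
# your installed langchain version and your NLP Cloud subscription.
llm = NLPCloud(model_name="finetuned-gpt-neox-20b")
print(llm("Give me a one-sentence summary of what NLP Cloud does."))
```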
https://python.langchain.com/docs/integrations/llms/nlpcloud
0a72634c5820-0
OpenLLM | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/openllm
0a72634c5820-1
🦾 OpenLLM is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps.InstallationInstall openllm through PyPIpip install openllmLaunch OpenLLM server locallyTo start an LLM server, use the openllm start command. For example, to start a dolly-v2 server, run the following command from a terminal:openllm start dolly-v2Wrapperfrom langchain.llms import OpenLLMserver_url = "http://localhost:3000" # Replace with remote host if you are running on a remote serverllm =
https://python.langchain.com/docs/integrations/llms/openllm
0a72634c5820-2
# Replace with remote host if you are running on a remote serverllm = OpenLLM(server_url=server_url)Optional: Local LLM InferenceYou may also choose to initialize an LLM managed by OpenLLM locally from the current process. This is useful for development purposes and allows developers to quickly try out different types of LLMs.When moving LLM applications to production, we recommend deploying the OpenLLM server separately and accessing it via the server_url option demonstrated above.To load an LLM locally via the LangChain wrapper:from langchain.llms import OpenLLMllm = OpenLLM( model_name="dolly-v2", model_id="databricks/dolly-v2-3b", temperature=0.94, repetition_penalty=1.2,)Integrate with an LLMChainfrom langchain import PromptTemplate, LLMChaintemplate = "What is a good name for a company that makes {product}?"prompt = PromptTemplate(template=template, input_variables=["product"])llm_chain = LLMChain(prompt=prompt, llm=llm)generated = llm_chain.run(product="mechanical keyboard")print(generated) iLkb
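Because both construction styles shown above fill the same role in a chain, a common pattern is to pick one at startup: an in-process model during development and the standalone server in production. A minimal sketch; the OPENLLM_SERVER_URL variable name is just an illustrative convention, not something OpenLLM reads itself.

```python
import os

from langchain.llms import OpenLLM

# Use a remote OpenLLM server when a URL is configured, otherwise fall back to
# an in-process model for local development.
server_url = os.environ.get("OPENLLM_SERVER_URL")  # illustrative variable name

if server_url:
    llm = OpenLLM(server_url=server_url)
else:
    llm = OpenLLM(model_name="dolly-v2", model_id="databricks/dolly-v2-3b")

print(llm("What is the difference between a llama and an alpaca?"))
```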
https://python.langchain.com/docs/integrations/llms/openllm
726322b3977f-0
OpenLM | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/openlm
726322b3977f-1
OpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP. It implements the OpenAI Completion class so that it can be used as a drop-in replacement for the OpenAI API. This changeset utilizes BaseOpenAI for minimal added code.This example goes over how to use LangChain to interact with both OpenAI and HuggingFace. You'll need API keys from both.SetupInstall dependencies and set API keys.# Uncomment to install openlm and openai if you haven't already# !pip install openlm# !pip install openaifrom getpass import getpassimport osimport subprocess# Check if OPENAI_API_KEY environment variable is setif "OPENAI_API_KEY" not in os.environ: print("Enter your OpenAI API key:")
https://python.langchain.com/docs/integrations/llms/openlm
726322b3977f-2
not in os.environ: print("Enter your OpenAI API key:") os.environ["OPENAI_API_KEY"] = getpass()# Check if HF_API_TOKEN environment variable is setif "HF_API_TOKEN" not in os.environ: print("Enter your HuggingFace Hub API key:") os.environ["HF_API_TOKEN"] = getpass()Using LangChain with OpenLM​Here we're going to call two models in an LLMChain, text-davinci-003 from OpenAI and gpt2 on HuggingFace.from langchain.llms import OpenLMfrom langchain import PromptTemplate, LLMChainquestion = "What is the capital of France?"template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])for model in ["text-davinci-003", "huggingface.co/gpt2"]: llm = OpenLM(model=model) llm_chain = LLMChain(prompt=prompt, llm=llm) result = llm_chain.run(question) print( """Model: {}Result: {}""".format( model, result ) ) Model: text-davinci-003 Result: France is a country in Europe. The capital of France is Paris. Model: huggingface.co/gpt2 Result: Question: What is the capital of France? Answer: Let's think step by step. I am not going to lie, this is a complicated issue, and I don't see any solutions to all this, but it is still far
https://python.langchain.com/docs/integrations/llms/openlm
726322b3977f-3
a complicated issue, and I don't see any solutions to all this, but it is still far more
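Since OpenLM is described above as building on BaseOpenAI, the usual OpenAI completion parameters (temperature, max_tokens, and so on) should pass through to whichever endpoint you target. Treat the sketch below as an assumption to verify against your installed versions.

```python
from langchain.llms import OpenLM

# Standard completion parameters are assumed to be forwarded via BaseOpenAI.
llm = OpenLM(model="text-davinci-003", temperature=0.2, max_tokens=64)
print(llm("Name three French cities."))
```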
https://python.langchain.com/docs/integrations/llms/openlm
0e9aba145aa6-0
Caching integrations | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/llm_caching
0e9aba145aa6-1
This notebook covers how to cache results of individual LLM calls.import langchainfrom langchain.llms import OpenAI# To make the caching really obvious, let's use a slower model.llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)In Memory Cachefrom langchain.cache import InMemoryCachelangchain.llm_cache = InMemoryCache()# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 35.9 ms, sys: 28.6 ms, total: 64.6 ms Wall time: 4.83 s "\n\nWhy couldn't the bicycle stand up by itself?
https://python.langchain.com/docs/integrations/llms/llm_caching
0e9aba145aa6-2
s "\n\nWhy couldn't the bicycle stand up by itself? It was...two tired!"# The second time it is, so it goes fasterllm("Tell me a joke") CPU times: user 238 µs, sys: 143 µs, total: 381 µs Wall time: 1.76 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'SQLite Cache​rm .langchain.db# We can do the same thing with a SQLite cachefrom langchain.cache import SQLiteCachelangchain.llm_cache = SQLiteCache(database_path=".langchain.db")# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 17 ms, sys: 9.76 ms, total: 26.7 ms Wall time: 825 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'# The second time it is, so it goes fasterllm("Tell me a joke") CPU times: user 2.46 ms, sys: 1.23 ms, total: 3.7 ms Wall time: 2.67 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'Redis Cache​Standard Cache​Use Redis to cache prompts and responses.# We can do the same thing with a Redis cache# (make sure your local Redis instance is running first before running this example)from redis import Redisfrom langchain.cache import RedisCachelangchain.llm_cache = RedisCache(redis_=Redis())# The first time, it is not yet in cache, so it should take
https://python.langchain.com/docs/integrations/llms/llm_caching
0e9aba145aa6-3
The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 6.88 ms, sys: 8.75 ms, total: 15.6 ms Wall time: 1.04 s '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'# The second time it is, so it goes fasterllm("Tell me a joke") CPU times: user 1.59 ms, sys: 610 µs, total: 2.2 ms Wall time: 5.58 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'Semantic Cache​Use Redis to cache prompts and responses and evaluate hits based on semantic similarity.from langchain.embeddings import OpenAIEmbeddingsfrom langchain.cache import RedisSemanticCachelangchain.llm_cache = RedisSemanticCache( redis_url="redis://localhost:6379", embedding=OpenAIEmbeddings())# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 351 ms, sys: 156 ms, total: 507 ms Wall time: 3.37 s "\n\nWhy don't scientists trust atoms?\nBecause they make up everything."# The second time, while not a direct hit, the question is semantically similar to the original question,# so it uses the cached result!llm("Tell me one joke") CPU times: user 6.25 ms, sys: 2.72 ms, total: 8.97 ms Wall time: 262 ms "\n\nWhy don't scientists trust
https://python.langchain.com/docs/integrations/llms/llm_caching
0e9aba145aa6-4
Wall time: 262 ms "\n\nWhy don't scientists trust atoms?\nBecause they make up everything."GPTCache​We can use GPTCache for exact match caching OR to cache results based on semantic similarityLet's first start with an example of exact matchfrom gptcache import Cachefrom gptcache.manager.factory import manager_factoryfrom gptcache.processor.pre import get_promptfrom langchain.cache import GPTCacheimport hashlibdef get_hashed_name(name): return hashlib.sha256(name.encode()).hexdigest()def init_gptcache(cache_obj: Cache, llm: str): hashed_llm = get_hashed_name(llm) cache_obj.init( pre_embedding_func=get_prompt, data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"), )langchain.llm_cache = GPTCache(init_gptcache)# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 21.5 ms, sys: 21.3 ms, total: 42.8 ms Wall time: 6.2 s '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'# The second time it is, so it goes fasterllm("Tell me a joke") CPU times: user 571 µs, sys: 43 µs, total: 614 µs Wall time: 635 µs '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'Let's now show an example of similarity cachingfrom gptcache import Cachefrom
https://python.langchain.com/docs/integrations/llms/llm_caching
0e9aba145aa6-5
side!'Let's now show an example of similarity cachingfrom gptcache import Cachefrom gptcache.adapter.api import init_similar_cachefrom langchain.cache import GPTCacheimport hashlibdef get_hashed_name(name): return hashlib.sha256(name.encode()).hexdigest()def init_gptcache(cache_obj: Cache, llm: str): hashed_llm = get_hashed_name(llm) init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{hashed_llm}")langchain.llm_cache = GPTCache(init_gptcache)# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 1.42 s, sys: 279 ms, total: 1.7 s Wall time: 8.44 s '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'# This is an exact match, so it finds it in the cachellm("Tell me a joke") CPU times: user 866 ms, sys: 20 ms, total: 886 ms Wall time: 226 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'# This is not an exact match, but semantically within distance so it hits!llm("Tell me joke") CPU times: user 853 ms, sys: 14.8 ms, total: 868 ms Wall time: 224 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'Momento Cache​Use Momento to cache prompts and responses.Requires momento to use, uncomment below to install:# !pip install momentoYou'll need
https://python.langchain.com/docs/integrations/llms/llm_caching
0e9aba145aa6-6
prompts and responses.Requires momento to use, uncomment below to install:# !pip install momentoYou'll need to get a Momento auth token to use this class. This can either be passed in to a momento.CacheClient if you'd like to instantiate that directly, as a named parameter auth_token to MomentoCache.from_client_params, or can just be set as an environment variable MOMENTO_AUTH_TOKEN.from datetime import timedeltafrom langchain.cache import MomentoCachecache_name = "langchain"ttl = timedelta(days=1)langchain.llm_cache = MomentoCache.from_client_params(cache_name, ttl)# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 40.7 ms, sys: 16.5 ms, total: 57.2 ms Wall time: 1.73 s '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'# The second time it is, so it goes faster# When run in the same region as the cache, latencies are single digit msllm("Tell me a joke") CPU times: user 3.16 ms, sys: 2.98 ms, total: 6.14 ms Wall time: 57.9 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'SQLAlchemy Cache# You can use SQLAlchemyCache to cache with any SQL database supported by SQLAlchemy.# from langchain.cache import SQLAlchemyCache# from sqlalchemy import create_engine# engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")# langchain.llm_cache = SQLAlchemyCache(engine)Custom SQLAlchemy Schemas# You can define your own declarative SQLAlchemyCache child class to customize the
https://python.langchain.com/docs/integrations/llms/llm_caching
0e9aba145aa6-7
You can define your own declarative SQLAlchemyCache child class to customize the schema used for caching. For example, to support high-speed fulltext prompt indexing with Postgres, use:from sqlalchemy import Column, Integer, String, Computed, Index, Sequencefrom sqlalchemy import create_enginefrom sqlalchemy.ext.declarative import declarative_basefrom sqlalchemy_utils import TSVectorTypefrom langchain.cache import SQLAlchemyCacheBase = declarative_base()class FulltextLLMCache(Base): # type: ignore """Postgres table for fulltext-indexed LLM Cache""" __tablename__ = "llm_cache_fulltext" id = Column(Integer, Sequence("cache_id"), primary_key=True) prompt = Column(String, nullable=False) llm = Column(String, nullable=False) idx = Column(Integer) response = Column(String) prompt_tsv = Column( TSVectorType(), Computed("to_tsvector('english', llm || ' ' || prompt)", persisted=True), ) __table_args__ = ( Index("idx_fulltext_prompt_tsv", prompt_tsv, postgresql_using="gin"), )engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")langchain.llm_cache = SQLAlchemyCache(engine, FulltextLLMCache)Optional Caching​You can also turn off caching for specific LLMs should you choose. In the example below, even though global caching is enabled, we turn it off for a specific LLMllm = OpenAI(model_name="text-davinci-002", n=2, best_of=2, cache=False)llm("Tell me a joke") CPU times: user 5.8 ms,
https://python.langchain.com/docs/integrations/llms/llm_caching
0e9aba145aa6-8
me a joke") CPU times: user 5.8 ms, sys: 2.71 ms, total: 8.51 ms Wall time: 745 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'llm("Tell me a joke") CPU times: user 4.91 ms, sys: 2.64 ms, total: 7.55 ms Wall time: 623 ms '\n\nTwo guys stole a calendar. They got six months each.'Optional Caching in Chains​You can also turn off caching for particular nodes in chains. Note that because of certain interfaces, its often easier to construct the chain first, and then edit the LLM afterwards.As an example, we will load a summarizer map-reduce chain. We will cache results for the map-step, but then not freeze it for the combine step.llm = OpenAI(model_name="text-davinci-002")no_cache_llm = OpenAI(model_name="text-davinci-002", cache=False)from langchain.text_splitter import CharacterTextSplitterfrom langchain.chains.mapreduce import MapReduceChaintext_splitter = CharacterTextSplitter()with open("../../../state_of_the_union.txt") as f: state_of_the_union = f.read()texts = text_splitter.split_text(state_of_the_union)from langchain.docstore.document import Documentdocs = [Document(page_content=t) for t in texts[:3]]from langchain.chains.summarize import load_summarize_chainchain = load_summarize_chain(llm, chain_type="map_reduce", reduce_llm=no_cache_llm)chain.run(docs) CPU times: user 452 ms, sys: 60.3 ms, total: 512 ms Wall
https://python.langchain.com/docs/integrations/llms/llm_caching
0e9aba145aa6-9
ms, sys: 60.3 ms, total: 512 ms Wall time: 5.09 s '\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure. In response to Russian aggression in Ukraine, the United States is joining with European allies to impose sanctions and isolate Russia. American forces are being mobilized to protect NATO countries in the event that Putin decides to keep moving west. The Ukrainians are bravely fighting back, but the next few weeks will be hard for them. Putin will pay a high price for his actions in the long run. Americans should not be alarmed, as the United States is taking action to protect its interests and allies.'When we run it again, we see that it runs substantially faster but the final answer is different. This is due to caching at the map steps, but not at the reduce step.chain.run(docs) CPU times: user 11.5 ms, sys: 4.33 ms, total: 15.8 ms Wall time: 1.04 s '\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure.'rm .langchain.db sqlite.db
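As a runnable variant of the commented-out SQLAlchemy example above, the sketch below caches to a local SQLite file instead of Postgres, so it needs no database server; the file name is illustrative.

```python
import langchain
from sqlalchemy import create_engine
from langchain.cache import SQLAlchemyCache

# Any SQLAlchemy-supported database works; SQLite keeps the example self-contained.
engine = create_engine("sqlite:///./langchain_llm_cache.db")
langchain.llm_cache = SQLAlchemyCache(engine)
```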
https://python.langchain.com/docs/integrations/llms/llm_caching
76bd44cd747f-0
Modal | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/modal
76bd44cd747f-1
The Modal cloud platform provides convenient, on-demand access to serverless cloud compute from Python scripts on your local computer.
https://python.langchain.com/docs/integrations/llms/modal
76bd44cd747f-2
Use modal to run your own custom LLM models instead of depending on LLM APIs.This example goes over how to use LangChain to interact with a modal HTTPS web endpoint.Question-answering with LangChain is another example of how to use LangChain alongside Modal. In that example, Modal runs the LangChain application end-to-end and uses OpenAI as its LLM API.pip install modal# Register an account with Modal and get a new token.modal token new Launching login page in your browser window... If this is not showing up, please copy this URL into your web browser manually: https://modal.com/token-flow/tf-Dzm3Y01234mqmm1234Vcu3The langchain.llms.modal.Modal integration class requires that you deploy a Modal application with a web endpoint that complies with the following JSON interface:The LLM prompt is accepted as a str value under the key "prompt"The LLM response is returned as a str value under the key "prompt"Example request JSON:{ "prompt": "Identify yourself, bot!", "extra": "args are allowed",}Example response JSON:{ "prompt": "This is the LLM speaking",}An example 'dummy' Modal web endpoint function fulfilling this interface would be......class Request(BaseModel): prompt: str@stub.function()@modal.web_endpoint(method="POST")def web(request: Request): _ = request # ignore input return {"prompt": "hello world"}See Modal's web endpoints guide for the basics of setting up an endpoint that fulfils this interface.See Modal's 'Run Falcon-40B with AutoGPTQ' open-source LLM example as a starting point for your custom LLM!Once you have a deployed Modal web endpoint, you can pass its URL into the langchain.llms.modal.Modal
https://python.langchain.com/docs/integrations/llms/modal
76bd44cd747f-3
a deployed Modal web endpoint, you can pass its URL into the langchain.llms.modal.Modal LLM class. This class can then function as a building block in your chain.from langchain.llms import Modalfrom langchain import PromptTemplate, LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])endpoint_url = "https://ecorp--custom-llm-endpoint.modal.run" # REPLACE ME with your deployed Modal web endpoint's URLllm = Modal(endpoint_url=endpoint_url)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question)
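Before wiring a deployed endpoint into the Modal LLM class, it can help to confirm that it really follows the JSON interface described above with a plain HTTP call. The URL below is a placeholder for your own deployment.

```python
import requests

endpoint_url = "https://ecorp--custom-llm-endpoint.modal.run"  # placeholder

# The endpoint accepts and returns its text under the "prompt" key.
resp = requests.post(endpoint_url, json={"prompt": "Identify yourself, bot!"})
resp.raise_for_status()
print(resp.json()["prompt"])
```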
https://python.langchain.com/docs/integrations/llms/modal
a1fab997bb4a-0
Prediction Guard | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/predictionguard
a1fab997bb4a-1
pip install predictionguard langchainimport osimport predictionguard as pgfrom langchain.llms import PredictionGuardfrom langchain import PromptTemplate, LLMChainBasic LLM usage# Optional: add your OpenAI API key. Prediction Guard also allows# you to access all the latest open access models (see https://docs.predictionguard.com)os.environ["OPENAI_API_KEY"] = "<your OpenAI api key>"# Your Prediction Guard API key. Get one at predictionguard.comos.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"pgllm = PredictionGuard(model="OpenAI-text-davinci-003")pgllm("Tell me a joke")Control the output structure/type of LLMstemplate = """Respond to
https://python.langchain.com/docs/integrations/llms/predictionguard
a1fab997bb4a-2
the output structure/type of LLMstemplate = """Respond to the following query based on the context.Context: EVERY comment, DM + email suggestion has led us to this EXCITING announcement! We have officially added TWO new candle subscription box options! 📦Exclusive Candle Box - $80 Monthly Candle Box - $45 (NEW!)Scent of The Month Box - $28 (NEW!)Head to stories to get ALLL the deets on each box! 👆 BONUS: Save 50% on your first box with code 50OFF!Query: {query}Result: """prompt = PromptTemplate(template=template, input_variables=["query"])# Without "guarding" or controlling the output of the LLM.pgllm(prompt.format(query="What kind of post is this?"))# With "guarding" or controlling the output of the LLM. See the# Prediction Guard docs (https://docs.predictionguard.com) to learn how to# control the output with integer, float, boolean, JSON, and other types and# structures.pgllm = PredictionGuard( model="OpenAI-text-davinci-003", output={ "type": "categorical", "categories": ["product announcement", "apology", "relational"], },)pgllm(prompt.format(query="What kind of post is this?"))Chainingpgllm = PredictionGuard(model="OpenAI-text-davinci-003")template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)question =
https://python.langchain.com/docs/integrations/llms/predictionguard
a1fab997bb4a-3
= LLMChain(prompt=prompt, llm=pgllm, verbose=True)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.predict(question=question)template = """Write a {adjective} poem about {subject}."""prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)llm_chain.predict(adjective="sad", subject="ducks")
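The controlled-output and chaining features above also compose: a PredictionGuard LLM constructed with an output specification can be used inside an LLMChain like any other LLM. A minimal sketch reusing only the constructor arguments shown on this page (the categories are illustrative):

```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import PredictionGuard

# Constrain the model to one of a few categories, then use it in a chain.
pgllm = PredictionGuard(
    model="OpenAI-text-davinci-003",
    output={"type": "categorical", "categories": ["positive", "negative", "neutral"]},
)

template = """Classify the sentiment of this review.

Review: {review}
Sentiment:"""
prompt = PromptTemplate(template=template, input_variables=["review"])
chain = LLMChain(prompt=prompt, llm=pgllm)

print(chain.predict(review="The candles smell amazing and shipping was fast!"))
```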
https://python.langchain.com/docs/integrations/llms/predictionguard
d955ed71cf75-0
Petals | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/llms/petals_example