9ed49f9f780f-0
.ipynb .pdf NGram Overlap ExampleSelector NGram Overlap ExampleSelector# The NGramOverlapExampleSelector selects and orders examples based on which examples are most similar to the input, according to an ngram overlap score. The ngram overlap score is a float between 0.0 and 1.0, inclusive. The selector allows for a threshold score to be set. Examples with an ngram overlap score less than or equal to the threshold are excluded. The threshold is set to -1.0, by default, so will not exclude any examples, only reorder them. Setting the threshold to 0.0 will exclude examples that have no ngram overlaps with the input. from langchain.prompts import PromptTemplate from langchain.prompts.example_selector.ngram_overlap import NGramOverlapExampleSelector from langchain.prompts import FewShotPromptTemplate, PromptTemplate example_prompt = PromptTemplate( input_variables=["input", "output"], template="Input: {input}\nOutput: {output}", ) # These are a lot of examples of a pretend task of creating antonyms. examples = [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"}, {"input": "energetic", "output": "lethargic"}, {"input": "sunny", "output": "gloomy"}, {"input": "windy", "output": "calm"}, ] # These are examples of a fictional translation task. examples = [ {"input": "See Spot run.", "output": "Ver correr a Spot."}, {"input": "My dog barks.", "output": "Mi perro ladra."}, {"input": "Spot can run.", "output": "Spot puede correr."}, ] example_prompt = PromptTemplate( input_variables=["input", "output"], template="Input: {input}\nOutput: {output}", ) example_selector = NGramOverlapExampleSelector( # These are the examples it has available to choose from. examples=examples, # This is the PromptTemplate being used to format the examples. example_prompt=example_prompt, # This is the threshold, at which selector stops. # It is set to -1.0 by default. threshold=-1.0, # For negative threshold: # Selector sorts examples by ngram overlap score, and excludes none. # For threshold greater than 1.0: # Selector excludes all examples, and returns an empty list. # For threshold equal to 0.0: # Selector sorts examples by ngram overlap score, # and excludes those with no ngram overlap with input. ) dynamic_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, prefix="Give the Spanish translation of every input", suffix="Input: {sentence}\nOutput:", input_variables=["sentence"], ) # An example input with large ngram overlap with "Spot can run." # and no overlap with "My dog barks." print(dynamic_prompt.format(sentence="Spot can run fast.")) Give the Spanish translation of every input Input: Spot can run. Output: Spot puede correr. Input: See Spot run. Output: Ver correr a Spot. Input: My dog barks. Output: Mi perro ladra. Input: Spot can run fast. Output: # You can add examples to NGramOverlapExampleSelector as well. new_example = {"input": "Spot plays fetch.", "output": "Spot juega a buscar."} example_selector.add_example(new_example) print(dynamic_prompt.format(sentence="Spot can run fast.")) Give the Spanish translation of every input Input: Spot can run. Output: Spot puede correr. Input: See Spot run. Output: Ver correr a Spot. Input: Spot plays fetch. Output: Spot juega a buscar. Input: My dog barks. Output: Mi perro ladra. Input: Spot can run fast. Output: # You can set a threshold at which examples are excluded. 
# For example, setting threshold equal to 0.0 # excludes examples with no ngram overlaps with input. # Since "My dog barks." has no ngram overlaps with "Spot can run fast."
https://langchain.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html
9ed49f9f780f-1
# it is excluded. example_selector.threshold=0.0 print(dynamic_prompt.format(sentence="Spot can run fast.")) Give the Spanish translation of every input Input: Spot can run. Output: Spot puede correr. Input: See Spot run. Output: Ver correr a Spot. Input: Spot plays fetch. Output: Spot juega a buscar. Input: Spot can run fast. Output: # Setting small nonzero threshold example_selector.threshold=0.09 print(dynamic_prompt.format(sentence="Spot can play fetch.")) Give the Spanish translation of every input Input: Spot can run. Output: Spot puede correr. Input: Spot plays fetch. Output: Spot juega a buscar. Input: Spot can play fetch. Output: # Setting threshold greater than 1.0 example_selector.threshold=1.0+1e-9 print(dynamic_prompt.format(sentence="Spot can play fetch.")) Give the Spanish translation of every input Input: Spot can play fetch. Output:
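The overlap score the selector relies on can be approximated outside the class for intuition. Below is a minimal, self-contained sketch using NLTK's sentence_bleu; it is an illustration of the idea, not necessarily the library's exact internal scoring, and it assumes nltk is installed.

from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

def ngram_overlap_score(example_input: str, user_input: str) -> float:
    """Illustrative 0.0-1.0 ngram overlap between one example and the user input."""
    def tokenize(s: str) -> list:
        # Drop the sentence-final period so "run." and "run" count as a match.
        return s.rstrip(".").split()

    # method1 smoothing keeps the score finite when higher-order ngrams have no matches.
    return sentence_bleu(
        [tokenize(example_input)],
        tokenize(user_input),
        smoothing_function=SmoothingFunction().method1,
    )

print(ngram_overlap_score("Spot can run.", "Spot can run fast."))  # relatively high
print(ngram_overlap_score("My dog barks.", "Spot can run fast."))  # close to 0.0

Scores like these are what the threshold is compared against when deciding which examples to keep.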
https://langchain.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html
49045e7c8e58-0
.ipynb .pdf LengthBased ExampleSelector LengthBased ExampleSelector# This ExampleSelector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more. from langchain.prompts import PromptTemplate from langchain.prompts import FewShotPromptTemplate from langchain.prompts.example_selector import LengthBasedExampleSelector # These are a lot of examples of a pretend task of creating antonyms. examples = [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"}, {"input": "energetic", "output": "lethargic"}, {"input": "sunny", "output": "gloomy"}, {"input": "windy", "output": "calm"}, ] example_prompt = PromptTemplate( input_variables=["input", "output"], template="Input: {input}\nOutput: {output}", ) example_selector = LengthBasedExampleSelector( # These are the examples it has available to choose from. examples=examples, # This is the PromptTemplate being used to format the examples. example_prompt=example_prompt, # This is the maximum length that the formatted examples should be. # Length is measured by the get_text_length function below. max_length=25, # This is the function used to get the length of a string, which is used # to determine which examples to include. It is commented out because # it is provided as a default value if none is specified. # get_text_length: Callable[[str], int] = lambda x: len(re.split("\n| ", x)) ) dynamic_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, prefix="Give the antonym of every input", suffix="Input: {adjective}\nOutput:", input_variables=["adjective"], ) # An example with small input, so it selects all examples. print(dynamic_prompt.format(adjective="big")) Give the antonym of every input Input: happy Output: sad Input: tall Output: short Input: energetic Output: lethargic Input: sunny Output: gloomy Input: windy Output: calm Input: big Output: # An example with long input, so it selects only one example. long_string = "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else" print(dynamic_prompt.format(adjective=long_string)) Give the antonym of every input Input: happy Output: sad Input: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else Output: # You can add an example to an example selector as well. new_example = {"input": "big", "output": "small"} dynamic_prompt.example_selector.add_example(new_example) print(dynamic_prompt.format(adjective="enthusiastic")) Give the antonym of every input Input: happy Output: sad Input: tall Output: short Input: energetic Output: lethargic Input: sunny Output: gloomy Input: windy Output: calm Input: big Output: small Input: enthusiastic Output: previous How to create a custom example selector next Maximal Marginal Relevance ExampleSelector By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 08, 2023.
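Because get_text_length is an ordinary constructor argument (the commented-out default above), the word-based budget can be swapped for any other measure. A small sketch under that assumption, reusing the examples and example_prompt defined above and measuring length in characters instead of words:

char_selector = LengthBasedExampleSelector(
    examples=examples,
    example_prompt=example_prompt,
    max_length=200,  # now a character budget, because of get_text_length below
    get_text_length=lambda text: len(text),
)
# Fewer examples fit into a 200-character budget than into the default word budget.
print(len(char_selector.select_examples({"adjective": "big"})))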
https://langchain.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/length_based.html
83d1e17d693a-0
.ipynb .pdf Similarity ExampleSelector Similarity ExampleSelector# The SemanticSimilarityExampleSelector selects examples based on which examples are most similar to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs. from langchain.prompts.example_selector import SemanticSimilarityExampleSelector from langchain.vectorstores import Chroma from langchain.embeddings import OpenAIEmbeddings from langchain.prompts import FewShotPromptTemplate, PromptTemplate example_prompt = PromptTemplate( input_variables=["input", "output"], template="Input: {input}\nOutput: {output}", ) # These are a lot of examples of a pretend task of creating antonyms. examples = [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"}, {"input": "energetic", "output": "lethargic"}, {"input": "sunny", "output": "gloomy"}, {"input": "windy", "output": "calm"}, ] example_selector = SemanticSimilarityExampleSelector.from_examples( # This is the list of examples available to select from. examples, # This is the embedding class used to produce embeddings which are used to measure semantic similarity. OpenAIEmbeddings(), # This is the VectorStore class that is used to store the embeddings and do a similarity search over. Chroma, # This is the number of examples to produce. k=1 ) similar_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, prefix="Give the antonym of every input", suffix="Input: {adjective}\nOutput:", input_variables=["adjective"], ) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. # Input is a feeling, so should select the happy/sad example print(similar_prompt.format(adjective="worried")) Give the antonym of every input Input: happy Output: sad Input: worried Output: # Input is a measurement, so should select the tall/short example print(similar_prompt.format(adjective="fat")) Give the antonym of every input Input: happy Output: sad Input: fat Output: # You can add new examples to the SemanticSimilarityExampleSelector as well similar_prompt.example_selector.add_example({"input": "enthusiastic", "output": "apathetic"}) print(similar_prompt.format(adjective="joyful")) Give the antonym of every input Input: happy Output: sad Input: joyful Output: previous NGram Overlap ExampleSelector next Output Parsers By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 08, 2023.
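The selector can also be queried on its own, which is handy for checking which example will be injected before a full prompt is formatted. A brief sketch, reusing the example_selector built above:

# Inspect the raw selection without building the prompt.
selected = example_selector.select_examples({"adjective": "worried"})
print(selected)  # expected to contain the happy/sad example, matching the formatted prompt above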
https://langchain.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/similarity.html
589c461060dd-0
.ipynb .pdf Maximal Marginal Relevance ExampleSelector Maximal Marginal Relevance ExampleSelector# The MaxMarginalRelevanceExampleSelector selects examples based on a combination of which examples are most similar to the inputs, while also optimizing for diversity. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs, and then iteratively adding them while penalizing them for closeness to already selected examples. from langchain.prompts.example_selector import MaxMarginalRelevanceExampleSelector, SemanticSimilarityExampleSelector from langchain.vectorstores import FAISS from langchain.embeddings import OpenAIEmbeddings from langchain.prompts import FewShotPromptTemplate, PromptTemplate example_prompt = PromptTemplate( input_variables=["input", "output"], template="Input: {input}\nOutput: {output}", ) # These are a lot of examples of a pretend task of creating antonyms. examples = [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"}, {"input": "energetic", "output": "lethargic"}, {"input": "sunny", "output": "gloomy"}, {"input": "windy", "output": "calm"}, ] example_selector = MaxMarginalRelevanceExampleSelector.from_examples( # This is the list of examples available to select from. examples, # This is the embedding class used to produce embeddings which are used to measure semantic similarity. OpenAIEmbeddings(), # This is the VectorStore class that is used to store the embeddings and do a similarity search over. FAISS, # This is the number of examples to produce. k=2 ) mmr_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, prefix="Give the antonym of every input", suffix="Input: {adjective}\nOutput:", input_variables=["adjective"], ) # Input is a feeling, so should select the happy/sad example as the first one print(mmr_prompt.format(adjective="worried")) Give the antonym of every input Input: happy Output: sad Input: windy Output: calm Input: worried Output: # Let's compare this to what we would just get if we went solely off of similarity, # by using SemanticSimilarityExampleSelector instead of MaxMarginalRelevanceExampleSelector. example_selector = SemanticSimilarityExampleSelector.from_examples( # This is the list of examples available to select from. examples, # This is the embedding class used to produce embeddings which are used to measure semantic similarity. OpenAIEmbeddings(), # This is the VectorStore class that is used to store the embeddings and do a similarity search over. FAISS, # This is the number of examples to produce. k=2 ) similar_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, prefix="Give the antonym of every input", suffix="Input: {adjective}\nOutput:", input_variables=["adjective"], ) print(similar_prompt.format(adjective="worried")) Give the antonym of every input Input: happy Output: sad Input: sunny Output: gloomy Input: worried Output: previous LengthBased ExampleSelector next NGram Overlap ExampleSelector By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 08, 2023.
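The trade-off MMR makes can be written down compactly: a candidate's score is its similarity to the query minus a penalty for its similarity to whatever has already been selected. The following is a generic, self-contained sketch of that greedy loop over unit-normalized embedding vectors — an illustration of the algorithm, not the library's internal code:

import numpy as np

def mmr_select(query_vec, candidate_vecs, k=2, lambda_mult=0.5):
    """Greedy maximal-marginal-relevance selection; returns indices of chosen candidates."""
    sim_to_query = candidate_vecs @ query_vec
    selected = []
    while len(selected) < min(k, len(candidate_vecs)):
        best, best_score = None, -np.inf
        for i in range(len(candidate_vecs)):
            if i in selected:
                continue
            # Penalize closeness to anything already picked.
            redundancy = max((candidate_vecs[i] @ candidate_vecs[j] for j in selected), default=0.0)
            score = lambda_mult * sim_to_query[i] - (1 - lambda_mult) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

# Toy usage: the second pick skips the near-duplicate of the first and takes the more distinct vector.
cands = np.array([[0.9, 0.436, 0.0], [0.89, 0.456, 0.0], [0.8, 0.0, 0.6]])
cands = cands / np.linalg.norm(cands, axis=1, keepdims=True)
print(mmr_select(np.array([1.0, 0.0, 0.0]), cands, k=2))  # -> [0, 2]

With lambda_mult closer to 1.0 the selection behaves like plain similarity search; closer to 0.0 it favors diversity.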
https://langchain.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/mmr.html
fe5cfec8036e-0
.md .pdf Getting Started Contents What is a prompt template? Create a prompt template Template formats Validate template Serialize prompt template Pass few shot examples to a prompt template Select examples for a prompt template Getting Started# In this tutorial, we will learn about: what a prompt template is, and why it is needed, how to create a prompt template, how to pass few shot examples to a prompt template, how to select examples for a prompt template. What is a prompt template?# A prompt template refers to a reproducible way to generate a prompt. It contains a text string (“the template”), that can take in a set of parameters from the end user and generate a prompt. The prompt template may contain: instructions to the language model, a set of few shot examples to help the language model generate a better response, a question to the language model. The following code snippet contains an example of a prompt template: from langchain import PromptTemplate template = """ I want you to act as a naming consultant for new companies. What is a good name for a company that makes {product}? """ prompt = PromptTemplate( input_variables=["product"], template=template, ) prompt.format(product="colorful socks") # -> I want you to act as a naming consultant for new companies. # -> What is a good name for a company that makes colorful socks? Create a prompt template# You can create simple hardcoded prompts using the PromptTemplate class. Prompt templates can take any number of input variables, and can be formatted to generate a prompt. from langchain import PromptTemplate # An example prompt with no input variables no_input_prompt = PromptTemplate(input_variables=[], template="Tell me a joke.") no_input_prompt.format() # -> "Tell me a joke." # An example prompt with one input variable one_input_prompt = PromptTemplate(input_variables=["adjective"], template="Tell me a {adjective} joke.") one_input_prompt.format(adjective="funny") # -> "Tell me a funny joke." # An example prompt with multiple input variables multiple_input_prompt = PromptTemplate( input_variables=["adjective", "content"], template="Tell me a {adjective} joke about {content}." ) multiple_input_prompt.format(adjective="funny", content="chickens") # -> "Tell me a funny joke about chickens." If you do not wish to specify input_variables manually, you can also create a PromptTemplate using from_template class method. langchain will automatically infer the input_variables based on the template passed. template = "Tell me a {adjective} joke about {content}." prompt_template = PromptTemplate.from_template(template) prompt_template.input_variables # -> ['adjective', 'content'] prompt_template.format(adjective="funny", content="chickens") # -> Tell me a funny joke about chickens. You can create custom prompt templates that format the prompt in any way you want. For more information, see Custom Prompt Templates. Template formats# By default, PromptTemplate will treat the provided template as a Python f-string. You can specify other template format through template_format argument: # Make sure jinja2 is installed before running this jinja2_template = "Tell me a {{ adjective }} joke about {{ content }}" prompt_template = PromptTemplate.from_template(template=jinja2_template, template_format="jinja2") prompt_template.format(adjective="funny", content="chickens") # -> Tell me a funny joke about chickens. Currently, PromptTemplate only supports jinja2 and f-string templating format. 
If there is any other templating format that you would like to use, feel free to open an issue on the GitHub page. Validate template# By default, PromptTemplate will validate the template string by checking whether the input_variables match the variables defined in template. You can disable this behavior by setting validate_template to False. template = "I am learning langchain because {reason}." prompt_template = PromptTemplate(template=template, input_variables=["reason", "foo"]) # ValueError due to extra variables prompt_template = PromptTemplate(template=template, input_variables=["reason", "foo"], validate_template=False) # No error Serialize prompt template# You can save your PromptTemplate into a file in your local filesystem. langchain will automatically infer the file format through the file extension name. Currently, langchain supports saving templates to YAML and JSON files. prompt_template.save("awesome_prompt.json") # Save to JSON file
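Because the format is inferred from the file extension, the same template can be written out as YAML simply by changing the extension — a one-line sketch, reusing the prompt_template defined above:

prompt_template.save("awesome_prompt.yaml") # Save to YAML file instead

Either file can later be reloaded with load_prompt, as shown next.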
https://langchain.readthedocs.io/en/latest/modules/prompts/prompt_templates/getting_started.html
fe5cfec8036e-1
from langchain.prompts import load_prompt loaded_prompt = load_prompt("awesome_prompt.json") assert prompt_template == loaded_prompt langchain also supports loading prompt template from LangChainHub, which contains a collection of useful prompts you can use in your project. You can read more about LangChainHub and the prompts available with it here. from langchain.prompts import load_prompt prompt = load_prompt("lc://prompts/conversation/prompt.json") prompt.format(history="", input="What is 1 + 1?") You can learn more about serializing prompt template in How to serialize prompts. Pass few shot examples to a prompt template# Few shot examples are a set of examples that can be used to help the language model generate a better response. To generate a prompt with few shot examples, you can use the FewShotPromptTemplate. This class takes in a PromptTemplate and a list of few shot examples. It then formats the prompt template with the few shot examples. In this example, we’ll create a prompt to generate word antonyms. from langchain import PromptTemplate, FewShotPromptTemplate # First, create the list of few shot examples. examples = [ {"word": "happy", "antonym": "sad"}, {"word": "tall", "antonym": "short"}, ] # Next, we specify the template to format the examples we have provided. # We use the `PromptTemplate` class for this. example_formatter_template = """Word: {word} Antonym: {antonym} """ example_prompt = PromptTemplate( input_variables=["word", "antonym"], template=example_formatter_template, ) # Finally, we create the `FewShotPromptTemplate` object. few_shot_prompt = FewShotPromptTemplate( # These are the examples we want to insert into the prompt. examples=examples, # This is how we want to format the examples when we insert them into the prompt. example_prompt=example_prompt, # The prefix is some text that goes before the examples in the prompt. # Usually, this consists of intructions. prefix="Give the antonym of every input\n", # The suffix is some text that goes after the examples in the prompt. # Usually, this is where the user input will go suffix="Word: {input}\nAntonym: ", # The input variables are the variables that the overall prompt expects. input_variables=["input"], # The example_separator is the string we will use to join the prefix, examples, and suffix together with. example_separator="\n", ) # We can now generate a prompt using the `format` method. print(few_shot_prompt.format(input="big")) # -> Give the antonym of every input # -> # -> Word: happy # -> Antonym: sad # -> # -> Word: tall # -> Antonym: short # -> # -> Word: big # -> Antonym: Select examples for a prompt template# If you have a large number of examples, you can use the ExampleSelector to select a subset of examples that will be most informative for the Language Model. This will help you generate a prompt that is more likely to generate a good response. Below, we’ll use the LengthBasedExampleSelector, which selects examples based on the length of the input. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more. We’ll continue with the example from the previous section, but this time we’ll use the LengthBasedExampleSelector to select the examples. from langchain.prompts.example_selector import LengthBasedExampleSelector # These are a lot of examples of a pretend task of creating antonyms. 
examples = [ {"word": "happy", "antonym": "sad"}, {"word": "tall", "antonym": "short"}, {"word": "energetic", "antonym": "lethargic"}, {"word": "sunny", "antonym": "gloomy"}, {"word": "windy", "antonym": "calm"}, ] # We'll use the `LengthBasedExampleSelector` to select the examples. example_selector = LengthBasedExampleSelector( # These are the examples is has available to choose from. examples=examples, # This is the PromptTemplate being used to format the examples.
https://langchain.readthedocs.io/en/latest/modules/prompts/prompt_templates/getting_started.html
fe5cfec8036e-2
# This is the PromptTemplate being used to format the examples. example_prompt=example_prompt, # This is the maximum length that the formatted examples should be. # Length is measured by the get_text_length function below. max_length=25 # This is the function used to get the length of a string, which is used # to determine which examples to include. It is commented out because # it is provided as a default value if none is specified. # get_text_length: Callable[[str], int] = lambda x: len(re.split("\n| ", x)) ) # We can now use the `example_selector` to create a `FewShotPromptTemplate`. dynamic_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, prefix="Give the antonym of every input", suffix="Word: {input}\nAntonym:", input_variables=["input"], example_separator="\n\n", ) # We can now generate a prompt using the `format` method. print(dynamic_prompt.format(input="big")) # -> Give the antonym of every input # -> # -> Word: happy # -> Antonym: sad # -> # -> Word: tall # -> Antonym: short # -> # -> Word: energetic # -> Antonym: lethargic # -> # -> Word: sunny # -> Antonym: gloomy # -> # -> Word: windy # -> Antonym: calm # -> # -> Word: big # -> Antonym: In contrast, if we provide a very long input, the LengthBasedExampleSelector will select fewer examples to include in the prompt. long_string = "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else" print(dynamic_prompt.format(input=long_string)) # -> Give the antonym of every input # -> Word: happy # -> Antonym: sad # -> # -> Word: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else # -> Antonym: LangChain comes with a few example selectors that you can use. For more details on how to use them, see Example Selectors. You can create custom example selectors that select examples based on any criteria you want. For more details on how to do this, see Creating a custom example selector. previous Prompt Templates next How-To Guides Contents What is a prompt template? Create a prompt template Template formats Validate template Serialize prompt template Pass few shot examples to a prompt template Select examples for a prompt template By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 08, 2023.
https://langchain.readthedocs.io/en/latest/modules/prompts/prompt_templates/getting_started.html
0bec5795a7de-0
How-To Guides# If you’re new to the library, you may want to start with the Quickstart. The user guide here shows more advanced workflows and how to use the library in different ways. Connecting to a Feature Store How to create a custom prompt template How to create a prompt template that uses few shot examples How to work with partial Prompt Templates Prompt Composition How to serialize prompts
https://langchain.readthedocs.io/en/latest/modules/prompts/prompt_templates/how_to_guides.html
cb73239fa79a-0
How to create a custom prompt template# Let’s suppose we want the LLM to generate English language explanations of a function given its name. To achieve this task, we will create a custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function. Why are custom prompt templates needed?# LangChain provides a set of default prompt templates that can be used to generate prompts for a variety of tasks. However, there may be cases where the default prompt templates do not meet your needs. For example, you may want to create a prompt template with specific dynamic instructions for your language model. In such cases, you can create a custom prompt template. Take a look at the current set of default prompt templates here. Creating a Custom Prompt Template# There are essentially two distinct prompt templates available - string prompt templates and chat prompt templates. String prompt templates provide a simple prompt in string format, while chat prompt templates produce a more structured prompt to be used with a chat API. In this guide, we will create a custom prompt using a string prompt template. To create a custom string prompt template, there are two requirements: It has an input_variables attribute that exposes what input variables the prompt template expects. It exposes a format method that takes in keyword arguments corresponding to the expected input_variables and returns the formatted prompt. We will create a custom prompt template that takes in the function name as input and formats the prompt to provide the source code of the function. To achieve this, let’s first create a function that will return the source code of a function given its name. import inspect def get_source_code(function_name): # Get the source code of the function return inspect.getsource(function_name) Next, we’ll create a custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function. from langchain.prompts import StringPromptTemplate from pydantic import BaseModel, validator class FunctionExplainerPromptTemplate(StringPromptTemplate, BaseModel): """ A custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function. """ @validator("input_variables") def validate_input_variables(cls, v): """ Validate that the input variables are correct. """ if len(v) != 1 or "function_name" not in v: raise ValueError("function_name must be the only input_variable.") return v def format(self, **kwargs) -> str: # Get the source code of the function source_code = get_source_code(kwargs["function_name"]) # Generate the prompt to be sent to the language model prompt = f""" Given the function name and source code, generate an English language explanation of the function. Function Name: {kwargs["function_name"].__name__} Source Code: {source_code} Explanation: """ return prompt def _prompt_type(self): return "function-explainer" Use the custom prompt template# Now that we have created a custom prompt template, we can use it to generate prompts for our task.
fn_explainer = FunctionExplainerPromptTemplate(input_variables=["function_name"]) # Generate a prompt for the function "get_source_code" prompt = fn_explainer.format(function_name=get_source_code) print(prompt) Given the function name and source code, generate an English language explanation of the function. Function Name: get_source_code Source Code: def get_source_code(function_name): # Get the source code of the function return inspect.getsource(function_name) Explanation:
https://langchain.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/custom_prompt_template.html
cb19152b8063-0
Prompt Composition# This notebook goes over how to compose multiple prompts together. This can be useful when you want to reuse parts of prompts. This can be done with a PipelinePrompt. A PipelinePrompt consists of two main parts: final_prompt: This is the final prompt that is returned pipeline_prompts: This is a list of tuples, consisting of a string (name) and a Prompt Template. Each PromptTemplate will be formatted and then passed to future prompt templates as a variable with the same name as name. from langchain.prompts.pipeline import PipelinePromptTemplate from langchain.prompts.prompt import PromptTemplate full_template = """{introduction} {example} {start}""" full_prompt = PromptTemplate.from_template(full_template) introduction_template = """You are impersonating {person}.""" introduction_prompt = PromptTemplate.from_template(introduction_template) example_template = """Here's an example of an interaction: Q: {example_q} A: {example_a}""" example_prompt = PromptTemplate.from_template(example_template) start_template = """Now, do this for real! Q: {input} A:""" start_prompt = PromptTemplate.from_template(start_template) input_prompts = [ ("introduction", introduction_prompt), ("example", example_prompt), ("start", start_prompt) ] pipeline_prompt = PipelinePromptTemplate(final_prompt=full_prompt, pipeline_prompts=input_prompts) pipeline_prompt.input_variables ['example_a', 'person', 'example_q', 'input'] print(pipeline_prompt.format( person="Elon Musk", example_q="What's your favorite car?", example_a="Tesla", input="What's your favorite social media site?" )) You are impersonating Elon Musk. Here's an example of an interaction: Q: What's your favorite car? A: Tesla Now, do this for real! Q: What's your favorite social media site? A:
https://langchain.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/prompt_composition.html
4f42ee7b736a-0
.ipynb .pdf How to work with partial Prompt Templates Contents Partial With Strings Partial With Functions How to work with partial Prompt Templates# A prompt template is a class with a .format method which takes in a key-value map and returns a string (a prompt) to pass to the language model. Like other methods, it can make sense to “partial” a prompt template - eg pass in a subset of the required values, as to create a new prompt template which expects only the remaining subset of values. LangChain supports this in two ways: we allow for partially formatted prompts (1) with string values, (2) with functions that return string values. These two different ways support different use cases. In the documentation below we go over the motivations for both use cases as well as how to do it in LangChain. Partial With Strings# One common use case for wanting to partial a prompt template is if you get some of the variables before others. For example, suppose you have a prompt template that requires two variables, foo and baz. If you get the foo value early on in the chain, but the baz value later, it can be annoying to wait until you have both variables in the same place to pass them to the prompt template. Instead, you can partial the prompt template with the foo value, and then pass the partialed prompt template along and just use that. Below is an example of doing this: from langchain.prompts import PromptTemplate prompt = PromptTemplate(template="{foo}{bar}", input_variables=["foo", "bar"]) partial_prompt = prompt.partial(foo="foo"); print(partial_prompt.format(bar="baz")) foobaz You can also just initialize the prompt with the partialed variables. prompt = PromptTemplate(template="{foo}{bar}", input_variables=["bar"], partial_variables={"foo": "foo"}) print(prompt.format(bar="baz")) foobaz Partial With Functions# The other common use is to partial with a function. The use case for this is when you have a variable you know that you always want to fetch in a common way. A prime example of this is with date or time. Imagine you have a prompt which you always want to have the current date. You can’t hard code it in the prompt, and passing it along with the other input variables is a bit annoying. In this case, it’s very handy to be able to partial the prompt with a function that always returns the current date. from datetime import datetime def _get_datetime(): now = datetime.now() return now.strftime("%m/%d/%Y, %H:%M:%S") prompt = PromptTemplate( template="Tell me a {adjective} joke about the day {date}", input_variables=["adjective", "date"] ); partial_prompt = prompt.partial(date=_get_datetime) print(partial_prompt.format(adjective="funny")) Tell me a funny joke about the day 02/27/2023, 22:15:16 You can also just initialize the prompt with the partialed variables, which often makes more sense in this workflow. prompt = PromptTemplate( template="Tell me a {adjective} joke about the day {date}", input_variables=["adjective"], partial_variables={"date": _get_datetime} ); print(prompt.format(adjective="funny")) Tell me a funny joke about the day 02/27/2023, 22:15:16 previous How to create a prompt template that uses few shot examples next Prompt Composition Contents Partial With Strings Partial With Functions By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 08, 2023.
https://langchain.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/partial.html
2d3b70e088b8-0
Connecting to a Feature Store# Feature stores are a concept from traditional machine learning that make sure data fed into models is up-to-date and relevant. For more on this, see here. This concept is extremely relevant when considering putting LLM applications in production. In order to personalize LLM applications, you may want to combine LLMs with up-to-date information about particular users. Feature stores can be a great way to keep that data fresh, and LangChain provides an easy way to combine that data with LLMs. In this notebook we will show how to connect prompt templates to feature stores. The basic idea is to call a feature store from inside a prompt template to retrieve values that are then formatted into the prompt. Feast# To start, we will use the popular open source feature store framework Feast. This assumes you have already run the steps in the README around getting started. We will build off of that getting started example and create an LLMChain to write a note to a specific driver regarding their up-to-date statistics. Load Feast Store# Again, this should be set up according to the instructions in the Feast README. from feast import FeatureStore # You may need to update the path depending on where you stored it feast_repo_path = "../../../../../my_feature_repo/feature_repo/" store = FeatureStore(repo_path=feast_repo_path) Prompts# Here we will set up a custom FeastPromptTemplate. This prompt template will take in a driver id, look up their stats, and format those stats into a prompt. Note that the input to this prompt template is just driver_id, since that is the only user defined piece (all other variables are looked up inside the prompt template). from langchain.prompts import PromptTemplate, StringPromptTemplate template = """Given the driver's up to date stats, write them a note relaying those stats to them. If they have a conversation rate above .5, give them a compliment. Otherwise, make a silly joke about chickens at the end to make them feel better. Here are the driver's stats: Conversation rate: {conv_rate} Acceptance rate: {acc_rate} Average Daily Trips: {avg_daily_trips} Your response:""" prompt = PromptTemplate.from_template(template) class FeastPromptTemplate(StringPromptTemplate): def format(self, **kwargs) -> str: driver_id = kwargs.pop("driver_id") feature_vector = store.get_online_features( features=[ 'driver_hourly_stats:conv_rate', 'driver_hourly_stats:acc_rate', 'driver_hourly_stats:avg_daily_trips' ], entity_rows=[{"driver_id": driver_id}] ).to_dict() kwargs["conv_rate"] = feature_vector["conv_rate"][0] kwargs["acc_rate"] = feature_vector["acc_rate"][0] kwargs["avg_daily_trips"] = feature_vector["avg_daily_trips"][0] return prompt.format(**kwargs) prompt_template = FeastPromptTemplate(input_variables=["driver_id"]) print(prompt_template.format(driver_id=1001)) Given the driver's up to date stats, write them a note relaying those stats to them. If they have a conversation rate above .5, give them a compliment.
Otherwise, make a silly joke about chickens at the end to make them feel better. Here are the driver's stats: Conversation rate: 0.4745151400566101 Acceptance rate: 0.055561766028404236 Average Daily Trips: 936 Your response: Use in a chain# We can now use this in a chain, successfully creating a chain that achieves personalization backed by a feature store. from langchain.chat_models import ChatOpenAI from langchain.chains import LLMChain chain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)
https://langchain.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html
2d3b70e088b8-1
chain.run(1001) "Hi there! I wanted to update you on your current stats. Your acceptance rate is 0.055561766028404236 and your average daily trips are 936. While your conversation rate is currently 0.4745151400566101, I have no doubt that with a little extra effort, you'll be able to exceed that .5 mark! Keep up the great work! And remember, even chickens can't always cross the road, but they still give it their best shot." Tecton# Above, we showed how you could use Feast, a popular open source and self-managed feature store, with LangChain. Our examples below will show a similar integration using Tecton. Tecton is a fully managed feature platform built to orchestrate the complete ML feature lifecycle, from transformation to online serving, with enterprise-grade SLAs. Prerequisites# Tecton Deployment (sign up at https://tecton.ai) TECTON_API_KEY environment variable set to a valid Service Account key Define and Load Features# We will use the user_transaction_counts Feature View from the Tecton tutorial as part of a Feature Service. For simplicity, we are only using a single Feature View; however, more sophisticated applications may require more feature views to retrieve the features needed for its prompt. user_transaction_metrics = FeatureService( name = "user_transaction_metrics", features = [user_transaction_counts] ) The above Feature Service is expected to be applied to a live workspace. For this example, we will be using the “prod” workspace. import tecton workspace = tecton.get_workspace("prod") feature_service = workspace.get_feature_service("user_transaction_metrics") Prompts# Here we will set up a custom TectonPromptTemplate. This prompt template will take in a user_id , look up their stats, and format those stats into a prompt. Note that the input to this prompt template is just user_id, since that is the only user defined piece (all other variables are looked up inside the prompt template). from langchain.prompts import PromptTemplate, StringPromptTemplate template = """Given the vendor's up to date transaction stats, write them a note based on the following rules: 1. If they had a transaction in the last day, write a short congratulations message on their recent sales 2. If no transaction in the last day, but they had a transaction in the last 30 days, playfully encourage them to sell more. 3. Always add a silly joke about chickens at the end Here are the vendor's stats: Number of Transactions Last Day: {transaction_count_1d} Number of Transactions Last 30 Days: {transaction_count_30d} Your response:""" prompt = PromptTemplate.from_template(template) class TectonPromptTemplate(StringPromptTemplate): def format(self, **kwargs) -> str: user_id = kwargs.pop("user_id") feature_vector = feature_service.get_online_features(join_keys={"user_id": user_id}).to_dict() kwargs["transaction_count_1d"] = feature_vector["user_transaction_counts.transaction_count_1d_1d"] kwargs["transaction_count_30d"] = feature_vector["user_transaction_counts.transaction_count_30d_1d"] return prompt.format(**kwargs) prompt_template = TectonPromptTemplate(input_variables=["user_id"]) print(prompt_template.format(user_id="user_469998441571")) Given the vendor's up to date transaction stats, write them a note based on the following rules: 1. If they had a transaction in the last day, write a short congratulations message on their recent sales 2. If no transaction in the last day, but they had a transaction in the last 30 days, playfully encourage them to sell more. 3. 
Always add a silly joke about chickens at the end Here are the vendor's stats: Number of Transactions Last Day: 657 Number of Transactions Last 30 Days: 20326 Your response: Use in a chain# We can now use this in a chain, successfully creating a chain that achieves personalization backed by the Tecton Feature Platform from langchain.chat_models import ChatOpenAI from langchain.chains import LLMChain chain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template) chain.run("user_469998441571") 'Wow, congratulations on your recent sales! Your business is really soaring like a chicken on a hot air balloon! Keep up the great work!'
https://langchain.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html
2d3b70e088b8-2
Featureform# Finally, we will use Featureform, an open-source and enterprise-grade feature store, to run the same example. Featureform allows you to work with your infrastructure like Spark or locally to define your feature transformations. Initialize Featureform# You can follow the instructions in the README to initialize your transformations and features in Featureform. import featureform as ff client = ff.Client(host="demo.featureform.com") Prompts# Here we will set up a custom FeatureformPromptTemplate. This prompt template will take in a user id, look up the average amount that user spends per transaction, and format it into a prompt. Note that the input to this prompt template is just user_id, since that is the only user defined piece (all other variables are looked up inside the prompt template). from langchain.prompts import PromptTemplate, StringPromptTemplate template = """Given the amount a user spends on average per transaction, let them know if they are a high roller. Otherwise, make a silly joke about chickens at the end to make them feel better. Here are the user's stats: Average Amount per Transaction: ${avg_transaction} Your response:""" prompt = PromptTemplate.from_template(template) class FeatureformPromptTemplate(StringPromptTemplate): def format(self, **kwargs) -> str: user_id = kwargs.pop("user_id") fpf = client.features([("avg_transactions", "quickstart")], {"user": user_id}) # Assumes client.features returns the requested feature values in order kwargs["avg_transaction"] = fpf[0] return prompt.format(**kwargs) prompt_template = FeatureformPromptTemplate(input_variables=["user_id"]) print(prompt_template.format(user_id="C1410926")) Use in a chain# We can now use this in a chain, successfully creating a chain that achieves personalization backed by the Featureform Feature Platform. from langchain.chat_models import ChatOpenAI from langchain.chains import LLMChain chain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template) chain.run("C1410926")
https://langchain.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html
f6c05cba93d4-0
.ipynb .pdf How to create a prompt template that uses few shot examples Contents Use Case Using an example set Create the example set Create a formatter for the few shot examples Feed examples and formatter to FewShotPromptTemplate Using an example selector Feed examples into ExampleSelector Feed example selector into FewShotPromptTemplate How to create a prompt template that uses few shot examples# In this tutorial, we’ll learn how to create a prompt template that uses few shot examples. We’ll use the FewShotPromptTemplate class to create a prompt template that uses few shot examples. This class either takes in a set of examples, or an ExampleSelector object. In this tutorial, we’ll go over both options. Use Case# In this tutorial, we’ll configure few shot examples for self-ask with search. Using an example set# Create the example set# To get started, create a list of few shot examples. Each example should be a dictionary with the keys being the input variables and the values being the values for those input variables. from langchain.prompts.few_shot import FewShotPromptTemplate from langchain.prompts.prompt import PromptTemplate examples = [ { "question": "Who lived longer, Muhammad Ali or Alan Turing?", "answer": """ Are follow up questions needed here: Yes. Follow up: How old was Muhammad Ali when he died? Intermediate answer: Muhammad Ali was 74 years old when he died. Follow up: How old was Alan Turing when he died? Intermediate answer: Alan Turing was 41 years old when he died. So the final answer is: Muhammad Ali """ }, { "question": "When was the founder of craigslist born?", "answer": """ Are follow up questions needed here: Yes. Follow up: Who was the founder of craigslist? Intermediate answer: Craigslist was founded by Craig Newmark. Follow up: When was Craig Newmark born? Intermediate answer: Craig Newmark was born on December 6, 1952. So the final answer is: December 6, 1952 """ }, { "question": "Who was the maternal grandfather of George Washington?", "answer": """ Are follow up questions needed here: Yes. Follow up: Who was the mother of George Washington? Intermediate answer: The mother of George Washington was Mary Ball Washington. Follow up: Who was the father of Mary Ball Washington? Intermediate answer: The father of Mary Ball Washington was Joseph Ball. So the final answer is: Joseph Ball """ }, { "question": "Are both the directors of Jaws and Casino Royale from the same country?", "answer": """ Are follow up questions needed here: Yes. Follow up: Who is the director of Jaws? Intermediate Answer: The director of Jaws is Steven Spielberg. Follow up: Where is Steven Spielberg from? Intermediate Answer: The United States. Follow up: Who is the director of Casino Royale? Intermediate Answer: The director of Casino Royale is Martin Campbell. Follow up: Where is Martin Campbell from? Intermediate Answer: New Zealand. So the final answer is: No """ } ] Create a formatter for the few shot examples# Configure a formatter that will format the few shot examples into a string. This formatter should be a PromptTemplate object. example_prompt = PromptTemplate(input_variables=["question", "answer"], template="Question: {question}\n{answer}") print(example_prompt.format(**examples[0])) Question: Who lived longer, Muhammad Ali or Alan Turing? Are follow up questions needed here: Yes. Follow up: How old was Muhammad Ali when he died? Intermediate answer: Muhammad Ali was 74 years old when he died. Follow up: How old was Alan Turing when he died? 
Intermediate answer: Alan Turing was 41 years old when he died. So the final answer is: Muhammad Ali Feed examples and formatter to FewShotPromptTemplate# Finally, create a FewShotPromptTemplate object. This object takes in the few shot examples and the formatter for the few shot examples. prompt = FewShotPromptTemplate( examples=examples, example_prompt=example_prompt, suffix="Question: {input}", input_variables=["input"] ) print(prompt.format(input="Who was the father of Mary Ball Washington?")) Question: Who lived longer, Muhammad Ali or Alan Turing?
https://langchain.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html
f6c05cba93d4-1
Are follow up questions needed here: Yes. Follow up: How old was Muhammad Ali when he died? Intermediate answer: Muhammad Ali was 74 years old when he died. Follow up: How old was Alan Turing when he died? Intermediate answer: Alan Turing was 41 years old when he died. So the final answer is: Muhammad Ali Question: When was the founder of craigslist born? Are follow up questions needed here: Yes. Follow up: Who was the founder of craigslist? Intermediate answer: Craigslist was founded by Craig Newmark. Follow up: When was Craig Newmark born? Intermediate answer: Craig Newmark was born on December 6, 1952. So the final answer is: December 6, 1952 Question: Who was the maternal grandfather of George Washington? Are follow up questions needed here: Yes. Follow up: Who was the mother of George Washington? Intermediate answer: The mother of George Washington was Mary Ball Washington. Follow up: Who was the father of Mary Ball Washington? Intermediate answer: The father of Mary Ball Washington was Joseph Ball. So the final answer is: Joseph Ball Question: Are both the directors of Jaws and Casino Royale from the same country? Are follow up questions needed here: Yes. Follow up: Who is the director of Jaws? Intermediate Answer: The director of Jaws is Steven Spielberg. Follow up: Where is Steven Spielberg from? Intermediate Answer: The United States. Follow up: Who is the director of Casino Royale? Intermediate Answer: The director of Casino Royale is Martin Campbell. Follow up: Where is Martin Campbell from? Intermediate Answer: New Zealand. So the final answer is: No Question: Who was the father of Mary Ball Washington? Using an example selector# Feed examples into ExampleSelector# We will reuse the example set and the formatter from the previous section. However, instead of feeding the examples directly into the FewShotPromptTemplate object, we will feed them into an ExampleSelector object. In this tutorial, we will use the SemanticSimilarityExampleSelector class. This class selects few shot examples based on their similarity to the input. It uses an embedding model to compute the similarity between the input and the few shot examples, as well as a vector store to perform the nearest neighbor search. from langchain.prompts.example_selector import SemanticSimilarityExampleSelector from langchain.vectorstores import Chroma from langchain.embeddings import OpenAIEmbeddings example_selector = SemanticSimilarityExampleSelector.from_examples( # This is the list of examples available to select from. examples, # This is the embedding class used to produce embeddings which are used to measure semantic similarity. OpenAIEmbeddings(), # This is the VectorStore class that is used to store the embeddings and do a similarity search over. Chroma, # This is the number of examples to produce. k=1 ) # Select the most similar example to the input. question = "Who was the father of Mary Ball Washington?" selected_examples = example_selector.select_examples({"question": question}) print(f"Examples most similar to the input: {question}") for example in selected_examples: print("\n") for k, v in example.items(): print(f"{k}: {v}") Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. Examples most similar to the input: Who was the father of Mary Ball Washington? question: Who was the maternal grandfather of George Washington? answer: Are follow up questions needed here: Yes. Follow up: Who was the mother of George Washington? 
Intermediate answer: The mother of George Washington was Mary Ball Washington. Follow up: Who was the father of Mary Ball Washington? Intermediate answer: The father of Mary Ball Washington was Joseph Ball. So the final answer is: Joseph Ball Feed example selector into FewShotPromptTemplate# Finally, create a FewShotPromptTemplate object. This object takes in the example selector and the formatter for the few shot examples. prompt = FewShotPromptTemplate( example_selector=example_selector, example_prompt=example_prompt, suffix="Question: {input}", input_variables=["input"] ) print(prompt.format(input="Who was the father of Mary Ball Washington?")) Question: Who was the maternal grandfather of George Washington? Are follow up questions needed here: Yes. Follow up: Who was the mother of George Washington?
https://langchain.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html
f6c05cba93d4-2
Intermediate answer: The mother of George Washington was Mary Ball Washington. Follow up: Who was the father of Mary Ball Washington? Intermediate answer: The father of Mary Ball Washington was Joseph Ball. So the final answer is: Joseph Ball Question: Who was the father of Mary Ball Washington? previous How to create a custom prompt template next How to work with partial Prompt Templates Contents Use Case Using an example set Create the example set Create a formatter for the few shot examples Feed examples and formatter to FewShotPromptTemplate Using an example selector Feed examples into ExampleSelector Feed example selector into FewShotPromptTemplate By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 08, 2023.
https://langchain.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html
1a1fe5222c39-0
How to serialize prompts# It is often preferable to store prompts not as Python code but as files. This can make it easy to share, store, and version prompts. This notebook covers how to do that in LangChain, walking through all the different types of prompts and the different serialization options. At a high level, the following design principles are applied to serialization: Both JSON and YAML are supported. We want to support serialization methods that are human readable on disk, and YAML and JSON are two of the most popular methods for that. Note that this rule applies to prompts. For other assets, like Examples, different serialization methods may be supported. We support specifying everything in one file, or storing different components (templates, examples, etc.) in different files and referencing them. For some cases, storing everything in one file makes the most sense, but for others it is preferable to split up some of the assets (long templates, large examples, reusable components). LangChain supports both. There is also a single entry point to load prompts from disk, making it easy to load any type of prompt. # All prompts are loaded through the `load_prompt` function. from langchain.prompts import load_prompt PromptTemplate# This section covers examples for loading a PromptTemplate. Loading from YAML# This shows an example of loading a PromptTemplate from YAML. !cat simple_prompt.yaml _type: prompt input_variables: ["adjective", "content"] template: Tell me a {adjective} joke about {content}. prompt = load_prompt("simple_prompt.yaml") print(prompt.format(adjective="funny", content="chickens")) Tell me a funny joke about chickens. Loading from JSON# This shows an example of loading a PromptTemplate from JSON. !cat simple_prompt.json { "_type": "prompt", "input_variables": ["adjective", "content"], "template": "Tell me a {adjective} joke about {content}." } prompt = load_prompt("simple_prompt.json") print(prompt.format(adjective="funny", content="chickens")) Tell me a funny joke about chickens. Loading Template from a File# This shows an example of storing the template in a separate file and then referencing it in the config. Notice that the key changes from template to template_path. !cat simple_template.txt Tell me a {adjective} joke about {content}. !cat simple_prompt_with_template_file.json { "_type": "prompt", "input_variables": ["adjective", "content"], "template_path": "simple_template.txt" } prompt = load_prompt("simple_prompt_with_template_file.json") print(prompt.format(adjective="funny", content="chickens")) Tell me a funny joke about chickens. FewShotPromptTemplate# This section covers examples for loading few shot prompt templates. Examples# This shows an example of what examples stored as json might look like. !cat examples.json [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"} ] And here is what the same examples stored as yaml might look like. !cat examples.yaml - input: happy output: sad - input: tall output: short Loading from YAML# This shows an example of loading a few shot example from YAML. !cat few_shot_prompt.yaml _type: few_shot input_variables: ["adjective"] prefix: Write antonyms for the following words.
example_prompt: _type: prompt input_variables: ["input", "output"] template: "Input: {input}\nOutput: {output}" examples: examples.json suffix: "Input: {adjective}\nOutput:" prompt = load_prompt("few_shot_prompt.yaml") print(prompt.format(adjective="funny")) Write antonyms for the following words. Input: happy Output: sad Input: tall Output: short Input: funny Output: The same would work if you loaded examples from the yaml file. !cat few_shot_prompt_yaml_examples.yaml _type: few_shot input_variables: ["adjective"] prefix:
https://langchain.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html
1a1fe5222c39-1
_type: few_shot input_variables: ["adjective"] prefix: Write antonyms for the following words. example_prompt: _type: prompt input_variables: ["input", "output"] template: "Input: {input}\nOutput: {output}" examples: examples.yaml suffix: "Input: {adjective}\nOutput:" prompt = load_prompt("few_shot_prompt_yaml_examples.yaml") print(prompt.format(adjective="funny")) Write antonyms for the following words. Input: happy Output: sad Input: tall Output: short Input: funny Output: Loading from JSON# This shows an example of loading a few shot example from JSON. !cat few_shot_prompt.json { "_type": "few_shot", "input_variables": ["adjective"], "prefix": "Write antonyms for the following words.", "example_prompt": { "_type": "prompt", "input_variables": ["input", "output"], "template": "Input: {input}\nOutput: {output}" }, "examples": "examples.json", "suffix": "Input: {adjective}\nOutput:" } prompt = load_prompt("few_shot_prompt.json") print(prompt.format(adjective="funny")) Write antonyms for the following words. Input: happy Output: sad Input: tall Output: short Input: funny Output: Examples in the Config# This shows an example of referencing the examples directly in the config. !cat few_shot_prompt_examples_in.json { "_type": "few_shot", "input_variables": ["adjective"], "prefix": "Write antonyms for the following words.", "example_prompt": { "_type": "prompt", "input_variables": ["input", "output"], "template": "Input: {input}\nOutput: {output}" }, "examples": [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"} ], "suffix": "Input: {adjective}\nOutput:" } prompt = load_prompt("few_shot_prompt_examples_in.json") print(prompt.format(adjective="funny")) Write antonyms for the following words. Input: happy Output: sad Input: tall Output: short Input: funny Output: Example Prompt from a File# This shows an example of loading the PromptTemplate that is used to format the examples from a separate file. Note that the key changes from example_prompt to example_prompt_path. !cat example_prompt.json { "_type": "prompt", "input_variables": ["input", "output"], "template": "Input: {input}\nOutput: {output}" } !cat few_shot_prompt_example_prompt.json { "_type": "few_shot", "input_variables": ["adjective"], "prefix": "Write antonyms for the following words.", "example_prompt_path": "example_prompt.json", "examples": "examples.json", "suffix": "Input: {adjective}\nOutput:" } prompt = load_prompt("few_shot_prompt_example_prompt.json") print(prompt.format(adjective="funny")) Write antonyms for the following words. Input: happy Output: sad Input: tall Output: short Input: funny Output: PromptTemplate with OutputParser# This shows an example of loading a prompt along with an OutputParser from a file. ! cat prompt_with_output_parser.json { "input_variables": [ "question", "student_answer" ], "output_parser": { "regex": "(.*?)\\nScore: (.*)", "output_keys": [ "answer", "score" ], "default_output_key": null, "_type": "regex_parser" }, "partial_variables": {}, "template": "Given the following question and student answer, provide a correct answer and score the student answer.\nQuestion: {question}\nStudent Answer: {student_answer}\nCorrect Answer:", "template_format": "f-string", "validate_template": true, "_type": "prompt" } prompt = load_prompt("prompt_with_output_parser.json")
https://langchain.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html
1a1fe5222c39-2
} prompt = load_prompt("prompt_with_output_parser.json") prompt.output_parser.parse("George Washington was born in 1732 and died in 1799.\nScore: 1/2") {'answer': 'George Washington was born in 1732 and died in 1799.', 'score': '1/2'} previous Prompt Composition next Prompts Contents PromptTemplate Loading from YAML Loading from JSON Loading Template from a File FewShotPromptTemplate Examples Loading from YAML Loading from JSON Examples in the Config Example Prompt from a File PromptTemplate with OutputParser By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 08, 2023.
https://langchain.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html
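The loading examples above all assume the config files already exist on disk. As a complementary sketch (not from the original notebook), the same kind of prompt config can be generated programmatically with the standard library and then round-tripped through load_prompt; the file name generated_prompt.json is hypothetical:

```python
import json

from langchain.prompts import load_prompt

# Write a prompt config in the same shape as the simple_prompt.json shown above.
config = {
    "_type": "prompt",
    "input_variables": ["adjective", "content"],
    "template": "Tell me a {adjective} joke about {content}.",
}
with open("generated_prompt.json", "w") as f:
    json.dump(config, f, indent=2)

# Load it back through the single entry point covered in this notebook.
prompt = load_prompt("generated_prompt.json")
print(prompt.format(adjective="funny", content="chickens"))
# -> Tell me a funny joke about chickens.
```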
9f4c2d26ae5b-0
.ipynb .pdf Output Parsers Output Parsers# Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in. Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement: get_format_instructions() -> str: A method which returns a string containing instructions for how the output of a language model should be formatted. parse(str) -> Any: A method which takes in a string (assumed to be the response from a language model) and parses it into some structure. And then one optional method: parse_with_prompt(str, PromptValue) -> Any: A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Below we go over the main type of output parser, the PydanticOutputParser. See the examples folder for other options. from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate from langchain.llms import OpenAI from langchain.chat_models import ChatOpenAI from langchain.output_parsers import PydanticOutputParser from pydantic import BaseModel, Field, validator from typing import List model_name = 'text-davinci-003' temperature = 0.0 model = OpenAI(model_name=model_name, temperature=temperature) # Define your desired data structure. class Joke(BaseModel): setup: str = Field(description="question to set up a joke") punchline: str = Field(description="answer to resolve the joke") # You can add custom validation logic easily with Pydantic. @validator('setup') def question_ends_with_question_mark(cls, field): if field[-1] != '?': raise ValueError("Badly formed question!") return field # Set up a parser + inject instructions into the prompt template. parser = PydanticOutputParser(pydantic_object=Joke) prompt = PromptTemplate( template="Answer the user query.\n{format_instructions}\n{query}\n", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()} ) # And a query intended to prompt a language model to populate the data structure. joke_query = "Tell me a joke." _input = prompt.format_prompt(query=joke_query) output = model(_input.to_string()) parser.parse(output) Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!') previous Output Parsers next CommaSeparatedListOutputParser By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 08, 2023.
https://langchain.readthedocs.io/en/latest/modules/prompts/output_parsers/getting_started.html
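To make the interface described above concrete, here is a minimal custom parser sketch. It is not from the original notebook and assumes BaseOutputParser and OutputParserException are importable from langchain.schema, as they are in this version of LangChain:

```python
from langchain.schema import BaseOutputParser, OutputParserException


class YesNoOutputParser(BaseOutputParser):
    """Toy parser that turns a YES/NO completion into a Python bool."""

    def get_format_instructions(self) -> str:
        # Instructions injected into the prompt so the model knows the expected format.
        return "Answer with a single word: YES or NO."

    def parse(self, text: str) -> bool:
        # Normalize whitespace and case before checking the value.
        cleaned = text.strip().upper()
        if cleaned not in ("YES", "NO"):
            raise OutputParserException(f"Expected YES or NO, got: {text!r}")
        return cleaned == "YES"


yes_no_parser = YesNoOutputParser()
yes_no_parser.parse(" yes\n")  # -> True
```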
a8939f07e56d-0
.ipynb .pdf Structured Output Parser Structured Output Parser# While the Pydantic/JSON parser is more powerful, we initially experimented with data structures having text fields only. from langchain.output_parsers import StructuredOutputParser, ResponseSchema from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate from langchain.llms import OpenAI from langchain.chat_models import ChatOpenAI Here we define the response schema we want to receive. response_schemas = [ ResponseSchema(name="answer", description="answer to the user's question"), ResponseSchema(name="source", description="source used to answer the user's question, should be a website.") ] output_parser = StructuredOutputParser.from_response_schemas(response_schemas) We now get a string that contains instructions for how the response should be formatted, and we then insert that into our prompt. format_instructions = output_parser.get_format_instructions() prompt = PromptTemplate( template="answer the users question as best as possible.\n{format_instructions}\n{question}", input_variables=["question"], partial_variables={"format_instructions": format_instructions} ) We can now use this to format a prompt to send to the language model, and then parse the returned result. model = OpenAI(temperature=0) _input = prompt.format_prompt(question="what's the capital of france?") output = model(_input.to_string()) output_parser.parse(output) {'answer': 'Paris', 'source': 'https://www.worldatlas.com/articles/what-is-the-capital-of-france.html'} And here's an example of using this in a chat model. chat_model = ChatOpenAI(temperature=0) prompt = ChatPromptTemplate( messages=[ HumanMessagePromptTemplate.from_template("answer the users question as best as possible.\n{format_instructions}\n{question}") ], input_variables=["question"], partial_variables={"format_instructions": format_instructions} ) _input = prompt.format_prompt(question="what's the capital of france?") output = chat_model(_input.to_messages()) output_parser.parse(output.content) {'answer': 'Paris', 'source': 'https://en.wikipedia.org/wiki/Paris'} previous RetryOutputParser next Memory By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 08, 2023.
https://langchain.readthedocs.io/en/latest/modules/prompts/output_parsers/examples/structured.html
04258ed25ad1-0
.ipynb .pdf PydanticOutputParser PydanticOutputParser# This output parser allows users to specify an arbitrary JSON schema and query LLMs for JSON outputs that conform to that schema. Keep in mind that large language models are leaky abstractions! You’ll have to use an LLM with sufficient capacity to generate well-formed JSON. In the OpenAI family, DaVinci can do this reliably, but Curie’s ability already drops off dramatically. Use Pydantic to declare your data model. Pydantic’s BaseModel is like a Python dataclass, but with actual type checking + coercion. from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate from langchain.llms import OpenAI from langchain.chat_models import ChatOpenAI from langchain.output_parsers import PydanticOutputParser from pydantic import BaseModel, Field, validator from typing import List model_name = 'text-davinci-003' temperature = 0.0 model = OpenAI(model_name=model_name, temperature=temperature) # Define your desired data structure. class Joke(BaseModel): setup: str = Field(description="question to set up a joke") punchline: str = Field(description="answer to resolve the joke") # You can add custom validation logic easily with Pydantic. @validator('setup') def question_ends_with_question_mark(cls, field): if field[-1] != '?': raise ValueError("Badly formed question!") return field # And a query intended to prompt a language model to populate the data structure. joke_query = "Tell me a joke." # Set up a parser + inject instructions into the prompt template. parser = PydanticOutputParser(pydantic_object=Joke) prompt = PromptTemplate( template="Answer the user query.\n{format_instructions}\n{query}\n", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()} ) _input = prompt.format_prompt(query=joke_query) output = model(_input.to_string()) parser.parse(output) Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!') # Here's another example, but with a compound typed field. class Actor(BaseModel): name: str = Field(description="name of an actor") film_names: List[str] = Field(description="list of names of films they starred in") actor_query = "Generate the filmography for a random actor." parser = PydanticOutputParser(pydantic_object=Actor) prompt = PromptTemplate( template="Answer the user query.\n{format_instructions}\n{query}\n", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()} ) _input = prompt.format_prompt(query=actor_query) output = model(_input.to_string()) parser.parse(output) Actor(name='Tom Hanks', film_names=['Forrest Gump', 'Saving Private Ryan', 'The Green Mile', 'Cast Away', 'Toy Story']) previous OutputFixingParser next RetryOutputParser By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 08, 2023.
https://langchain.readthedocs.io/en/latest/modules/prompts/output_parsers/examples/pydantic.html
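One detail worth calling out from the notebook above: the custom @validator on Joke participates in parsing, so a completion that is valid JSON but violates the rule still fails loudly. A small hypothetical illustration (the input string is invented for this sketch):

```python
# Re-create the Joke parser to show the validator in action.
joke_parser = PydanticOutputParser(pydantic_object=Joke)

# Valid JSON, but the setup does not end with "?", so the validator raises
# and the parser surfaces it as an OutputParserException.
bad_completion = '{"setup": "This is not a question", "punchline": "nope"}'
try:
    joke_parser.parse(bad_completion)
except Exception as e:
    print(type(e).__name__)  # OutputParserException
```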
8632c7babfac-0
.ipynb .pdf Enum Output Parser Enum Output Parser# This notebook shows how to use an Enum output parser from langchain.output_parsers.enum import EnumOutputParser from enum import Enum class Colors(Enum): RED = "red" GREEN = "green" BLUE = "blue" parser = EnumOutputParser(enum=Colors) parser.parse("red") <Colors.RED: 'red'> # Can handle spaces parser.parse(" green") <Colors.GREEN: 'green'> # And new lines parser.parse("blue\n") <Colors.BLUE: 'blue'> # And raises errors when appropriate parser.parse("yellow") --------------------------------------------------------------------------- ValueError Traceback (most recent call last) File ~/workplace/langchain/langchain/output_parsers/enum.py:25, in EnumOutputParser.parse(self, response) 24 try: ---> 25 return self.enum(response.strip()) 26 except ValueError: File ~/.pyenv/versions/3.9.1/lib/python3.9/enum.py:315, in EnumMeta.__call__(cls, value, names, module, qualname, type, start) 314 if names is None: # simple value lookup --> 315 return cls.__new__(cls, value) 316 # otherwise, functional API: we're creating a new Enum type File ~/.pyenv/versions/3.9.1/lib/python3.9/enum.py:611, in Enum.__new__(cls, value) 610 if result is None and exc is None: --> 611 raise ve_exc 612 elif exc is None: ValueError: 'yellow' is not a valid Colors During handling of the above exception, another exception occurred: OutputParserException Traceback (most recent call last) Cell In[8], line 2 1 # And raises errors when appropriate ----> 2 parser.parse("yellow") File ~/workplace/langchain/langchain/output_parsers/enum.py:27, in EnumOutputParser.parse(self, response) 25 return self.enum(response.strip()) 26 except ValueError: ---> 27 raise OutputParserException( 28 f"Response '{response}' is not one of the " 29 f"expected values: {self._valid_values}" 30 ) OutputParserException: Response 'yellow' is not one of the expected values: ['red', 'green', 'blue'] previous Datetime next OutputFixingParser By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 08, 2023.
https://langchain.readthedocs.io/en/latest/modules/prompts/output_parsers/examples/enum.html
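The notebook above only exercises parse directly. Like the other parsers, EnumOutputParser also exposes get_format_instructions, so the allowed values can be injected into a prompt; a small sketch under that assumption, reusing the parser defined above (the exact instruction wording may differ):

```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    template="What color is a ripe tomato?\n{format_instructions}",
    input_variables=[],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
print(prompt.format())
# The model's reply (e.g. "red") can then be passed back through parser.parse(...)
```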
204a1b14411b-0
.ipynb .pdf RetryOutputParser RetryOutputParser# While in some cases it is possible to fix any parsing mistakes by only looking at the output, in other cases it is not possible. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example. from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate from langchain.llms import OpenAI from langchain.chat_models import ChatOpenAI from langchain.output_parsers import PydanticOutputParser, OutputFixingParser, RetryOutputParser from pydantic import BaseModel, Field, validator from typing import List template = """Based on the user question, provide an Action and Action Input for what step should be taken. {format_instructions} Question: {query} Response:""" class Action(BaseModel): action: str = Field(description="action to take") action_input: str = Field(description="input to the action") parser = PydanticOutputParser(pydantic_object=Action) prompt = PromptTemplate( template="Answer the user query.\n{format_instructions}\n{query}\n", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()} ) prompt_value = prompt.format_prompt(query="who is leo di caprios gf?") bad_response = '{"action": "search"}' If we try to parse this response as is, we will get an error parser.parse(bad_response) --------------------------------------------------------------------------- ValidationError Traceback (most recent call last) File ~/workplace/langchain/langchain/output_parsers/pydantic.py:24, in PydanticOutputParser.parse(self, text) 23 json_object = json.loads(json_str) ---> 24 return self.pydantic_object.parse_obj(json_object) 26 except (json.JSONDecodeError, ValidationError) as e: File ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:527, in pydantic.main.BaseModel.parse_obj() File ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:342, in pydantic.main.BaseModel.__init__() ValidationError: 1 validation error for Action action_input field required (type=value_error.missing) During handling of the above exception, another exception occurred: OutputParserException Traceback (most recent call last) Cell In[6], line 1 ----> 1 parser.parse(bad_response) File ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text) 27 name = self.pydantic_object.__name__ 28 msg = f"Failed to parse {name} from completion {text}. Got: {e}" ---> 29 raise OutputParserException(msg) OutputParserException: Failed to parse Action from completion {"action": "search"}. Got: 1 validation error for Action action_input field required (type=value_error.missing) If we try to use the OutputFixingParser to fix this error, it will be confused - namely, it doesn’t know what to actually put for action input. fix_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI()) fix_parser.parse(bad_response) Action(action='search', action_input='') Instead, we can use the RetryOutputParser, which passes in the prompt (as well as the original output) to try again to get a better response.
from langchain.output_parsers import RetryWithErrorOutputParser retry_parser = RetryWithErrorOutputParser.from_llm(parser=parser, llm=OpenAI(temperature=0)) retry_parser.parse_with_prompt(bad_response, prompt_value) Action(action='search', action_input='who is leo di caprios gf?') previous PydanticOutputParser next Structured Output Parser By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 08, 2023.
https://langchain.readthedocs.io/en/latest/modules/prompts/output_parsers/examples/retry.html
a7787a172610-0
.ipynb .pdf Datetime Datetime# This OutputParser shows how to parse LLM output into datetime format. from langchain.prompts import PromptTemplate from langchain.output_parsers import DatetimeOutputParser from langchain.chains import LLMChain from langchain.llms import OpenAI output_parser = DatetimeOutputParser() template = """Answer the users question: {question} {format_instructions}""" prompt = PromptTemplate.from_template(template, partial_variables={"format_instructions": output_parser.get_format_instructions()}) chain = LLMChain(prompt=prompt, llm=OpenAI()) output = chain.run("around when was bitcoin founded?") output '\n\n2008-01-03T18:15:05.000000Z' output_parser.parse(output) datetime.datetime(2008, 1, 3, 18, 15, 5) previous CommaSeparatedListOutputParser next Enum Output Parser By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 08, 2023.
https://langchain.readthedocs.io/en/latest/modules/prompts/output_parsers/examples/datetime.html
a75840c8a3a0-0
.ipynb .pdf CommaSeparatedListOutputParser CommaSeparatedListOutputParser# Here’s another parser strictly less powerful than Pydantic/JSON parsing. from langchain.output_parsers import CommaSeparatedListOutputParser from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate from langchain.llms import OpenAI from langchain.chat_models import ChatOpenAI output_parser = CommaSeparatedListOutputParser() format_instructions = output_parser.get_format_instructions() prompt = PromptTemplate( template="List five {subject}.\n{format_instructions}", input_variables=["subject"], partial_variables={"format_instructions": format_instructions} ) model = OpenAI(temperature=0) _input = prompt.format(subject="ice cream flavors") output = model(_input) output_parser.parse(output) ['Vanilla', 'Chocolate', 'Strawberry', 'Mint Chocolate Chip', 'Cookies and Cream'] previous Output Parsers next Datetime By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 08, 2023.
https://langchain.readthedocs.io/en/latest/modules/prompts/output_parsers/examples/comma_separated.html
cbf1f18c47da-0
.ipynb .pdf OutputFixingParser OutputFixingParser# This output parser wraps another output parser and tries to fix any mistakes The Pydantic guardrail simply tries to parse the LLM response. If it does not parse correctly, then it errors. But we can do other things besides throw errors. Specifically, we can pass the misformatted output, along with the formatted instructions, to the model and ask it to fix it. For this example, we’ll use the above OutputParser. Here’s what happens if we pass it a result that does not comply with the schema: from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate from langchain.llms import OpenAI from langchain.chat_models import ChatOpenAI from langchain.output_parsers import PydanticOutputParser from pydantic import BaseModel, Field, validator from typing import List class Actor(BaseModel): name: str = Field(description="name of an actor") film_names: List[str] = Field(description="list of names of films they starred in") actor_query = "Generate the filmography for a random actor." parser = PydanticOutputParser(pydantic_object=Actor) misformatted = "{'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}" parser.parse(misformatted) --------------------------------------------------------------------------- JSONDecodeError Traceback (most recent call last) File ~/workplace/langchain/langchain/output_parsers/pydantic.py:23, in PydanticOutputParser.parse(self, text) 22 json_str = match.group() ---> 23 json_object = json.loads(json_str) 24 return self.pydantic_object.parse_obj(json_object) File ~/.pyenv/versions/3.9.1/lib/python3.9/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 343 if (cls is None and object_hook is None and 344 parse_int is None and parse_float is None and 345 parse_constant is None and object_pairs_hook is None and not kw): --> 346 return _default_decoder.decode(s) 347 if cls is None: File ~/.pyenv/versions/3.9.1/lib/python3.9/json/decoder.py:337, in JSONDecoder.decode(self, s, _w) 333 """Return the Python representation of ``s`` (a ``str`` instance 334 containing a JSON document). 335 336 """ --> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end()) 338 end = _w(s, end).end() File ~/.pyenv/versions/3.9.1/lib/python3.9/json/decoder.py:353, in JSONDecoder.raw_decode(self, s, idx) 352 try: --> 353 obj, end = self.scan_once(s, idx) 354 except StopIteration as err: JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1) During handling of the above exception, another exception occurred: OutputParserException Traceback (most recent call last) Cell In[6], line 1 ----> 1 parser.parse(misformatted) File ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text) 27 name = self.pydantic_object.__name__ 28 msg = f"Failed to parse {name} from completion {text}. Got: {e}" ---> 29 raise OutputParserException(msg) OutputParserException: Failed to parse Actor from completion {'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}. Got: Expecting property name enclosed in double quotes: line 1 column 2 (char 1) Now we can construct and use a OutputFixingParser. This output parser takes as an argument another output parser but also an LLM with which to try to correct any formatting mistakes. 
from langchain.output_parsers import OutputFixingParser new_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI()) new_parser.parse(misformatted) Actor(name='Tom Hanks', film_names=['Forrest Gump']) previous Enum Output Parser next PydanticOutputParser By Harrison Chase
https://langchain.readthedocs.io/en/latest/modules/prompts/output_parsers/examples/output_fixing_parser.html
e7f50ecba501-0
.rst .pdf Welcome to LangChain Contents Getting Started Modules Use Cases Reference Docs Ecosystem Additional Resources Welcome to LangChain# LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model, but will also be: Data-aware: connect a language model to other sources of data Agentic: allow a language model to interact with its environment The LangChain framework is designed around these principles. This is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here. Getting Started# How to get started using LangChain to create a Language Model application. Quickstart Guide Concepts and terminology. Concepts and terminology Tutorials created by community experts and presented on YouTube. Tutorials Modules# These modules are the core abstractions which we view as the building blocks of any LLM-powered application. For each module LangChain provides standard, extendable interfaces. LangChain also provides external integrations and even end-to-end implementations for off-the-shelf use. The docs for each module contain quickstart examples, how-to guides, reference docs, and conceptual guides. The modules are (from least to most complex): Models: Supported model types and integrations. Prompts: Prompt management, optimization, and serialization. Memory: Memory refers to state that is persisted between calls of a chain/agent. Indexes: Language models become much more powerful when combined with application-specific data - this module contains interfaces and integrations for loading, querying and updating external data. Chains: Chains are structured sequences of calls (to an LLM or to a different utility). Agents: An agent is a Chain in which an LLM, given a high-level directive and a set of tools, repeatedly decides an action, executes the action and observes the outcome until the high-level directive is complete. Callbacks: Callbacks let you log and stream the intermediate steps of any chain, making it easy to observe, debug, and evaluate the internals of an application. Use Cases# Best practices and built-in implementations for common LangChain use cases: Autonomous Agents: Autonomous agents are long-running agents that take many steps in an attempt to accomplish an objective. Examples include AutoGPT and BabyAGI. Agent Simulations: Putting agents in a sandbox and observing how they interact with each other and react to events can be an effective way to evaluate their long-range reasoning and planning abilities. Personal Assistants: One of the primary LangChain use cases. Personal assistants need to take actions, remember interactions, and have knowledge about your data. Question Answering: Another common LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer. Chatbots: Language models love to chat, making this a very natural use of them. Querying Tabular Data: Recommended reading if you want to use language models to query structured data (CSVs, SQL, dataframes, etc). Code Understanding: Recommended reading if you want to use language models to analyze code. Interacting with APIs: Enabling language models to interact with APIs is extremely powerful. It gives them access to up-to-date information and allows them to take actions. Extraction: Extract structured information from text.
Summarization: Compressing longer documents. A type of Data-Augmented Generation. Evaluation: Generative models are hard to evaluate with traditional metrics. One promising approach is to use language models themselves to do the evaluation. Reference Docs# Full documentation on all methods, classes, installation methods, and integration setups for LangChain. LangChain Installation Reference Documentation Ecosystem# LangChain integrates a lot of different LLMs, systems, and products. From the other side, many systems and products depend on LangChain. It creates a vibrant and thriving ecosystem. Integrations: Guides for how other products can be used with LangChain. Dependents: List of repositories that use LangChain. Deployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps. Additional Resources# Additional resources we think may be useful as you develop your application! LangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents. Gallery: A collection of great projects that use Langchain, compiled by the folks at Kyrolabs. Useful for finding inspiration and example implementations. Deploying LLMs in Production: A collection of best practices and tutorials for deploying LLMs in production.
https://langchain.readthedocs.io/en/latest/langchain/index.html
e7f50ecba501-1
Tracing: A guide on using tracing in LangChain to visualize the execution of chains and agents. Model Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so. Discord: Join us on our Discord to discuss all things LangChain! YouTube: A collection of the LangChain tutorials and videos. Production Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel. next Quickstart Guide Contents Getting Started Modules Use Cases Reference Docs Ecosystem Additional Resources By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 08, 2023.
https://langchain.readthedocs.io/en/latest/langchain/index.html
3911e74c4326-0
.md .pdf Deployments Contents Anyscale Streamlit Gradio (on Hugging Face) Chainlit Beam Vercel FastAPI + Vercel Kinsta Fly.io Digitalocean App Platform Google Cloud Run SteamShip Langchain-serve BentoML Databutton Deployments# So, you’ve created a really cool chain - now what? How do you deploy it and make it easily shareable with the world? This section covers several options for that. Note that these options are meant for quick deployment of prototypes and demos, not for production systems. If you need help with the deployment of a production system, please contact us directly. What follows is a list of template GitHub repositories designed to be easily forked and modified to use your chain. This list is far from exhaustive, and we are EXTREMELY open to contributions here. Anyscale# Anyscale is a unified compute platform that makes it easy to develop, deploy, and manage scalable LLM applications in production using Ray. With Anyscale you can scale the most challenging LLM-based workloads and both develop and deploy LLM-based apps on a single compute platform. Streamlit# This repo serves as a template for how to deploy a LangChain with Streamlit. It implements a chatbot interface. It also contains instructions for how to deploy this app on the Streamlit platform. Gradio (on Hugging Face)# This repo serves as a template for how to deploy a LangChain with Gradio. It implements a chatbot interface, with a “Bring-Your-Own-Token” approach (nice for not racking up big bills). It also contains instructions for how to deploy this app on the Hugging Face platform. This is heavily influenced by James Weaver’s excellent examples. Chainlit# This repo is a cookbook explaining how to visualize and deploy LangChain agents with Chainlit. You create ChatGPT-like UIs with Chainlit. Some of the key features include intermediary steps visualisation, element management & display (images, text, carousel, etc.) as well as cloud deployment. Chainlit doc on the integration with LangChain Beam# This repo serves as a template for how to deploy a LangChain with Beam. It implements a Question Answering app and contains instructions for deploying the app as a serverless REST API. Vercel# A minimal example on how to run LangChain on Vercel using Flask. FastAPI + Vercel# A minimal example on how to run LangChain on Vercel using FastAPI and LangCorn/Uvicorn. Kinsta# A minimal example on how to deploy LangChain to Kinsta using Flask. Fly.io# A minimal example of how to deploy LangChain to Fly.io using Flask. Digitalocean App Platform# A minimal example on how to deploy LangChain to DigitalOcean App Platform. Google Cloud Run# A minimal example on how to deploy LangChain to Google Cloud Run. SteamShip# This repository contains LangChain adapters for Steamship, enabling LangChain developers to rapidly deploy their apps on Steamship. This includes: production-ready endpoints, horizontal scaling across dependencies, persistent storage of app state, multi-tenancy support, etc. Langchain-serve# This repository allows users to serve local chains and agents as RESTful, gRPC, or WebSocket APIs, thanks to Jina. Deploy your chains & agents with ease and enjoy independent scaling, serverless and autoscaling APIs, as well as a Streamlit playground on Jina AI Cloud. BentoML# This repository provides an example of how to deploy a LangChain application with BentoML. BentoML is a framework that enables the containerization of machine learning applications as standard OCI images.
BentoML also allows for the automatic generation of OpenAPI and gRPC endpoints. With BentoML, you can integrate models from all popular ML frameworks and deploy them as microservices running on the most optimal hardware and scaling independently. Databutton# These templates serve as examples of how to build, deploy, and share LangChain applications using Databutton. You can create user interfaces with Streamlit, automate tasks by scheduling Python code, and store files and data in the built-in store. Examples include a Chatbot interface with conversational memory, a Personal search engine, and a starter template for LangChain apps. Deploying and sharing is just one click away. previous Dependents next Deploying LLMs in Production Contents Anyscale Streamlit Gradio (on Hugging Face) Chainlit Beam Vercel FastAPI + Vercel Kinsta Fly.io
https://langchain.readthedocs.io/en/latest/ecosystem/deployments.html
cf5217c4d644-0
.md .pdf Tracing Contents Tracing Walkthrough Changing Sessions Tracing# By enabling tracing in your LangChain runs, you’ll be able to more effectively visualize, step through, and debug your chains and agents. First, you should install tracing and set up your environment properly. You can use either a locally hosted version of this (uses Docker) or a cloud hosted version (in closed alpha). If you’re interested in using the hosted platform, please fill out the form here. Locally Hosted Setup Cloud Hosted Setup Tracing Walkthrough# When you first access the UI, you should see a page with your tracing sessions. An initial one “default” should already be created for you. A session is just a way to group traces together. If you click on a session, it will take you to a page with no recorded traces that says “No Runs.” You can create a new session with the new session form. If we click on the default session, we can see that to start we have no traces stored. If we now start running chains and agents with tracing enabled, we will see data show up here. To do so, we can run this notebook as an example. After running it, we will see an initial trace show up. From here we can explore the trace at a high level by clicking on the arrow to show nested runs. We can keep on clicking further and further down to explore deeper and deeper. We can also click on the “Explore” button of the top level run to dive even deeper. Here, we can see the inputs and outputs in full, as well as all the nested traces. We can keep on exploring each of these nested traces in more detail. For example, here is the lowest level trace with the exact inputs/outputs to the LLM. Changing Sessions# To initially record traces to a session other than "default", you can set the LANGCHAIN_SESSION environment variable to the name of the session you want to record to: import os os.environ["LANGCHAIN_TRACING"] = "true" os.environ["LANGCHAIN_SESSION"] = "my_session" # Make sure this session actually exists. You can create a new session in the UI. To switch sessions mid-script or mid-notebook, do NOT set the LANGCHAIN_SESSION environment variable. Instead: langchain.set_tracing_callback_manager(session_name="my_session") previous Deploying LLMs in Production next Model Comparison Contents Tracing Walkthrough Changing Sessions By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 08, 2023.
https://langchain.readthedocs.io/en/latest/additional_resources/tracing.html
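For reference, here is a minimal sketch of the kind of run that produces a trace, in place of the notebook linked above (hypothetical example; it assumes an OpenAI API key is configured in the environment):

```python
import os

# Enable tracing; runs are recorded to the "default" session unless LANGCHAIN_SESSION is set.
os.environ["LANGCHAIN_TRACING"] = "true"

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = OpenAI(temperature=0)
prompt = PromptTemplate.from_template("What is a good name for a company that makes {product}?")
chain = LLMChain(llm=llm, prompt=prompt)

# This call should now show up as a top-level trace in the tracing UI.
chain.run("colorful socks")
```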
cd8629718a26-0
.rst .pdf Deploying LLMs in Production Contents Outline Designing a Robust LLM Application Service Monitoring Fault tolerance Zero down time upgrade Load balancing Maintaining Cost-Efficiency and Scalability Self-hosting models Resource Management and Auto-Scaling Utilizing Spot Instances Independent Scaling Batching requests Ensuring Rapid Iteration Model composition Cloud providers Infrastructure as Code (IaC) CI/CD Deploying LLMs in Production# In today’s fast-paced technological landscape, the use of Large Language Models (LLMs) is rapidly expanding. As a result, it’s crucial for developers to understand how to effectively deploy these models in production environments. LLM interfaces typically fall into two categories: Case 1: Utilizing External LLM Providers (OpenAI, Anthropic, etc.). In this scenario, most of the computational burden is handled by the LLM providers, while LangChain simplifies the implementation of business logic around these services. This approach includes features such as prompt templating, chat message generation, caching, vector embedding database creation, preprocessing, etc. Case 2: Self-hosted Open-Source Models. Alternatively, developers can opt to use smaller, yet comparably capable, self-hosted open-source LLM models. This approach can significantly decrease costs, latency, and privacy concerns associated with transferring data to external LLM providers. Regardless of the framework that forms the backbone of your product, deploying LLM applications comes with its own set of challenges. It’s vital to understand the trade-offs and key considerations when evaluating serving frameworks. Outline# This guide aims to provide a comprehensive overview of the requirements for deploying LLMs in a production setting, focusing on: Designing a Robust LLM Application Service Maintaining Cost-Efficiency Ensuring Rapid Iteration Understanding these components is crucial when assessing serving systems. LangChain integrates with several open-source projects designed to tackle these issues, providing a robust framework for productionizing your LLM applications. Some notable frameworks include: Ray Serve BentoML Modal These links will provide further information on each ecosystem, assisting you in finding the best fit for your LLM deployment needs. Designing a Robust LLM Application Service# When deploying an LLM service in production, it’s imperative to provide a seamless user experience free from outages. Achieving 24/7 service availability involves creating and maintaining several sub-systems surrounding your application. Monitoring# Monitoring forms an integral part of any system running in a production environment. In the context of LLMs, it is essential to monitor both performance and quality metrics. Performance Metrics: These metrics provide insights into the efficiency and capacity of your model. Here are some key examples: Query per second (QPS): This measures the number of queries your model processes in a second, offering insights into its utilization. Latency: This metric quantifies the delay from when your client sends a request to when they receive a response. Tokens Per Second (TPS): This represents the number of tokens your model can generate in a second. Quality Metrics: These metrics are typically customized according to the business use-case. For instance, how does the output of your system compare to a baseline, such as a previous version? Although these metrics can be calculated offline, you need to log the necessary data to use them later.
Fault tolerance# Your application may encounter errors such as exceptions in your model inference or business logic code, causing failures and disrupting traffic. Other potential issues could arise from the machine running your application, such as unexpected hardware breakdowns or loss of spot-instances during high-demand periods. One way to mitigate these risks is by increasing redundancy through replica scaling and implementing recovery mechanisms for failed replicas. However, model replicas aren’t the only potential points of failure. It’s essential to build resilience against various failures that could occur at any point in your stack. Zero down time upgrade# System upgrades are often necessary but can result in service disruptions if not handled correctly. One way to prevent downtime during upgrades is by implementing a smooth transition process from the old version to the new one. Ideally, the new version of your LLM service is deployed, and traffic gradually shifts from the old to the new version, maintaining a constant QPS throughout the process. Load balancing# Load balancing, in simple terms, is a technique to distribute work evenly across multiple computers, servers, or other resources to optimize the utilization of the system, maximize throughput, minimize response time, and avoid overload of any single resource. Think of it as a traffic officer directing cars (requests) to different roads (servers) so that no single road becomes too congested.
https://langchain.readthedocs.io/en/latest/additional_resources/deploy_llms.html
cd8629718a26-1
There are several strategies for load balancing. For example, one common method is the Round Robin strategy, where each request is sent to the next server in line, cycling back to the first when all servers have received a request. This works well when all servers are equally capable. However, if some servers are more powerful than others, you might use a Weighted Round Robin or Least Connections strategy, where more requests are sent to the more powerful servers, or to those currently handling the fewest active requests. Let’s imagine you’re running a LLM chain. If your application becomes popular, you could have hundreds or even thousands of users asking questions at the same time. If one server gets too busy (high load), the load balancer would direct new requests to another server that is less busy. This way, all your users get a timely response and the system remains stable. Maintaining Cost-Efficiency and Scalability# Deploying LLM services can be costly, especially when you’re handling a large volume of user interactions. Charges by LLM providers are usually based on tokens used, making a chat system inference on these models potentially expensive. However, several strategies can help manage these costs without compromising the quality of the service. Self-hosting models# Several smaller and open-source LLMs are emerging to tackle the issue of reliance on LLM providers. Self-hosting allows you to maintain similar quality to LLM provider models while managing costs. The challenge lies in building a reliable, high-performing LLM serving system on your own machines. Resource Management and Auto-Scaling# Computational logic within your application requires precise resource allocation. For instance, if part of your traffic is served by an OpenAI endpoint and another part by a self-hosted model, it’s crucial to allocate suitable resources for each. Auto-scaling—adjusting resource allocation based on traffic—can significantly impact the cost of running your application. This strategy requires a balance between cost and responsiveness, ensuring neither resource over-provisioning nor compromised application responsiveness. Utilizing Spot Instances# On platforms like AWS, spot instances offer substantial cost savings, typically priced at about a third of on-demand instances. The trade-off is a higher crash rate, necessitating a robust fault-tolerance mechanism for effective use. Independent Scaling# When self-hosting your models, you should consider independent scaling. For example, if you have two translation models, one fine-tuned for French and another for Spanish, incoming requests might necessitate different scaling requirements for each. Batching requests# In the context of Large Language Models, batching requests can enhance efficiency by better utilizing your GPU resources. GPUs are inherently parallel processors, designed to handle multiple tasks simultaneously. If you send individual requests to the model, the GPU might not be fully utilized as it’s only working on a single task at a time. On the other hand, by batching requests together, you’re allowing the GPU to work on multiple tasks at once, maximizing its utilization and improving inference speed. This not only leads to cost savings but can also improve the overall latency of your LLM service. In summary, managing costs while scaling your LLM services requires a strategic approach. 
Utilizing self-hosting models, managing resources effectively, employing auto-scaling, using spot instances, independently scaling models, and batching requests are key strategies to consider. Open-source libraries such as Ray Serve and BentoML are designed to deal with these complexities. Ensuring Rapid Iteration# The LLM landscape is evolving at an unprecedented pace, with new libraries and model architectures being introduced constantly. Consequently, it’s crucial to avoid tying yourself to a solution specific to one particular framework. This is especially relevant in serving, where changes to your infrastructure can be time-consuming, expensive, and risky. Strive for infrastructure that is not locked into any specific machine learning library or framework, but instead offers a general-purpose, scalable serving layer. Here are some aspects where flexibility plays a key role: Model composition# Deploying systems like LangChain demands the ability to piece together different models and connect them via logic. Take the example of building a natural language input SQL query engine. Querying an LLM and obtaining the SQL command is only part of the system. You need to extract metadata from the connected database, construct a prompt for the LLM, run the SQL query on an engine, collect and feed back the response to the LLM as the query runs, and present the results to the user. This demonstrates the need to seamlessly integrate various complex components built in Python into a dynamic chain of logical blocks that can be served together. Cloud providers# Many hosted solutions are restricted to a single cloud provider, which can limit your options in today’s multi-cloud world. Depending on where your other infrastructure components are built, you might prefer to stick with your chosen cloud provider. Infrastructure as Code (IaC)#
https://langchain.readthedocs.io/en/latest/additional_resources/deploy_llms.html
cd8629718a26-2
Infrastructure as Code (IaC)# Rapid iteration also involves the ability to recreate your infrastructure quickly and reliably. This is where Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or Kubernetes YAML files come into play. They allow you to define your infrastructure in code files, which can be version controlled and quickly deployed, enabling faster and more reliable iterations. CI/CD# In a fast-paced environment, implementing CI/CD pipelines can significantly speed up the iteration process. They help automate the testing and deployment of your LLM applications, reducing the risk of errors and enabling faster feedback and iteration. previous Deployments next Tracing Contents Outline Designing a Robust LLM Application Service Monitoring Fault tolerance Zero down time upgrade Load balancing Maintaining Cost-Efficiency and Scalability Self-hosting models Resource Management and Auto-Scaling Utilizing Spot Instances Independent Scaling Batching requests Ensuring Rapid Iteration Model composition Cloud providers Infrastructure as Code (IaC) CI/CD By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 08, 2023.
https://langchain.readthedocs.io/en/latest/additional_resources/deploy_llms.html
a2f0049aa52d-0
.ipynb .pdf Model Comparison Model Comparison# Constructing your language model application will likely involve choosing between many different options of prompts, models, and even chains to use. When doing so, you will want to compare these different options on different inputs in an easy, flexible, and intuitive way. LangChain provides the concept of a ModelLaboratory to test out and try different models. from langchain import LLMChain, OpenAI, Cohere, HuggingFaceHub, PromptTemplate from langchain.model_laboratory import ModelLaboratory llms = [ OpenAI(temperature=0), Cohere(model="command-xlarge-20221108", max_tokens=20, temperature=0), HuggingFaceHub(repo_id="google/flan-t5-xl", model_kwargs={"temperature":1}) ] model_lab = ModelLaboratory.from_llms(llms) model_lab.compare("What color is a flamingo?") Input: What color is a flamingo? OpenAI Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1} Flamingos are pink. Cohere Params: {'model': 'command-xlarge-20221108', 'max_tokens': 20, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0} Pink HuggingFaceHub Params: {'repo_id': 'google/flan-t5-xl', 'temperature': 1} pink prompt = PromptTemplate(template="What is the capital of {state}?", input_variables=["state"]) model_lab_with_prompt = ModelLaboratory.from_llms(llms, prompt=prompt) model_lab_with_prompt.compare("New York") Input: New York OpenAI Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1} The capital of New York is Albany. Cohere Params: {'model': 'command-xlarge-20221108', 'max_tokens': 20, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0} The capital of New York is Albany. HuggingFaceHub Params: {'repo_id': 'google/flan-t5-xl', 'temperature': 1} st john s from langchain import SelfAskWithSearchChain, SerpAPIWrapper open_ai_llm = OpenAI(temperature=0) search = SerpAPIWrapper() self_ask_with_search_openai = SelfAskWithSearchChain(llm=open_ai_llm, search_chain=search, verbose=True) cohere_llm = Cohere(temperature=0, model="command-xlarge-20221108") search = SerpAPIWrapper() self_ask_with_search_cohere = SelfAskWithSearchChain(llm=cohere_llm, search_chain=search, verbose=True) chains = [self_ask_with_search_openai, self_ask_with_search_cohere] names = [str(open_ai_llm), str(cohere_llm)] model_lab = ModelLaboratory(chains, names=names) model_lab.compare("What is the hometown of the reigning men's U.S. Open champion?") Input: What is the hometown of the reigning men's U.S. Open champion? OpenAI Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1} > Entering new chain... What is the hometown of the reigning men's U.S. Open champion? Are follow up questions needed here: Yes. Follow up: Who is the reigning men's U.S. Open champion? Intermediate answer: Carlos Alcaraz. Follow up: Where is Carlos Alcaraz from? Intermediate answer: El Palmar, Spain. So the final answer is: El Palmar, Spain > Finished chain. So the final answer is: El Palmar, Spain
https://langchain.readthedocs.io/en/latest/additional_resources/model_laboratory.html
a2f0049aa52d-1
> Finished chain. So the final answer is: El Palmar, Spain Cohere Params: {'model': 'command-xlarge-20221108', 'max_tokens': 256, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0} > Entering new chain... What is the hometown of the reigning men's U.S. Open champion? Are follow up questions needed here: Yes. Follow up: Who is the reigning men's U.S. Open champion? Intermediate answer: Carlos Alcaraz. So the final answer is: Carlos Alcaraz > Finished chain. So the final answer is: Carlos Alcaraz previous Tracing next YouTube By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 08, 2023.
https://langchain.readthedocs.io/en/latest/additional_resources/model_laboratory.html
93d8bb37c16b-0
YouTube#
This is a collection of LangChain videos on YouTube.

⛓️Official LangChain YouTube channel⛓️#

Introduction to LangChain with Harrison Chase, creator of LangChain#
Building the Future with LLMs, LangChain, & Pinecone by Pinecone
LangChain and Weaviate with Harrison Chase and Bob van Luijt - Weaviate Podcast #36 by Weaviate • Vector Database
LangChain Demo + Q&A with Harrison Chase by Full Stack Deep Learning
LangChain Agents: Build Personal Assistants For Your Data (Q&A with Harrison Chase and Mayo Oshin) by Chat with data
⛓️ LangChain “Agents in Production” Webinar by LangChain

Videos (sorted by views)#
Building AI LLM Apps with LangChain (and more?) - LIVE STREAM by Nicholas Renotte
First look - ChatGPT + WolframAlpha (GPT-3.5 and Wolfram|Alpha via LangChain by James Weaver) by Dr Alan D. Thompson
LangChain explained - The hottest new Python framework by AssemblyAI
Chatbot with INFINITE MEMORY using OpenAI & Pinecone - GPT-3, Embeddings, ADA, Vector DB, Semantic by David Shapiro ~ AI
LangChain for LLMs is… basically just an Ansible playbook by David Shapiro ~ AI
Build your own LLM Apps with LangChain & GPT-Index by 1littlecoder
BabyAGI - New System of Autonomous AI Agents with LangChain by 1littlecoder
Run BabyAGI with Langchain Agents (with Python Code) by 1littlecoder
How to Use Langchain With Zapier | Write and Send Email with GPT-3 | OpenAI API Tutorial by StarMorph AI
Use Your Locally Stored Files To Get Response From GPT - OpenAI | Langchain | Python by Shweta Lodha
Langchain JS | How to Use GPT-3, GPT-4 to Reference your own Data | OpenAI Embeddings Intro by StarMorph AI
The easiest way to work with large language models | Learn LangChain in 10min by Sophia Yang
4 Autonomous AI Agents: “Westworld” simulation BabyAGI, AutoGPT, Camel, LangChain by Sophia Yang
AI CAN SEARCH THE INTERNET? Langchain Agents + OpenAI ChatGPT by tylerwhatsgood
Query Your Data with GPT-4 | Embeddings, Vector Databases | Langchain JS Knowledgebase by StarMorph AI
Weaviate + LangChain for LLM apps presented by Erika Cardenas by Weaviate • Vector Database
Langchain Overview — How to Use Langchain & ChatGPT by Python In Office
Langchain Overview - How to Use Langchain & ChatGPT by Python In Office
Custom langchain Agent & Tools with memory. Turn any Python function into langchain tool with Gpt 3 by echohive
LangChain: Run Language Models Locally - Hugging Face Models by Prompt Engineering
ChatGPT with any YouTube video using langchain and chromadb by echohive
How to Talk to a PDF using LangChain and ChatGPT by Automata Learning Lab
Langchain Document Loaders Part 1: Unstructured Files by Merk
LangChain - Prompt Templates (what all the best prompt engineers use) by Nick Daigler
LangChain. Crear aplicaciones Python impulsadas por GPT by Jesús Conde
Easiest Way to Use GPT In Your Products | LangChain Basics Tutorial by Rachel Woods
BabyAGI + GPT-4 Langchain Agent with Internet Access by tylerwhatsgood
Learning LLM Agents. How does it actually work? LangChain, AutoGPT & OpenAI by Arnoldas Kemeklis
Get Started with LangChain in Node.js by Developers Digest
LangChain + OpenAI tutorial: Building a Q&A system w/ own text data by Samuel Chan
Langchain + Zapier Agent by Merk
Connecting the Internet with ChatGPT (LLMs) using Langchain And Answers Your Questions by Kamalraj M M
Build More Powerful LLM Applications for Business’s with LangChain (Beginners Guide) by No Code Blackbox
⛓️ LangFlow LLM Agent Demo for 🦜🔗LangChain by Cobus Greyling
⛓️ Chatbot Factory: Streamline Python Chatbot Creation with LLMs and Langchain by Finxter
⛓️ LangChain Tutorial - ChatGPT mit eigenen Daten by Coding Crashkurse
https://langchain.readthedocs.io/en/latest/additional_resources/youtube.html
93d8bb37c16b-1
⛓️ Chat with a CSV | LangChain Agents Tutorial (Beginners) by GoDataProf
⛓️ Introdução ao Langchain - #Cortes - Live DataHackers by Prof. João Gabriel Lima
⛓️ LangChain: Level up ChatGPT !? | LangChain Tutorial Part 1 by Code Affinity
⛓️ KI schreibt krasses Youtube Skript 😲😳 | LangChain Tutorial Deutsch by SimpleKI
⛓️ Chat with Audio: Langchain, Chroma DB, OpenAI, and Assembly AI by AI Anytime
⛓️ QA over documents with Auto vector index selection with Langchain router chains by echohive
⛓️ Build your own custom LLM application with Bubble.io & Langchain (No Code & Beginner friendly) by No Code Blackbox
⛓️ Simple App to Question Your Docs: Leveraging Streamlit, Hugging Face Spaces, LangChain, and Claude! by Chris Alexiuk
⛓️ LANGCHAIN AI- ConstitutionalChainAI + Databutton AI ASSISTANT Web App by Avra
⛓️ LANGCHAIN AI AUTONOMOUS AGENT WEB APP - 👶 BABY AGI 🤖 with EMAIL AUTOMATION using DATABUTTON by Avra
⛓️ The Future of Data Analysis: Using A.I. Models in Data Analysis (LangChain) by Absent Data
⛓️ Memory in LangChain | Deep dive (python) by Eden Marco
⛓️ 9 LangChain UseCases | Beginner’s Guide | 2023 by Data Science Basics
⛓️ Use Large Language Models in Jupyter Notebook | LangChain | Agents & Indexes by Abhinaw Tiwari
⛓️ How to Talk to Your Langchain Agent | 11 Labs + Whisper by VRSEN
⛓️ LangChain Deep Dive: 5 FUN AI App Ideas To Build Quickly and Easily by James NoCode
⛓️ BEST OPEN Alternative to OPENAI’s EMBEDDINGs for Retrieval QA: LangChain by Prompt Engineering
⛓️ LangChain 101: Models by Mckay Wrigley
⛓️ LangChain with JavaScript Tutorial #1 | Setup & Using LLMs by Leon van Zyl
⛓️ LangChain Overview & Tutorial for Beginners: Build Powerful AI Apps Quickly & Easily (ZERO CODE) by James NoCode
⛓️ LangChain In Action: Real-World Use Case With Step-by-Step Tutorial by Rabbitmetrics
⛓️ Summarizing and Querying Multiple Papers with LangChain by Automata Learning Lab
⛓️ Using Langchain (and Replit) through Tana, ask Google/Wikipedia/Wolfram Alpha to fill out a table by Stian Håklev
⛓️ Langchain PDF App (GUI) | Create a ChatGPT For Your PDF in Python by Alejandro AO - Software & Ai
⛓️ Auto-GPT with LangChain 🔥 | Create Your Own Personal AI Assistant by Data Science Basics
⛓️ Create Your OWN Slack AI Assistant with Python & LangChain by Dave Ebbelaar
⛓️ How to Create LOCAL Chatbots with GPT4All and LangChain [Full Guide] by Liam Ottley
⛓️ Build a Multilingual PDF Search App with LangChain, Cohere and Bubble by Menlo Park Lab
⛓️ Building a LangChain Agent (code-free!) Using Bubble and Flowise by Menlo Park Lab
⛓️ Build a LangChain-based Semantic PDF Search App with No-Code Tools Bubble and Flowise by Menlo Park Lab
⛓️ LangChain Memory Tutorial | Building a ChatGPT Clone in Python by Alejandro AO - Software & Ai
⛓️ ChatGPT For Your DATA | Chat with Multiple Documents Using LangChain by Data Science Basics
⛓️ Llama Index: Chat with Documentation using URL Loader by Merk
⛓️ Using OpenAI, LangChain, and Gradio to Build Custom GenAI Applications by David Hundley

⛓ icon marks a new video [last update 2023-05-15]
https://langchain.readthedocs.io/en/latest/additional_resources/youtube.html