id (stringlengths 14-15) | text (stringlengths 27-2.12k) | source (stringlengths 49-118)
---|---|---|
39dc43079643-3 | {}, 'model_name': 'text-davinci-003'}) | https://python.langchain.com/docs/modules/model_io/models/llms/streaming_llm |
fdd7a14f1358-0 | Caching | 🦜️🔗 Langchain | https://python.langchain.com/docs/modules/model_io/models/llms/llm_caching |
fdd7a14f1358-1 | Caching: LangChain provides an optional caching layer for LLMs. This is useful for two reasons: it can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times. | https://python.langchain.com/docs/modules/model_io/models/llms/llm_caching |
fdd7a14f1358-2 | It can speed up your application by reducing the number of API calls you make to the LLM provider. import langchain from langchain.llms import OpenAI # To make the caching really obvious, let's use a slower model. llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2) In Memory Cache: from langchain.cache import InMemoryCache langchain.llm_cache = InMemoryCache() # The first time, it is not yet in cache, so it should take longer llm.predict("Tell me a joke") CPU times: user 35.9 ms, sys: 28.6 ms, total: 64.6 ms Wall time: 4.83 s "\n\nWhy couldn't the bicycle stand up by itself? It was...two tired!" # The second time it is, so it goes faster llm.predict("Tell me a joke") CPU times: user 238 µs, sys: 143 µs, total: 381 µs Wall time: 1.76 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.' SQLite Cache: rm .langchain.db # We can do the same thing with a SQLite cache from langchain.cache import SQLiteCache langchain.llm_cache = SQLiteCache(database_path=".langchain.db") # The first time, it is not yet in cache, so it should take longer llm.predict("Tell me a joke") CPU times: user 17 ms, sys: 9.76 ms, total: 26.7 ms Wall time: 825 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.' # The | https://python.langchain.com/docs/modules/model_io/models/llms/llm_caching |
fdd7a14f1358-3 | did the chicken cross the road?\n\nTo get to the other side.' # The second time it is, so it goes faster llm.predict("Tell me a joke") CPU times: user 2.46 ms, sys: 1.23 ms, total: 3.7 ms Wall time: 2.67 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.' Optional Caching in Chains: You can also turn off caching for particular nodes in chains. Note that because of certain interfaces, it's often easier to construct the chain first and then edit the LLM afterwards. As an example, we will load a summarizer map-reduce chain. We will cache results for the map step, but not for the combine step. llm = OpenAI(model_name="text-davinci-002") no_cache_llm = OpenAI(model_name="text-davinci-002", cache=False) from langchain.text_splitter import CharacterTextSplitter from langchain.chains.mapreduce import MapReduceChain text_splitter = CharacterTextSplitter() with open('../../../state_of_the_union.txt') as f: state_of_the_union = f.read() texts = text_splitter.split_text(state_of_the_union) from langchain.docstore.document import Document docs = [Document(page_content=t) for t in texts[:3]] from langchain.chains.summarize import load_summarize_chain chain = load_summarize_chain(llm, chain_type="map_reduce", reduce_llm=no_cache_llm) chain.run(docs) CPU times: user 452 ms, sys: 60.3 ms, total: 512 ms Wall time: 5.09 s '\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will | https://python.langchain.com/docs/modules/model_io/models/llms/llm_caching |
fdd7a14f1358-4 | '\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure. In response to Russian aggression in Ukraine, the United States is joining with European allies to impose sanctions and isolate Russia. American forces are being mobilized to protect NATO countries in the event that Putin decides to keep moving west. The Ukrainians are bravely fighting back, but the next few weeks will be hard for them. Putin will pay a high price for his actions in the long run. Americans should not be alarmed, as the United States is taking action to protect its interests and allies.' When we run it again, we see that it runs substantially faster but the final answer is different. This is due to caching at the map steps, but not at the reduce step. chain.run(docs) CPU times: user 11.5 ms, sys: 4.33 ms, total: 15.8 ms Wall time: 1.04 s '\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure.' rm .langchain.db sqlite.db | https://python.langchain.com/docs/modules/model_io/models/llms/llm_caching |
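The caching cells in the rows above are flattened by extraction; here they are reassembled into one runnable sketch. It targets the 2023-era API shown on this page and assumes an OpenAI API key is configured and that the notebook's `state_of_the_union.txt` path exists:

```python
import langchain
from langchain.cache import InMemoryCache, SQLiteCache
from langchain.llms import OpenAI

# A deliberately slower model makes the effect of caching obvious.
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)

# In-memory cache: the first call hits the API, the repeat is served locally.
langchain.llm_cache = InMemoryCache()
llm.predict("Tell me a joke")  # slow: not yet cached (seconds)
llm.predict("Tell me a joke")  # fast: answered from memory (milliseconds)

# SQLite cache: the same idea, persisted to disk across processes.
langchain.llm_cache = SQLiteCache(database_path=".langchain.db")
llm.predict("Tell me a joke")

# Optional caching in chains: cache the map step, but not the combine step.
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document
from langchain.text_splitter import CharacterTextSplitter

no_cache_llm = OpenAI(model_name="text-davinci-002", cache=False)

text_splitter = CharacterTextSplitter()
with open("../../../state_of_the_union.txt") as f:  # path from the original notebook
    texts = text_splitter.split_text(f.read())
docs = [Document(page_content=t) for t in texts[:3]]

chain = load_summarize_chain(llm, chain_type="map_reduce", reduce_llm=no_cache_llm)
chain.run(docs)  # rerunning is faster, but the reduce step still calls the API
```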
4bd05156a8fc-0 | Async API | 🦜️🔗 Langchain | https://python.langchain.com/docs/modules/model_io/models/llms/async_llm |
4bd05156a8fc-1 | Async API: LangChain provides async support for LLMs by leveraging the asyncio library. Async support is particularly useful for calling multiple LLMs concurrently, as these calls are network-bound. Currently, OpenAI, PromptLayerOpenAI, ChatOpenAI and Anthropic are supported, but async support for other LLMs is on the roadmap. You can use the agenerate method to call an OpenAI LLM asynchronously. import time import asyncio from langchain.llms import OpenAI def generate_serially(): llm = OpenAI(temperature=0.9) for _ in range(10): resp = llm.generate(["Hello, how are you?"]) print(resp.generations[0][0].text) async def async_generate(llm): resp = await llm.agenerate(["Hello, how are you?"]) print(resp.generations[0][0].text) async def generate_concurrently(): llm = OpenAI(temperature=0.9) tasks = [async_generate(llm) for _ in range(10)] await asyncio.gather(*tasks) s = time.perf_counter() # If running this outside of Jupyter, use asyncio.run(generate_concurrently()) await generate_concurrently() elapsed = time.perf_counter() - | https://python.langchain.com/docs/modules/model_io/models/llms/async_llm |
4bd05156a8fc-2 | asyncio.run(generate_concurrently()) await generate_concurrently() elapsed = time.perf_counter() - s print("\033[1m" + f"Concurrent executed in {elapsed:0.2f} seconds." + "\033[0m") s = time.perf_counter() generate_serially() elapsed = time.perf_counter() - s print("\033[1m" + f"Serial executed in {elapsed:0.2f} seconds." + "\033[0m") I'm doing well, thank you. How about you? I'm doing well, thank you. How about you? I'm doing well, how about you? I'm doing well, thank you. How about you? I'm doing well, thank you. How about you? I'm doing well, thank you. How about yourself? I'm doing well, thank you! How about you? I'm doing well, thank you. How about you? I'm doing well, thank you! How about you? I'm doing well, thank you. How about you? Concurrent executed in 1.39 seconds. I'm doing well, thank you. How about you? I'm doing well, thank you. How about you? I'm doing well, thank you. | https://python.langchain.com/docs/modules/model_io/models/llms/async_llm |
4bd05156a8fc-3 | How about you? I'm doing well, thank you. How about you? I'm doing well, thank you. How about you? I'm doing well, thank you. How about yourself? I'm doing well, thanks for asking. How about you? I'm doing well, thanks! How about you? I'm doing well, thank you. How about you? I'm doing well, thank you. How about yourself? I'm doing well, thanks for asking. How about you? Serial executed in 5.77 seconds.PreviousLLMsNextCustom LLMCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | https://python.langchain.com/docs/modules/model_io/models/llms/async_llm |
748f54307791-0 | Custom LLM | 🦜️🔗 Langchain | https://python.langchain.com/docs/modules/model_io/models/llms/custom_llm |
748f54307791-1 | Custom LLM: This notebook goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is supported in LangChain. There is only one required thing that a custom LLM needs to implement: a _call method that takes in a string, some optional stop words, and returns a string. There is a second optional thing it can implement: an _identifying_params property that is used to help with printing of this class. It should return a dictionary. Let's implement a very simple custom LLM that just returns the first N characters of the input. from typing import Any, List, Mapping, Optional from langchain.callbacks.manager import CallbackManagerForLLMRun from langchain.llms.base import LLM class CustomLLM(LLM): n: int @property def _llm_type(self) -> str: return "custom" def _call( self, prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None, ) -> str: if stop is not None: | https://python.langchain.com/docs/modules/model_io/models/llms/custom_llm |
748f54307791-2 | if stop is not None: raise ValueError("stop kwargs are not permitted.") return prompt[: self.n] @property def _identifying_params(self) -> Mapping[str, Any]: """Get the identifying parameters.""" return {"n": self.n}We can now use this as an any other LLM.llm = CustomLLM(n=10)llm("This is a foobar thing") 'This is a 'We can also print the LLM and see its custom print.print(llm) CustomLLM Params: {'n': 10}PreviousAsync APINextFake LLMCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | https://python.langchain.com/docs/modules/model_io/models/llms/custom_llm |
3ef201ea458b-0 | Fake LLM | 🦜️🔗 Langchain | https://python.langchain.com/docs/modules/model_io/models/llms/fake_llm |
3ef201ea458b-1 | Fake LLM: We expose a fake LLM class that can be used for testing. This allows you to mock out calls to the LLM and simulate what would happen if the LLM responded in a certain way. In this notebook we go over how to use this. We start by using the FakeLLM in an agent. from langchain.llms.fake import FakeListLLM from langchain.agents import load_tools from langchain.agents import initialize_agent from langchain.agents import AgentType tools = load_tools(["python_repl"]) responses = ["Action: Python REPL\nAction Input: print(2 + 2)", "Final Answer: 4"] llm = FakeListLLM(responses=responses) agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True) agent.run("whats 2 + 2") > Entering new AgentExecutor chain... Action: Python REPL Action Input: print(2 + 2) Observation: 4 Thought: Final Answer: 4 > Finished chain. '4' | https://python.langchain.com/docs/modules/model_io/models/llms/fake_llm |
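The same example in runnable form; the scripted responses are consumed one per LLM call, which is what makes the agent's "reasoning" fully deterministic:

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms.fake import FakeListLLM

tools = load_tools(["python_repl"])

# One response per expected LLM call, returned in order.
responses = [
    "Action: Python REPL\nAction Input: print(2 + 2)",
    "Final Answer: 4",
]
llm = FakeListLLM(responses=responses)

agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("whats 2 + 2")  # -> '4'
```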
1ffd35d5b49e-0 | Tracking token usage | 🦜️🔗 Langchain | https://python.langchain.com/docs/modules/model_io/models/llms/token_usage_tracking |
1ffd35d5b49e-1 | Tracking token usage: This notebook goes over how to track your token usage for specific calls. It is currently only implemented for the OpenAI API. Let's first look at an extremely simple example of tracking token usage for a single LLM call. from langchain.llms import OpenAI from langchain.callbacks import get_openai_callback llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2) with get_openai_callback() as cb: result = llm("Tell me a joke") print(cb) Tokens Used: 42 Prompt Tokens: 4 Completion Tokens: 38 Successful Requests: 1 Total Cost (USD): $0.00084 Anything inside the context manager will get tracked. Here's an example of using it to track multiple calls in sequence. with get_openai_callback() as cb: result = llm("Tell me a joke") result2 = llm("Tell me a joke") print(cb.total_tokens) 91 If a chain or agent with multiple steps in it is used, it will track all those steps. from langchain.agents import load_tools from langchain.agents import initialize_agent from langchain.agents import AgentType from langchain.llms import | https://python.langchain.com/docs/modules/model_io/models/llms/token_usage_tracking |
1ffd35d5b49e-2 | import initialize_agent from langchain.agents import AgentType from langchain.llms import OpenAI llm = OpenAI(temperature=0) tools = load_tools(["serpapi", "llm-math"], llm=llm) agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True) with get_openai_callback() as cb: response = agent.run( "Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?" ) print(f"Total Tokens: {cb.total_tokens}") print(f"Prompt Tokens: {cb.prompt_tokens}") print(f"Completion Tokens: {cb.completion_tokens}") print(f"Total Cost (USD): ${cb.total_cost}") > Entering new AgentExecutor chain... I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power. Action: Search Action Input: "Olivia Wilde boyfriend" Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling. Thought: I need to find out Harry Styles' age. Action: Search Action Input: "Harry Styles age" Observation: 29 years Thought: I need to calculate 29 raised to the 0.23 power. Action: Calculator Action Input: 29^0.23 | https://python.langchain.com/docs/modules/model_io/models/llms/token_usage_tracking |
1ffd35d5b49e-3 | power. Action: Calculator Action Input: 29^0.23 Observation: Answer: 2.169459462491557 Thought: I now know the final answer. Final Answer: Harry Styles, Olivia Wilde's boyfriend, is 29 years old and his age raised to the 0.23 power is 2.169459462491557. > Finished chain. Total Tokens: 1506 Prompt Tokens: 1350 Completion Tokens: 156 Total Cost (USD): $0.03012PreviousStreamingNextChat modelsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | https://python.langchain.com/docs/modules/model_io/models/llms/token_usage_tracking |
2a13942a7f10-0 | Serialization | 🦜️🔗 Langchain | https://python.langchain.com/docs/modules/model_io/models/llms/llm_serialization |
2a13942a7f10-1 | Serialization: This notebook walks through how to write and read an LLM configuration to and from disk. This is useful if you want to save the configuration for a given LLM (e.g., the provider, the temperature, etc.). from langchain.llms import OpenAI from langchain.llms.loading import load_llm Loading: First, let's go over loading an LLM from disk. LLMs can be saved on disk in two formats: json or yaml. No matter the extension, they are loaded in the same way. cat llm.json { "model_name": "text-davinci-003", "temperature": 0.7, "max_tokens": 256, "top_p": 1.0, "frequency_penalty": 0.0, "presence_penalty": 0.0, "n": 1, "best_of": 1, "request_timeout": null, "_type": "openai" } llm = load_llm("llm.json") cat llm.yaml | https://python.langchain.com/docs/modules/model_io/models/llms/llm_serialization |
2a13942a7f10-2 | }llm = load_llm("llm.json")cat llm.yaml _type: openai best_of: 1 frequency_penalty: 0.0 max_tokens: 256 model_name: text-davinci-003 n: 1 presence_penalty: 0.0 request_timeout: null temperature: 0.7 top_p: 1.0llm = load_llm("llm.yaml")Saving​If you want to go from an LLM in memory to a serialized version of it, you can do so easily by calling the .save method. Again, this supports both json and yaml.llm.save("llm.json")llm.save("llm.yaml")PreviousCachingNextStreamingLoadingSavingCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | https://python.langchain.com/docs/modules/model_io/models/llms/llm_serialization |
67d4c143adc6-0 | Prompts | 🦜️🔗 Langchain | https://python.langchain.com/docs/modules/model_io/prompts/ |
67d4c143adc6-1 | Prompts: The new way of programming models is through prompts. A prompt refers to the input to the model. This input is often constructed from multiple components. LangChain provides several classes and functions to make constructing and working with prompts easy. Prompt templates: Parametrize model inputs. Example selectors: Dynamically select examples to include in prompts. | https://python.langchain.com/docs/modules/model_io/prompts/ |
ed2d9b3812b9-0 | Prompt templates | 🦜️🔗 Langchain | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/ |
ed2d9b3812b9-1 | Prompt templates: Language models take text as input; that text is commonly referred to as a prompt. Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input. | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/ |
ed2d9b3812b9-2 | LangChain provides several classes and functions to make constructing and working with prompts easy. What is a prompt template? A prompt template refers to a reproducible way to generate a prompt. It contains a text string ("the template") that can take in a set of parameters from the end user and generate a prompt. A prompt template can contain: instructions to the language model, a set of few-shot examples to help the language model generate a better response, and a question to the language model. Here's the simplest example: from langchain import PromptTemplate template = """\ You are a naming consultant for new companies. What is a good name for a company that makes {product}?""" prompt = PromptTemplate.from_template(template) prompt.format(product="colorful socks") You are a naming consultant for new companies. What is a good name for a company that makes colorful socks? Create a prompt template: You can create simple hardcoded prompts using the PromptTemplate class. Prompt templates can take any number of input variables, and can be formatted to generate a prompt. from langchain import PromptTemplate # An example prompt with no input variables no_input_prompt = PromptTemplate(input_variables=[], template="Tell me a joke.") no_input_prompt.format() # -> "Tell me a joke." # An example prompt with one input variable one_input_prompt = PromptTemplate(input_variables=["adjective"], template="Tell me a {adjective} joke.") one_input_prompt.format(adjective="funny") # -> "Tell me a funny joke." # An example prompt with multiple input variables multiple_input_prompt = PromptTemplate( input_variables=["adjective", "content"], template="Tell me a {adjective} joke about {content}.") multiple_input_prompt.format(adjective="funny", content="chickens") # -> "Tell me a funny joke about chickens." If you do not wish to specify input_variables manually, you can also create a PromptTemplate using the from_template | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/ |
ed2d9b3812b9-3 | you do not wish to specify input_variables manually, you can also create a PromptTemplate using from_template class method. langchain will automatically infer the input_variables based on the template passed.template = "Tell me a {adjective} joke about {content}."prompt_template = PromptTemplate.from_template(template)prompt_template.input_variables# -> ['adjective', 'content']prompt_template.format(adjective="funny", content="chickens")# -> Tell me a funny joke about chickens.You can create custom prompt templates that format the prompt in any way you want. For more information, see Custom Prompt Templates.Chat prompt template​Chat Models take a list of chat messages as input - this list commonly referred to as a prompt. | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/ |
ed2d9b3812b9-4 | These chat messages differ from raw string (which you would pass into a LLM model) in that every message is associated with a role.For example, in OpenAI Chat Completion API, a chat message can be associated with the AI, human or system role. The model is supposed to follow instruction from system chat message more closely.LangChain provides several prompt templates to make constructing and working with prompts easily. You are encouraged to use these chat related prompt templates instead of PromptTemplate when querying chat models to fully exploit the potential of underlying chat model.from langchain.prompts import ( ChatPromptTemplate, PromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate,)from langchain.schema import ( AIMessage, HumanMessage, SystemMessage)To create a message template associated with a role, you use MessagePromptTemplate.For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:template="You are a helpful assistant that translates {input_language} to {output_language}."system_message_prompt = SystemMessagePromptTemplate.from_template(template)human_template="{text}"human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)If you wanted to construct the MessagePromptTemplate more directly, you could create a PromptTemplate outside and then pass it in, eg:prompt=PromptTemplate( template="You are a helpful assistant that translates {input_language} to {output_language}.", input_variables=["input_language", "output_language"],)system_message_prompt_2 = SystemMessagePromptTemplate(prompt=prompt)assert system_message_prompt == system_message_prompt_2After that, you can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_prompt -- this returns a PromptValue, which you can | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/ |
ed2d9b3812b9-5 | You can use ChatPromptTemplate's format_prompt -- this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])# get a chat completion from the formatted messageschat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages() [SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}), HumanMessage(content='I love programming.', additional_kwargs={})]PreviousPromptsNextConnecting to a Feature StoreWhat is a prompt template?CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/ |
61740173cc04-0 | Connecting to a Feature Store | 🦜️🔗 Langchain | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/connecting_to_a_feature_store |
61740173cc04-1 | Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/​OPromptsPrompt templatesConnecting to a Feature StoreCustom prompt templateFew-shot prompt templatesFew shot examples for chat modelsFormat template outputTemplate FormatsTypes of MessagePromptTemplatePartial prompt templatesCompositionSerializationPrompt PipeliningValidate templateExample selectorsLanguage modelsOutput parsersData connectionChainsMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesModulesModel I/​OPromptsPrompt templatesConnecting to a Feature StoreOn this pageConnecting to a Feature StoreFeature stores are a concept from traditional machine learning that make sure data fed into models is up-to-date and relevant. For more on this, see here.This concept is extremely relevant when considering putting LLM applications in production. In order to personalize LLM applications, you may want to combine LLMs with up-to-date information about particular users. Feature stores can be a great way to keep that data fresh, and LangChain provides an easy way to combine that data with LLMs.In this notebook we will show how to connect prompt templates to feature stores. The basic idea is to call a feature store from inside a prompt template to retrieve values that are then formatted into the prompt.Feast​To start, we will use the popular open source feature store framework Feast.This assumes you have already run the steps in the README around getting started. We will build of off that example in getting started, and create and LLMChain to write a note to a specific driver regarding their up-to-date statistics.Load Feast Store​Again, this should be set up according to the instructions in the Feast READMEfrom feast import FeatureStore# You may need to update the path depending on where you stored itfeast_repo_path = "../../../../../my_feature_repo/feature_repo/"store = | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/connecting_to_a_feature_store |
61740173cc04-2 | where you stored itfeast_repo_path = "../../../../../my_feature_repo/feature_repo/"store = FeatureStore(repo_path=feast_repo_path)Prompts​Here we will set up a custom FeastPromptTemplate. This prompt template will take in a driver id, look up their stats, and format those stats into a prompt.Note that the input to this prompt template is just driver_id, since that is the only user defined piece (all other variables are looked up inside the prompt template).from langchain.prompts import PromptTemplate, StringPromptTemplatetemplate = """Given the driver's up to date stats, write them note relaying those stats to them.If they have a conversation rate above .5, give them a compliment. Otherwise, make a silly joke about chickens at the end to make them feel betterHere are the drivers stats:Conversation rate: {conv_rate}Acceptance rate: {acc_rate}Average Daily Trips: {avg_daily_trips}Your response:"""prompt = PromptTemplate.from_template(template)class FeastPromptTemplate(StringPromptTemplate): def format(self, **kwargs) -> str: driver_id = kwargs.pop("driver_id") feature_vector = store.get_online_features( features=[ "driver_hourly_stats:conv_rate", "driver_hourly_stats:acc_rate", "driver_hourly_stats:avg_daily_trips", ], entity_rows=[{"driver_id": driver_id}], ).to_dict() | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/connecting_to_a_feature_store |
61740173cc04-3 | driver_id}], ).to_dict() kwargs["conv_rate"] = feature_vector["conv_rate"][0] kwargs["acc_rate"] = feature_vector["acc_rate"][0] kwargs["avg_daily_trips"] = feature_vector["avg_daily_trips"][0] return prompt.format(**kwargs)prompt_template = FeastPromptTemplate(input_variables=["driver_id"])print(prompt_template.format(driver_id=1001)) Given the driver's up to date stats, write them note relaying those stats to them. If they have a conversation rate above .5, give them a compliment. Otherwise, make a silly joke about chickens at the end to make them feel better Here are the drivers stats: Conversation rate: 0.4745151400566101 Acceptance rate: 0.055561766028404236 Average Daily Trips: 936 Your response:Use in a chain​We can now use this in a chain, successfully creating a chain that achieves personalization backed by a feature storefrom langchain.chat_models import ChatOpenAIfrom langchain.chains import LLMChainchain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)chain.run(1001) "Hi there! I wanted to update you on your current stats. Your acceptance rate is 0.055561766028404236 and your average daily trips are 936. While your conversation rate is currently 0.4745151400566101, I have no doubt that with a little extra effort, you'll be able to exceed that .5 mark! Keep up the great work! And remember, even chickens can't always cross the | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/connecting_to_a_feature_store |
61740173cc04-4 | .5 mark! Keep up the great work! And remember, even chickens can't always cross the road, but they still give it their best shot."Tecton​Above, we showed how you could use Feast, a popular open source and self-managed feature store, with LangChain. Our examples below will show a similar integration using Tecton. Tecton is a fully managed feature platform built to orchestrate the complete ML feature lifecycle, from transformation to online serving, with enterprise-grade SLAs.Prerequisites​Tecton Deployment (sign up at https://tecton.ai)TECTON_API_KEY environment variable set to a valid Service Account keyDefine and Load Features​We will use the user_transaction_counts Feature View from the Tecton tutorial as part of a Feature Service. For simplicity, we are only using a single Feature View; however, more sophisticated applications may require more feature views to retrieve the features needed for its prompt.user_transaction_metrics = FeatureService( name = "user_transaction_metrics", features = [user_transaction_counts])The above Feature Service is expected to be applied to a live workspace. For this example, we will be using the "prod" workspace.import tectonworkspace = tecton.get_workspace("prod")feature_service = workspace.get_feature_service("user_transaction_metrics")Prompts​Here we will set up a custom TectonPromptTemplate. This prompt template will take in a user_id , look up their stats, and format those stats into a prompt.Note that the input to this prompt template is just user_id, since that is the only user defined piece (all other variables are looked up inside the prompt template).from langchain.prompts import PromptTemplate, StringPromptTemplatetemplate = """Given the vendor's up to date transaction stats, write them a note based on the following rules:1. If they had a transaction in the | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/connecting_to_a_feature_store |
61740173cc04-5 | write them a note based on the following rules:1. If they had a transaction in the last day, write a short congratulations message on their recent sales2. If no transaction in the last day, but they had a transaction in the last 30 days, playfully encourage them to sell more.3. Always add a silly joke about chickens at the endHere are the vendor's stats:Number of Transactions Last Day: {transaction_count_1d}Number of Transactions Last 30 Days: {transaction_count_30d}Your response:"""prompt = PromptTemplate.from_template(template)class TectonPromptTemplate(StringPromptTemplate): def format(self, **kwargs) -> str: user_id = kwargs.pop("user_id") feature_vector = feature_service.get_online_features( join_keys={"user_id": user_id} ).to_dict() kwargs["transaction_count_1d"] = feature_vector[ "user_transaction_counts.transaction_count_1d_1d" ] kwargs["transaction_count_30d"] = feature_vector[ "user_transaction_counts.transaction_count_30d_1d" ] return prompt.format(**kwargs)prompt_template = TectonPromptTemplate(input_variables=["user_id"])print(prompt_template.format(user_id="user_469998441571")) Given the vendor's up to date transaction stats, write them a note based on the following rules: 1. If they had a transaction in the last day, write a short congratulations message on their recent sales | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/connecting_to_a_feature_store |
61740173cc04-6 | If they had a transaction in the last day, write a short congratulations message on their recent sales 2. If no transaction in the last day, but they had a transaction in the last 30 days, playfully encourage them to sell more. 3. Always add a silly joke about chickens at the end Here are the vendor's stats: Number of Transactions Last Day: 657 Number of Transactions Last 30 Days: 20326 Your response:Use in a chain​We can now use this in a chain, successfully creating a chain that achieves personalization backed by the Tecton Feature Platformfrom langchain.chat_models import ChatOpenAIfrom langchain.chains import LLMChainchain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)chain.run("user_469998441571") 'Wow, congratulations on your recent sales! Your business is really soaring like a chicken on a hot air balloon! Keep up the great work!'Featureform​Finally, we will use Featureform an open-source and enterprise-grade feature store to run the same example. Featureform allows you to work with your infrastructure like Spark or locally to define your feature transformations.Initialize Featureform​You can follow in the instructions in the README to initialize your transformations and features in Featureform.import featureform as ffclient = ff.Client(host="demo.featureform.com")Prompts​Here we will set up a custom FeatureformPromptTemplate. This prompt template will take in the average amount a user pays per transactions.Note that the input to this prompt template is just avg_transaction, since that is the only user defined piece (all other variables are looked up inside the prompt template).from langchain.prompts import PromptTemplate, StringPromptTemplatetemplate = """Given the amount a | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/connecting_to_a_feature_store |
61740173cc04-7 | langchain.prompts import PromptTemplate, StringPromptTemplatetemplate = """Given the amount a user spends on average per transaction, let them know if they are a high roller. Otherwise, make a silly joke about chickens at the end to make them feel betterHere are the user's stats:Average Amount per Transaction: ${avg_transcation}Your response:"""prompt = PromptTemplate.from_template(template)class FeatureformPromptTemplate(StringPromptTemplate): def format(self, **kwargs) -> str: user_id = kwargs.pop("user_id") fpf = client.features([("avg_transactions", "quickstart")], {"user": user_id}) return prompt.format(**kwargs)prompt_template = FeatureformPrompTemplate(input_variables=["user_id"])print(prompt_template.format(user_id="C1410926"))Use in a chain​We can now use this in a chain, successfully creating a chain that achieves personalization backed by the Featureform Feature Platformfrom langchain.chat_models import ChatOpenAIfrom langchain.chains import LLMChainchain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)chain.run("C1410926")PreviousPrompt templatesNextCustom prompt templateFeastLoad Feast StorePromptsUse in a chainTectonPrerequisitesDefine and Load FeaturesPromptsUse in a chainFeatureformInitialize FeatureformPromptsUse in a chainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/connecting_to_a_feature_store |
b0ce5bd30973-0 | Format template output | 🦜️🔗 Langchain | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/format_output |
b0ce5bd30973-1 | Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/​OPromptsPrompt templatesConnecting to a Feature StoreCustom prompt templateFew-shot prompt templatesFew shot examples for chat modelsFormat template outputTemplate FormatsTypes of MessagePromptTemplatePartial prompt templatesCompositionSerializationPrompt PipeliningValidate templateExample selectorsLanguage modelsOutput parsersData connectionChainsMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesModulesModel I/​OPromptsPrompt templatesFormat template outputFormat template outputThe output of the format method is available as string, list of messages and ChatPromptValueAs string:output = chat_prompt.format(input_language="English", output_language="French", text="I love programming.")output 'System: You are a helpful assistant that translates English to French.\nHuman: I love programming.'# or alternativelyoutput_2 = chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_string()assert output == output_2As ChatPromptValuechat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.") ChatPromptValue(messages=[SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}), HumanMessage(content='I love programming.', additional_kwargs={})])As list of Message objectschat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages() [SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}), HumanMessage(content='I love programming.', additional_kwargs={})]PreviousFew shot examples for chat modelsNextTemplate FormatsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/format_output |
9c918c3ebaf5-0 | Types of MessagePromptTemplate | 🦜️🔗 Langchain | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/msg_prompt_templates |
9c918c3ebaf5-1 | Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/​OPromptsPrompt templatesConnecting to a Feature StoreCustom prompt templateFew-shot prompt templatesFew shot examples for chat modelsFormat template outputTemplate FormatsTypes of MessagePromptTemplatePartial prompt templatesCompositionSerializationPrompt PipeliningValidate templateExample selectorsLanguage modelsOutput parsersData connectionChainsMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesModulesModel I/​OPromptsPrompt templatesTypes of MessagePromptTemplateTypes of MessagePromptTemplateLangChain provides different types of MessagePromptTemplate. The most commonly used are AIMessagePromptTemplate, SystemMessagePromptTemplate and HumanMessagePromptTemplate, which create an AI message, system message and human message respectively.However, in cases where the chat model supports taking chat message with arbitrary role, you can use ChatMessagePromptTemplate, which allows user to specify the role name.from langchain.prompts import ChatMessagePromptTemplateprompt = "May the {subject} be with you"chat_message_prompt = ChatMessagePromptTemplate.from_template(role="Jedi", template=prompt)chat_message_prompt.format(subject="force") ChatMessage(content='May the force be with you', additional_kwargs={}, role='Jedi')LangChain also provides MessagesPlaceholder, which gives you full control of what messages to be rendered during formatting. This can be useful when you are uncertain of what role you should be using for your message prompt templates or when you wish to insert a list of messages during formatting.from langchain.prompts import MessagesPlaceholderhuman_prompt = "Summarize our conversation so far in {word_count} words."human_message_template = HumanMessagePromptTemplate.from_template(human_prompt)chat_prompt = ChatPromptTemplate.from_messages([MessagesPlaceholder(variable_name="conversation"), human_message_template])human_message = | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/msg_prompt_templates |
9c918c3ebaf5-2 | ChatPromptTemplate.from_messages([MessagesPlaceholder(variable_name="conversation"), human_message_template])human_message = HumanMessage(content="What is the best way to learn programming?")ai_message = AIMessage(content="""\1. Choose a programming language: Decide on a programming language that you want to learn.2. Start with the basics: Familiarize yourself with the basic programming concepts such as variables, data types and control structures.3. Practice, practice, practice: The best way to learn programming is through hands-on experience\""")chat_prompt.format_prompt(conversation=[human_message, ai_message], word_count="10").to_messages() [HumanMessage(content='What is the best way to learn programming?', additional_kwargs={}), AIMessage(content='1. Choose a programming language: Decide on a programming language that you want to learn. \n\n2. Start with the basics: Familiarize yourself with the basic programming concepts such as variables, data types and control structures.\n\n3. Practice, practice, practice: The best way to learn programming is through hands-on experience', additional_kwargs={}), HumanMessage(content='Summarize our conversation so far in 10 words.', additional_kwargs={})]PreviousTemplate FormatsNextPartial prompt templatesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/msg_prompt_templates |
ac3f67941400-0 | Custom prompt template | 🦜️🔗 Langchain | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/custom_prompt_template |
ac3f67941400-1 | Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/​OPromptsPrompt templatesConnecting to a Feature StoreCustom prompt templateFew-shot prompt templatesFew shot examples for chat modelsFormat template outputTemplate FormatsTypes of MessagePromptTemplatePartial prompt templatesCompositionSerializationPrompt PipeliningValidate templateExample selectorsLanguage modelsOutput parsersData connectionChainsMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesModulesModel I/​OPromptsPrompt templatesCustom prompt templateOn this pageCustom prompt templateLet's suppose we want the LLM to generate English language explanations of a function given its name. To achieve this task, we will create a custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function.Why are custom prompt templates needed?​LangChain provides a set of default prompt templates that can be used to generate prompts for a variety of tasks. However, there may be cases where the default prompt templates do not meet your needs. For example, you may want to create a prompt template with specific dynamic instructions for your language model. In such cases, you can create a custom prompt template.Take a look at the current set of default prompt templates here.Creating a Custom Prompt Template​There are essentially two distinct prompt templates available - string prompt templates and chat prompt templates. String prompt templates provides a simple prompt in string format, while chat prompt templates produces a more structured prompt to be used with a chat API.In this guide, we will create a custom prompt using a string prompt template. To create a custom string prompt template, there are two requirements:It has an input_variables attribute that exposes what input variables the prompt template expects.It exposes a format method that takes in keyword arguments corresponding to the expected input_variables and returns | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/custom_prompt_template |
ac3f67941400-2 | template expects.It exposes a format method that takes in keyword arguments corresponding to the expected input_variables and returns the formatted prompt.We will create a custom prompt template that takes in the function name as input and formats the prompt to provide the source code of the function. To achieve this, let's first create a function that will return the source code of a function given its name.import inspectdef get_source_code(function_name): # Get the source code of the function return inspect.getsource(function_name)Next, we'll create a custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function.from langchain.prompts import StringPromptTemplatefrom pydantic import BaseModel, validatorPROMPT = """\Given the function name and source code, generate an English language explanation of the function.Function Name: {function_name}Source Code:{source_code}Explanation:"""class FunctionExplainerPromptTemplate(StringPromptTemplate, BaseModel): """A custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function.""" @validator("input_variables") def validate_input_variables(cls, v): """Validate that the input variables are correct.""" if len(v) != 1 or "function_name" not in v: raise ValueError("function_name must be the only input_variable.") return v def format(self, **kwargs) -> str: # Get the source code of the function source_code = get_source_code(kwargs["function_name"]) # Generate the prompt to be sent to the language model prompt = PROMPT.format( | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/custom_prompt_template |
ac3f67941400-3 | to be sent to the language model prompt = PROMPT.format( function_name=kwargs["function_name"].__name__, source_code=source_code ) return prompt def _prompt_type(self): return "function-explainer"Use the custom prompt template​Now that we have created a custom prompt template, we can use it to generate prompts for our task.fn_explainer = FunctionExplainerPromptTemplate(input_variables=["function_name"])# Generate a prompt for the function "get_source_code"prompt = fn_explainer.format(function_name=get_source_code)print(prompt) Given the function name and source code, generate an English language explanation of the function. Function Name: get_source_code Source Code: def get_source_code(function_name): # Get the source code of the function return inspect.getsource(function_name) Explanation: PreviousConnecting to a Feature StoreNextFew-shot prompt templatesWhy are custom prompt templates needed?Creating a Custom Prompt TemplateUse the custom prompt templateCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/custom_prompt_template |
f6a3b83733e5-0 | Validate template | 🦜️🔗 Langchain | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/validate |
f6a3b83733e5-1 | Validate template: By default, PromptTemplate will validate the template string by checking whether the input_variables match the variables defined in template. You can disable this behavior by setting validate_template to False. template = "I am learning langchain because {reason}." prompt_template = PromptTemplate(template=template, input_variables=["reason", "foo"]) # ValueError due to extra variables prompt_template = PromptTemplate(template=template, input_variables=["reason", "foo"], validate_template=False) # No error | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/validate |
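As a runnable sketch; the failing constructor is left commented out so the file runs end to end:

```python
from langchain import PromptTemplate

template = "I am learning langchain because {reason}."

# Raises ValueError: "foo" is declared but never used in the template.
# PromptTemplate(template=template, input_variables=["reason", "foo"])

# With validation disabled, the mismatch is silently accepted.
prompt_template = PromptTemplate(
    template=template, input_variables=["reason", "foo"], validate_template=False
)
```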
6233cbd94ca2-0 | Serialization | 🦜️🔗 Langchain | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompt_serialization |
6233cbd94ca2-1 | Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/​OPromptsPrompt templatesConnecting to a Feature StoreCustom prompt templateFew-shot prompt templatesFew shot examples for chat modelsFormat template outputTemplate FormatsTypes of MessagePromptTemplatePartial prompt templatesCompositionSerializationPrompt PipeliningValidate templateExample selectorsLanguage modelsOutput parsersData connectionChainsMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesModulesModel I/​OPromptsPrompt templatesSerializationOn this pageSerializationIt is often preferrable to store prompts not as python code but as files. This can make it easy to share, store, and version prompts. This notebook covers how to do that in LangChain, walking through all the different types of prompts and the different serialization options.At a high level, the following design principles are applied to serialization:Both JSON and YAML are supported. We want to support serialization methods that are human readable on disk, and YAML and JSON are two of the most popular methods for that. Note that this rule applies to prompts. For other assets, like Examples, different serialization methods may be supported.We support specifying everything in one file, or storing different components (templates, examples, etc) in different files and referencing them. For some cases, storing everything in file makes the most sense, but for others it is preferrable to split up some of the assets (long templates, large examples, reusable components). LangChain supports both.There is also a single entry point to load prompts from disk, making it easy to load any type of prompt.# All prompts are loaded through the `load_prompt` function.from langchain.prompts import load_promptPromptTemplate​This section covers examples for loading a PromptTemplate.Loading from YAML​This shows an example of loading a PromptTemplate from | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompt_serialization |
6233cbd94ca2-2 | PromptTemplate. Loading from YAML: This shows an example of loading a PromptTemplate from YAML. cat simple_prompt.yaml _type: prompt input_variables: ["adjective", "content"] template: Tell me a {adjective} joke about {content}. prompt = load_prompt("simple_prompt.yaml") print(prompt.format(adjective="funny", content="chickens")) Tell me a funny joke about chickens. Loading from JSON: This shows an example of loading a PromptTemplate from JSON. cat simple_prompt.json { "_type": "prompt", "input_variables": ["adjective", "content"], "template": "Tell me a {adjective} joke about {content}." } prompt = load_prompt("simple_prompt.json") print(prompt.format(adjective="funny", content="chickens")) Tell me a funny joke about chickens. Loading Template from a File: This shows an example of storing the template in a separate file and then referencing it in the config. Notice that the key changes from template to template_path. cat simple_template.txt Tell me a {adjective} joke about {content}. cat simple_prompt_with_template_file.json { "_type": "prompt", "input_variables": ["adjective", "content"], "template_path": "simple_template.txt" } prompt = load_prompt("simple_prompt_with_template_file.json") print(prompt.format(adjective="funny", content="chickens")) Tell me a funny joke about chickens. FewShotPromptTemplate: This section covers examples for | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompt_serialization |
6233cbd94ca2-3 | me a funny joke about chickens.FewShotPromptTemplate​This section covers examples for loading few shot prompt templates.Examples​This shows an example of what examples stored as json might look like.cat examples.json [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"} ]And here is what the same examples stored as yaml might look like.cat examples.yaml - input: happy output: sad - input: tall output: shortLoading from YAML​This shows an example of loading a few shot example from YAML.cat few_shot_prompt.yaml _type: few_shot input_variables: ["adjective"] prefix: Write antonyms for the following words. example_prompt: _type: prompt input_variables: ["input", "output"] template: "Input: {input}\nOutput: {output}" examples: examples.json suffix: "Input: {adjective}\nOutput:"prompt = load_prompt("few_shot_prompt.yaml")print(prompt.format(adjective="funny")) Write antonyms for the following words. Input: happy Output: sad Input: tall Output: short Input: funny Output:The same | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompt_serialization |
6233cbd94ca2-4 | Output: short Input: funny Output:The same would work if you loaded examples from the yaml file.cat few_shot_prompt_yaml_examples.yaml _type: few_shot input_variables: ["adjective"] prefix: Write antonyms for the following words. example_prompt: _type: prompt input_variables: ["input", "output"] template: "Input: {input}\nOutput: {output}" examples: examples.yaml suffix: "Input: {adjective}\nOutput:"prompt = load_prompt("few_shot_prompt_yaml_examples.yaml")print(prompt.format(adjective="funny")) Write antonyms for the following words. Input: happy Output: sad Input: tall Output: short Input: funny Output:Loading from JSON​This shows an example of loading a few shot example from JSON.cat few_shot_prompt.json { "_type": "few_shot", "input_variables": ["adjective"], "prefix": "Write antonyms for the following words.", "example_prompt": { "_type": "prompt", "input_variables": ["input", "output"], | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompt_serialization |
6233cbd94ca2-5 | "input_variables": ["input", "output"], "template": "Input: {input}\nOutput: {output}" }, "examples": "examples.json", "suffix": "Input: {adjective}\nOutput:" } prompt = load_prompt("few_shot_prompt.json")print(prompt.format(adjective="funny")) Write antonyms for the following words. Input: happy Output: sad Input: tall Output: short Input: funny Output:Examples in the Config​This shows an example of referencing the examples directly in the config.cat few_shot_prompt_examples_in.json { "_type": "few_shot", "input_variables": ["adjective"], "prefix": "Write antonyms for the following words.", "example_prompt": { "_type": "prompt", "input_variables": ["input", "output"], "template": "Input: {input}\nOutput: {output}" }, "examples": [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"} ], | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompt_serialization |
6233cbd94ca2-6 | "tall", "output": "short"} ], "suffix": "Input: {adjective}\nOutput:" } prompt = load_prompt("few_shot_prompt_examples_in.json")print(prompt.format(adjective="funny")) Write antonyms for the following words. Input: happy Output: sad Input: tall Output: short Input: funny Output:Example Prompt from a File​This shows an example of loading the PromptTemplate that is used to format the examples from a separate file. Note that the key changes from example_prompt to example_prompt_path.cat example_prompt.json { "_type": "prompt", "input_variables": ["input", "output"], "template": "Input: {input}\nOutput: {output}" }cat few_shot_prompt_example_prompt.json { "_type": "few_shot", "input_variables": ["adjective"], "prefix": "Write antonyms for the following words.", "example_prompt_path": "example_prompt.json", "examples": "examples.json", "suffix": "Input: {adjective}\nOutput:" } prompt = load_prompt("few_shot_prompt_example_prompt.json")print(prompt.format(adjective="funny")) Write antonyms for the following words. Input: happy Output: sad | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompt_serialization |
6233cbd94ca2-7 | Input: tall Output: short Input: funny Output:PromptTemplate with OutputParser​This shows an example of loading a prompt along with an OutputParser from a file.cat prompt_with_output_parser.json { "input_variables": [ "question", "student_answer" ], "output_parser": { "regex": "(.*?)\\nScore: (.*)", "output_keys": [ "answer", "score" ], "default_output_key": null, "_type": "regex_parser" }, "partial_variables": {}, "template": "Given the following question and student answer, provide a correct answer and score the student answer.\nQuestion: {question}\nStudent Answer: {student_answer}\nCorrect Answer:", "template_format": "f-string", "validate_template": true, "_type": "prompt" }prompt = load_prompt("prompt_with_output_parser.json")prompt.output_parser.parse( "George Washington was | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompt_serialization
6233cbd94ca2-8 | "George Washington was born in 1732 and died in 1799.\nScore: 1/2") {'answer': 'George Washington was born in 1732 and died in 1799.', 'score': '1/2'}PreviousCompositionNextPrompt PipeliningPromptTemplateLoading from YAMLLoading from JSONLoading Template from a FileFewShotPromptTemplateExamplesLoading from YAMLLoading from JSONExamples in the ConfigExample Prompt from a FilePromptTempalte with OutputParserCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompt_serialization |
9d6acd037388-0 | Partial prompt templates | 🦜🔗 Langchain | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/partial
9d6acd037388-1 | Partial prompt templatesLike other methods, it can make sense to "partial" a prompt template, i.e. pass in a subset of the required values to create a new prompt template that expects only the remaining values.LangChain supports this in two ways:Partial formatting with string values.Partial formatting with functions that return string values.These two approaches support different use cases. In the examples below, we go over the motivations for both as well as how to do them in LangChain.Partial With Strings​One common use case for wanting to partial a prompt template is if you get some of the variables before others. For example, suppose you have a prompt template that requires two variables, foo and bar. If you get the foo value early on in the chain, but the bar value later, it can be annoying to wait until you have both variables in the same place to pass them to the prompt template. Instead, you can partial the prompt template with the foo value, and then pass the partialed prompt template along and just use that. Below is an example of doing this:from langchain.prompts import PromptTemplateprompt = PromptTemplate(template="{foo}{bar}", input_variables=["foo", "bar"])partial_prompt = prompt.partial(foo="foo");print(partial_prompt.format(bar="baz")) | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/partial
9d6acd037388-2 | foobazYou can also just initialize the prompt with the partialed variables.prompt = PromptTemplate(template="{foo}{bar}", input_variables=["bar"], partial_variables={"foo": "foo"})print(prompt.format(bar="baz")) foobazPartial With Functions​The other common use is to partial with a function. The use case for this is when you have a variable you know you always want to fetch in a common way. A prime example is date or time. Imagine you have a prompt which you always want to have the current date. You can't hard-code it in the prompt, and passing it along with the other input variables is a bit annoying. In this case, it's very handy to be able to partial the prompt with a function that always returns the current date.from datetime import datetimedef _get_datetime(): now = datetime.now() return now.strftime("%m/%d/%Y, %H:%M:%S")prompt = PromptTemplate( template="Tell me a {adjective} joke about the day {date}", input_variables=["adjective", "date"]);partial_prompt = prompt.partial(date=_get_datetime)print(partial_prompt.format(adjective="funny")) Tell me a funny joke about the day 02/27/2023, 22:15:16You can also just initialize the prompt with the partialed variables, which often makes more sense in this workflow.prompt = PromptTemplate( template="Tell me a {adjective} joke about the day {date}", input_variables=["adjective"], partial_variables={"date": _get_datetime});print(prompt.format(adjective="funny")) Tell me a funny joke about the day | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/partial
9d6acd037388-3 | 02/27/2023, 22:15:16 | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/partial
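The two examples above partial with a string and with a function separately. Below is a sketch of doing both in one template; that the two kinds can be mixed in a single partial_variables dict is an assumption extrapolated from those examples:

```python
# Hypothetical combination of a static partial ("topic") and a callable partial ("date").
from datetime import datetime
from langchain.prompts import PromptTemplate

def _get_datetime():
    return datetime.now().strftime("%m/%d/%Y, %H:%M:%S")

prompt = PromptTemplate(
    template="Tell me a {adjective} joke about {topic} on {date}",
    input_variables=["adjective"],
    partial_variables={"topic": "the weather", "date": _get_datetime},
)
# Only the remaining variable needs to be supplied at format time.
print(prompt.format(adjective="funny"))
```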
c7cbbc38ccf7-0 | Few shot examples for chat models | 🦜🔗 Langchain | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/few_shot_examples_chat
c7cbbc38ccf7-1 | Few shot examples for chat modelsThis notebook covers how to use few shot examples in chat models.There does not appear to be solid consensus on how best to do few shot prompting. As a result, we are not solidifying any abstractions around this yet but rather using existing abstractions.Alternating Human/AI messages​The first way of doing few shot prompting relies on using alternating human/ai messages. See an example of this below.from langchain.chat_models import ChatOpenAIfrom langchain import PromptTemplate, LLMChainfrom langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate,)from langchain.schema import AIMessage, HumanMessage, SystemMessagechat = ChatOpenAI(temperature=0)template = "You are a helpful assistant that translates english to pirate."system_message_prompt = SystemMessagePromptTemplate.from_template(template)example_human = HumanMessagePromptTemplate.from_template("Hi")example_ai = AIMessagePromptTemplate.from_template("Argh me mateys")human_template = "{text}"human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)chat_prompt = ChatPromptTemplate.from_messages( [system_message_prompt, | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/few_shot_examples_chat
c7cbbc38ccf7-2 | example_human, example_ai, human_message_prompt])chain = LLMChain(llm=chat, prompt=chat_prompt)# get a chat completion from the formatted messageschain.run("I love programming.") "I be lovin' programmin', me hearty!"System Messages​OpenAI provides an optional name parameter that they also recommend using in conjunction with system messages to do few shot prompting. Here is an example of how to do that.template = "You are a helpful assistant that translates english to pirate."system_message_prompt = SystemMessagePromptTemplate.from_template(template)example_human = SystemMessagePromptTemplate.from_template( "Hi", additional_kwargs={"name": "example_user"})example_ai = SystemMessagePromptTemplate.from_template( "Argh me mateys", additional_kwargs={"name": "example_assistant"})human_template = "{text}"human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)chat_prompt = ChatPromptTemplate.from_messages( [system_message_prompt, example_human, example_ai, human_message_prompt])chain = LLMChain(llm=chat, prompt=chat_prompt)# get a chat completion from the formatted messageschain.run("I love programming.") "I be lovin' programmin', me hearty." | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/few_shot_examples_chat
b06e3621626c-0 | Composition | 🦜🔗 Langchain | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompt_composition
b06e3621626c-1 | CompositionThis notebook goes over how to compose multiple prompts together. This can be useful when you want to reuse parts of prompts. This can be done with a PipelinePrompt. A PipelinePrompt consists of two main parts:Final prompt: This is the final prompt that is returned.Pipeline prompts: This is a list of tuples, consisting of a string name and a prompt template. Each prompt template will be formatted and then passed to future prompt templates as a variable with the same name.from langchain.prompts.pipeline import PipelinePromptTemplatefrom langchain.prompts.prompt import PromptTemplatefull_template = """{introduction}{example}{start}"""full_prompt = PromptTemplate.from_template(full_template)introduction_template = """You are impersonating {person}."""introduction_prompt = PromptTemplate.from_template(introduction_template)example_template = """Here's an example of an interaction: Q: {example_q}A: {example_a}"""example_prompt = PromptTemplate.from_template(example_template)start_template = """Now, do this for real!Q: {input}A:"""start_prompt = PromptTemplate.from_template(start_template)input_prompts = [ ("introduction", introduction_prompt), ("example", example_prompt), ("start", start_prompt)]pipeline_prompt = PipelinePromptTemplate(final_prompt=full_prompt, | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompt_composition
b06e3621626c-2 | ("start", start_prompt)]pipeline_prompt = PipelinePromptTemplate(final_prompt=full_prompt, pipeline_prompts=input_prompts)pipeline_prompt.input_variables ['example_a', 'person', 'example_q', 'input']print(pipeline_prompt.format( person="Elon Musk", example_q="What's your favorite car?", example_a="Tesla", input="What's your favorite social media site?")) You are impersonating Elon Musk. Here's an example of an interaction: Q: What's your favorite car? A: Tesla Now, do this for real! Q: What's your favorite social media site? A: PreviousPartial prompt templatesNextSerializationCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompt_composition |
435e761c8d69-0 | Prompt Pipelining | 🦜🔗 Langchain | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompts_pipelining
435e761c8d69-1 | Prompt PipeliningThe idea behind prompt pipelining is to expose a user-friendly interface for composing different parts of prompts together. You can do this with either string prompts or chat prompts. Constructing prompts this way allows for easy reuse of components.String Prompt Pipelining​When working with string prompts, each template is joined together. You can work with either prompts directly or strings (the first element in the list needs to be a prompt).from langchain.prompts import PromptTemplateprompt = ( PromptTemplate.from_template("Tell me a joke about {topic}") + ", make it funny" + "\n\nand in {language}")prompt PromptTemplate(input_variables=['language', 'topic'], output_parser=None, partial_variables={}, template='Tell me a joke about {topic}, make it | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompts_pipelining
435e761c8d69-2 | funny\n\nand in {language}', template_format='f-string', validate_template=True)prompt.format(topic="sports", language="spanish") 'Tell me a joke about sports, make it funny\n\nand in spanish'You can also use it in an LLMChain, just like before.from langchain.chat_models import ChatOpenAIfrom langchain.chains import LLMChainmodel = ChatOpenAI()chain = LLMChain(llm=model, prompt=prompt)chain.run(topic="sports", language="spanish") '¿Por qué el futbolista llevaba un paraguas al partido?\n\nPorque pronosticaban lluvia de goles.'Chat Prompt Pipelining​A chat prompt is made up of a list of messages. Purely for developer experience, we've added a convenient way to create these prompts. In this pipeline, each new element is a new message in the final prompt.from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplatefrom langchain.schema import HumanMessage, AIMessage, SystemMessageFirst, let's initialize the base ChatPromptTemplate with a system message. It doesn't have to start with a system message, but it's often good practice.prompt = SystemMessage(content="You are a nice pirate")You can then easily create a pipeline combining it with other messages OR message templates. | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompts_pipelining
435e761c8d69-3 | Use a Message when there are no variables to be formatted, and a MessageTemplate when there are variables to be formatted. You can also use just a string -> note that this will automatically get inferred as a HumanMessagePromptTemplate.new_prompt = ( prompt + HumanMessage(content="hi") + AIMessage(content="what?") + "{input}")Under the hood, this creates an instance of the ChatPromptTemplate class, so you can use it just as you did before!new_prompt.format_messages(input="i said hi") [SystemMessage(content='You are a nice pirate', additional_kwargs={}), HumanMessage(content='hi', additional_kwargs={}, example=False), AIMessage(content='what?', additional_kwargs={}, example=False), HumanMessage(content='i said hi', additional_kwargs={}, example=False)]You can also use it in an LLMChain, just like before.from langchain.chat_models import ChatOpenAIfrom langchain.chains import LLMChainmodel = ChatOpenAI()chain = LLMChain(llm=model, prompt=new_prompt)chain.run("i said hi") 'Oh, hello! How can I assist you today?' | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompts_pipelining
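The reuse motivation mentioned at the top can be made concrete: build one base prompt and pipeline different suffixes onto it. A small sketch (the templates here are invented for illustration):

```python
# One shared base, two task-specific prompts built by concatenation.
from langchain.prompts import PromptTemplate

base = PromptTemplate.from_template("You are a concise assistant.\n\n")
translate = base + "Translate to {language}: {text}"
summarize = base + "Summarize in one sentence: {text}"

print(translate.format(language="Spanish", text="Good morning"))
print(summarize.format(text="LangChain lets you compose prompts."))
```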
bc41991bc783-0 | Few-shot prompt templates | 🦜🔗 Langchain | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/few_shot_examples
bc41991bc783-1 | Few-shot prompt templatesIn this tutorial, we'll learn how to create a prompt template that uses few shot examples. A few shot prompt template can be constructed from either a set of examples, or from an Example Selector object.Use Case​In this tutorial, we'll configure few shot examples for self-ask with search.Using an example set​Create the example set​To get started, create a list of few shot examples. Each example should be a dictionary with the keys being the input variables and the values being the values for those input variables.from langchain.prompts.few_shot import FewShotPromptTemplatefrom langchain.prompts.prompt import PromptTemplateexamples = [ { "question": "Who lived longer, Muhammad Ali or Alan Turing?", "answer": """Are follow up questions needed here: Yes.Follow up: How old was Muhammad Ali when he died?Intermediate answer: Muhammad Ali was 74 years old when he died.Follow up: How old was Alan Turing when he died?Intermediate answer: Alan Turing was 41 years old when he died.So the final answer is: Muhammad Ali""" }, { "question": "When was the founder of craigslist born?", "answer": """Are follow up questions | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/few_shot_examples
bc41991bc783-2 | needed here: Yes.Follow up: Who was the founder of craigslist?Intermediate answer: Craigslist was founded by Craig Newmark.Follow up: When was Craig Newmark born?Intermediate answer: Craig Newmark was born on December 6, 1952.So the final answer is: December 6, 1952""" }, { "question": "Who was the maternal grandfather of George Washington?", "answer":"""Are follow up questions needed here: Yes.Follow up: Who was the mother of George Washington?Intermediate answer: The mother of George Washington was Mary Ball Washington.Follow up: Who was the father of Mary Ball Washington?Intermediate answer: The father of Mary Ball Washington was Joseph Ball.So the final answer is: Joseph Ball""" }, { "question": "Are both the directors of Jaws and Casino Royale from the same country?", "answer":"""Are follow up questions needed here: Yes.Follow up: Who is the director of Jaws?Intermediate Answer: The director of Jaws is Steven Spielberg.Follow up: Where is Steven Spielberg from?Intermediate Answer: The United States.Follow up: Who is the director of Casino Royale?Intermediate Answer: The director of Casino Royale is Martin Campbell.Follow up: Where is Martin Campbell from?Intermediate Answer: New Zealand.So the final answer is: No""" }]Create a formatter for the few shot examples​Configure a formatter that will format the few shot examples into a string. This formatter should be a PromptTemplate object.example_prompt = PromptTemplate(input_variables=["question", "answer"], template="Question: {question}\n{answer}")print(example_prompt.format(**examples[0])) Question: Who lived longer, Muhammad Ali or Alan Turing? Are follow up questions | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/few_shot_examples
bc41991bc783-3 | needed here: Yes. Follow up: How old was Muhammad Ali when he died? Intermediate answer: Muhammad Ali was 74 years old when he died. Follow up: How old was Alan Turing when he died? Intermediate answer: Alan Turing was 41 years old when he died. So the final answer is: Muhammad Ali Feed examples and formatter to FewShotPromptTemplate​Finally, create a FewShotPromptTemplate object. This object takes in the few shot examples and the formatter for the few shot examples.prompt = FewShotPromptTemplate( examples=examples, example_prompt=example_prompt, suffix="Question: {input}", input_variables=["input"])print(prompt.format(input="Who was the father of Mary Ball Washington?")) Question: Who lived longer, Muhammad Ali or Alan Turing? Are follow up questions needed here: Yes. Follow up: How old was Muhammad Ali when he died? Intermediate answer: Muhammad Ali was 74 years old when he died. Follow up: How old was Alan Turing when he died? Intermediate answer: Alan Turing was 41 years old when he died. So the final answer is: Muhammad Ali Question: When was the founder of craigslist born? Are follow up questions needed here: Yes. Follow up: Who was the founder of craigslist? Intermediate answer: Craigslist was founded by Craig Newmark. Follow up: When was Craig Newmark born? Intermediate answer: Craig Newmark was born on December | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/few_shot_examples
bc41991bc783-4 | 6, 1952. So the final answer is: December 6, 1952 Question: Who was the maternal grandfather of George Washington? Are follow up questions needed here: Yes. Follow up: Who was the mother of George Washington? Intermediate answer: The mother of George Washington was Mary Ball Washington. Follow up: Who was the father of Mary Ball Washington? Intermediate answer: The father of Mary Ball Washington was Joseph Ball. So the final answer is: Joseph Ball Question: Are both the directors of Jaws and Casino Royale from the same country? Are follow up questions needed here: Yes. Follow up: Who is the director of Jaws? Intermediate Answer: The director of Jaws is Steven Spielberg. Follow up: Where is Steven Spielberg from? Intermediate Answer: The United States. Follow up: Who is the director of Casino Royale? Intermediate Answer: The director of Casino Royale is Martin Campbell. Follow up: Where is Martin Campbell from? Intermediate Answer: New Zealand. So the final answer is: No Question: Who was the father of Mary Ball Washington?Using an example selector​Feed examples into ExampleSelector​We will reuse the example set and the formatter from the previous section. However, instead of feeding the examples directly into the FewShotPromptTemplate object, we will feed them into an ExampleSelector object.In this tutorial, we will use the SemanticSimilarityExampleSelector | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/few_shot_examples
bc41991bc783-5 | class. This class selects few shot examples based on their similarity to the input. It uses an embedding model to compute the similarity between the input and the few shot examples, as well as a vector store to perform the nearest neighbor search.from langchain.prompts.example_selector import SemanticSimilarityExampleSelectorfrom langchain.vectorstores import Chromafrom langchain.embeddings import OpenAIEmbeddingsexample_selector = SemanticSimilarityExampleSelector.from_examples( # This is the list of examples available to select from. examples, # This is the embedding class used to produce embeddings which are used to measure semantic similarity. OpenAIEmbeddings(), # This is the VectorStore class that is used to store the embeddings and do a similarity search over. Chroma, # This is the number of examples to produce. k=1)# Select the most similar example to the input.question = "Who was the father of Mary Ball Washington?"selected_examples = example_selector.select_examples({"question": question})print(f"Examples most similar to the input: {question}")for example in selected_examples: print("\n") for k, v in example.items(): print(f"{k}: {v}") Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. Examples most similar to the input: Who was the father of Mary Ball Washington? question: Who was the maternal grandfather of George Washington? answer: Are follow up questions needed here: Yes. Follow up: Who was the mother of George Washington? Intermediate answer: The mother | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/few_shot_examples
bc41991bc783-6 | of George Washington was Mary Ball Washington. Follow up: Who was the father of Mary Ball Washington? Intermediate answer: The father of Mary Ball Washington was Joseph Ball. So the final answer is: Joseph Ball Feed example selector into FewShotPromptTemplate​Finally, create a FewShotPromptTemplate object. This object takes in the example selector and the formatter for the few shot examples.prompt = FewShotPromptTemplate( example_selector=example_selector, example_prompt=example_prompt, suffix="Question: {input}", input_variables=["input"])print(prompt.format(input="Who was the father of Mary Ball Washington?")) Question: Who was the maternal grandfather of George Washington? Are follow up questions needed here: Yes. Follow up: Who was the mother of George Washington? Intermediate answer: The mother of George Washington was Mary Ball Washington. Follow up: Who was the father of Mary Ball Washington? Intermediate answer: The father of Mary Ball Washington was Joseph Ball. So the final answer is: Joseph Ball Question: Who was the father of Mary Ball Washington? | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/few_shot_examples
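Either version of the prompt can then be handed to a model. A sketch wiring the selector-backed prompt from above into a chain (assumes an OpenAI API key is configured; the model choice is arbitrary):

```python
# Run the few-shot prompt end to end; `prompt` is the FewShotPromptTemplate above.
from langchain.llms import OpenAI
from langchain.chains import LLMChain

chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
print(chain.run(input="Who was the father of Mary Ball Washington?"))
```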
0893f37a1cef-0 | Template Formats | 🦜🔗 Langchain
Template FormatsPromptTemplate by default uses Python f-string as its template format. However, it can also use other formats like jinja2, specified through the template_format argument.To use the jinja2 template:from langchain.prompts import PromptTemplatejinja2_template = "Tell me a {{ adjective }} joke about {{ content }}"prompt = PromptTemplate.from_template(jinja2_template, template_format="jinja2")prompt.format(adjective="funny", content="chickens")# Output: Tell me a funny joke about chickens.To use the Python f-string template:from langchain.prompts import PromptTemplatefstring_template = """Tell me a {adjective} joke about {content}"""prompt = PromptTemplate.from_template(fstring_template)prompt.format(adjective="funny", content="chickens")# Output: Tell me a funny joke about chickens.Currently, only jinja2 and f-string are supported. For other formats, kindly raise an issue on the Github page. | https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/formats
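The main practical reason to reach for jinja2 is that it supports loops and conditionals, which f-strings cannot express. A hedged sketch (variable inference for jinja2 templates is assumed to work as in the example above; requires the jinja2 package):

```python
# A jinja2 template that loops over a list variable.
from langchain.prompts import PromptTemplate

template = (
    "Answer using these facts:"
    "{% for fact in facts %}\n- {{ fact }}{% endfor %}"
    "\nQuestion: {{ question }}"
)
prompt = PromptTemplate.from_template(template, template_format="jinja2")
print(prompt.format(
    facts=["Paris is in France", "France is in Europe"],
    question="Where is Paris?",
))
```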
eaf7a9ee61d1-0 | Example selectors | 🦜🔗 Langchain
Example selectorsIf you have a large number of examples, you may need to select which ones to include in the prompt. The Example Selector is the class responsible for doing so.The base interface is defined below:class BaseExampleSelector(ABC): """Interface for selecting examples to include in prompts.""" @abstractmethod def select_examples(self, input_variables: Dict[str, str]) -> List[dict]: """Select which examples to use based on the inputs."""The only method it needs to expose is a select_examples method. This takes in the input variables and then returns a list of examples. It is up to each specific implementation how those examples are selected. Let's take a look at some below. | https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/
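To make the interface concrete, here is a minimal, purely illustrative implementation that ignores the inputs and always returns the first k stored examples:

```python
# A trivial BaseExampleSelector implementation, for illustration only.
from typing import Dict, List
from langchain.prompts.example_selector.base import BaseExampleSelector

class FirstKExampleSelector(BaseExampleSelector):
    def __init__(self, examples: List[Dict[str, str]], k: int = 2):
        self.examples = examples
        self.k = k

    def add_example(self, example: Dict[str, str]) -> None:
        self.examples.append(example)

    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        # Ignore the inputs entirely and take the first k examples.
        return self.examples[: self.k]

selector = FirstKExampleSelector(
    [{"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"}]
)
print(selector.select_examples({"adjective": "big"}))
```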
372d6076e13c-0 | Select by length | 🦜🔗 Langchain | https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/length_based
372d6076e13c-1 | Select by lengthThis example selector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more.from langchain.prompts import PromptTemplatefrom langchain.prompts import FewShotPromptTemplatefrom langchain.prompts.example_selector import LengthBasedExampleSelector# These are a lot of examples of a pretend task of creating antonyms.examples = [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"}, {"input": "energetic", "output": "lethargic"}, {"input": "sunny", "output": "gloomy"}, {"input": "windy", "output": "calm"},]example_prompt = PromptTemplate( input_variables=["input", "output"], template="Input: {input}\nOutput: {output}",)example_selector = LengthBasedExampleSelector( # These are the examples it has available to choose from. examples=examples, # This is the PromptTemplate being | https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/length_based
372d6076e13c-2 | used to format the examples. example_prompt=example_prompt, # This is the maximum length that the formatted examples should be. # Length is measured by the get_text_length function below. max_length=25, # This is the function used to get the length of a string, which is used # to determine which examples to include. It is commented out because # it is provided as a default value if none is specified. # get_text_length: Callable[[str], int] = lambda x: len(re.split("\n| ", x)))dynamic_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, prefix="Give the antonym of every input", suffix="Input: {adjective}\nOutput:", input_variables=["adjective"],)# An example with small input, so it selects all examples.print(dynamic_prompt.format(adjective="big")) Give the antonym of every input Input: happy Output: sad Input: tall Output: short Input: energetic Output: lethargic Input: sunny Output: gloomy Input: windy Output: calm Input: big Output:# An example with long input, so it selects only one example.long_string = "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else"print(dynamic_prompt.format(adjective=long_string)) Give the antonym of every input | https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/length_based |
372d6076e13c-3 | Input: happy Output: sad Input: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else Output:# You can add an example to an example selector as well.new_example = {"input": "big", "output": "small"}dynamic_prompt.example_selector.add_example(new_example)print(dynamic_prompt.format(adjective="enthusiastic")) Give the antonym of every input Input: happy Output: sad Input: tall Output: short Input: energetic Output: lethargic Input: sunny Output: gloomy Input: windy Output: calm Input: big Output: small Input: enthusiastic Output: | https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/length_based
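By default the selector measures length in whitespace-separated words, as the commented-out get_text_length default shows. A sketch of swapping in a token-based counter instead, assuming the tiktoken package is installed (the encoding name and budget are arbitrary):

```python
# Measure example length in model tokens rather than words.
import tiktoken
from langchain.prompts.example_selector import LengthBasedExampleSelector

enc = tiktoken.get_encoding("cl100k_base")

token_selector = LengthBasedExampleSelector(
    examples=examples,              # the antonym examples from above
    example_prompt=example_prompt,
    max_length=60,                  # now interpreted as a token budget
    get_text_length=lambda text: len(enc.encode(text)),
)
print(token_selector.select_examples({"adjective": "big"}))
```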
a76093271449-0 | Select by maximal marginal relevance (MMR) | 🦜🔗 Langchain | https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/mmr
a76093271449-1 | Select by maximal marginal relevance (MMR)The MaxMarginalRelevanceExampleSelector selects examples based on a combination of similarity to the inputs and diversity. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs, and then iteratively adding them while penalizing them for closeness to already selected examples.from langchain.prompts.example_selector import ( MaxMarginalRelevanceExampleSelector, SemanticSimilarityExampleSelector,)from langchain.vectorstores import FAISSfrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.prompts import FewShotPromptTemplate, PromptTemplateexample_prompt = PromptTemplate( input_variables=["input", "output"], template="Input: {input}\nOutput: {output}",)# These are a lot of examples of a pretend task of creating antonyms.examples = [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"}, {"input": "energetic", "output": "lethargic"}, {"input": "sunny", "output": "gloomy"}, {"input": "windy", "output": "calm"},]example_selector = | https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/mmr
a76093271449-2 | {"input": "windy", "output": "calm"},]example_selector = MaxMarginalRelevanceExampleSelector.from_examples( # This is the list of examples available to select from. examples, # This is the embedding class used to produce embeddings which are used to measure semantic similarity. OpenAIEmbeddings(), # This is the VectorStore class that is used to store the embeddings and do a similarity search over. FAISS, # This is the number of examples to produce. k=2,)mmr_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, prefix="Give the antonym of every input", suffix="Input: {adjective}\nOutput:", input_variables=["adjective"],)# Input is a feeling, so should select the happy/sad example as the first oneprint(mmr_prompt.format(adjective="worried")) Give the antonym of every input Input: happy Output: sad Input: windy Output: calm Input: worried Output:# Let's compare this to what we would just get if we went solely off of similarity,# by using SemanticSimilarityExampleSelector instead of MaxMarginalRelevanceExampleSelector.example_selector = SemanticSimilarityExampleSelector.from_examples( # This is the list of examples available to select from. examples, # This is the embedding class used to produce embeddings which are used to measure semantic similarity. OpenAIEmbeddings(), # This is the VectorStore class that is used to store | https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/mmr |
a76093271449-3 | the embeddings and do a similarity search over. FAISS, # This is the number of examples to produce. k=2,)similar_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, prefix="Give the antonym of every input", suffix="Input: {adjective}\nOutput:", input_variables=["adjective"],)print(similar_prompt.format(adjective="worried")) Give the antonym of every input Input: happy Output: sad Input: sunny Output: gloomy Input: worried Output: | https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/mmr
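The diversity/similarity trade-off can be tuned by controlling how many candidates are considered before the MMR re-ranking. A sketch; that from_examples accepts a fetch_k argument here is an assumption carried over from the vector-store MMR APIs:

```python
# Fetch a larger candidate pool, then let MMR pick k diverse examples from it.
example_selector = MaxMarginalRelevanceExampleSelector.from_examples(
    examples,
    OpenAIEmbeddings(),
    FAISS,
    k=2,
    fetch_k=4,  # hypothetical: candidate pool size before the MMR re-ranking
)
print(example_selector.select_examples({"adjective": "worried"}))
```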
6ca93fc6b7f1-0 | Select by similarity | 🦜🔗 Langchain | https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/similarity
6ca93fc6b7f1-1 | Select by similarityThis object selects examples based on similarity to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs.from langchain.prompts.example_selector import SemanticSimilarityExampleSelectorfrom langchain.vectorstores import Chromafrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.prompts import FewShotPromptTemplate, PromptTemplateexample_prompt = PromptTemplate( input_variables=["input", "output"], template="Input: {input}\nOutput: {output}",)# These are a lot of examples of a pretend task of creating antonyms.examples = [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"}, {"input": "energetic", "output": "lethargic"}, {"input": "sunny", "output": "gloomy"}, {"input": "windy", "output": "calm"},]example_selector = SemanticSimilarityExampleSelector.from_examples( # This is the list of examples available to select from. examples, # This is the embedding class used to produce embeddings which are used to measure semantic similarity. OpenAIEmbeddings(), # This is the VectorStore | https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/similarity
6ca93fc6b7f1-2 | class that is used to store the embeddings and do a similarity search over. Chroma, # This is the number of examples to produce. k=1)similar_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, prefix="Give the antonym of every input", suffix="Input: {adjective}\nOutput:", input_variables=["adjective"],) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient.# Input is a feeling, so should select the happy/sad exampleprint(similar_prompt.format(adjective="worried")) Give the antonym of every input Input: happy Output: sad Input: worried Output:# Input is a measurement, so should select the tall/short exampleprint(similar_prompt.format(adjective="fat")) Give the antonym of every input Input: happy Output: sad Input: fat Output:# You can add new examples to the SemanticSimilarityExampleSelector as wellsimilar_prompt.example_selector.add_example({"input": "enthusiastic", "output": "apathetic"})print(similar_prompt.format(adjective="joyful")) Give the antonym of every input Input: happy Output: sad Input: joyful Output: | https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/similarity
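The selector can also be queried directly, without formatting a whole prompt, which is handy for checking which stored example is nearest to a given input. A small sketch reusing example_selector from above:

```python
# Inspect the raw nearest-neighbor choices for a few inputs.
for query in ["worried", "fat", "joyful"]:
    picked = example_selector.select_examples({"adjective": query})
    print(query, "->", picked)
```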
00f140b6640f-0 | Select by n-gram overlap | 🦜🔗 Langchain | https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/ngram_overlap
00f140b6640f-1 | Select by n-gram overlapThe NGramOverlapExampleSelector selects and orders examples based on which examples are most similar to the input, according to an ngram overlap score. The ngram overlap score is a float between 0.0 and 1.0, inclusive. The selector allows for a threshold score to be set. Examples with an ngram overlap score less than or equal to the threshold are excluded. The threshold is set to -1.0 by default, so it will not exclude any examples, only reorder them. Setting the threshold to 0.0 will exclude examples that have no ngram overlaps with the input.from langchain.prompts import PromptTemplatefrom langchain.prompts.example_selector.ngram_overlap import NGramOverlapExampleSelectorfrom langchain.prompts import FewShotPromptTemplate, PromptTemplateexample_prompt = PromptTemplate( input_variables=["input", "output"], template="Input: {input}\nOutput: {output}",) | https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/ngram_overlap
00f140b6640f-2 | "lethargic"}, {"input": "sunny", "output": "gloomy"}, {"input": "windy", "output": "calm"},]# These are examples of a fictional translation task.examples = [ {"input": "See Spot run.", "output": "Ver correr a Spot."}, {"input": "My dog barks.", "output": "Mi perro ladra."}, {"input": "Spot can run.", "output": "Spot puede correr."},]example_prompt = PromptTemplate( input_variables=["input", "output"], template="Input: {input}\nOutput: {output}",)example_selector = NGramOverlapExampleSelector( # These are the examples it has available to choose from. examples=examples, # This is the PromptTemplate being used to format the examples. example_prompt=example_prompt, # This is the threshold, at which selector stops. # It is set to -1.0 by default. threshold=-1.0, # For negative threshold: # Selector sorts examples by ngram overlap score, and excludes none. # For threshold greater than 1.0: # Selector excludes all examples, and returns an empty list. # For threshold equal to 0.0: # Selector sorts examples by ngram overlap score, # and excludes those with no ngram overlap with input.)dynamic_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, prefix="Give the Spanish translation of every input", | https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/ngram_overlap |
00f140b6640f-3 | prefix="Give the Spanish translation of every input", suffix="Input: {sentence}\nOutput:", input_variables=["sentence"],)# An example input with large ngram overlap with "Spot can run."# and no overlap with "My dog barks."print(dynamic_prompt.format(sentence="Spot can run fast.")) Give the Spanish translation of every input Input: Spot can run. Output: Spot puede correr. Input: See Spot run. Output: Ver correr a Spot. Input: My dog barks. Output: Mi perro ladra. Input: Spot can run fast. Output:# You can add examples to NGramOverlapExampleSelector as well.new_example = {"input": "Spot plays fetch.", "output": "Spot juega a buscar."}example_selector.add_example(new_example)print(dynamic_prompt.format(sentence="Spot can run fast.")) Give the Spanish translation of every input Input: Spot can run. Output: Spot puede correr. Input: See Spot run. Output: Ver correr a Spot. Input: Spot plays fetch. Output: Spot juega a buscar. Input: My dog barks. Output: Mi perro ladra. Input: Spot can run fast. Output:# You can set a threshold at which examples are excluded.# For example, setting threshold equal to 0.0# excludes examples with no ngram overlaps with input.# Since "My dog barks." has no ngram overlaps with "Spot can | https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/ngram_overlap |
00f140b6640f-4 | run fast."# it is excluded.example_selector.threshold = 0.0print(dynamic_prompt.format(sentence="Spot can run fast.")) Give the Spanish translation of every input Input: Spot can run. Output: Spot puede correr. Input: See Spot run. Output: Ver correr a Spot. Input: Spot plays fetch. Output: Spot juega a buscar. Input: Spot can run fast. Output:# Setting small nonzero thresholdexample_selector.threshold = 0.09print(dynamic_prompt.format(sentence="Spot can play fetch.")) Give the Spanish translation of every input Input: Spot can run. Output: Spot puede correr. Input: Spot plays fetch. Output: Spot juega a buscar. Input: Spot can play fetch. Output:# Setting threshold greater than 1.0example_selector.threshold = 1.0 + 1e-9print(dynamic_prompt.format(sentence="Spot can play fetch.")) Give the Spanish translation of every input Input: Spot can play fetch. Output: | https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/ngram_overlap
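To build intuition for the 0.0 to 1.0 scale, here is an illustrative word-bigram overlap score in plain Python. This is not the library's own formula (its score is computed differently), so treat it purely as a mental model:

```python
# Fraction of the query's word bigrams that also appear in an example.
def bigram_overlap(example: str, query: str) -> float:
    def bigrams(s: str):
        words = s.split()
        return set(zip(words, words[1:]))
    q = bigrams(query)
    return len(q & bigrams(example)) / len(q) if q else 0.0

print(bigram_overlap("Spot can run.", "Spot can run fast."))  # nonzero overlap
print(bigram_overlap("My dog barks.", "Spot can run fast."))  # 0.0, excluded at threshold 0.0
```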
418fc2d33403-0 | Custom example selector | 🦜🔗 Langchain | https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/custom_example_selector
418fc2d33403-1 | Custom example selectorIn this tutorial, we'll create custom example selectors: first one that selects two examples at random, then one that selects every alternate example from a given list (see the sketch at the end).An ExampleSelector must implement two methods:An add_example method which takes in an example and adds it to the ExampleSelector.A select_examples method which takes in input variables (which are meant to be user input) and returns a list of examples to use in the few shot prompt.Let's implement a custom ExampleSelector that just selects two examples at random.Note:
Take a look at the current set of example selector implementations supported in LangChain here. | https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/custom_example_selector |
418fc2d33403-2 | Implement custom example selector​from langchain.prompts.example_selector.base import BaseExampleSelectorfrom typing import Dict, Listimport numpy as npclass CustomExampleSelector(BaseExampleSelector): def __init__(self, examples: List[Dict[str, str]]): self.examples = examples def add_example(self, example: Dict[str, str]) -> None: """Add new example to store for a key.""" self.examples.append(example) def select_examples(self, input_variables: Dict[str, str]) -> List[dict]: """Select which examples to use based on the inputs.""" return np.random.choice(self.examples, size=2, replace=False)Use custom example selector​examples = [ {"foo": "1"}, {"foo": "2"}, {"foo": "3"}]# Initialize example selector.example_selector = CustomExampleSelector(examples)# Select examplesexample_selector.select_examples({"foo": "foo"})# -> array([{'foo': '2'}, {'foo': '3'}], dtype=object)# Add new example to the set of examplesexample_selector.add_example({"foo": "4"})example_selector.examples# -> [{'foo': '1'}, {'foo': '2'}, {'foo': '3'}, {'foo': '4'}]# Select examplesexample_selector.select_examples({"foo": "foo"})# -> array([{'foo': '1'}, {'foo': '4'}], dtype=object) | https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/custom_example_selector
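And here is the every-alternate-example variant mentioned in the introduction, as a sketch built on the same interface:

```python
# Select every other example from the stored list, starting with the first.
from typing import Dict, List
from langchain.prompts.example_selector.base import BaseExampleSelector

class AlternateExampleSelector(BaseExampleSelector):
    def __init__(self, examples: List[Dict[str, str]]):
        self.examples = examples

    def add_example(self, example: Dict[str, str]) -> None:
        self.examples.append(example)

    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        return self.examples[::2]

selector = AlternateExampleSelector([{"foo": "1"}, {"foo": "2"}, {"foo": "3"}])
print(selector.select_examples({"foo": "foo"}))
# -> [{'foo': '1'}, {'foo': '3'}]
```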
d9c61ea199c9-0 | Callbacks | 🦜🔗 Langchain | https://python.langchain.com/docs/modules/callbacks/