Columns:
id: string (14–15 chars)
text: string (27–2.12k chars)
source: string (49–118 chars)
229ad3361e96-0
OpenAPI chain | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-1
OpenAPI chain

This notebook shows an example of using an OpenAPI chain to call an endpoint in natural language, and get back a response in natural language.

from langchain.tools import OpenAPISpec, APIOperation
from langchain.chains import OpenAPIEndpointChain
from langchain.requests import Requests
from langchain.llms import OpenAI

Load the spec

Load a wrapper of the spec (so we can work with it more easily). You can load from a URL or from a local file.

spec = OpenAPISpec.from_url(
    "https://www.klarna.com/us/shopping/public/openai/v0/api-docs/"
)
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-2
Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.

# Alternative: loading from a local file
# spec = OpenAPISpec.from_file("openai_openapi.yaml")

Select the Operation

In order to provide a focused and modular chain, we create a chain specifically for just one of the endpoints. Here we get an API operation from a specified endpoint and method.

operation = APIOperation.from_openapi_spec(spec, "/public/openai/v0/products", "get")

Construct the chain

We can now construct a chain to interact with it. In order to construct such a chain, we will pass in:

- The operation endpoint
- A requests wrapper (can be used to handle authentication, etc.; see the hedged sketch below)
- The LLM to use to interact with it

llm = OpenAI()  # Load a language model
chain = OpenAPIEndpointChain.from_api_operation(
    operation,
    llm,
    requests=Requests(),
    verbose=True,
    return_intermediate_steps=True,  # Return request and response text
)
output = chain("whats the most expensive shirt?")
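The requests wrapper is where authentication would plug in. A minimal sketch, assuming a bearer-token scheme (the header name and token below are placeholder assumptions; the Klarna demo endpoint itself needs no auth):

```python
from langchain.requests import Requests

# Hypothetical bearer token -- substitute whatever scheme your API expects.
authed_requests = Requests(headers={"Authorization": "Bearer <YOUR_API_TOKEN>"})
chain = OpenAPIEndpointChain.from_api_operation(
    operation, llm, requests=authed_requests, verbose=True
)
```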
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-3
> Entering new OpenAPIEndpointChain chain...

> Entering new APIRequesterChain chain...
Prompt after formatting:
You are a helpful AI Assistant. Please provide JSON arguments to agentFunc() based on the user's instructions.

API_SCHEMA: ```typescript
/* API for fetching Klarna product information */
type productsUsingGET = (_: {
/* A precise query that matches one very small category or product that needs to be searched for to find the products the user is looking for. If the user explicitly stated what they want, use that as a query. The query is as specific as possible to the product name or category mentioned by the user in its singular form, and don't contain any clarifiers like latest, newest, cheapest, budget, premium, expensive or similar. The query is always taken from the latest topic, if there is a new topic a new query is started. */
q: string,
/* number of products returned */
size?: number,
/* (Optional) Minimum price in local currency for the product searched for. Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for. */
min_price?: number,
/* (Optional) Maximum price in local currency for the product searched for. Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for. */
max_price?: number,
}) => any;
```

USER_INSTRUCTIONS: "whats the most expensive shirt?"

Your arguments must be plain json provided in a markdown block:

ARGS: ```json
{valid json conforming to API_SCHEMA}
```

Example
-----
ARGS: ```json
{"foo": "bar", "baz": {"qux": "quux"}}
```

The block must be no more than 1 line long, and all arguments must be valid JSON.
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-4
All string arguments must be wrapped in double quotes.
You MUST strictly comply to the types indicated by the provided schema, including all required args.
If you don't have sufficient information to call the function due to things like requiring specific uuid's, you can reply with the following message:

Message: ```text
Concise response requesting the additional information that would make calling the function successful.
```

Begin
-----
ARGS:

> Finished chain.

{"q": "shirt", "size": 1, "max_price": null}

{"products":[{"name":"Burberry Check Poplin Shirt","url":"https://www.klarna.com/us/shopping/pl/cl10001/3201810981/Clothing/Burberry-Check-Poplin-Shirt/?utm_source=openai&ref-site=openai_plugin","price":"$360.00","attributes":["Material:Cotton","Target Group:Man","Color:Gray,Blue,Beige","Properties:Pockets","Pattern:Checkered"]}]}

> Entering new APIResponderChain chain...
Prompt after formatting:
You are a helpful AI assistant trained to answer user queries from API responses.
You attempted to call an API, which
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-5
resulted in:
API_RESPONSE: {"products":[{"name":"Burberry Check Poplin Shirt","url":"https://www.klarna.com/us/shopping/pl/cl10001/3201810981/Clothing/Burberry-Check-Poplin-Shirt/?utm_source=openai&ref-site=openai_plugin","price":"$360.00","attributes":["Material:Cotton","Target Group:Man","Color:Gray,Blue,Beige","Properties:Pockets","Pattern:Checkered"]}]}

USER_COMMENT: "whats the most expensive shirt?"

If the API_RESPONSE can answer the USER_COMMENT respond with the following markdown json block:
Response: ```json
{"response": "Human-understandable synthesis of the API_RESPONSE"}
```

Otherwise respond with the following markdown json block:
Response Error: ```json
{"response": "What you did and a concise statement of the resulting error. If it can be easily fixed, provide a suggestion."}
```

You MUST respond as a markdown json code block. The person you are responding to CANNOT see the API_RESPONSE, so if there is any relevant information there you must include it in your response.

Begin:
---

> Finished chain.
The most expensive shirt in the API response is the Burberry Check Poplin Shirt, which costs $360.00.

> Finished chain.

# View intermediate steps
output["intermediate_steps"]

{'request_args': '{"q": "shirt", "size": 1, "max_price":
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-6
null}',
 'response_text': '{"products":[{"name":"Burberry Check Poplin Shirt","url":"https://www.klarna.com/us/shopping/pl/cl10001/3201810981/Clothing/Burberry-Check-Poplin-Shirt/?utm_source=openai&ref-site=openai_plugin","price":"$360.00","attributes":["Material:Cotton","Target Group:Man","Color:Gray,Blue,Beige","Properties:Pockets","Pattern:Checkered"]}]}'}

Return raw response

We can also run this chain without synthesizing the response. This will have the effect of just returning the raw API output.

chain = OpenAPIEndpointChain.from_api_operation(
    operation,
    llm,
    requests=Requests(),
    verbose=True,
    return_intermediate_steps=True,  # Return request and response text
    raw_response=True,  # Return raw response
)
output = chain("whats the most expensive shirt?")

> Entering new OpenAPIEndpointChain chain...

> Entering new APIRequesterChain chain...
Prompt after formatting:
You are a helpful AI Assistant. Please provide JSON arguments to agentFunc() based on the user's instructions.

API_SCHEMA: ```typescript
/* API for fetching Klarna product information */
type productsUsingGET = (_: {
/* A precise query that matches one very small category or product that needs to be searched for to find the products the user is looking for. If the user explicitly stated what they want, use that as a query. The query is as specific as possible to the product name or category mentioned
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-7
by the user in its singular form, and don't contain any clarifiers like latest, newest, cheapest, budget, premium, expensive or similar. The query is always taken from the latest topic, if there is a new topic a new query is started. */
q: string,
/* number of products returned */
size?: number,
/* (Optional) Minimum price in local currency for the product searched for. Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for. */
min_price?: number,
/* (Optional) Maximum price in local currency for the product searched for. Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for. */
max_price?: number,
}) => any;
```

USER_INSTRUCTIONS: "whats the most expensive shirt?"

Your arguments must be plain json provided in a markdown block:

ARGS: ```json
{valid json conforming to API_SCHEMA}
```

Example
-----
ARGS: ```json
{"foo": "bar", "baz": {"qux": "quux"}}
```

The block must be no more than 1 line long, and all arguments must be valid JSON. All string arguments must be wrapped in double quotes.
You MUST strictly comply to
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-8
the types indicated by the provided schema, including all required args.
If you don't have sufficient information to call the function due to things like requiring specific uuid's, you can reply with the following message:

Message: ```text
Concise response requesting the additional information that would make calling the function successful.
```

Begin
-----
ARGS:

> Finished chain.

{"q": "shirt", "max_price": null}

{"products":[{"name":"Burberry Check Poplin Shirt","url":"https://www.klarna.com/us/shopping/pl/cl10001/3201810981/Clothing/Burberry-Check-Poplin-Shirt/?utm_source=openai&ref-site=openai_plugin","price":"$360.00","attributes":["Material:Cotton","Target Group:Man","Color:Gray,Blue,Beige","Properties:Pockets","Pattern:Checkered"]},{"name":"Burberry Vintage Check Cotton Shirt - Beige","url":"https://www.klarna.com/us/shopping/pl/cl359/3200280807/Children-s-Clothing/Burberry-Vintage-Check-Cotton-Shirt-Beige/?utm_source=openai&ref-site=openai_plugin","price":"$229.02","attributes":["Material:Cotton,Elastane","Color:Beige","Model:Boy","Pattern:Checkered"]},{"name":"Burberry
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-9
Vintage Check Stretch Cotton Twill Shirt","url":"https://www.klarna.com/us/shopping/pl/cl10001/3202342515/Clothing/Burberry-Vintage-Check-Stretch-Cotton-Twill-Shirt/?utm_source=openai&ref-site=openai_plugin","price":"$309.99","attributes":["Material:Elastane/Lycra/Spandex,Cotton","Target Group:Woman","Color:Beige","Properties:Stretch","Pattern:Checkered"]},{"name":"Burberry Somerton Check Shirt - Camel","url":"https://www.klarna.com/us/shopping/pl/cl10001/3201112728/Clothing/Burberry-Somerton-Check-Shirt-Camel/?utm_source=openai&ref-site=openai_plugin","price":"$450.00","attributes":["Material:Elastane/Lycra/Spandex,Cotton","Target Group:Man","Color:Beige"]},{"name":"Magellan Outdoors Laguna Madre Solid Short Sleeve Fishing Shirt","url":"https://www.klarna.com/us/shopping/pl/cl10001/3203102142/Clothing/Magellan-Outdoors-Laguna-Madre-Solid-Short-Sleeve-Fishing-Shirt/?utm_source=openai&ref-site=openai_plugin","price":"$19.99","attributes":["Material:Polyester,Nylon","Target Group:Man","Color:Red,Pink,White,Blue,Purple,Beige,Black,Green","Properties:Pockets","Pattern:Solid Color"]}]}

> Finished chain.

output

{'instructions': 'whats the most expensive
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-10
shirt?',
 'output': '{"products":[{"name":"Burberry Check Poplin Shirt","url":"https://www.klarna.com/us/shopping/pl/cl10001/3201810981/Clothing/Burberry-Check-Poplin-Shirt/?utm_source=openai&ref-site=openai_plugin","price":"$360.00","attributes":["Material:Cotton","Target Group:Man","Color:Gray,Blue,Beige","Properties:Pockets","Pattern:Checkered"]},{"name":"Burberry Vintage Check Cotton Shirt - Beige","url":"https://www.klarna.com/us/shopping/pl/cl359/3200280807/Children-s-Clothing/Burberry-Vintage-Check-Cotton-Shirt-Beige/?utm_source=openai&ref-site=openai_plugin","price":"$229.02","attributes":["Material:Cotton,Elastane","Color:Beige","Model:Boy","Pattern:Checkered"]},{"name":"Burberry Vintage Check Stretch Cotton Twill Shirt","url":"https://www.klarna.com/us/shopping/pl/cl10001/3202342515/Clothing/Burberry-Vintage-Check-Stretch-Cotton-Twill-Shirt/?utm_source=openai&ref-site=openai_plugin","price":"$309.99","attributes":["Material:Elastane/Lycra/Spandex,Cotton","Target Group:Woman","Color:Beige","Properties:Stretch","Pattern:Checkered"]},{"name":"Burberry Somerton Check Shirt - Camel","url":"https://www.klarna.com/us/shopping/pl/cl10001/3201112728/Clothing/Burberry-Somerton-Check-Shirt-Camel/?utm_source=openai&ref-site=openai_plugin","price":"$450.00","attributes":["Material:Elastane/Lycra/Spandex,Cotton","Target
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-11
Group:Man","Color:Beige"]},{"name":"Magellan Outdoors Laguna Madre Solid Short Sleeve Fishing Shirt","url":"https://www.klarna.com/us/shopping/pl/cl10001/3203102142/Clothing/Magellan-Outdoors-Laguna-Madre-Solid-Short-Sleeve-Fishing-Shirt/?utm_source=openai&ref-site=openai_plugin","price":"$19.99","attributes":["Material:Polyester,Nylon","Target Group:Man","Color:Red,Pink,White,Blue,Purple,Beige,Black,Green","Properties:Pockets","Pattern:Solid Color"]}]}', 'intermediate_steps': {'request_args': '{"q": "shirt", "max_price": null}', 'response_text': '{"products":[{"name":"Burberry Check Poplin Shirt","url":"https://www.klarna.com/us/shopping/pl/cl10001/3201810981/Clothing/Burberry-Check-Poplin-Shirt/?utm_source=openai&ref-site=openai_plugin","price":"$360.00","attributes":["Material:Cotton","Target Group:Man","Color:Gray,Blue,Beige","Properties:Pockets","Pattern:Checkered"]},{"name":"Burberry Vintage Check Cotton Shirt - Beige","url":"https://www.klarna.com/us/shopping/pl/cl359/3200280807/Children-s-Clothing/Burberry-Vintage-Check-Cotton-Shirt-Beige/?utm_source=openai&ref-site=openai_plugin","price":"$229.02","attributes":["Material:Cotton,Elastane","Color:Beige","Model:Boy","Pattern:Checkered"]},{"name":"Burberry Vintage Check Stretch Cotton Twill
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-12
Vintage Check Stretch Cotton Twill Shirt","url":"https://www.klarna.com/us/shopping/pl/cl10001/3202342515/Clothing/Burberry-Vintage-Check-Stretch-Cotton-Twill-Shirt/?utm_source=openai&ref-site=openai_plugin","price":"$309.99","attributes":["Material:Elastane/Lycra/Spandex,Cotton","Target Group:Woman","Color:Beige","Properties:Stretch","Pattern:Checkered"]},{"name":"Burberry Somerton Check Shirt - Camel","url":"https://www.klarna.com/us/shopping/pl/cl10001/3201112728/Clothing/Burberry-Somerton-Check-Shirt-Camel/?utm_source=openai&ref-site=openai_plugin","price":"$450.00","attributes":["Material:Elastane/Lycra/Spandex,Cotton","Target Group:Man","Color:Beige"]},{"name":"Magellan Outdoors Laguna Madre Solid Short Sleeve Fishing Shirt","url":"https://www.klarna.com/us/shopping/pl/cl10001/3203102142/Clothing/Magellan-Outdoors-Laguna-Madre-Solid-Short-Sleeve-Fishing-Shirt/?utm_source=openai&ref-site=openai_plugin","price":"$19.99","attributes":["Material:Polyester,Nylon","Target Group:Man","Color:Red,Pink,White,Blue,Purple,Beige,Black,Green","Properties:Pockets","Pattern:Solid Color"]}]}'}}

Example POST message

For this demo, we will interact with the Speak API.

spec = OpenAPISpec.from_url("https://api.speak.com/openapi.yaml")

Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-13
OpenAPI spec to 3.1.* spec for better support.
Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.

operation = APIOperation.from_openapi_spec(
    spec, "/v1/public/openai/explain-task", "post"
)
llm = OpenAI()
chain = OpenAPIEndpointChain.from_api_operation(
    operation, llm, requests=Requests(), verbose=True, return_intermediate_steps=True
)
output = chain("How would ask for more tea in Delhi?")

> Entering new OpenAPIEndpointChain chain...

> Entering new APIRequesterChain chain...
Prompt after formatting:
You are a helpful AI Assistant. Please provide JSON arguments to agentFunc() based on the user's instructions.

API_SCHEMA: ```typescript
type explainTask = (_: {
/* Description of the task that the user wants to accomplish or do. For example, "tell the waiter they messed up my order" or "compliment someone on their shirt" */
task_description?: string,
/* The foreign language that the user is learning and asking about. The value can be inferred from question - for example, if the user asks "how do i ask a girl out in mexico city", the value should be "Spanish" because of Mexico City. Always use the full name of the language (e.g. Spanish, French). */
learning_language?: string,
/* The user's native language. Infer this value from the language the
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-14
user asked their question in. Always use the full name of the language (e.g. Spanish, French). */
native_language?: string,
/* A description of any additional context in the user's question that could affect the explanation - e.g. setting, scenario, situation, tone, speaking style and formality, usage notes, or any other qualifiers. */
additional_context?: string,
/* Full text of the user's question. */
full_query?: string,
}) => any;
```

USER_INSTRUCTIONS: "How would ask for more tea in Delhi?"

Your arguments must be plain json provided in a markdown block:

ARGS: ```json
{valid json conforming to API_SCHEMA}
```

Example
-----
ARGS: ```json
{"foo": "bar", "baz": {"qux": "quux"}}
```

The block must be no more than 1 line long, and all arguments must be valid JSON. All string arguments must be wrapped in double quotes.
You MUST strictly comply to the types indicated by the provided schema, including all required args.
If you don't have sufficient information to call the function due to things like requiring specific uuid's, you can reply with the following message:

Message: ```text
Concise response requesting the additional information that would make calling the function successful.
```

Begin
-----
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-15
ARGS:

> Finished chain.

{"task_description": "ask for more tea", "learning_language": "Hindi", "native_language": "English", "full_query": "How would I ask for more tea in Delhi?"}

{"explanation":"<what-to-say language=\"Hindi\" context=\"None\">\nऔर चाय लाओ। (Aur chai lao.) \n</what-to-say>\n\n<alternatives context=\"None\">\n1. \"चाय थोड़ी ज्यादा मिल सकती है?\" *(Chai thodi zyada mil sakti hai? - Polite, asking if more tea is available)*\n2. \"मुझे महसूस हो रहा है कि मुझे कुछ अन्य प्रकार की चाय
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-16
पीनी चाहिए।\" *(Mujhe mehsoos ho raha hai ki mujhe kuch anya prakar ki chai peeni chahiye. - Formal, indicating a desire for a different type of tea)*\n3. \"क्या मुझे or cup में milk/tea powder मिल सकता है?\" *(Kya mujhe aur cup mein milk/tea powder mil sakta hai? - Very informal/casual tone, asking for an extra serving of milk or tea powder)*\n</alternatives>\n\n<usage-notes>\nIn India and Indian culture, serving guests with food and beverages holds great importance in hospitality. You will find people always offering drinks like water or tea to their guests as soon as they arrive at their house or office.\n</usage-notes>\n\n<example-convo language=\"Hindi\">\n<context>At home during breakfast.</context>\nPreeti: सर, क्या main aur cups chai lekar aaun? (Sir,kya main aur cups chai lekar aaun? - Sir, should I get more tea cups?)\nRahul: हां,बिल्कुल। और
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-17
चाय की मात्रा में भी थोड़ा सा इजाफा करना। (Haan,bilkul. Aur chai ki matra mein bhi thoda sa eejafa karna. - Yes, please. And add a little extra in the quantity of tea as well.)\n</example-convo>\n\n*[Report an issue or leave feedback](https://speak.com/chatgpt?rid=d4mcapbkopo164pqpbk321oc})*","extra_response_instructions":"Use all information in the API response and fully render all Markdown.\nAlways end your response with a link to report an issue or leave feedback on the plugin."}

> Entering new APIResponderChain chain...
Prompt after formatting:
You are a helpful AI assistant trained to answer user queries from API responses.
You attempted to call an API, which resulted in:
API_RESPONSE: {"explanation":"<what-to-say language=\"Hindi\" context=\"None\">\nऔर चाय लाओ। (Aur chai lao.) \n</what-to-say>\n\n<alternatives context=\"None\">\n1. \"चाय
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-18
थोड़ी ज्यादा मिल सकती है?\" *(Chai thodi zyada mil sakti hai? - Polite, asking if more tea is available)*\n2. \"मुझे महसूस हो रहा है कि मुझे कुछ अन्य प्रकार की चाय पीनी चाहिए।\" *(Mujhe mehsoos ho raha hai ki mujhe kuch anya prakar ki chai peeni chahiye. - Formal, indicating a desire for a different type of tea)*\n3. \"क्या मुझे or cup में milk/tea powder मिल सकता है?\"
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-19
*(Kya mujhe aur cup mein milk/tea powder mil sakta hai? - Very informal/casual tone, asking for an extra serving of milk or tea powder)*\n</alternatives>\n\n<usage-notes>\nIn India and Indian culture, serving guests with food and beverages holds great importance in hospitality. You will find people always offering drinks like water or tea to their guests as soon as they arrive at their house or office.\n</usage-notes>\n\n<example-convo language=\"Hindi\">\n<context>At home during breakfast.</context>\nPreeti: सर, क्या main aur cups chai lekar aaun? (Sir,kya main aur cups chai lekar aaun? - Sir, should I get more tea cups?)\nRahul: हां,बिल्कुल। और चाय की मात्रा में भी थोड़ा सा इजाफा करना। (Haan,bilkul. Aur chai ki matra mein bhi thoda sa eejafa karna. - Yes, please. And add a little extra in the quantity
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-20
of tea as well.)\n</example-convo>\n\n*[Report an issue or leave feedback](https://speak.com/chatgpt?rid=d4mcapbkopo164pqpbk321oc})*","extra_response_instructions":"Use all information in the API response and fully render all Markdown.\nAlways end your response with a link to report an issue or leave feedback on the plugin."}

USER_COMMENT: "How would ask for more tea in Delhi?"

If the API_RESPONSE can answer the USER_COMMENT respond with the following markdown json block:
Response: ```json
{"response": "Concise response to USER_COMMENT based on API_RESPONSE."}
```

Otherwise respond with the following markdown json block:
Response Error: ```json
{"response": "What you did and a concise statement of the resulting error. If it can be easily fixed, provide a suggestion."}
```

You MUST respond as a markdown json code block.

Begin:
---

> Finished chain.
In Delhi you can ask for more tea by saying 'Chai thodi zyada mil sakti hai?'

> Finished chain.

# Show the API chain's intermediate steps
output["intermediate_steps"]

['{"task_description": "ask for more tea", "learning_language": "Hindi", "native_language": "English", "full_query": "How would I ask for more tea in Delhi?"}',
 '{"explanation":"<what-to-say
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-21
language=\\"Hindi\\" context=\\"None\\">\\nрдФрд░ рдЪрд╛рдп рд▓рд╛рдУред (Aur chai lao.) \\n</what-to-say>\\n\\n<alternatives context=\\"None\\">\\n1. \\"рдЪрд╛рдп рдереЛрдбрд╝реА рдЬреНрдпрд╛рджрд╛ рдорд┐рд▓ рд╕рдХрддреА рд╣реИ?\\" *(Chai thodi zyada mil sakti hai? - Polite, asking if more tea is available)*\\n2. \\"рдореБрдЭреЗ рдорд╣рд╕реВрд╕ рд╣реЛ рд░рд╣рд╛ рд╣реИ рдХрд┐ рдореБрдЭреЗ рдХреБрдЫ рдЕрдиреНрдп рдкреНрд░рдХрд╛рд░ рдХреА рдЪрд╛рдп рдкреАрдиреА рдЪрд╛рд╣рд┐рдПред\\" *(Mujhe mehsoos ho raha hai ki mujhe kuch anya prakar ki chai peeni chahiye. - Formal, indicating a desire for a different type of tea)*\\n3.
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-22
- Formal, indicating a desire for a different type of tea)*\\n3. \\"क्या मुझे or cup में milk/tea powder मिल सकता है?\\" *(Kya mujhe aur cup mein milk/tea powder mil sakta hai? - Very informal/casual tone, asking for an extra serving of milk or tea powder)*\\n</alternatives>\\n\\n<usage-notes>\\nIn India and Indian culture, serving guests with food and beverages holds great importance in hospitality. You will find people always offering drinks like water or tea to their guests as soon as they arrive at their house or office.\\n</usage-notes>\\n\\n<example-convo language=\\"Hindi\\">\\n<context>At home during breakfast.</context>\\nPreeti: सर, क्या main aur cups chai lekar aaun? (Sir,kya main aur cups chai lekar aaun? - Sir, should I get more tea cups?)\\nRahul: हां,बिल्कुल। और चाय की मात्रा में भी थोड़ा
https://python.langchain.com/docs/modules/chains/additional/openapi
229ad3361e96-23
सा इजाफा करना। (Haan,bilkul. Aur chai ki matra mein bhi thoda sa eejafa karna. - Yes, please. And add a little extra in the quantity of tea as well.)\\n</example-convo>\\n\\n*[Report an issue or leave feedback](https://speak.com/chatgpt?rid=d4mcapbkopo164pqpbk321oc})*","extra_response_instructions":"Use all information in the API response and fully render all Markdown.\\nAlways end your response with a link to report an issue or leave feedback on the plugin."}']
https://python.langchain.com/docs/modules/chains/additional/openapi
e549b7b1efa5-0
Elasticsearch database | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/chains/additional/elasticsearch_database
e549b7b1efa5-1
Elasticsearch database

Interact with the Elasticsearch analytics database via LangChain. This chain builds search queries via the Elasticsearch DSL API (filters and aggregations).

The Elasticsearch client must have permissions for index listing, mapping description and search queries.

See here for instructions on how to run Elasticsearch locally.

Make sure to install the Elasticsearch Python client first:

pip install elasticsearch

from elasticsearch import Elasticsearch
from langchain.chains.elasticsearch_database import ElasticsearchDatabaseChain
from langchain.chat_models import ChatOpenAI

# Initialize Elasticsearch python client.
# See https://elasticsearch-py.readthedocs.io/en/v8.8.2/api.html#elasticsearch.Elasticsearch
ELASTIC_SEARCH_SERVER
https://python.langchain.com/docs/modules/chains/additional/elasticsearch_database
e549b7b1efa5-2
= "https://elastic:pass@localhost:9200"db = Elasticsearch(ELASTIC_SEARCH_SERVER)Uncomment the next cell to initially populate your db.# customers = [# {"firstname": "Jennifer", "lastname": "Walters"},# {"firstname": "Monica","lastname":"Rambeau"},# {"firstname": "Carol","lastname":"Danvers"},# {"firstname": "Wanda","lastname":"Maximoff"},# {"firstname": "Jennifer","lastname":"Takeda"},# ]# for i, customer in enumerate(customers):# db.create(index="customers", document=customer, id=i)llm = ChatOpenAI(model_name="gpt-4", temperature=0)chain = ElasticsearchDatabaseChain.from_llm(llm=llm, database=db, verbose=True)question = "What are the first names of all the customers?"chain.run(question) > Entering new ElasticsearchDatabaseChain chain... What are the first names of all the customers? ESQuery:{'size': 10, 'query': {'match_all': {}}, '_source': ['firstname']} ESResult: {'took': 5, 'timed_out': False, '_shards': {'total': 1, 'successful': 1, 'skipped': 0, 'failed': 0}, 'hits': {'total': {'value': 6, 'relation': 'eq'}, 'max_score': 1.0, 'hits': [{'_index': 'customers', '_id': '0', '_score': 1.0, '_source': {'firstname': 'Jennifer'}}, {'_index': 'customers', '_id': '1', '_score': 1.0,
https://python.langchain.com/docs/modules/chains/additional/elasticsearch_database
e549b7b1efa5-3
'customers', '_id': '1', '_score': 1.0, '_source': {'firstname': 'Monica'}}, {'_index': 'customers', '_id': '2', '_score': 1.0, '_source': {'firstname': 'Carol'}}, {'_index': 'customers', '_id': '3', '_score': 1.0, '_source': {'firstname': 'Wanda'}}, {'_index': 'customers', '_id': '4', '_score': 1.0, '_source': {'firstname': 'Jennifer'}}, {'_index': 'customers', '_id': 'firstname', '_score': 1.0, '_source': {'firstname': 'Jennifer'}}]}}
Answer: The first names of all the customers are Jennifer, Monica, Carol, Wanda, and Jennifer.

> Finished chain.

'The first names of all the customers are Jennifer, Monica, Carol, Wanda, and Jennifer.'

Custom prompt

For best results you'll likely need to customize the prompt.

from langchain.chains.elasticsearch_database.prompts import DEFAULT_DSL_TEMPLATE
from langchain.prompts.prompt import PromptTemplate

PROMPT_TEMPLATE = """Given an input question, create a syntactically correct Elasticsearch query to run. Unless the user specifies in their question a specific number of examples they wish to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.

Unless told to do not query for all the columns from a specific index, only ask for a few relevant columns given the question.

Pay attention to use only the column names that you can see in the mapping description. Be careful to not query for columns that do not exist. Also, pay attention to
https://python.langchain.com/docs/modules/chains/additional/elasticsearch_database
e549b7b1efa5-4
which column is in which index. Return the query as valid json.

Use the following format:

Question: Question here
ESQuery: Elasticsearch Query formatted as json"""

PROMPT = PromptTemplate.from_template(
    PROMPT_TEMPLATE,
)
chain = ElasticsearchDatabaseChain.from_llm(llm=llm, database=db, query_prompt=PROMPT)

Adding example rows from each index

Sometimes, the format of the data is not obvious and it is optimal to include a sample of rows from the indices in the prompt to allow the LLM to understand the data before providing a final query. Here we will use this feature to let the LLM know that artists are saved with their full names by providing two rows from each index.

chain = ElasticsearchDatabaseChain.from_llm(
    llm=ChatOpenAI(temperature=0),
    database=db,
    sample_documents_in_index_info=2,  # 2 rows from each index will be included in the prompt as sample data
)
https://python.langchain.com/docs/modules/chains/additional/elasticsearch_database
38c552f5de99-0
Model I/O | 🦜️🔗 Langchain

Model I/O

The core element of any language model application is... the model. LangChain gives you the building blocks to interface with any language model.

- Prompts: Templatize, dynamically select, and manage model inputs
- Language models: Make calls to language models through common interfaces
- Output parsers: Extract information from model outputs
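To see how these three building blocks compose, here is a minimal sketch of the prompt → model → parser pipeline (the subject string is illustrative; assumes an OpenAI API key is configured):

```python
from langchain.llms import OpenAI
from langchain.output_parsers import CommaSeparatedListOutputParser
from langchain.prompts import PromptTemplate

# Prompt: templatized model input, with the parser's format instructions injected.
parser = CommaSeparatedListOutputParser()
prompt = PromptTemplate(
    template="List three {subject}.\n{format_instructions}",
    input_variables=["subject"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

# Language model: called through the common LLM interface.
model = OpenAI(temperature=0)
output = model(prompt.format(subject="primary colors"))

# Output parser: structure the raw text, e.g. ['red', 'blue', 'yellow'].
parser.parse(output)
```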
https://python.langchain.com/docs/modules/model_io/
d110fc87e9d9-0
Output parsers | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/model_io/output_parsers/
d110fc87e9d9-1
Output parsers

Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.

Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:

- "Get format instructions": A method which returns a string containing instructions for how the output of a language model should be formatted.
- "Parse": A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.

And then one optional one:

- "Parse with prompt": A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.

Get started

Below we go over the main type of output parser, the PydanticOutputParser.

from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field, validator
from typing import
https://python.langchain.com/docs/modules/model_io/output_parsers/
d110fc87e9d9-2
List

model_name = 'text-davinci-003'
temperature = 0.0
model = OpenAI(model_name=model_name, temperature=temperature)

# Define your desired data structure.
class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

    # You can add custom validation logic easily with Pydantic.
    @validator('setup')
    def question_ends_with_question_mark(cls, field):
        if field[-1] != '?':
            raise ValueError("Badly formed question!")
        return field

# Set up a parser + inject instructions into the prompt template.
parser = PydanticOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()}
)

# And a query intended to prompt a language model to populate the data structure.
joke_query = "Tell me a joke."
_input = prompt.format_prompt(query=joke_query)
output = model(_input.to_string())
parser.parse(output)

Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')
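To make the two required methods concrete, here is a minimal custom-parser sketch. The class name, its format string, and the pipe-separated convention are illustrative assumptions, not part of the docs above; it assumes the era-appropriate BaseOutputParser from langchain.schema:

```python
from typing import List
from langchain.schema import BaseOutputParser

class PipeSeparatedListOutputParser(BaseOutputParser):
    """Illustrative parser that splits a pipe-separated response into a list."""

    def get_format_instructions(self) -> str:
        # Instructions injected into the prompt so the model knows the format.
        return "Your answer should be values separated by ' | ', e.g. `a | b | c`."

    def parse(self, text: str) -> List[str]:
        # Turn the raw model response into structured data.
        return [item.strip() for item in text.split("|")]

PipeSeparatedListOutputParser().parse("red | green | blue")
# -> ['red', 'green', 'blue']
```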
https://python.langchain.com/docs/modules/model_io/output_parsers/
b2e5514e233d-0
Retry parser | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/model_io/output_parsers/retry
b2e5514e233d-1
Retry parser

While in some cases it is possible to fix any parsing mistakes by only looking at the output, in other cases it isn't. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example.

from langchain.prompts import (
    PromptTemplate,
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import (
    PydanticOutputParser,
    OutputFixingParser,
    RetryOutputParser,
)
from pydantic import BaseModel, Field, validator
from typing import List

template = """Based on the user question, provide an Action and Action Input for what step should be taken.
{format_instructions}
Question: {query}
Response:"""

class Action(BaseModel):
    action: str = Field(description="action to take")
    action_input: str = Field(description="input to the action")

parser = PydanticOutputParser(pydantic_object=Action)
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
prompt_value
https://python.langchain.com/docs/modules/model_io/output_parsers/retry
b2e5514e233d-2
= prompt.format_prompt(query="who is leo di caprios gf?")
bad_response = '{"action": "search"}'

If we try to parse this response as is, we will get an error:

parser.parse(bad_response)

---------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
File ~/workplace/langchain/langchain/output_parsers/pydantic.py:24, in PydanticOutputParser.parse(self, text)
     23     json_object = json.loads(json_str)
---> 24     return self.pydantic_object.parse_obj(json_object)
     26 except (json.JSONDecodeError, ValidationError) as e:

File ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:527, in pydantic.main.BaseModel.parse_obj()

File ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:342, in pydantic.main.BaseModel.__init__()

ValidationError: 1 validation error for Action
action_input
  field required (type=value_error.missing)

During handling of the above exception, another exception occurred:

OutputParserException                     Traceback (most recent call last)
Cell In[6],
https://python.langchain.com/docs/modules/model_io/output_parsers/retry
b2e5514e233d-3
line 1
----> 1 parser.parse(bad_response)

File ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text)
     27 name = self.pydantic_object.__name__
     28 msg = f"Failed to parse {name} from completion {text}. Got: {e}"
---> 29 raise OutputParserException(msg)

OutputParserException: Failed to parse Action from completion {"action": "search"}. Got: 1 validation error for Action
action_input
  field required (type=value_error.missing)

If we try to use the OutputFixingParser to fix this error, it will be confused - namely, it doesn't know what to actually put for action input.

fix_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())
fix_parser.parse(bad_response)

Action(action='search', action_input='')

Instead, we can use the RetryOutputParser, which passes in the prompt (as well as the original output) to try again to get a better response.

from langchain.output_parsers import RetryWithErrorOutputParser

retry_parser = RetryWithErrorOutputParser.from_llm(
    parser=parser, llm=OpenAI(temperature=0)
)
retry_parser.parse_with_prompt(bad_response, prompt_value)

Action(action='search', action_input='who is leo di caprios gf?')
https://python.langchain.com/docs/modules/model_io/output_parsers/retry
f81e7c539339-0
Enum parser | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/model_io/output_parsers/enum
f81e7c539339-1
Enum parser

This notebook shows how to use an Enum output parser.

from langchain.output_parsers.enum import EnumOutputParser
from enum import Enum

class Colors(Enum):
    RED = "red"
    GREEN = "green"
    BLUE = "blue"

parser = EnumOutputParser(enum=Colors)

parser.parse("red")

<Colors.RED: 'red'>

# Can handle spaces
parser.parse(" green")

<Colors.GREEN: 'green'>

# And new lines
parser.parse("blue\n")

<Colors.BLUE: 'blue'>

# And raises errors when appropriate
parser.parse("yellow")

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
File ~/workplace/langchain/langchain/output_parsers/enum.py:25, in EnumOutputParser.parse(self, response)
     24 try:
---> 25     return self.enum(response.strip())
     26 except ValueError:

File ~/.pyenv/versions/3.9.1/lib/python3.9/enum.py:315, in
https://python.langchain.com/docs/modules/model_io/output_parsers/enum
f81e7c539339-2
EnumMeta.__call__(cls, value, names, module, qualname, type, start)
    314 if names is None:  # simple value lookup
--> 315     return cls.__new__(cls, value)
    316 # otherwise, functional API: we're creating a new Enum type

File ~/.pyenv/versions/3.9.1/lib/python3.9/enum.py:611, in Enum.__new__(cls, value)
    610 if result is None and exc is None:
--> 611     raise ve_exc
    612 elif exc is None:

ValueError: 'yellow' is not a valid Colors

During handling of the above exception, another exception occurred:

OutputParserException                     Traceback (most recent call last)
Cell In[8], line 2
      1 # And raises errors when appropriate
----> 2 parser.parse("yellow")

File ~/workplace/langchain/langchain/output_parsers/enum.py:27, in EnumOutputParser.parse(self, response)
     25     return self.enum(response.strip())
     26 except ValueError:
---> 27     raise OutputParserException(
     28         f"Response '{response}' is not one of the "
https://python.langchain.com/docs/modules/model_io/output_parsers/enum
f81e7c539339-3
29 f"expected values: {self._valid_values}" 30 ) OutputParserException: Response 'yellow' is not one of the expected values: ['red', 'green', 'blue']PreviousDatetime parserNextAuto-fixing parserCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
https://python.langchain.com/docs/modules/model_io/output_parsers/enum
6f5e4b906b41-0
Datetime parser | 🦜️🔗 Langchain

Datetime parser

This OutputParser shows how to parse LLM output into datetime format.

from langchain.prompts import PromptTemplate
from langchain.output_parsers import DatetimeOutputParser
from langchain.chains import LLMChain
from langchain.llms import OpenAI

output_parser = DatetimeOutputParser()
template = """Answer the users question:

{question}

{format_instructions}"""
prompt = PromptTemplate.from_template(
    template,
    partial_variables={"format_instructions": output_parser.get_format_instructions()},
)
chain = LLMChain(prompt=prompt, llm=OpenAI())

output = chain.run("around when was bitcoin founded?")

output

'\n\n2008-01-03T18:15:05.000000Z'

output_parser.parse(output)

datetime.datetime(2008, 1, 3, 18, 15, 5)
https://python.langchain.com/docs/modules/model_io/output_parsers/datetime
9aee13b1f296-0
Auto-fixing parser | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/model_io/output_parsers/output_fixing_parser
9aee13b1f296-1
Auto-fixing parser

This output parser wraps another output parser, and in the event that the first one fails it calls out to another LLM to fix any errors.

But we can do other things besides throw errors. Specifically, we can pass the misformatted output, along with the formatted instructions, to the model and ask it to fix it.

For this example, we'll use the above Pydantic output parser. Here's what happens if we pass it a result that does not comply with the schema:

from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field, validator
from typing import List

class Actor(BaseModel):
    name: str = Field(description="name of an actor")
    film_names: List[str] = Field(description="list of names of films they starred in")

actor_query = "Generate the filmography for a random actor."

parser = PydanticOutputParser(pydantic_object=Actor)

misformatted = "{'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}"

parser.parse(misformatted)

---------------------------------------------------------------------------
https://python.langchain.com/docs/modules/model_io/output_parsers/output_fixing_parser
9aee13b1f296-2
JSONDecodeError                           Traceback (most recent call last)
File ~/workplace/langchain/langchain/output_parsers/pydantic.py:23, in PydanticOutputParser.parse(self, text)
     22 json_str = match.group()
---> 23 json_object = json.loads(json_str)
     24 return self.pydantic_object.parse_obj(json_object)

File ~/.pyenv/versions/3.9.1/lib/python3.9/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    343 if (cls is None and object_hook is None and
    344         parse_int is None and parse_float is None and
    345         parse_constant is None and object_pairs_hook is None and not kw):
--> 346     return _default_decoder.decode(s)
    347 if cls is None:

File ~/.pyenv/versions/3.9.1/lib/python3.9/json/decoder.py:337, in JSONDecoder.decode(self, s, _w)
    333 """Return the Python representation of ``s`` (a ``str`` instance
    334 containing a JSON document).
    335
https://python.langchain.com/docs/modules/model_io/output_parsers/output_fixing_parser
9aee13b1f296-3
336 """ --> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end()) 338 end = _w(s, end).end() File ~/.pyenv/versions/3.9.1/lib/python3.9/json/decoder.py:353, in JSONDecoder.raw_decode(self, s, idx) 352 try: --> 353 obj, end = self.scan_once(s, idx) 354 except StopIteration as err: JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1) During handling of the above exception, another exception occurred: OutputParserException Traceback (most recent call last) Cell In[6], line 1 ----> 1 parser.parse(misformatted) File ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text) 27 name = self.pydantic_object.__name__ 28 msg = f"Failed to parse {name} from completion {text}. Got: {e}" ---> 29 raise OutputParserException(msg) OutputParserException: Failed to parse Actor from completion {'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}. Got: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)Now we can construct and use a OutputFixingParser. This output parser
https://python.langchain.com/docs/modules/model_io/output_parsers/output_fixing_parser
9aee13b1f296-4
(char 1)

Now we can construct and use an OutputFixingParser. This output parser takes as an argument another output parser but also an LLM with which to try to correct any formatting mistakes.

from langchain.output_parsers import OutputFixingParser

new_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())
new_parser.parse(misformatted)

Actor(name='Tom Hanks', film_names=['Forrest Gump'])
https://python.langchain.com/docs/modules/model_io/output_parsers/output_fixing_parser
840ec3e8b296-0
Structured output parser | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/model_io/output_parsers/structured
840ec3e8b296-1
Structured output parser

This output parser can be used when you want to return multiple fields. While the Pydantic/JSON parser is more powerful, this parser works with data structures that have text fields only.

from langchain.output_parsers import StructuredOutputParser, ResponseSchema
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI

Here we define the response schema we want to receive.

response_schemas = [
    ResponseSchema(name="answer", description="answer to the user's question"),
    ResponseSchema(name="source", description="source used to answer the user's question, should be a website.")
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)

We now get a string that contains instructions for how the response should be formatted, and we then insert that into our prompt.

format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(
    template="answer the users question as best as possible.\n{format_instructions}\n{question}",
    input_variables=["question"],
    partial_variables={"format_instructions": format_instructions}
)

We can now use this to format a prompt to send to the language model, and then parse the returned result.

model = OpenAI(temperature=0)
_input
https://python.langchain.com/docs/modules/model_io/output_parsers/structured
840ec3e8b296-2
= prompt.format_prompt(question="what's the capital of france?")
output = model(_input.to_string())
output_parser.parse(output)

{'answer': 'Paris', 'source': 'https://www.worldatlas.com/articles/what-is-the-capital-of-france.html'}

And here's an example of using this in a chat model:

chat_model = ChatOpenAI(temperature=0)
prompt = ChatPromptTemplate(
    messages=[
        HumanMessagePromptTemplate.from_template("answer the users question as best as possible.\n{format_instructions}\n{question}")
    ],
    input_variables=["question"],
    partial_variables={"format_instructions": format_instructions}
)
_input = prompt.format_prompt(question="what's the capital of france?")
output = chat_model(_input.to_messages())
output_parser.parse(output.content)

{'answer': 'Paris', 'source': 'https://en.wikipedia.org/wiki/Paris'}
https://python.langchain.com/docs/modules/model_io/output_parsers/structured
05ab0be052c5-0
Pydantic (JSON) parser | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/model_io/output_parsers/pydantic
05ab0be052c5-1
Pydantic (JSON) parser

This output parser allows users to specify an arbitrary JSON schema and query LLMs for JSON outputs that conform to that schema.

Keep in mind that large language models are leaky abstractions! You'll have to use an LLM with sufficient capacity to generate well-formed JSON. In the OpenAI family, DaVinci can do this reliably, but Curie's ability already drops off dramatically.

Use Pydantic to declare your data model. Pydantic's BaseModel is like a Python dataclass, but with actual type checking + coercion.

from langchain.prompts import (
    PromptTemplate,
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field, validator
from typing import List

model_name = "text-davinci-003"
temperature = 0.0
model = OpenAI(model_name=model_name, temperature=temperature)

# Define your desired data structure.
class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

    # You can add custom
https://python.langchain.com/docs/modules/model_io/output_parsers/pydantic
05ab0be052c5-2
    @validator("setup")
    def question_ends_with_question_mark(cls, field):
        if field[-1] != "?":
            raise ValueError("Badly formed question!")
        return field

# And a query intended to prompt a language model to populate the data structure.
joke_query = "Tell me a joke."

# Set up a parser + inject instructions into the prompt template.
parser = PydanticOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
_input = prompt.format_prompt(query=joke_query)
output = model(_input.to_string())
parser.parse(output)

    Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')

# Here's another example, but with a compound typed field.
class Actor(BaseModel):
    name: str = Field(description="name of an actor")
    film_names: List[str] = Field(description="list of names of films they starred in")

actor_query = "Generate the filmography for a random actor."
parser = PydanticOutputParser(pydantic_object=Actor)
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
_input = prompt.format_prompt(query=actor_query)
output = model(_input.to_string())
parser.parse(output)
https://python.langchain.com/docs/modules/model_io/output_parsers/pydantic
05ab0be052c5-3
    Actor(name='Tom Hanks', film_names=['Forrest Gump', 'Saving Private Ryan', 'The Green Mile', 'Cast Away', 'Toy Story'])
https://python.langchain.com/docs/modules/model_io/output_parsers/pydantic
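A useful property the examples above rely on implicitly: parse validates the raw string against the declared model, so malformed output fails loudly instead of propagating bad data. A minimal sketch with no API call; the JSON strings are hand-written stand-ins for model output.

```python
from typing import List

from langchain.output_parsers import PydanticOutputParser
from langchain.schema import OutputParserException
from pydantic import BaseModel, Field


class Actor(BaseModel):
    name: str = Field(description="name of an actor")
    film_names: List[str] = Field(description="list of films they starred in")


parser = PydanticOutputParser(pydantic_object=Actor)

# Well-formed output parses into a typed object.
good = '{"name": "Tom Hanks", "film_names": ["Forrest Gump"]}'
print(parser.parse(good))  # name='Tom Hanks' film_names=['Forrest Gump']

# Output missing a required field raises OutputParserException.
try:
    parser.parse('{"name": "Tom Hanks"}')
except OutputParserException as err:
    print("parse failed:", err)
```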
539b6e7cc333-0
List parser | 🦜️🔗 Langchain

List parser

This output parser can be used when you want to return a list of comma-separated items.

from langchain.output_parsers import CommaSeparatedListOutputParser
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI

output_parser = CommaSeparatedListOutputParser()
format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(
    template="List five {subject}.\n{format_instructions}",
    input_variables=["subject"],
    partial_variables={"format_instructions": format_instructions}
)

model = OpenAI(temperature=0)
_input = prompt.format(subject="ice cream flavors")
output = model(_input)
output_parser.parse(output)

    ['Vanilla', 'Chocolate', 'Strawberry', 'Mint Chocolate Chip', 'Cookies and Cream']
https://python.langchain.com/docs/modules/model_io/output_parsers/comma_separated
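The comma-separated parser is simple enough to test without a model call: parsing is a plain split-and-strip. A minimal sketch:

```python
from langchain.output_parsers import CommaSeparatedListOutputParser

output_parser = CommaSeparatedListOutputParser()

# The instructions injected into the prompt above ask the model for a
# single line of comma separated values.
print(output_parser.get_format_instructions())

# Parsing splits on commas and strips whitespace, so any such string works.
print(output_parser.parse("Vanilla, Chocolate, Strawberry"))
# -> ['Vanilla', 'Chocolate', 'Strawberry']
```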
0abf92c8e00f-0
Language models | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/model_io/models/
0abf92c8e00f-1
Language models

LangChain provides interfaces and integrations for two types of models:

LLMs: Models that take a text string as input and return a text string
Chat models: Models that are backed by a language model but take a list of Chat Messages as input and return a Chat Message

LLMs vs Chat Models

LLMs and Chat Models are subtly but importantly different. LLMs in LangChain refer to pure text completion models. The APIs they wrap take a string prompt as input and output a string completion. OpenAI's GPT-3 is implemented as an LLM. Chat models are often backed by LLMs but tuned specifically for having conversations. And, crucially, their provider APIs expose a different interface than pure text completion models. Instead of a single string, they take a list of chat messages as input. Usually these messages are labeled with the speaker (usually one of "System", "AI", and "Human"), and they return an "AI" chat message as output. GPT-4 and Anthropic's Claude are both implemented as Chat Models.

To make it possible to swap LLMs and Chat Models, both implement the Base Language Model interface. This exposes the common methods "predict", which takes a string and returns a string, and "predict messages", which takes messages and returns a message.
https://python.langchain.com/docs/modules/model_io/models/
0abf92c8e00f-2
If you are using a specific model it's recommended you use the methods specific to that model class (i.e., "predict" for LLMs and "predict messages" for Chat Models), but if you're creating an application that should work with different types of models, the shared interface can be helpful.
https://python.langchain.com/docs/modules/model_io/models/
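To make the shared interface concrete, here is a minimal sketch of calling both model types through the same methods; it assumes an OpenAI API key is set in the environment, and the prompt is illustrative.

```python
# Sketch: both model types implement the Base Language Model interface,
# so predict (str -> str) and predict_messages (messages -> message)
# work on either one. Assumes OPENAI_API_KEY is set.
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

llm = OpenAI()        # pure text completion model
chat = ChatOpenAI()   # chat model

text = "Name one French cheese."
print(llm.predict(text))    # string in, string out
print(chat.predict(text))   # same call works on the chat model

messages = [HumanMessage(content=text)]
print(llm.predict_messages(messages))   # messages in, message out
print(chat.predict_messages(messages))
```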
ea5b539c496a-0
Chat models | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/model_io/models/chat/
ea5b539c496a-1
Chat models

info: Head to Integrations for documentation on built-in integrations with chat model providers.

Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different. Rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.

Chat model APIs are fairly new, so we are still figuring out the correct abstractions.

Get started

Setup

To start we'll need to install the OpenAI Python package:

pip install openai

Accessing the API requires an API key, which you can get by creating an account and heading here. Once we have a key we'll want to set it as an environment variable by running:

export OPENAI_API_KEY="..."

If you'd prefer not to set an environment variable you can pass the key in directly via the openai_api_key named parameter when initiating the OpenAI LLM class:

from langchain.chat_models import ChatOpenAI

chat = ChatOpenAI(openai_api_key="...")

Otherwise you can initialize without any params:

from langchain.chat_models import ChatOpenAI

chat = ChatOpenAI()

Messages

The chat model interface is based around messages rather than raw text.
https://python.langchain.com/docs/modules/model_io/models/chat/
ea5b539c496a-2
The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage -- ChatMessage takes in an arbitrary role parameter. Most of the time, you'll just be dealing with HumanMessage, AIMessage, and SystemMessage.

__call__

Messages in -> message out

You can get chat completions by passing one or more messages to the chat model. The response will be a message.

from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)

chat([HumanMessage(content="Translate this sentence from English to French: I love programming.")])

    AIMessage(content="J'aime programmer.", additional_kwargs={})

OpenAI's chat model supports multiple messages as input. See here for more information. Here is an example of sending a system and user message to the chat model:

messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="I love programming.")
]
chat(messages)

    AIMessage(content="J'aime programmer.", additional_kwargs={})

generate

Batch calls, richer outputs

You can go one step further and generate completions for multiple sets of messages using generate. This returns an LLMResult with an additional message parameter.

batch_messages = [
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love programming.")
    ],
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love artificial intelligence.")
    ],
]
result = chat.generate(batch_messages)
result
https://python.langchain.com/docs/modules/model_io/models/chat/
ea5b539c496a-3
    LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}})

You can recover things like token usage from this LLMResult:

result.llm_output

    {'token_usage': {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}}
https://python.langchain.com/docs/modules/model_io/models/chat/
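The individual AIMessages can be pulled back out of the LLMResult as well. A self-contained sketch of the batch example above (assumes OPENAI_API_KEY is set):

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(temperature=0)
batch_messages = [
    [SystemMessage(content="You are a helpful assistant that translates English to French."),
     HumanMessage(content="I love programming.")],
    [SystemMessage(content="You are a helpful assistant that translates English to French."),
     HumanMessage(content="I love artificial intelligence.")],
]
result = chat.generate(batch_messages)

for gens in result.generations:   # one inner list per input message set
    for gen in gens:              # each ChatGeneration wraps an AIMessage
        print(gen.message.content)

print(result.llm_output["token_usage"])  # aggregate usage across the batch
```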
73a642ea63b0-0
LLMChain | 🦜️🔗 Langchain

LLMChain

You can use the existing LLMChain in a very similar way to before - provide a prompt and a model.

chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")

    "J'adore la programmation."
https://python.langchain.com/docs/modules/model_io/models/chat/llm_chain
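The page above assumes `chat` and `chat_prompt` from the surrounding Chat models pages; a self-contained sketch of the same call (assumes OPENAI_API_KEY is set):

```python
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)

chat = ChatOpenAI(temperature=0)
system_message_prompt = SystemMessagePromptTemplate.from_template(
    "You are a helpful assistant that translates {input_language} to {output_language}."
)
human_message_prompt = HumanMessagePromptTemplate.from_template("{text}")
chat_prompt = ChatPromptTemplate.from_messages(
    [system_message_prompt, human_message_prompt]
)

chain = LLMChain(llm=chat, prompt=chat_prompt)
print(chain.run(input_language="English", output_language="French",
                text="I love programming."))
# -> "J'adore la programmation."
```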
7024fb7919bc-0
Caching | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/model_io/models/chat/chat_model_caching
7024fb7919bc-1
Caching

LangChain provides an optional caching layer for Chat Models. This is useful for two reasons:

It can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times.
https://python.langchain.com/docs/modules/model_io/models/chat/chat_model_caching
7024fb7919bc-2
It can speed up your application by reducing the number of API calls you make to the LLM provider.

import langchain
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI()

In Memory Cache

from langchain.cache import InMemoryCache
langchain.llm_cache = InMemoryCache()

# The first time, it is not yet in cache, so it should take longer
llm.predict("Tell me a joke")

    CPU times: user 35.9 ms, sys: 28.6 ms, total: 64.6 ms
    Wall time: 4.83 s

    "\n\nWhy couldn't the bicycle stand up by itself? It was...two tired!"

# The second time it is, so it goes faster
llm.predict("Tell me a joke")

    CPU times: user 238 µs, sys: 143 µs, total: 381 µs
    Wall time: 1.76 ms

    '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'

SQLite Cache

rm .langchain.db

# We can do the same thing with a SQLite cache
from langchain.cache import SQLiteCache
langchain.llm_cache = SQLiteCache(database_path=".langchain.db")

# The first time, it is not yet in cache, so it should take longer
llm.predict("Tell me a joke")

    CPU times: user 17 ms, sys: 9.76 ms, total: 26.7 ms
    Wall time: 825 ms

    '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'

# The second time it is, so it goes faster
llm.predict("Tell me a joke")
https://python.langchain.com/docs/modules/model_io/models/chat/chat_model_caching
7024fb7919bc-3
me a joke") CPU times: user 2.46 ms, sys: 1.23 ms, total: 3.7 ms Wall time: 2.67 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'PreviousChat modelsNextHuman input Chat ModelCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
https://python.langchain.com/docs/modules/model_io/models/chat/chat_model_caching
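Caching can also be controlled per model instance. A sketch under the assumption that the `cache` field behaves this way in your LangChain version; treat the flag's exact semantics as version-dependent:

```python
# Sketch: opt individual models out of the global cache via the `cache`
# field (assumption: this flag is honored by your installed version).
import langchain
from langchain.cache import InMemoryCache
from langchain.chat_models import ChatOpenAI

langchain.llm_cache = InMemoryCache()

cached_llm = ChatOpenAI()            # uses the global cache by default
fresh_llm = ChatOpenAI(cache=False)  # always calls the provider API

cached_llm.predict("Tell me a joke")  # first call: hits the API
cached_llm.predict("Tell me a joke")  # second call: served from cache
fresh_llm.predict("Tell me a joke")   # never cached
```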
7ed94564609e-0
Streaming | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/model_io/models/chat/streaming
7ed94564609e-1
Streaming

Some Chat models provide a streaming response. This means that instead of waiting for the entire response to be returned, you can start processing it as soon as it's available. This is useful if you want to display the response to the user as it's being generated, or if you want to process the response as it's being generated.

from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    HumanMessage,
)
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

chat = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)
resp = chat([HumanMessage(content="Write me a song about sparkling water.")])

    Verse 1:
    Bubbles rising to the top
    A refreshing drink that never stops
    Clear and crisp, it's pure delight
    A taste that's sure to excite

    Chorus:
    Sparkling water, oh so fine
    A drink that's always on my mind
    With every sip, I feel alive
    Sparkling water, you're my vibe

    Verse 2:
    No sugar, no calories, just pure bliss
    A drink that's hard to resist
    It's the perfect way to quench my thirst
    A drink that always comes first
https://python.langchain.com/docs/modules/model_io/models/chat/streaming
7ed94564609e-2
    Chorus:
    Sparkling water, oh so fine
    A drink that's always on my mind
    With every sip, I feel alive
    Sparkling water, you're my vibe

    Bridge:
    From the mountains to the sea
    Sparkling water, you're the key
    To a healthy life, a happy soul
    A drink that makes me feel whole

    Chorus:
    Sparkling water, oh so fine
    A drink that's always on my mind
    With every sip, I feel alive
    Sparkling water, you're my vibe

    Outro:
    Sparkling water, you're the one
    A drink that's always so much fun
    I'll never let you go, my friend
    Sparkling
https://python.langchain.com/docs/modules/model_io/models/chat/streaming
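Beyond StreamingStdOutCallbackHandler, any handler implementing on_llm_new_token can consume the stream. A sketch of a custom handler that collects tokens instead of printing them (assumes OPENAI_API_KEY is set):

```python
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage


class CollectTokensHandler(BaseCallbackHandler):
    """Buffer streamed tokens instead of writing them to stdout."""

    def __init__(self) -> None:
        self.tokens = []

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        self.tokens.append(token)


handler = CollectTokensHandler()
chat = ChatOpenAI(streaming=True, callbacks=[handler], temperature=0)
chat([HumanMessage(content="Write me a haiku about sparkling water.")])
print("".join(handler.tokens))  # the full streamed response
```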
e645b88d39cb-0
Prompts | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/model_io/models/chat/prompts
e645b88d39cb-1
Prompts

Prompts for Chat models are built around messages, instead of just plain text.

You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_prompt -- this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an LLM or chat model.

For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:

from langchain import PromptTemplate
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())

    AIMessage(content="J'adore la programmation.", additional_kwargs={})
https://python.langchain.com/docs/modules/model_io/models/chat/prompts
e645b88d39cb-2
la programmation.", additional_kwargs={})If you wanted to construct the MessagePromptTemplate more directly, you could create a PromptTemplate outside and then pass it in, eg:prompt=PromptTemplate( template="You are a helpful assistant that translates {input_language} to {output_language}.", input_variables=["input_language", "output_language"],)system_message_prompt = SystemMessagePromptTemplate(prompt=prompt)PreviousLLMChainNextStreamingCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
https://python.langchain.com/docs/modules/model_io/models/chat/prompts
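To see the PromptValue conversion mentioned above without any model call, a minimal sketch:

```python
# Sketch: format_prompt returns a PromptValue that converts either to a
# plain string (for an LLM) or to a message list (for a chat model).
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)

system_message_prompt = SystemMessagePromptTemplate.from_template(
    "You are a helpful assistant that translates {input_language} to {output_language}."
)
human_message_prompt = HumanMessagePromptTemplate.from_template("{text}")
chat_prompt = ChatPromptTemplate.from_messages(
    [system_message_prompt, human_message_prompt]
)

value = chat_prompt.format_prompt(
    input_language="English", output_language="French", text="I love programming."
)
print(value.to_string())    # "System: ...\nHuman: I love programming."
print(value.to_messages())  # [SystemMessage(...), HumanMessage(...)]
```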
75cd6f3caa96-0
Human input Chat Model | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/model_io/models/chat/human_input_chat_model
75cd6f3caa96-1
Human input Chat Model

Along with HumanInputLLM, LangChain also provides a pseudo Chat Model class that can be used for testing, debugging, or educational purposes. This allows you to mock out calls to the Chat Model and simulate how a human would respond if they received the messages.

In this notebook, we go over how to use this. We start by using the HumanInputChatModel in an agent.

from langchain.chat_models.human import HumanInputChatModel

Since we will use the WikipediaQueryRun tool in this notebook, you might need to install the wikipedia package if you haven't done so already.

%pip install wikipedia

    /Users/mskim58/dev/research/chatbot/github/langchain/.venv/bin/python: No module named pip
    Note: you may need to restart the kernel to use updated packages.

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType

tools = load_tools(["wikipedia"])
llm = HumanInputChatModel()
agent = initialize_agent(
    tools, llm, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent("What is Bocchi the Rock?")

    > Entering new chain...

    ======= start of message =======
https://python.langchain.com/docs/modules/model_io/models/chat/human_input_chat_model
75cd6f3caa96-2
    type: system
    data:
      content: "Answer the following questions as best you can. You have access to the following tools:\n\nWikipedia: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, facts, historical events, or other subjects. Input should be a search query.\n\nThe way you use the tools is by specifying a json blob.\nSpecifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).\n\nThe only values that should be in the \"action\" field are: Wikipedia\n\nThe $JSON_BLOB should only contain a SINGLE action, do NOT return a list of multiple actions. Here is an example of a valid $JSON_BLOB:\n\n```\n{\n \"action\": $TOOL_NAME,\n \"action_input\": $INPUT\n}\n```\n\nALWAYS use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction:\n```\n$JSON_BLOB\n```\nObservation: the result of the action\n... (this Thought/Action/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin! Reminder to always use the exact characters `Final Answer` when responding."
      additional_kwargs: {}
    ======= end of message =======

    ======= start of message =======
https://python.langchain.com/docs/modules/model_io/models/chat/human_input_chat_model
75cd6f3caa96-3
    type: human
    data:
      content: 'What is Bocchi the Rock? '
      additional_kwargs: {}
      example: false
    ======= end of message =======

    Action:
    ```
    {
      "action": "Wikipedia",
      "action_input": "What is Bocchi the Rock?"
    }
    ```

    Observation: Page: Bocchi the Rock!
    Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Botchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōbon volumes as of November 2022. An anime television series adaptation produced by CloverWorks aired from October to December 2022. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim.

    Page: Hitori Bocchi no Marumaru Seikatsu
    Summary: Hitori Bocchi no Marumaru Seikatsu (Japanese: ひとりぼっちの○○生活,
https://python.langchain.com/docs/modules/model_io/models/chat/human_input_chat_model
75cd6f3caa96-4
lit. "Bocchi Hitori's ____ Life" or "The ____ Life of Being Alone") is a Japanese yonkoma manga series written and illustrated by Katsuwo. It was serialized in ASCII Media Works' Comic Dengeki Daioh "g" magazine from September 2013 to April 2021. Eight tank�bon volumes have been released. An anime television series adaptation by C2C aired from April to June 2019. Page: Kessoku Band (album) Summary: Kessoku Band (Japanese: ���ンド, Hepburn: Kessoku Bando) is the debut studio album by Kessoku Band, a fictional musical group from the anime television series Bocchi the Rock!, released digitally on December 25, 2022, and physically on CD on December 28 by Aniplex. Featuring vocals from voice actresses Yoshino Aoyama, Sayumi Suzushiro, Saku Mizuno, and Ikumi Hasegawa, the album consists of 14 tracks previously heard in the anime, including a cover of Asian Kung-Fu Generation's "Rockn' Roll, Morning Light Falls on You", as well as newly recorded songs; nine singles preceded the album's physical release. Commercially, Kessoku Band peaked at number one on the Billboard Japan Hot Albums Chart and Oricon Albums Chart, and was certified gold by the Recording Industry Association of Japan. Thought: ======= start of message ======= type: system data: content: "Answer the following questions as best you can. You have access to the following tools:\n\nWikipedia: A wrapper around Wikipedia. Useful
https://python.langchain.com/docs/modules/model_io/models/chat/human_input_chat_model
75cd6f3caa96-5
    You have access to the following tools:\n\nWikipedia: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, facts, historical events, or other subjects. Input should be a search query.\n\nThe way you use the tools is by specifying a json blob.\nSpecifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).\n\nThe only values that should be in the \"action\" field are: Wikipedia\n\nThe $JSON_BLOB should only contain a SINGLE action, do NOT return a list of multiple actions. Here is an example of a valid $JSON_BLOB:\n\n```\n{\n \"action\": $TOOL_NAME,\n \"action_input\": $INPUT\n}\n```\n\nALWAYS use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction:\n```\n$JSON_BLOB\n```\nObservation: the result of the action\n... (this Thought/Action/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin! Reminder to always use the exact characters `Final Answer` when responding."
      additional_kwargs: {}
    ======= end of message =======

    ======= start of message =======
    type: human
    data:
      content: "What is Bocchi the Rock?\n\nThis was your
https://python.langchain.com/docs/modules/model_io/models/chat/human_input_chat_model
75cd6f3caa96-6
    previous work (but I haven't seen any of it! I only see what you return as final answer):\nAction:\n```\n{\n \"action\": \"Wikipedia\",\n \"action_input\": \"What is Bocchi the Rock?\"\n}\n```\nObservation: Page: Bocchi the Rock!\nSummary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Botchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōbon volumes as of November 2022.\nAn anime television series adaptation produced by CloverWorks aired from October to December 2022. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim.\n\nPage: Hitori Bocchi no Marumaru Seikatsu\nSummary: Hitori Bocchi no Marumaru Seikatsu (Japanese: ひとりぼっちの○○生活, lit. \"Bocchi Hitori's ____ Life\" or \"The ____ Life of Being Alone\") is a Japanese yonkoma manga series written and illustrated by Katsuwo. It was serialized in ASCII Media Works' Comic Dengeki Daioh \"g\" magazine from September 2013 to April 2021. Eight tankōbon volumes have been released. An anime
https://python.langchain.com/docs/modules/model_io/models/chat/human_input_chat_model
75cd6f3caa96-7
    television series adaptation by C2C aired from April to June 2019.\n\nPage: Kessoku Band (album)\nSummary: Kessoku Band (Japanese: 結束バンド, Hepburn: Kessoku Bando) is the debut studio album by Kessoku Band, a fictional musical group from the anime television series Bocchi the Rock!, released digitally on December 25, 2022, and physically on CD on December 28 by Aniplex. Featuring vocals from voice actresses Yoshino Aoyama, Sayumi Suzushiro, Saku Mizuno, and Ikumi Hasegawa, the album consists of 14 tracks previously heard in the anime, including a cover of Asian Kung-Fu Generation's \"Rockn' Roll, Morning Light Falls on You\", as well as newly recorded songs; nine singles preceded the album's physical release. Commercially, Kessoku Band peaked at number one on the Billboard Japan Hot Albums Chart and Oricon Albums Chart, and was certified gold by the Recording Industry Association of Japan.\n\n\nThought:"
      additional_kwargs: {}
      example: false
    ======= end of message =======

    This finally works.
    Final Answer: Bocchi the Rock! is a four-panel manga series and anime television series. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim.

    > Finished chain.

    {'input': 'What is Bocchi the Rock?',
     'output': "Bocchi the Rock! is a four-panel manga
https://python.langchain.com/docs/modules/model_io/models/chat/human_input_chat_model
75cd6f3caa96-8
    series and anime television series. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim."}
https://python.langchain.com/docs/modules/model_io/models/chat/human_input_chat_model
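HumanInputChatModel can also be called directly, outside an agent, which makes the mocking behavior easier to see. A minimal sketch; note the exact display and input prompts come from the model's default message and input functions and may differ between versions:

```python
from langchain.chat_models.human import HumanInputChatModel
from langchain.schema import HumanMessage

llm = HumanInputChatModel()

# The call prints the incoming messages and then blocks, reading the
# "model" reply from stdin; whatever you type becomes the AIMessage content.
response = llm([HumanMessage(content="What is 2 + 2?")])
print(response.content)
```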
7e74a09db5e0-0
LLMs | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/model_io/models/llms/
7e74a09db5e0-1
LLMs

info: Head to Integrations for documentation on built-in integrations with LLM providers.

Large Language Models (LLMs) are a core component of LangChain.
https://python.langchain.com/docs/modules/model_io/models/llms/
7e74a09db5e0-2
LangChain does not serve its own LLMs, but rather provides a standard interface for interacting with many different LLMs.

Get started

There are lots of LLM providers (OpenAI, Cohere, Hugging Face, etc.) - the LLM class is designed to provide a standard interface for all of them.

In this walkthrough we'll work with an OpenAI LLM wrapper, although the functionalities highlighted are generic for all LLM types.

Setup

To start we'll need to install the OpenAI Python package:

pip install openai

Accessing the API requires an API key, which you can get by creating an account and heading here. Once we have a key we'll want to set it as an environment variable by running:

export OPENAI_API_KEY="..."

If you'd prefer not to set an environment variable you can pass the key in directly via the openai_api_key named parameter when initiating the OpenAI LLM class:

from langchain.llms import OpenAI

llm = OpenAI(openai_api_key="...")

Otherwise you can initialize without any params:

from langchain.llms import OpenAI

llm = OpenAI()

__call__: string in -> string out

The simplest way to use an LLM is as a callable: pass in a string, get a string completion.

llm("Tell me a joke")

    'Why did the chicken cross the road?\n\nTo get to the other side.'

generate: batch calls, richer outputs

generate lets you call the model with a list of strings, getting back a more complete response than just the text. This complete response can include things like multiple top responses and other LLM provider-specific information:

llm_result = llm.generate(["Tell me a joke", "Tell me a poem"]*15)
len(llm_result.generations)

    30
https://python.langchain.com/docs/modules/model_io/models/llms/
7e74a09db5e0-3
llm_result.generations[0]

    [Generation(text='\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'),
     Generation(text='\n\nWhy did the chicken cross the road?\n\nTo get to the other side.')]

llm_result.generations[-1]

    [Generation(text="\n\nWhat if love neverspeech\n\nWhat if love never ended\n\nWhat if love was only a feeling\n\nI'll never know this love\n\nIt's not a feeling\n\nBut it's what we have for each other\n\nWe just know that love is something strong\n\nAnd we can't help but be happy\n\nWe just feel what love is for us\n\nAnd we love each other with all our heart\n\nWe just don't know how\n\nHow it will go\n\nBut we know that love is something strong\n\nAnd we'll always have each other\n\nIn our lives."),
     Generation(text='\n\nOnce upon a time\n\nThere was a love so pure and true\n\nIt lasted for centuries\n\nAnd never became stale or dry\n\nIt was moving and alive\n\nAnd the heart of the love-ick\n\nIs still beating strong and true.')]

You can also access provider-specific information that is returned. This information is NOT standardized across providers.

llm_result.llm_output

    {'token_usage': {'completion_tokens': 3903, 'total_tokens': 4023, 'prompt_tokens': 120}}
https://python.langchain.com/docs/modules/model_io/models/llms/
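The "multiple top responses" mentioned above come from the provider's n parameter (paired with best_of for OpenAI). A sketch (assumes OPENAI_API_KEY is set; the parameter values are illustrative):

```python
from langchain.llms import OpenAI

# Ask OpenAI for two completions per prompt; generations[0] then holds
# two Generation objects for the single input prompt.
llm = OpenAI(n=2, best_of=2)
llm_result = llm.generate(["Tell me a joke"])
for generation in llm_result.generations[0]:
    print(generation.text)
print(llm_result.llm_output)  # provider-specific info such as token usage
```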
e1316f72e864-0
Human input LLM | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/model_io/models/llms/human_input_llm
e1316f72e864-1
Human input LLM

Similar to the fake LLM, LangChain provides a pseudo LLM class that can be used for testing, debugging, or educational purposes. This allows you to mock out calls to the LLM and simulate how a human would respond if they received the prompts.

In this notebook, we go over how to use this. We start by using the HumanInputLLM in an agent.

from langchain.llms.human import HumanInputLLM
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType

Since we will use the WikipediaQueryRun tool in this notebook, you might need to install the wikipedia package if you haven't done so already.

%pip install wikipedia

tools = load_tools(["wikipedia"])
llm = HumanInputLLM(
    prompt_func=lambda prompt: print(
        f"\n===PROMPT====\n{prompt}\n=====END OF PROMPT======"
    )
)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What is 'Bocchi the Rock!'?")

    > Entering new AgentExecutor chain...
https://python.langchain.com/docs/modules/model_io/models/llms/human_input_llm
e1316f72e864-2
    ===PROMPT====
    Answer the following questions as best you can. You have access to the following tools:

    Wikipedia: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, historical events, or other subjects. Input should be a search query.

    Use the following format:

    Question: the input question you must answer
    Thought: you should always think about what to do
    Action: the action to take, should be one of [Wikipedia]
    Action Input: the input to the action
    Observation: the result of the action
    ... (this Thought/Action/Action Input/Observation can repeat N times)
    Thought: I now know the final answer
    Final Answer: the final answer to the original input question

    Begin!

    Question: What is 'Bocchi the Rock!'?
    Thought:
    =====END OF PROMPT======
    I need to use a tool.
    Action: Wikipedia
    Action Input: Bocchi the Rock!, Japanese four-panel manga and anime series.

    Observation: Page: Bocchi the Rock!
    Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōbon volumes as of November 2022.
https://python.langchain.com/docs/modules/model_io/models/llms/human_input_llm
e1316f72e864-3
    An anime television series adaptation produced by CloverWorks aired from October to December 2022. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim.

    Page: Manga Time Kirara
    Summary: Manga Time Kirara (まんがタイムきらら, Manga Taimu Kirara) is a Japanese seinen manga magazine published by Houbunsha which mainly serializes four-panel manga. The magazine is sold on the ninth of each month and was first published as a special edition of Manga Time, another Houbunsha magazine, on May 17, 2002. Characters from this magazine have appeared in a crossover role-playing game called Kirara Fantasia.

    Page: Manga Time Kirara Max
    Summary: Manga Time Kirara Max (まんがタイムきららMAX) is a Japanese four-panel seinen manga magazine published by Houbunsha. It is the third magazine of the "Kirara" series, after "Manga Time Kirara" and "Manga Time Kirara Carat". The first issue was released on September 29, 2004. Currently the magazine is released on the 19th of each month.

    Thought:
    ===PROMPT====
    Answer the following questions as best you can. You have access to the following tools:

    Wikipedia: A wrapper around Wikipedia.
https://python.langchain.com/docs/modules/model_io/models/llms/human_input_llm
e1316f72e864-4
    Useful for when you need to answer general questions about people, places, companies, historical events, or other subjects. Input should be a search query.

    Use the following format:

    Question: the input question you must answer
    Thought: you should always think about what to do
    Action: the action to take, should be one of [Wikipedia]
    Action Input: the input to the action
    Observation: the result of the action
    ... (this Thought/Action/Action Input/Observation can repeat N times)
    Thought: I now know the final answer
    Final Answer: the final answer to the original input question

    Begin!

    Question: What is 'Bocchi the Rock!'?
    Thought:I need to use a tool.
    Action: Wikipedia
    Action Input: Bocchi the Rock!, Japanese four-panel manga and anime series.

    Observation: Page: Bocchi the Rock!
    Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōbon volumes as of November 2022.

    An anime television series adaptation produced by CloverWorks aired from October to December 2022. The series has been praised for its writing, comedy, characters, and depiction of social anxiety,
https://python.langchain.com/docs/modules/model_io/models/llms/human_input_llm
e1316f72e864-5
    with the anime's visual creativity receiving acclaim.

    Page: Manga Time Kirara
    Summary: Manga Time Kirara (まんがタイムきらら, Manga Taimu Kirara) is a Japanese seinen manga magazine published by Houbunsha which mainly serializes four-panel manga. The magazine is sold on the ninth of each month and was first published as a special edition of Manga Time, another Houbunsha magazine, on May 17, 2002. Characters from this magazine have appeared in a crossover role-playing game called Kirara Fantasia.

    Page: Manga Time Kirara Max
    Summary: Manga Time Kirara Max (まんがタイムきららMAX) is a Japanese four-panel seinen manga magazine published by Houbunsha. It is the third magazine of the "Kirara" series, after "Manga Time Kirara" and "Manga Time Kirara Carat". The first issue was released on September 29, 2004. Currently the magazine is released on the 19th of each month.

    Thought:
    =====END OF PROMPT======
    These are not relevant articles.
    Action: Wikipedia
    Action Input: Bocchi the Rock!, Japanese four-panel manga series written and illustrated by Aki Hamaji.

    Observation: Page: Bocchi the Rock!
    Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!,
https://python.langchain.com/docs/modules/model_io/models/llms/human_input_llm
e1316f72e864-6
    Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōbon volumes as of November 2022.

    An anime television series adaptation produced by CloverWorks aired from October to December 2022. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim.

    Thought:
    ===PROMPT====
    Answer the following questions as best you can. You have access to the following tools:

    Wikipedia: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, historical events, or other subjects. Input should be a search query.

    Use the following format:

    Question: the input question you must answer
    Thought: you should always think about what to do
    Action: the action to take, should be one of [Wikipedia]
    Action Input: the input to the action
    Observation: the result of the action
    ... (this Thought/Action/Action Input/Observation can repeat N times)
    Thought: I now know the final answer
    Final Answer: the final answer to the original input question

    Begin!

    Question: What is 'Bocchi the Rock!'?
    Thought:I need to use a tool.
    Action: Wikipedia
    Action Input: Bocchi the Rock!, Japanese four-panel manga
https://python.langchain.com/docs/modules/model_io/models/llms/human_input_llm
e1316f72e864-7
    and anime series.

    Observation: Page: Bocchi the Rock!
    Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōbon volumes as of November 2022.

    An anime television series adaptation produced by CloverWorks aired from October to December 2022. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim.

    Page: Manga Time Kirara
    Summary: Manga Time Kirara (まんがタイムきらら, Manga Taimu Kirara) is a Japanese seinen manga magazine published by Houbunsha which mainly serializes four-panel manga. The magazine is sold on the ninth of each month and was first published as a special edition of Manga Time, another Houbunsha magazine, on May 17, 2002. Characters from this magazine have appeared in a crossover role-playing game called Kirara Fantasia.

    Page: Manga Time Kirara Max
    Summary: Manga Time Kirara Max (まんがタイムきららMAX) is a Japanese four-panel seinen manga magazine published by Houbunsha.
https://python.langchain.com/docs/modules/model_io/models/llms/human_input_llm
e1316f72e864-8
    It is the third magazine of the "Kirara" series, after "Manga Time Kirara" and "Manga Time Kirara Carat". The first issue was released on September 29, 2004. Currently the magazine is released on the 19th of each month.

    Thought:These are not relevant articles.
    Action: Wikipedia
    Action Input: Bocchi the Rock!, Japanese four-panel manga series written and illustrated by Aki Hamaji.

    Observation: Page: Bocchi the Rock!
    Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōbon volumes as of November 2022.

    An anime television series adaptation produced by CloverWorks aired from October to December 2022. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim.

    Thought:
    =====END OF PROMPT======
    It worked.
    Final Answer: Bocchi the Rock! is a four-panel manga series and anime television series. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim.

    > Finished chain.

    "Bocchi the Rock! is a four-panel manga series and anime television series. The series has been
https://python.langchain.com/docs/modules/model_io/models/llms/human_input_llm
e1316f72e864-9
    praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim."
https://python.langchain.com/docs/modules/model_io/models/llms/human_input_llm
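As with the chat variant, HumanInputLLM can be called directly rather than inside an agent. A minimal sketch; the prompt_func mirrors the one above, and the default input_func reads your reply from stdin (exact prompts may vary by version):

```python
from langchain.llms.human import HumanInputLLM

llm = HumanInputLLM(
    prompt_func=lambda prompt: print(f"\n===PROMPT====\n{prompt}\n===END===")
)

# Displays the prompt, then blocks until you type the "model" response.
answer = llm("What is the capital of France?")
print(answer)
```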
39dc43079643-0
Streaming | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/model_io/models/llms/streaming_llm
39dc43079643-1
Streaming

Some LLMs provide a streaming response. This means that instead of waiting for the entire response to be returned, you can start processing it as soon as it's available. This is useful if you want to display the response to the user as it's being generated, or if you want to process the response as it's being generated.

Currently, we support streaming for a broad range of LLM implementations, including but not limited to OpenAI, ChatOpenAI, ChatAnthropic, Hugging Face Text Generation Inference, and Replicate. This feature has been expanded to accommodate most of the models.

To utilize streaming, use a CallbackHandler that implements on_llm_new_token. In this example, we are using StreamingStdOutCallbackHandler.

from langchain.llms import OpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = OpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)
resp = llm("Write me a song about sparkling water.")

    Verse 1
    I'm sippin' on sparkling water,
    It's so refreshing and light,
    It's the perfect way to quench my thirst
    On a hot summer night.

    Chorus
    Sparkling water, sparkling water,
    It's the best way to stay hydrated,
https://python.langchain.com/docs/modules/model_io/models/llms/streaming_llm
39dc43079643-2
    It's so crisp and so clean,
    It's the perfect way to stay refreshed.

    Verse 2
    I'm sippin' on sparkling water,
    It's so bubbly and bright,
    It's the perfect way to cool me down
    On a hot summer night.

    Chorus
    Sparkling water, sparkling water,
    It's the best way to stay hydrated,
    It's so crisp and so clean,
    It's the perfect way to stay refreshed.

    Verse 3
    I'm sippin' on sparkling water,
    It's so light and so clear,
    It's the perfect way to keep me cool
    On a hot summer night.

    Chorus
    Sparkling water, sparkling water,
    It's the best way to stay hydrated,
    It's so crisp and so clean,
    It's the perfect way to stay refreshed.

We still have access to the end LLMResult if using generate. However, token_usage is not currently supported for streaming.

llm.generate(["Tell me a joke."])

    Q: What did the fish say when it hit the wall?
    A: Dam!

    LLMResult(generations=[[Generation(text='\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {}, 'model_name': 'text-davinci-003'})
https://python.langchain.com/docs/modules/model_io/models/llms/streaming_llm
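A sketch reinforcing the note above about generate under streaming: the tokens print as they arrive, the final text is still assembled into the LLMResult, but token_usage comes back empty (assumes OPENAI_API_KEY is set):

```python
from langchain.llms import OpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = OpenAI(
    streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0
)
result = llm.generate(["Tell me a joke."])  # tokens stream to stdout
print(result.generations[0][0].text)        # full text is still available
print(result.llm_output)                    # {'token_usage': {}, ...}
```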