{"id": "4cecf0a7e954-0", "text": "Debugging | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-1", "text": "Skip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationDebuggingDeploymentLangSmithModel ComparisonEcosystemAdditional resourcesGuidesDebuggingOn this pageDebuggingIf you're building with LLMs, at some point something will break, and you'll need to debug. A model call will fail, or the model output will be misformatted, or there will be some nested model calls and it won't be clear where along the way an incorrect output was created.Here's a few different tools and functionalities to aid in debugging.Tracing\u00e2\u20ac\u2039Platforms with tracing capabilities like LangSmith and WandB are the most comprehensive solutions for debugging. These platforms make it easy to not only log and visualize LLM apps, but also to actively debug, test and refine them.For anyone building production-grade LLM applications, we highly recommend using a platform like this.langchain.debug and langchain.verbose\u00e2\u20ac\u2039If you're prototyping in Jupyter Notebooks or running Python scripts, it can be helpful to print out the intermediate steps of a Chain run. There's a number of ways to enable printing at varying degrees of verbosity.Let's suppose we have a simple agent and want to visualize the actions it takes and tool outputs it receives. Without any debugging, here's what we see:from langchain.agents import AgentType, initialize_agent, load_toolsfrom langchain.chat_models import ChatOpenAIllm = ChatOpenAI(model_name=\"gpt-4\", temperature=0)tools = load_tools([\"ddg-search\", \"llm-math\"], llm=llm)agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)agent.run(\"Who directed the", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-2", "text": "llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)agent.run(\"Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\") 'The director of the 2023 film Oppenheimer is Christopher Nolan and he is approximately 19345 days old in 2023.'langchain.debug = True\u00e2\u20ac\u2039Setting the global debug flag will cause all LangChain components with callback support (chains, models, agents, tools, retrievers) to print the inputs they receive and outputs they generate. This is the most verbose setting and will fully log raw inputs and outputs.import langchainlangchain.debug = Trueagent.run(\"Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\")Console output [chain/start] [1:RunTypeEnum.chain:AgentExecutor] Entering Chain run with input: { \"input\": \"Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\" } [chain/start] [1:RunTypeEnum.chain:AgentExecutor > 2:RunTypeEnum.chain:LLMChain] Entering Chain run with input: { \"input\": \"Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\", \"agent_scratchpad\": \"\", \"stop\": [ \"\\nObservation:\", \"\\n\\tObservation:\" ] } [llm/start]", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-3", "text": "] } [llm/start] [1:RunTypeEnum.chain:AgentExecutor > 2:RunTypeEnum.chain:LLMChain > 3:RunTypeEnum.llm:ChatOpenAI] Entering LLM run with input: { \"prompts\": [ \"Human: Answer the following questions as best you can. You have access to the following tools:\\n\\nduckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.\\nCalculator: Useful for when you need to answer questions about math.\\n\\nUse the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [duckduckgo_search, Calculator]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question\\n\\nBegin!\\n\\nQuestion: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\\nThought:\" ] } [llm/end] [1:RunTypeEnum.chain:AgentExecutor > 2:RunTypeEnum.chain:LLMChain > 3:RunTypeEnum.llm:ChatOpenAI] [5.53s] Exiting LLM run with output: { \"generations\": [ [ {", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-4", "text": "[ { \"text\": \"I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\\nAction: duckduckgo_search\\nAction Input: \\\"Director of the 2023 film Oppenheimer and their age\\\"\", \"generation_info\": { \"finish_reason\": \"stop\" }, \"message\": { \"lc\": 1, \"type\": \"constructor\", \"id\": [ \"langchain\", \"schema\", \"messages\", \"AIMessage\" ], \"kwargs\": { \"content\": \"I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\\nAction: duckduckgo_search\\nAction Input:", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-5", "text": "to find out the director and their age.\\nAction: duckduckgo_search\\nAction Input: \\\"Director of the 2023 film Oppenheimer and their age\\\"\", \"additional_kwargs\": {} } } } ] ], \"llm_output\": { \"token_usage\": { \"prompt_tokens\": 206, \"completion_tokens\": 71, \"total_tokens\": 277 }, \"model_name\": \"gpt-4\" }, \"run\": null } [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 2:RunTypeEnum.chain:LLMChain] [5.53s] Exiting Chain run with output: { \"text\": \"I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\\nAction: duckduckgo_search\\nAction Input: \\\"Director of the 2023 film Oppenheimer and their age\\\"\" } [tool/start] [1:RunTypeEnum.chain:AgentExecutor > 4:RunTypeEnum.tool:duckduckgo_search] Entering Tool run with input: \"Director of the 2023 film Oppenheimer and their", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-6", "text": "run with input: \"Director of the 2023 film Oppenheimer and their age\" [tool/end] [1:RunTypeEnum.chain:AgentExecutor > 4:RunTypeEnum.tool:duckduckgo_search] [1.51s] Exiting Tool run with output: \"Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\" [chain/start] [1:RunTypeEnum.chain:AgentExecutor > 5:RunTypeEnum.chain:LLMChain] Entering Chain run with input: { \"input\": \"Who directed the 2023 film Oppenheimer and what is their age?", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-7", "text": "\"input\": \"Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\", \"agent_scratchpad\": \"I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\\nAction: duckduckgo_search\\nAction Input: \\\"Director of the 2023 film Oppenheimer and their age\\\"\\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \\\"Oppenheimer,\\\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\\nThought:\",", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-8", "text": "the Manhattan Project and thereby ushering in the Atomic Age.\\nThought:\", \"stop\": [ \"\\nObservation:\", \"\\n\\tObservation:\" ] } [llm/start] [1:RunTypeEnum.chain:AgentExecutor > 5:RunTypeEnum.chain:LLMChain > 6:RunTypeEnum.llm:ChatOpenAI] Entering LLM run with input: { \"prompts\": [ \"Human: Answer the following questions as best you can. You have access to the following tools:\\n\\nduckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.\\nCalculator: Useful for when you need to answer questions about math.\\n\\nUse the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [duckduckgo_search, Calculator]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question\\n\\nBegin!\\n\\nQuestion: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\\nThought:I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\\nAction: duckduckgo_search\\nAction", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-9", "text": "to find out the director and their age.\\nAction: duckduckgo_search\\nAction Input: \\\"Director of the 2023 film Oppenheimer and their age\\\"\\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \\\"Oppenheimer,\\\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\\nThought:\" ] } [llm/end] [1:RunTypeEnum.chain:AgentExecutor > 5:RunTypeEnum.chain:LLMChain > 6:RunTypeEnum.llm:ChatOpenAI] [4.46s] Exiting LLM run with output: { \"generations\": [ [", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-10", "text": "\"generations\": [ [ { \"text\": \"The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\\nAction: duckduckgo_search\\nAction Input: \\\"Christopher Nolan age\\\"\", \"generation_info\": { \"finish_reason\": \"stop\" }, \"message\": { \"lc\": 1, \"type\": \"constructor\", \"id\": [ \"langchain\", \"schema\", \"messages\", \"AIMessage\" ], \"kwargs\": { \"content\": \"The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\\nAction: duckduckgo_search\\nAction Input: \\\"Christopher Nolan age\\\"\", \"additional_kwargs\": {} }", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-11", "text": "{} } } } ] ], \"llm_output\": { \"token_usage\": { \"prompt_tokens\": 550, \"completion_tokens\": 39, \"total_tokens\": 589 }, \"model_name\": \"gpt-4\" }, \"run\": null } [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 5:RunTypeEnum.chain:LLMChain] [4.46s] Exiting Chain run with output: { \"text\": \"The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\\nAction: duckduckgo_search\\nAction Input: \\\"Christopher Nolan age\\\"\" } [tool/start] [1:RunTypeEnum.chain:AgentExecutor > 7:RunTypeEnum.tool:duckduckgo_search] Entering Tool run with input: \"Christopher Nolan age\" [tool/end] [1:RunTypeEnum.chain:AgentExecutor > 7:RunTypeEnum.tool:duckduckgo_search] [1.33s] Exiting Tool run with output: \"Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-12", "text": "storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. July 30, 1970 (age 52) London England Notable Works: \"Dunkirk\" \"Tenet\" \"The Prestige\" See all related content \u00e2\u2020\u2019 Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film July 11, 2023 5 AM PT For Subscribers Christopher Nolan is photographed in Los Angeles. (Joe Pugliese / For The Times) This is not the story I was supposed to write. Oppenheimer director Christopher Nolan, Cillian Murphy, Emily Blunt and Matt Damon on the stakes of making a three-hour, CGI-free summer film. Christopher Nolan, the director behind such films as \"Dunkirk,\" \"Inception,\" \"Interstellar,\" and the \"Dark Knight\" trilogy, has spent the last three years living in Oppenheimer's world, writing ...\" [chain/start] [1:RunTypeEnum.chain:AgentExecutor > 8:RunTypeEnum.chain:LLMChain] Entering Chain run with input: { \"input\": \"Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\", \"agent_scratchpad\": \"I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\\nAction: duckduckgo_search\\nAction Input: \\\"Director of the", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-13", "text": "director and their age.\\nAction: duckduckgo_search\\nAction Input: \\\"Director of the 2023 film Oppenheimer and their age\\\"\\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \\\"Oppenheimer,\\\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\\nThought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\\nAction: duckduckgo_search\\nAction Input: \\\"Christopher Nolan age\\\"\\nObservation: Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-14", "text": "storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. July 30, 1970 (age 52) London England Notable Works: \\\"Dunkirk\\\" \\\"Tenet\\\" \\\"The Prestige\\\" See all related content \u00e2\u2020\u2019 Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film July 11, 2023 5 AM PT For Subscribers Christopher Nolan is photographed in Los Angeles. (Joe Pugliese / For The Times) This is not the story I was supposed to write. Oppenheimer director Christopher Nolan, Cillian Murphy, Emily Blunt and Matt Damon on the stakes of making a three-hour, CGI-free summer film. Christopher Nolan, the director behind such films as \\\"Dunkirk,\\\" \\\"Inception,\\\" \\\"Interstellar,\\\" and the \\\"Dark Knight\\\" trilogy, has spent the last three years living in Oppenheimer's world, writing ...\\nThought:\", \"stop\": [ \"\\nObservation:\", \"\\n\\tObservation:\" ] } [llm/start] [1:RunTypeEnum.chain:AgentExecutor > 8:RunTypeEnum.chain:LLMChain > 9:RunTypeEnum.llm:ChatOpenAI] Entering LLM run with input: { \"prompts\": [ \"Human: Answer the following questions as best you can. You have access to the following tools:\\n\\nduckduckgo_search: A wrapper around DuckDuckGo", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-15", "text": "to the following tools:\\n\\nduckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.\\nCalculator: Useful for when you need to answer questions about math.\\n\\nUse the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [duckduckgo_search, Calculator]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question\\n\\nBegin!\\n\\nQuestion: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\\nThought:I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\\nAction: duckduckgo_search\\nAction Input: \\\"Director of the 2023 film Oppenheimer and their age\\\"\\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \\\"Oppenheimer,\\\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-16", "text": "by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\\nThought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\\nAction: duckduckgo_search\\nAction Input: \\\"Christopher Nolan age\\\"\\nObservation: Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. July 30, 1970 (age 52) London England Notable Works: \\\"Dunkirk\\\" \\\"Tenet\\\" \\\"The Prestige\\\" See all related content \u00e2\u2020\u2019 Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film July 11, 2023 5 AM PT For Subscribers Christopher Nolan is photographed in Los Angeles. (Joe Pugliese /", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-17", "text": "AM PT For Subscribers Christopher Nolan is photographed in Los Angeles. (Joe Pugliese / For The Times) This is not the story I was supposed to write. Oppenheimer director Christopher Nolan, Cillian Murphy, Emily Blunt and Matt Damon on the stakes of making a three-hour, CGI-free summer film. Christopher Nolan, the director behind such films as \\\"Dunkirk,\\\" \\\"Inception,\\\" \\\"Interstellar,\\\" and the \\\"Dark Knight\\\" trilogy, has spent the last three years living in Oppenheimer's world, writing ...\\nThought:\" ] } [llm/end] [1:RunTypeEnum.chain:AgentExecutor > 8:RunTypeEnum.chain:LLMChain > 9:RunTypeEnum.llm:ChatOpenAI] [2.69s] Exiting LLM run with output: { \"generations\": [ [ { \"text\": \"Christopher Nolan was born on July 30, 1970, which makes him 52 years old in 2023. Now I need to calculate his age in days.\\nAction: Calculator\\nAction Input: 52*365\", \"generation_info\": { \"finish_reason\": \"stop\" }, \"message\": { \"lc\": 1, \"type\": \"constructor\", \"id\": [", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-18", "text": "\"id\": [ \"langchain\", \"schema\", \"messages\", \"AIMessage\" ], \"kwargs\": { \"content\": \"Christopher Nolan was born on July 30, 1970, which makes him 52 years old in 2023. Now I need to calculate his age in days.\\nAction: Calculator\\nAction Input: 52*365\", \"additional_kwargs\": {} } } } ] ], \"llm_output\": { \"token_usage\": { \"prompt_tokens\": 868, \"completion_tokens\": 46, \"total_tokens\": 914 }, \"model_name\": \"gpt-4\" }, \"run\": null } [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 8:RunTypeEnum.chain:LLMChain]", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-19", "text": "> 8:RunTypeEnum.chain:LLMChain] [2.69s] Exiting Chain run with output: { \"text\": \"Christopher Nolan was born on July 30, 1970, which makes him 52 years old in 2023. Now I need to calculate his age in days.\\nAction: Calculator\\nAction Input: 52*365\" } [tool/start] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator] Entering Tool run with input: \"52*365\" [chain/start] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain] Entering Chain run with input: { \"question\": \"52*365\" } [chain/start] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain > 12:RunTypeEnum.chain:LLMChain] Entering Chain run with input: { \"question\": \"52*365\", \"stop\": [ \"```output\" ] } [llm/start] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain > 12:RunTypeEnum.chain:LLMChain > 13:RunTypeEnum.llm:ChatOpenAI] Entering LLM run with input: { \"prompts\": [ \"Human: Translate a math problem into a expression", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-20", "text": "[ \"Human: Translate a math problem into a expression that can be executed using Python's numexpr library. Use the output of running this code to answer the question.\\n\\nQuestion: ${Question with math problem.}\\n```text\\n${single line mathematical expression that solves the problem}\\n```\\n...numexpr.evaluate(text)...\\n```output\\n${Output of running the code}\\n```\\nAnswer: ${Answer}\\n\\nBegin.\\n\\nQuestion: What is 37593 * 67?\\n```text\\n37593 * 67\\n```\\n...numexpr.evaluate(\\\"37593 * 67\\\")...\\n```output\\n2518731\\n```\\nAnswer: 2518731\\n\\nQuestion: 37593^(1/5)\\n```text\\n37593**(1/5)\\n```\\n...numexpr.evaluate(\\\"37593**(1/5)\\\")...\\n```output\\n8.222831614237718\\n```\\nAnswer: 8.222831614237718\\n\\nQuestion: 52*365\" ] } [llm/end] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain > 12:RunTypeEnum.chain:LLMChain > 13:RunTypeEnum.llm:ChatOpenAI] [2.89s] Exiting LLM run with output: { \"generations\": [ [ { \"text\": \"```text\\n52*365\\n```\\n...numexpr.evaluate(\\\"52*365\\\")...\\n\", \"generation_info\":", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-21", "text": "\"generation_info\": { \"finish_reason\": \"stop\" }, \"message\": { \"lc\": 1, \"type\": \"constructor\", \"id\": [ \"langchain\", \"schema\", \"messages\", \"AIMessage\" ], \"kwargs\": { \"content\": \"```text\\n52*365\\n```\\n...numexpr.evaluate(\\\"52*365\\\")...\\n\", \"additional_kwargs\": {} } } } ] ], \"llm_output\": { \"token_usage\": { \"prompt_tokens\": 203, \"completion_tokens\": 19,", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-22", "text": "\"completion_tokens\": 19, \"total_tokens\": 222 }, \"model_name\": \"gpt-4\" }, \"run\": null } [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain > 12:RunTypeEnum.chain:LLMChain] [2.89s] Exiting Chain run with output: { \"text\": \"```text\\n52*365\\n```\\n...numexpr.evaluate(\\\"52*365\\\")...\\n\" } [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain] [2.90s] Exiting Chain run with output: { \"answer\": \"Answer: 18980\" } [tool/end] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator] [2.90s] Exiting Tool run with output: \"Answer: 18980\" [chain/start] [1:RunTypeEnum.chain:AgentExecutor > 14:RunTypeEnum.chain:LLMChain] Entering Chain run with input: { \"input\": \"Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\", \"agent_scratchpad\": \"I need to find out who directed the 2023 film Oppenheimer and their", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-23", "text": "\"I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\\nAction: duckduckgo_search\\nAction Input: \\\"Director of the 2023 film Oppenheimer and their age\\\"\\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \\\"Oppenheimer,\\\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\\nThought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\\nAction: duckduckgo_search\\nAction Input: \\\"Christopher Nolan age\\\"\\nObservation: Christopher Edward Nolan CBE (born", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-24", "text": "Input: \\\"Christopher Nolan age\\\"\\nObservation: Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. July 30, 1970 (age 52) London England Notable Works: \\\"Dunkirk\\\" \\\"Tenet\\\" \\\"The Prestige\\\" See all related content \u00e2\u2020\u2019 Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film July 11, 2023 5 AM PT For Subscribers Christopher Nolan is photographed in Los Angeles. (Joe Pugliese / For The Times) This is not the story I was supposed to write. Oppenheimer director Christopher Nolan, Cillian Murphy, Emily Blunt and Matt Damon on the stakes of making a three-hour, CGI-free summer film. Christopher Nolan, the director behind such films as \\\"Dunkirk,\\\" \\\"Inception,\\\" \\\"Interstellar,\\\" and the \\\"Dark Knight\\\" trilogy, has spent the last three years living in Oppenheimer's world, writing ...\\nThought:Christopher Nolan was born on July 30, 1970, which makes him 52 years old in 2023. Now I need to calculate his age in days.\\nAction: Calculator\\nAction Input: 52*365\\nObservation: Answer: 18980\\nThought:\", \"stop\": [ \"\\nObservation:\", \"\\n\\tObservation:\" ] }", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-25", "text": "\"\\n\\tObservation:\" ] } [llm/start] [1:RunTypeEnum.chain:AgentExecutor > 14:RunTypeEnum.chain:LLMChain > 15:RunTypeEnum.llm:ChatOpenAI] Entering LLM run with input: { \"prompts\": [ \"Human: Answer the following questions as best you can. You have access to the following tools:\\n\\nduckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.\\nCalculator: Useful for when you need to answer questions about math.\\n\\nUse the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [duckduckgo_search, Calculator]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question\\n\\nBegin!\\n\\nQuestion: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\\nThought:I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\\nAction: duckduckgo_search\\nAction Input: \\\"Director of the 2023 film Oppenheimer and their age\\\"\\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-26", "text": "the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \\\"Oppenheimer,\\\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\\nThought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\\nAction: duckduckgo_search\\nAction Input: \\\"Christopher Nolan age\\\"\\nObservation: Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. July 30, 1970", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-27", "text": "Academy Awards, five BAFTA Awards and six Golden Globe Awards. July 30, 1970 (age 52) London England Notable Works: \\\"Dunkirk\\\" \\\"Tenet\\\" \\\"The Prestige\\\" See all related content \u00e2\u2020\u2019 Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film July 11, 2023 5 AM PT For Subscribers Christopher Nolan is photographed in Los Angeles. (Joe Pugliese / For The Times) This is not the story I was supposed to write. Oppenheimer director Christopher Nolan, Cillian Murphy, Emily Blunt and Matt Damon on the stakes of making a three-hour, CGI-free summer film. Christopher Nolan, the director behind such films as \\\"Dunkirk,\\\" \\\"Inception,\\\" \\\"Interstellar,\\\" and the \\\"Dark Knight\\\" trilogy, has spent the last three years living in Oppenheimer's world, writing ...\\nThought:Christopher Nolan was born on July 30, 1970, which makes him 52 years old in 2023. Now I need to calculate his age in days.\\nAction: Calculator\\nAction Input: 52*365\\nObservation: Answer: 18980\\nThought:\" ] } [llm/end] [1:RunTypeEnum.chain:AgentExecutor > 14:RunTypeEnum.chain:LLMChain > 15:RunTypeEnum.llm:ChatOpenAI] [3.52s] Exiting LLM run with output: { \"generations\": [ [ { \"text\": \"I now know the final answer\\nFinal Answer: The director of the 2023", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-28", "text": "\"I now know the final answer\\nFinal Answer: The director of the 2023 film Oppenheimer is Christopher Nolan and he is 52 years old. His age in days is approximately 18980 days.\", \"generation_info\": { \"finish_reason\": \"stop\" }, \"message\": { \"lc\": 1, \"type\": \"constructor\", \"id\": [ \"langchain\", \"schema\", \"messages\", \"AIMessage\" ], \"kwargs\": { \"content\": \"I now know the final answer\\nFinal Answer: The director of the 2023 film Oppenheimer is Christopher Nolan and he is 52 years old. His age in days is approximately 18980 days.\", \"additional_kwargs\": {} } } } ] ],", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-29", "text": "} ] ], \"llm_output\": { \"token_usage\": { \"prompt_tokens\": 926, \"completion_tokens\": 43, \"total_tokens\": 969 }, \"model_name\": \"gpt-4\" }, \"run\": null } [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 14:RunTypeEnum.chain:LLMChain] [3.52s] Exiting Chain run with output: { \"text\": \"I now know the final answer\\nFinal Answer: The director of the 2023 film Oppenheimer is Christopher Nolan and he is 52 years old. His age in days is approximately 18980 days.\" } [chain/end] [1:RunTypeEnum.chain:AgentExecutor] [21.96s] Exiting Chain run with output: { \"output\": \"The director of the 2023 film Oppenheimer is Christopher Nolan and he is 52 years old. His age in days is approximately 18980 days.\" } 'The director of the 2023 film Oppenheimer is Christopher Nolan and he is 52 years old. His age in days is approximately 18980 days.'langchain.verbose = True\u00e2\u20ac\u2039Setting the verbose flag will print out inputs and outputs in a slightly more readable format and will skip logging certain raw outputs (like the token usage stats for an LLM call) so that you can focus on application", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-30", "text": "outputs (like the token usage stats for an LLM call) so that you can focus on application logic.import langchainlangchain.verbose = Trueagent.run(\"Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\")Console output > Entering new AgentExecutor chain... > Entering new LLMChain chain... Prompt after formatting: Answer the following questions as best you can. You have access to the following tools: duckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query. Calculator: Useful for when you need to answer questions about math. Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [duckduckgo_search, Calculator] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question Begin! Question: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)? Thought: > Finished chain. First, I need to find out who directed the film Oppenheimer in", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-31", "text": "chain. First, I need to find out who directed the film Oppenheimer in 2023 and their birth date to calculate their age. Action: duckduckgo_search Action Input: \"Director of the 2023 film Oppenheimer\" Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert ... 2023, 12:16 p.m. ET. ... including his role as the director of the Manhattan Engineer District, better ... J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named \"Trinity\". In this opening salvo of 2023's Oscar battle, Nolan has enjoined a star-studded cast for a retelling of the brilliant and haunted life of J. Robert Oppenheimer, the American physicist whose... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age. Thought: > Entering new LLMChain chain... Prompt after formatting: Answer the following questions as best you can. You have access to the following tools:", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-32", "text": "the following questions as best you can. You have access to the following tools: duckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query. Calculator: Useful for when you need to answer questions about math. Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [duckduckgo_search, Calculator] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question Begin! Question: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)? Thought:First, I need to find out who directed the film Oppenheimer in 2023 and their birth date to calculate their age. Action: duckduckgo_search Action Input: \"Director of the 2023 film Oppenheimer\" Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert ... 2023, 12:16 p.m. ET. ... including his", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-33", "text": "J. Robert ... 2023, 12:16 p.m. ET. ... including his role as the director of the Manhattan Engineer District, better ... J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named \"Trinity\". In this opening salvo of 2023's Oscar battle, Nolan has enjoined a star-studded cast for a retelling of the brilliant and haunted life of J. Robert Oppenheimer, the American physicist whose... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age. Thought: > Finished chain. The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his birth date to calculate his age. Action: duckduckgo_search Action Input: \"Christopher Nolan birth date\" Observation: July 30, 1970 (age 52) London England Notable Works: \"Dunkirk\" \"Tenet\" \"The Prestige\" See all related content \u00e2\u2020\u2019 Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-34", "text": "is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. Christopher Nolan is currently 52 according to his birthdate July 30, 1970 Sun Sign Leo Born Place Westminster, London, England, United Kingdom Residence Los Angeles, California, United States Nationality Education Chris attended Haileybury and Imperial Service College, in Hertford Heath, Hertfordshire. Christopher Nolan's next movie will study the man who developed the atomic bomb, J. Robert Oppenheimer. Here's the release date, plot, trailers & more. July 2023 sees the release of Christopher Nolan's new film, Oppenheimer, his first movie since 2020's Tenet and his split from Warner Bros. Billed as an epic thriller about \"the man who ... Thought: > Entering new LLMChain chain... Prompt after formatting: Answer the following questions as best you can. You have access to the following tools: duckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query. Calculator: Useful for when you need to answer questions about math. Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [duckduckgo_search, Calculator] Action Input: the input to the action Observation: the result of the action", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-35", "text": "Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question Begin! Question: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)? Thought:First, I need to find out who directed the film Oppenheimer in 2023 and their birth date to calculate their age. Action: duckduckgo_search Action Input: \"Director of the 2023 film Oppenheimer\" Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert ... 2023, 12:16 p.m. ET. ... including his role as the director of the Manhattan Engineer District, better ... J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named \"Trinity\". In this opening salvo of 2023's Oscar battle, Nolan has enjoined a star-studded cast for a retelling of the brilliant and haunted life of J. Robert Oppenheimer, the American physicist whose... Oppenheimer is a 2023 epic biographical thriller film written and directed", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-36", "text": "American physicist whose... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age. Thought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his birth date to calculate his age. Action: duckduckgo_search Action Input: \"Christopher Nolan birth date\" Observation: July 30, 1970 (age 52) London England Notable Works: \"Dunkirk\" \"Tenet\" \"The Prestige\" See all related content \u00e2\u2020\u2019 Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. Christopher Nolan is currently 52 according to his birthdate July 30, 1970 Sun Sign Leo Born Place Westminster, London, England, United Kingdom Residence Los Angeles, California, United States Nationality Education Chris attended Haileybury and Imperial Service College, in Hertford Heath, Hertfordshire. Christopher Nolan's next movie will study the man who developed the atomic bomb, J. Robert Oppenheimer. Here's the release date, plot, trailers & more. July 2023 sees the release of Christopher", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-37", "text": "the release date, plot, trailers & more. July 2023 sees the release of Christopher Nolan's new film, Oppenheimer, his first movie since 2020's Tenet and his split from Warner Bros. Billed as an epic thriller about \"the man who ... Thought: > Finished chain. Christopher Nolan was born on July 30, 1970. Now I need to calculate his age in 2023 and then convert it into days. Action: Calculator Action Input: (2023 - 1970) * 365 > Entering new LLMMathChain chain... (2023 - 1970) * 365 > Entering new LLMChain chain... Prompt after formatting: Translate a math problem into a expression that can be executed using Python's numexpr library. Use the output of running this code to answer the question. Question: ${Question with math problem.} ```text ${single line mathematical expression that solves the problem} ``` ...numexpr.evaluate(text)... ```output ${Output of running the code} ``` Answer: ${Answer} Begin. Question: What is 37593 * 67? ```text 37593 * 67 ``` ...numexpr.evaluate(\"37593 * 67\")... ```output 2518731 ``` Answer: 2518731 Question: 37593^(1/5) ```text", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-38", "text": "Question: 37593^(1/5) ```text 37593**(1/5) ``` ...numexpr.evaluate(\"37593**(1/5)\")... ```output 8.222831614237718 ``` Answer: 8.222831614237718 Question: (2023 - 1970) * 365 > Finished chain. ```text (2023 - 1970) * 365 ``` ...numexpr.evaluate(\"(2023 - 1970) * 365\")... Answer: 19345 > Finished chain. Observation: Answer: 19345 Thought: > Entering new LLMChain chain... Prompt after formatting: Answer the following questions as best you can. You have access to the following tools: duckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query. Calculator: Useful for when you need to answer questions about math. Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [duckduckgo_search, Calculator] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times)", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-39", "text": "... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question Begin! Question: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)? Thought:First, I need to find out who directed the film Oppenheimer in 2023 and their birth date to calculate their age. Action: duckduckgo_search Action Input: \"Director of the 2023 film Oppenheimer\" Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert ... 2023, 12:16 p.m. ET. ... including his role as the director of the Manhattan Engineer District, better ... J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named \"Trinity\". In this opening salvo of 2023's Oscar battle, Nolan has enjoined a star-studded cast for a retelling of the brilliant and haunted life of J. Robert Oppenheimer, the American physicist whose... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-40", "text": "written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age. Thought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his birth date to calculate his age. Action: duckduckgo_search Action Input: \"Christopher Nolan birth date\" Observation: July 30, 1970 (age 52) London England Notable Works: \"Dunkirk\" \"Tenet\" \"The Prestige\" See all related content \u00e2\u2020\u2019 Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. Christopher Nolan is currently 52 according to his birthdate July 30, 1970 Sun Sign Leo Born Place Westminster, London, England, United Kingdom Residence Los Angeles, California, United States Nationality Education Chris attended Haileybury and Imperial Service College, in Hertford Heath, Hertfordshire. Christopher Nolan's next movie will study the man who developed the atomic bomb, J. Robert Oppenheimer. Here's the release date, plot, trailers & more. July 2023 sees the release of Christopher Nolan's new film, Oppenheimer, his first movie since 2020's", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-41", "text": "release of Christopher Nolan's new film, Oppenheimer, his first movie since 2020's Tenet and his split from Warner Bros. Billed as an epic thriller about \"the man who ... Thought:Christopher Nolan was born on July 30, 1970. Now I need to calculate his age in 2023 and then convert it into days. Action: Calculator Action Input: (2023 - 1970) * 365 Observation: Answer: 19345 Thought: > Finished chain. I now know the final answer Final Answer: The director of the 2023 film Oppenheimer is Christopher Nolan and he is 53 years old in 2023. His age in days is 19345 days. > Finished chain. 'The director of the 2023 film Oppenheimer is Christopher Nolan and he is 53 years old in 2023. His age in days is 19345 days.'Chain(..., verbose=True)\u00e2\u20ac\u2039You can also scope verbosity down to a single object, in which case only the inputs and outputs to that object are printed (along with any additional callbacks calls made specifically by that object).# Passing verbose=True to initialize_agent will pass that along to the AgentExecutor (which is a Chain).agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True,)agent.run(\"Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\")Console output > Entering new AgentExecutor chain... First, I need to find out who directed the film Oppenheimer in", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-42", "text": "chain... First, I need to find out who directed the film Oppenheimer in 2023 and their birth date. Then, I can calculate their age in years and days. Action: duckduckgo_search Action Input: \"Director of 2023 film Oppenheimer\" Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named \"Trinity\". A Review of Christopher Nolan's new film 'Oppenheimer' , the story of the man who fathered the Atomic Bomb. Cillian Murphy leads an all star cast ... Release Date: July 21, 2023. Director ... For his new film, \"Oppenheimer,\" starring Cillian Murphy and Emily Blunt, director Christopher Nolan set out to build an entire 1940s western town. Thought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his birth date to calculate his age. Action: duckduckgo_search Action Input: \"Christopher Nolan birth date\" Observation: July 30, 1970 (age 52) London England Notable Works: \"Dunkirk\" \"Tenet\"", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-43", "text": "(age 52) London England Notable Works: \"Dunkirk\" \"Tenet\" \"The Prestige\" See all related content \u00e2\u2020\u2019 Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. Christopher Nolan is currently 52 according to his birthdate July 30, 1970 Sun Sign Leo Born Place Westminster, London, England, United Kingdom Residence Los Angeles, California, United States Nationality Education Chris attended Haileybury and Imperial Service College, in Hertford Heath, Hertfordshire. Christopher Nolan's next movie will study the man who developed the atomic bomb, J. Robert Oppenheimer. Here's the release date, plot, trailers & more. Date of Birth: 30 July 1970 . ... Christopher Nolan is a British-American film director, producer, and screenwriter. His films have grossed more than US$5 billion worldwide, and have garnered 11 Academy Awards from 36 nominations. ... Thought:Christopher Nolan was born on July 30, 1970. Now I can calculate his age in years and then in days. Action: Calculator Action Input: {\"operation\": \"subtract\", \"operands\": [2023, 1970]} Observation: Answer: 53 Thought:Christopher Nolan is 53 years old in 2023. Now I need to calculate his age in days. Action:", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "4cecf0a7e954-44", "text": "in 2023. Now I need to calculate his age in days. Action: Calculator Action Input: {\"operation\": \"multiply\", \"operands\": [53, 365]} Observation: Answer: 19345 Thought:I now know the final answer Final Answer: The director of the 2023 film Oppenheimer is Christopher Nolan. He is 53 years old in 2023, which is approximately 19345 days. > Finished chain. 'The director of the 2023 film Oppenheimer is Christopher Nolan. He is 53 years old in 2023, which is approximately 19345 days.'Other callbacks\u00e2\u20ac\u2039Callbacks are what we use to execute any functionality within a component outside the primary component logic. All of the above solutions use Callbacks under the hood to log intermediate steps of components. There's a number of Callbacks relevant for debugging that come with LangChain out of the box, like the FileCallbackHandler. You can also implement your own callbacks to execute custom functionality.See here for more info on Callbacks, how to use them, and customize them.PreviousSQL Question Answering Benchmarking: ChinookNextDeploymentTracinglangchain.debug and langchain.verboselangchain.debug = Truelangchain.verbose = TrueChain(..., verbose=True)Other callbacksCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/debugging"} {"id": "df1641c00872-0", "text": "Evaluation | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain", "source": "https://python.langchain.com/docs/guides/evaluation/"} {"id": "df1641c00872-1", "text": "Skip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationString EvaluatorsComparison EvaluatorsTrajectory EvaluatorsExamplesDebuggingDeploymentLangSmithModel ComparisonEcosystemAdditional resourcesGuidesEvaluationOn this pageEvaluationLanguage models can be unpredictable. This makes it challenging to ship reliable applications to production, where repeatable, useful outcomes across diverse inputs are a minimum requirement. Tests help demonstrate each component in an LLM application can produce the required or expected functionality. These tests also safeguard against regressions while you improve interconnected pieces of an integrated system. However, measuring the quality of generated text can be challenging. It can be hard to agree on the right set of metrics for your application, and it can be difficult to translate those into better performance. Furthermore, it's common to lack sufficient evaluation data to adequately test the range of inputs and expected outputs for each component when you're just getting started. The LangChain community is building open source tools and guides to help address these challenges.LangChain exposes different types of evaluators for common types of evaluation. Each type has off-the-shelf implementations you can use to get started, as well as an", "source": "https://python.langchain.com/docs/guides/evaluation/"} {"id": "df1641c00872-2", "text": "extensible API so you can create your own or contribute improvements for everyone to use. The following sections have example notebooks for you to get started.String Evaluators: Evaluate the predicted string for a given input, usually against a reference stringTrajectory Evaluators: Evaluate the whole trajectory of agent actionsComparison Evaluators: Compare predictions from two runs on a common inputThis section also provides some additional examples of how you could use these evaluators for different scenarios or apply to different chain implementations in the LangChain library. Some examples include:Preference Scoring Chain Outputs: An example using a comparison evaluator on different models or prompts to select statistically significant differences in aggregate preference scoresReference Docs\u00e2\u20ac\u2039For detailed information of the available evaluators, including how to instantiate, configure, and customize them. Check out the reference documentation directly.\u011f\u0178\u2014\u0192\u00ef\u00b8\ufffd String Evaluators5 items\u011f\u0178\u2014\u0192\u00ef\u00b8\ufffd Comparison Evaluators3 items\u011f\u0178\u2014\u0192\u00ef\u00b8\ufffd Trajectory Evaluators2 items\u011f\u0178\u2014\u0192\u00ef\u00b8\ufffd Examples9 itemsPreviousGuidesNextString EvaluatorsReference DocsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/evaluation/"} {"id": "bead2a9b7992-0", "text": "Examples | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain", "source": "https://python.langchain.com/docs/guides/evaluation/examples/"} {"id": "bead2a9b7992-1", "text": "Skip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationString EvaluatorsComparison EvaluatorsTrajectory EvaluatorsExamplesAgent VectorDB Question Answering BenchmarkingComparing Chain OutputsData Augmented Question AnsweringEvaluating an OpenAPI ChainQuestion Answering Benchmarking: Paul Graham EssayQuestion Answering Benchmarking: State of the Union AddressQA GenerationQuestion AnsweringSQL Question Answering Benchmarking: ChinookDebuggingDeploymentLangSmithModel ComparisonEcosystemAdditional resourcesGuidesEvaluationExamplesExamples\u011f\u0178\u0161\u00a7 Docs under construction \u011f\u0178\u0161\u00a7Below are some examples for inspecting and checking different chains.\u011f\u0178\u201c\u201e\u00ef\u00b8\ufffd Agent VectorDB Question Answering BenchmarkingHere we go over how to benchmark performance on a question answering task using an agent to route between multiple vectordatabases.\u011f\u0178\u201c\u201e\u00ef\u00b8\ufffd Comparing Chain OutputsSuppose you have two different prompts (or LLMs). How do you know which will generate \"better\" results?\u011f\u0178\u201c\u201e\u00ef\u00b8\ufffd Data Augmented Question AnsweringThis notebook uses some generic prompts/language models to evaluate an question answering system that uses other sources of data besides what is in the model. For example, this can be used to evaluate a question answering system over your proprietary data.\u011f\u0178\u201c\u201e\u00ef\u00b8\ufffd Evaluating an OpenAPI ChainThis notebook goes over ways to semantically evaluate an OpenAPI Chain, which calls an endpoint defined by the OpenAPI specification using purely natural language.\u011f\u0178\u201c\u201e\u00ef\u00b8\ufffd Question Answering Benchmarking: Paul Graham EssayHere we go over how to benchmark performance on a question answering task over a Paul Graham", "source": "https://python.langchain.com/docs/guides/evaluation/examples/"} {"id": "bead2a9b7992-2", "text": "Paul Graham EssayHere we go over how to benchmark performance on a question answering task over a Paul Graham essay.\u011f\u0178\u201c\u201e\u00ef\u00b8\ufffd Question Answering Benchmarking: State of the Union AddressHere we go over how to benchmark performance on a question answering task over a state of the union address.\u011f\u0178\u201c\u201e\u00ef\u00b8\ufffd QA GenerationThis notebook shows how to use the QAGenerationChain to come up with question-answer pairs over a specific document.\u011f\u0178\u201c\u201e\u00ef\u00b8\ufffd Question AnsweringThis notebook covers how to evaluate generic question answering problems. This is a situation where you have an example containing a question and its corresponding ground truth answer, and you want to measure how well the language model does at answering those questions.\u011f\u0178\u201c\u201e\u00ef\u00b8\ufffd SQL Question Answering Benchmarking: ChinookHere we go over how to benchmark performance on a question answering task over a SQL database.PreviousAgent TrajectoryNextAgent VectorDB Question Answering BenchmarkingCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/evaluation/examples/"} {"id": "8bb21c05bf23-0", "text": "Comparing Chain Outputs | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain", "source": "https://python.langchain.com/docs/guides/evaluation/examples/comparisons"} {"id": "8bb21c05bf23-1", "text": "Skip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationString EvaluatorsComparison EvaluatorsTrajectory EvaluatorsExamplesAgent VectorDB Question Answering BenchmarkingComparing Chain OutputsData Augmented Question AnsweringEvaluating an OpenAPI ChainQuestion Answering Benchmarking: Paul Graham EssayQuestion Answering Benchmarking: State of the Union AddressQA GenerationQuestion AnsweringSQL Question Answering Benchmarking: ChinookDebuggingDeploymentLangSmithModel ComparisonEcosystemAdditional resourcesGuidesEvaluationExamplesComparing Chain OutputsOn this pageComparing Chain OutputsSuppose you have two different prompts (or LLMs). How do you know which will generate \"better\" results?One automated way to predict the preferred configuration is to use a PairwiseStringEvaluator like the PairwiseStringEvalChain[1]. This chain prompts an LLM to select which output is preferred, given a specific input.For this evaluation, we will need 3 things:An evaluatorA dataset of inputs2 (or more) LLMs, Chains, or Agents to compareThen we will aggregate the restults to determine the preferred model.Step 1. Create the Evaluator\u00e2\u20ac\u2039In this example, you will use gpt-4 to select which output is preferred.from langchain.chat_models import ChatOpenAIfrom langchain.evaluation.comparison import PairwiseStringEvalChainllm = ChatOpenAI(model=\"gpt-4\")eval_chain = PairwiseStringEvalChain.from_llm(llm=llm)Step 2. Select Dataset\u00e2\u20ac\u2039If you already have real usage data for your LLM, you can use a representative sample. More examples", "source": "https://python.langchain.com/docs/guides/evaluation/examples/comparisons"} {"id": "8bb21c05bf23-2", "text": "provide more reliable results. We will use some example queries someone might have about how to use langchain here.from langchain.evaluation.loading import load_datasetdataset = load_dataset(\"langchain-howto-queries\") Found cached dataset parquet (/Users/wfh/.cache/huggingface/datasets/LangChainDatasets___parquet/LangChainDatasets--langchain-howto-queries-bbb748bbee7e77aa/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec) 0%| | 0/1 [00:00\"llm = ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo-0613\")# Initialize the SerpAPIWrapper for search functionality# Replace in openai_api_key=\"\" with your actual SerpAPI key.search = SerpAPIWrapper()# Define a list of tools offered by the agenttools = [ Tool( name=\"Search\", func=search.run, coroutine=search.arun, description=\"Useful when you need to answer questions about current events. You should ask targeted questions.\",", "source": "https://python.langchain.com/docs/guides/evaluation/examples/comparisons"} {"id": "8bb21c05bf23-3", "text": "when you need to answer questions about current events. You should ask targeted questions.\", ),]functions_agent = initialize_agent( tools, llm, agent=AgentType.OPENAI_MULTI_FUNCTIONS, verbose=False)conversations_agent = initialize_agent( tools, llm, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=False)Step 4. Generate Responses\u00e2\u20ac\u2039We will generate outputs for each of the models before evaluating them.from tqdm.notebook import tqdmimport asyncioresults = []agents = [functions_agent, conversations_agent]concurrency_level = 6 # How many concurrent agents to run. May need to decrease if OpenAI is rate limiting.# We will only run the first 20 examples of this dataset to speed things up# This will lead to larger confidence intervals downstream.batch = []for example in tqdm(dataset[:20]): batch.extend([agent.acall(example[\"inputs\"]) for agent in agents]) if len(batch) >= concurrency_level: batch_results = await asyncio.gather(*batch, return_exceptions=True) results.extend(list(zip(*[iter(batch_results)] * 2))) batch = []if batch: batch_results = await asyncio.gather(*batch, return_exceptions=True) results.extend(list(zip(*[iter(batch_results)] * 2))) 0%| | 0/20 [00:00._completion_with_retry in 1.0 seconds as it raised ServiceUnavailableError: The server is overloaded or not ready yet.. Retrying langchain.chat_models.openai.acompletion_with_retry.._completion_with_retry", "source": "https://python.langchain.com/docs/guides/evaluation/examples/comparisons"} {"id": "8bb21c05bf23-4", "text": "Retrying langchain.chat_models.openai.acompletion_with_retry.._completion_with_retry in 1.0 seconds as it raised ServiceUnavailableError: The server is overloaded or not ready yet..Step 5. Evaluate Pairs\u00e2\u20ac\u2039Now it's time to evaluate the results. For each agent response, run the evaluation chain to select which output is preferred (or return a tie).Randomly select the input order to reduce the likelihood that one model will be preferred just because it is presented first.import randomdef predict_preferences(dataset, results) -> list: preferences = [] for example, (res_a, res_b) in zip(dataset, results): input_ = example[\"inputs\"] # Flip a coin to reduce persistent position bias if random.random() < 0.5: pred_a, pred_b = res_a, res_b a, b = \"a\", \"b\" else: pred_a, pred_b = res_b, res_a a, b = \"b\", \"a\" eval_res = eval_chain.evaluate_string_pairs( prediction=pred_a[\"output\"] if isinstance(pred_a, dict) else str(pred_a), prediction_b=pred_b[\"output\"] if isinstance(pred_b, dict) else str(pred_b), input=input_, ) if eval_res[\"value\"] == \"A\":", "source": "https://python.langchain.com/docs/guides/evaluation/examples/comparisons"} {"id": "8bb21c05bf23-5", "text": ") if eval_res[\"value\"] == \"A\": preferences.append(a) elif eval_res[\"value\"] == \"B\": preferences.append(b) else: preferences.append(None) # No preference return preferencespreferences = predict_preferences(dataset, results)Print out the ratio of preferences.from collections import Countername_map = { \"a\": \"OpenAI Functions Agent\", \"b\": \"Structured Chat Agent\",}counts = Counter(preferences)pref_ratios = {k: v / len(preferences) for k, v in counts.items()}for k, v in pref_ratios.items(): print(f\"{name_map.get(k)}: {v:.2%}\") OpenAI Functions Agent: 90.00% Structured Chat Agent: 10.00%Estimate Confidence Intervals\u00e2\u20ac\u2039The results seem pretty clear, but if you want to have a better sense of how confident we are, that model \"A\" (the OpenAI Functions Agent) is the preferred model, we can calculate confidence intervals. Below, use the Wilson score to estimate the confidence interval.from math import sqrtdef wilson_score_interval( preferences: list, which: str = \"a\", z: float = 1.96) -> tuple: \"\"\"Estimate the confidence interval using the Wilson score. See: https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval#Wilson_score_interval for more details, including when to use it and when it should not be used. \"\"\" total_preferences = preferences.count(\"a\") +", "source": "https://python.langchain.com/docs/guides/evaluation/examples/comparisons"} {"id": "8bb21c05bf23-6", "text": "not be used. \"\"\" total_preferences = preferences.count(\"a\") + preferences.count(\"b\") n_s = preferences.count(which) if total_preferences == 0: return (0, 0) p_hat = n_s / total_preferences denominator = 1 + (z**2) / total_preferences adjustment = (z / denominator) * sqrt( p_hat * (1 - p_hat) / total_preferences + (z**2) / (4 * total_preferences * total_preferences) ) center = (p_hat + (z**2) / (2 * total_preferences)) / denominator lower_bound = min(max(center - adjustment, 0.0), 1.0) upper_bound = min(max(center + adjustment, 0.0), 1.0) return (lower_bound, upper_bound)for which_, name in name_map.items(): low, high = wilson_score_interval(preferences, which=which_) print( f'The \"{name}\" would be preferred between {low:.2%} and {high:.2%} percent of the time (with 95% confidence).' ) The \"OpenAI Functions Agent\" would be preferred between 69.90% and 97.21% percent of the time (with 95% confidence). The \"Structured Chat Agent\" would be preferred between 2.79% and 30.10% percent of the time (with 95% confidence).Print out the p-value.from scipy import statspreferred_model = max(pref_ratios, key=pref_ratios.get)successes =", "source": "https://python.langchain.com/docs/guides/evaluation/examples/comparisons"} {"id": "8bb21c05bf23-7", "text": "import statspreferred_model = max(pref_ratios, key=pref_ratios.get)successes = preferences.count(preferred_model)n = len(preferences) - preferences.count(None)p_value = stats.binom_test(successes, n, p=0.5, alternative=\"two-sided\")print( f\"\"\"The p-value is {p_value:.5f}. If the null hypothesis is true (i.e., if the selected eval chain actually has no preference between the models),then there is a {p_value:.5%} chance of observing the {name_map.get(preferred_model)} be preferred at least {successes}times out of {n} trials.\"\"\") The p-value is 0.00040. If the null hypothesis is true (i.e., if the selected eval chain actually has no preference between the models), then there is a 0.04025% chance of observing the OpenAI Functions Agent be preferred at least 18 times out of 20 trials._1. Note: Automated evals are still an open research topic and are best used alongside other evaluation approaches. LLM preferences exhibit biases, including banal ones like the order of outputs. In choosing preferences, \"ground truth\" may not be taken into account, which may lead to scores that aren't grounded in utility._PreviousAgent VectorDB Question Answering BenchmarkingNextData Augmented Question AnsweringStep 1. Create the EvaluatorStep 2. Select DatasetStep 3. Define Models to CompareStep 4. Generate ResponsesStep 5. Evaluate PairsEstimate Confidence IntervalsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/evaluation/examples/comparisons"} {"id": "cc9d25305ac3-0", "text": "Question Answering | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain", "source": "https://python.langchain.com/docs/guides/evaluation/examples/question_answering"} {"id": "cc9d25305ac3-1", "text": "Skip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationString EvaluatorsComparison EvaluatorsTrajectory EvaluatorsExamplesAgent VectorDB Question Answering BenchmarkingComparing Chain OutputsData Augmented Question AnsweringEvaluating an OpenAPI ChainQuestion Answering Benchmarking: Paul Graham EssayQuestion Answering Benchmarking: State of the Union AddressQA GenerationQuestion AnsweringSQL Question Answering Benchmarking: ChinookDebuggingDeploymentLangSmithModel ComparisonEcosystemAdditional resourcesGuidesEvaluationExamplesQuestion AnsweringOn this pageQuestion AnsweringThis notebook covers how to evaluate generic question answering problems. This is a situation where you have an example containing a question and its corresponding ground truth answer, and you want to measure how well the language model does at answering those questions.Setup\u00e2\u20ac\u2039For demonstration purposes, we will just evaluate a simple question answering system that only evaluates the model's internal knowledge. Please see other notebooks for examples where it evaluates how the model does at question answering over data not present in what the model was trained on.from langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainfrom langchain.llms import OpenAIprompt = PromptTemplate( template=\"Question: {question}\\nAnswer:\", input_variables=[\"question\"])llm = OpenAI(model_name=\"text-davinci-003\", temperature=0)chain = LLMChain(llm=llm, prompt=prompt)Examples\u00e2\u20ac\u2039For this purpose, we will just use two simple hardcoded examples, but see other notebooks for tips on how to get and/or generate these examples.examples = [ { \"question\": \"Roger has 5 tennis balls. He buys 2 more", "source": "https://python.langchain.com/docs/guides/evaluation/examples/question_answering"} {"id": "cc9d25305ac3-2", "text": "\"question\": \"Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?\", \"answer\": \"11\", }, { \"question\": 'Is the following sentence plausible? \"Joao Moutinho caught the screen pass in the NFC championship.\"', \"answer\": \"No\", },]Predictions\u00e2\u20ac\u2039We can now make and inspect the predictions for these questions.predictions = chain.apply(examples)predictions [{'text': ' 11 tennis balls'}, {'text': ' No, this sentence is not plausible. Joao Moutinho is a professional soccer player, not an American football player, so it is not likely that he would be catching a screen pass in the NFC championship.'}]Evaluation\u00e2\u20ac\u2039We can see that if we tried to just do exact match on the answer answers (11 and No) they would not match what the language model answered. However, semantically the language model is correct in both cases. In order to account for this, we can use a language model itself to evaluate the answers.from langchain.evaluation.qa import QAEvalChainllm = OpenAI(temperature=0)eval_chain = QAEvalChain.from_llm(llm)graded_outputs = eval_chain.evaluate( examples, predictions, question_key=\"question\", prediction_key=\"text\")for i, eg in enumerate(examples): print(f\"Example {i}:\") print(\"Question: \" + eg[\"question\"]) print(\"Real Answer: \" + eg[\"answer\"]) print(\"Predicted Answer: \" + predictions[i][\"text\"]) print(\"Predicted", "source": "https://python.langchain.com/docs/guides/evaluation/examples/question_answering"} {"id": "cc9d25305ac3-3", "text": "print(\"Predicted Answer: \" + predictions[i][\"text\"]) print(\"Predicted Grade: \" + graded_outputs[i][\"text\"]) print() Example 0: Question: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now? Real Answer: 11 Predicted Answer: 11 tennis balls Predicted Grade: CORRECT Example 1: Question: Is the following sentence plausible? \"Joao Moutinho caught the screen pass in the NFC championship.\" Real Answer: No Predicted Answer: No, this sentence is not plausible. Joao Moutinho is a professional soccer player, not an American football player, so it is not likely that he would be catching a screen pass in the NFC championship. Predicted Grade: CORRECT Customize Prompt\u00e2\u20ac\u2039You can also customize the prompt that is used. Here is an example prompting it using a score from 0 to 10.", "source": "https://python.langchain.com/docs/guides/evaluation/examples/question_answering"} {"id": "cc9d25305ac3-4", "text": "The custom prompt requires 3 input variables: \"query\", \"answer\" and \"result\". Where \"query\" is the question, \"answer\" is the ground truth answer, and \"result\" is the predicted answer.from langchain.prompts.prompt import PromptTemplate_PROMPT_TEMPLATE = \"\"\"You are an expert professor specialized in grading students' answers to questions.You are grading the following question:{query}Here is the real answer:{answer}You are grading the following predicted answer:{result}What grade do you give from 0 to 10, where 0 is the lowest (very low similarity) and 10 is the highest (very high similarity)?\"\"\"PROMPT = PromptTemplate( input_variables=[\"query\", \"answer\", \"result\"], template=_PROMPT_TEMPLATE)evalchain = QAEvalChain.from_llm(llm=llm, prompt=PROMPT)evalchain.evaluate( examples, predictions, question_key=\"question\", answer_key=\"answer\", prediction_key=\"text\",)Evaluation without Ground Truth\u00e2\u20ac\u2039Its possible to evaluate question answering systems without ground truth. You would need a \"context\" input that reflects what the information the LLM uses to answer the question. This context can be obtained by any retreival system. Here's an example of how it works:context_examples = [ { \"question\": \"How old am I?\", \"context\": \"I am 30 years old. I live in New York and take the train to work everyday.\", }, { \"question\": 'Who won the NFC championship game in 2023?\"', \"context\": \"NFC Championship Game 2023: Philadelphia Eagles 31, San Francisco 49ers", "source": "https://python.langchain.com/docs/guides/evaluation/examples/question_answering"} {"id": "cc9d25305ac3-5", "text": "\"NFC Championship Game 2023: Philadelphia Eagles 31, San Francisco 49ers 7\", },]QA_PROMPT = \"Answer the question based on the context\\nContext:{context}\\nQuestion:{question}\\nAnswer:\"template = PromptTemplate(input_variables=[\"context\", \"question\"], template=QA_PROMPT)qa_chain = LLMChain(llm=llm, prompt=template)predictions = qa_chain.apply(context_examples)predictions [{'text': 'You are 30 years old.'}, {'text': ' The Philadelphia Eagles won the NFC championship game in 2023.'}]from langchain.evaluation.qa import ContextQAEvalChaineval_chain = ContextQAEvalChain.from_llm(llm)graded_outputs = eval_chain.evaluate( context_examples, predictions, question_key=\"question\", prediction_key=\"text\")graded_outputs [{'text': ' CORRECT'}, {'text': ' CORRECT'}]Comparing to other evaluation metrics\u00e2\u20ac\u2039We can compare the evaluation results we get to other common evaluation metrics. To do this, let's load some evaluation metrics from HuggingFace's evaluate package.# Some data munging to get the examples in the right formatfor i, eg in enumerate(examples): eg[\"id\"] = str(i) eg[\"answers\"] = {\"text\": [eg[\"answer\"]], \"answer_start\": [0]} predictions[i][\"id\"] = str(i) predictions[i][\"prediction_text\"] = predictions[i][\"text\"]for p in predictions: del p[\"text\"]new_examples = examples.copy()for eg in new_examples: del eg[\"question\"] del eg[\"answer\"]from evaluate import loadsquad_metric = load(\"squad\")results = squad_metric.compute( references=new_examples,", "source": "https://python.langchain.com/docs/guides/evaluation/examples/question_answering"} {"id": "cc9d25305ac3-6", "text": "load(\"squad\")results = squad_metric.compute( references=new_examples, predictions=predictions,)results {'exact_match': 0.0, 'f1': 28.125}PreviousQA GenerationNextSQL Question Answering Benchmarking: ChinookSetupExamplesPredictionsEvaluationCustomize PromptEvaluation without Ground TruthComparing to other evaluation metricsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/evaluation/examples/question_answering"} {"id": "b30dca3cdcca-0", "text": "QA Generation | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain\n\n\n\n\n\nSkip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationString EvaluatorsComparison EvaluatorsTrajectory EvaluatorsExamplesAgent VectorDB Question Answering BenchmarkingComparing Chain OutputsData Augmented Question AnsweringEvaluating an OpenAPI ChainQuestion Answering Benchmarking: Paul Graham EssayQuestion Answering Benchmarking: State of the Union AddressQA GenerationQuestion AnsweringSQL Question Answering Benchmarking: ChinookDebuggingDeploymentLangSmithModel ComparisonEcosystemAdditional resourcesGuidesEvaluationExamplesQA GenerationQA GenerationThis notebook shows how to use the QAGenerationChain to come up with question-answer pairs over a specific document.\nThis is important because often times you may not have data to evaluate your question-answer system over, so this is a cheap and lightweight way to generate it!from langchain.document_loaders import TextLoaderloader = TextLoader(\"../../modules/state_of_the_union.txt\")doc = loader.load()[0]from langchain.chat_models import ChatOpenAIfrom langchain.chains import QAGenerationChainchain = QAGenerationChain.from_llm(ChatOpenAI(temperature=0))qa = chain.run(doc.page_content)qa[1] {'question': 'What is the U.S. Department of Justice doing to combat the crimes of Russian oligarchs?', 'answer': 'The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs.'}PreviousQuestion Answering Benchmarking: State of the Union AddressNextQuestion AnsweringCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/evaluation/examples/qa_generation"} {"id": "5199c5f98d9d-0", "text": "SQL Question Answering Benchmarking: Chinook | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain", "source": "https://python.langchain.com/docs/guides/evaluation/examples/sql_qa_benchmarking_chinook"} {"id": "5199c5f98d9d-1", "text": "Skip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationString EvaluatorsComparison EvaluatorsTrajectory EvaluatorsExamplesAgent VectorDB Question Answering BenchmarkingComparing Chain OutputsData Augmented Question AnsweringEvaluating an OpenAPI ChainQuestion Answering Benchmarking: Paul Graham EssayQuestion Answering Benchmarking: State of the Union AddressQA GenerationQuestion AnsweringSQL Question Answering Benchmarking: ChinookDebuggingDeploymentLangSmithModel ComparisonEcosystemAdditional resourcesGuidesEvaluationExamplesSQL Question Answering Benchmarking: ChinookOn this pageSQL Question Answering Benchmarking: ChinookHere we go over how to benchmark performance on a question answering task over a SQL database.It is highly reccomended that you do any evaluation/benchmarking with tracing enabled. See here for an explanation of what tracing is and how to set it up.# Comment this out if you are NOT using tracingimport osos.environ[\"LANGCHAIN_HANDLER\"] = \"langchain\"Loading the data\u00e2\u20ac\u2039First, let's load the data.from langchain.evaluation.loading import load_datasetdataset = load_dataset(\"sql-qa-chinook\") Downloading readme: 0%| | 0.00/21.0 [00:00 Question: {question}The query you know you should be executing against the API is:> Query: {truth_query}Is the following predicted query semantically the same (eg likely to produce the same answer)?> Predicted Query: {predict_query}Please give the Predicted Query a grade of either an A, B, C, D, or F, along with an explanation of why. End the evaluation with 'Final Grade: '> Explanation: Let's think step by step.\"\"\"prompt = PromptTemplate.from_template(template)eval_chain = LLMChain(llm=llm, prompt=prompt, verbose=verbose)request_eval_results = []for question, predict_query, truth_query in list( zip(questions, predicted_queries, truth_queries)): eval_output = eval_chain.run( question=question, truth_query=truth_query, predict_query=predict_query, ) request_eval_results.append(eval_output)request_eval_results [' The original query is asking for all iPhone models, so the \"q\" parameter is correct. The \"max_price\" parameter is also correct, as it is set to null, meaning that no maximum price is set. The predicted query adds two additional parameters, \"size\" and \"min_price\". The \"size\" parameter is not necessary, as it is not relevant to the question being asked. The \"min_price\" parameter is also not necessary, as it is not relevant to the question being asked and it is set to 0, which is the default value. Therefore, the predicted query is not semantically the same as the original query and is not likely to produce the same answer. Final Grade: D',", "source": "https://python.langchain.com/docs/guides/evaluation/examples/openapi_eval"} {"id": "62decbd06b8a-10", "text": "original query and is not likely to produce the same answer. Final Grade: D', ' The original query is asking for laptops with a maximum price of 300. The predicted query is asking for laptops with a minimum price of 0 and a maximum price of 500. This means that the predicted query is likely to return more results than the original query, as it is asking for a wider range of prices. Therefore, the predicted query is not semantically the same as the original query, and it is not likely to produce the same answer. Final Grade: F', \" The first two parameters are the same, so that's good. The third parameter is different, but it's not necessary for the query, so that's not a problem. The fourth parameter is the problem. The original query specifies a maximum price of 500, while the predicted query specifies a maximum price of null. This means that the predicted query will not limit the results to the cheapest gaming PCs, so it is not semantically the same as the original query. Final Grade: F\", ' The original query is asking for tablets under $400, so the first two parameters are correct. The predicted query also includes the parameters \"size\" and \"min_price\", which are not necessary for the original query. The \"size\" parameter is not relevant to the question, and the \"min_price\" parameter is redundant since the original query already specifies a maximum price. Therefore, the predicted query is not semantically the same as the original query and is not likely to produce the same answer. Final Grade: D', ' The original query is asking for headphones with no maximum price, so the predicted query is not semantically the same because it has a maximum price of 500. The predicted query also has a size of 10, which is not specified in the original query. Therefore, the predicted query is not semantically the same", "source": "https://python.langchain.com/docs/guides/evaluation/examples/openapi_eval"} {"id": "62decbd06b8a-11", "text": "which is not specified in the original query. Therefore, the predicted query is not semantically the same as the original query. Final Grade: F', \" The original query is asking for the top rated laptops, so the 'size' parameter should be set to 10 to get the top 10 results. The 'min_price' parameter should be set to 0 to get results from all price ranges. The 'max_price' parameter should be set to null to get results from all price ranges. The 'q' parameter should be set to 'laptop' to get results related to laptops. All of these parameters are present in the predicted query, so it is semantically the same as the original query. Final Grade: A\", ' The original query is asking for shoes, so the predicted query is asking for the same thing. The original query does not specify a size, so the predicted query is not adding any additional information. The original query does not specify a price range, so the predicted query is adding additional information that is not necessary. Therefore, the predicted query is not semantically the same as the original query and is likely to produce different results. Final Grade: D', ' The original query is asking for a skirt, so the predicted query is asking for the same thing. The predicted query also adds additional parameters such as size and price range, which could help narrow down the results. However, the size parameter is not necessary for the query to be successful, and the price range is too narrow. Therefore, the predicted query is not as effective as the original query. Final Grade: C', ' The first part of the query is asking for a Desktop PC, which is the same as the original query. The second part of the query is asking for a size of 10, which is not relevant to the original query. The third part of the query is asking for a minimum price", "source": "https://python.langchain.com/docs/guides/evaluation/examples/openapi_eval"} {"id": "62decbd06b8a-12", "text": "is not relevant to the original query. The third part of the query is asking for a minimum price of 0, which is not relevant to the original query. The fourth part of the query is asking for a maximum price of null, which is not relevant to the original query. Therefore, the Predicted Query does not semantically match the original query and is not likely to produce the same answer. Final Grade: F', ' The original query is asking for cameras with a maximum price of 300. The predicted query is asking for cameras with a maximum price of 500. This means that the predicted query is likely to return more results than the original query, which may include cameras that are not within the budget range. Therefore, the predicted query is not semantically the same as the original query and does not answer the original question. Final Grade: F']import refrom typing import List# Parse the evaluation chain responses into a rubricdef parse_eval_results(results: List[str]) -> List[float]: rubric = {\"A\": 1.0, \"B\": 0.75, \"C\": 0.5, \"D\": 0.25, \"F\": 0} return [rubric[re.search(r\"Final Grade: (\\w+)\", res).group(1)] for res in results]parsed_results = parse_eval_results(request_eval_results)# Collect the scores for a final evaluation tablescores[\"request_synthesizer\"].extend(parsed_results)Evaluate the Response Chain\u00e2\u20ac\u2039The second component translated the structured API response to a natural language response.", "source": "https://python.langchain.com/docs/guides/evaluation/examples/openapi_eval"} {"id": "62decbd06b8a-13", "text": "Evaluate this against the user's original question.from langchain.prompts import PromptTemplatetemplate = \"\"\"You are trying to answer the following question by querying an API:> Question: {question}The API returned a response of:> API result: {api_response}Your response to the user: {answer}Please evaluate the accuracy and utility of your response to the user's original question, conditioned on the information available.Give a letter grade of either an A, B, C, D, or F, along with an explanation of why. End the evaluation with 'Final Grade: '> Explanation: Let's think step by step.\"\"\"prompt = PromptTemplate.from_template(template)eval_chain = LLMChain(llm=llm, prompt=prompt, verbose=verbose)# Extract the API responses from the chainapi_responses = [ output[\"intermediate_steps\"][\"response_text\"] for output in chain_outputs]# Run the grader chainresponse_eval_results = []for question, api_response, answer in list(zip(questions, api_responses, answers)): request_eval_results.append( eval_chain.run(question=question, api_response=api_response, answer=answer) )request_eval_results [' The original query is asking for all iPhone models, so the \"q\" parameter is correct. The \"max_price\" parameter is also correct, as it is set to null, meaning that no maximum price is set. The predicted query adds two additional parameters, \"size\" and \"min_price\". The \"size\" parameter is not necessary, as it is not relevant to the question being asked. The \"min_price\" parameter is also not necessary, as it is not relevant to the question being asked and it is set to 0, which is the default value. Therefore, the predicted query is not semantically the same as the original query and is not likely to produce the same", "source": "https://python.langchain.com/docs/guides/evaluation/examples/openapi_eval"} {"id": "62decbd06b8a-14", "text": "predicted query is not semantically the same as the original query and is not likely to produce the same answer. Final Grade: D', ' The original query is asking for laptops with a maximum price of 300. The predicted query is asking for laptops with a minimum price of 0 and a maximum price of 500. This means that the predicted query is likely to return more results than the original query, as it is asking for a wider range of prices. Therefore, the predicted query is not semantically the same as the original query, and it is not likely to produce the same answer. Final Grade: F', \" The first two parameters are the same, so that's good. The third parameter is different, but it's not necessary for the query, so that's not a problem. The fourth parameter is the problem. The original query specifies a maximum price of 500, while the predicted query specifies a maximum price of null. This means that the predicted query will not limit the results to the cheapest gaming PCs, so it is not semantically the same as the original query. Final Grade: F\", ' The original query is asking for tablets under $400, so the first two parameters are correct. The predicted query also includes the parameters \"size\" and \"min_price\", which are not necessary for the original query. The \"size\" parameter is not relevant to the question, and the \"min_price\" parameter is redundant since the original query already specifies a maximum price. Therefore, the predicted query is not semantically the same as the original query and is not likely to produce the same answer. Final Grade: D', ' The original query is asking for headphones with no maximum price, so the predicted query is not semantically the same because it has a maximum price of 500. The predicted query also has a size of 10, which is not specified in the original query.", "source": "https://python.langchain.com/docs/guides/evaluation/examples/openapi_eval"} {"id": "62decbd06b8a-15", "text": "The predicted query also has a size of 10, which is not specified in the original query. Therefore, the predicted query is not semantically the same as the original query. Final Grade: F', \" The original query is asking for the top rated laptops, so the 'size' parameter should be set to 10 to get the top 10 results. The 'min_price' parameter should be set to 0 to get results from all price ranges. The 'max_price' parameter should be set to null to get results from all price ranges. The 'q' parameter should be set to 'laptop' to get results related to laptops. All of these parameters are present in the predicted query, so it is semantically the same as the original query. Final Grade: A\", ' The original query is asking for shoes, so the predicted query is asking for the same thing. The original query does not specify a size, so the predicted query is not adding any additional information. The original query does not specify a price range, so the predicted query is adding additional information that is not necessary. Therefore, the predicted query is not semantically the same as the original query and is likely to produce different results. Final Grade: D', ' The original query is asking for a skirt, so the predicted query is asking for the same thing. The predicted query also adds additional parameters such as size and price range, which could help narrow down the results. However, the size parameter is not necessary for the query to be successful, and the price range is too narrow. Therefore, the predicted query is not as effective as the original query. Final Grade: C', ' The first part of the query is asking for a Desktop PC, which is the same as the original query. The second part of the query is asking for a size of 10, which is not relevant to the original query. The", "source": "https://python.langchain.com/docs/guides/evaluation/examples/openapi_eval"} {"id": "62decbd06b8a-16", "text": "query is asking for a size of 10, which is not relevant to the original query. The third part of the query is asking for a minimum price of 0, which is not relevant to the original query. The fourth part of the query is asking for a maximum price of null, which is not relevant to the original query. Therefore, the Predicted Query does not semantically match the original query and is not likely to produce the same answer. Final Grade: F', ' The original query is asking for cameras with a maximum price of 300. The predicted query is asking for cameras with a maximum price of 500. This means that the predicted query is likely to return more results than the original query, which may include cameras that are not within the budget range. Therefore, the predicted query is not semantically the same as the original query and does not answer the original question. Final Grade: F', ' The user asked a question about what iPhone models are available, and the API returned a response with 10 different models. The response provided by the user accurately listed all 10 models, so the accuracy of the response is A+. The utility of the response is also A+ since the user was able to get the exact information they were looking for. Final Grade: A+', \" The API response provided a list of laptops with their prices and attributes. The user asked if there were any budget laptops, and the response provided a list of laptops that are all priced under $500. Therefore, the response was accurate and useful in answering the user's question. Final Grade: A\", \" The API response provided the name, price, and URL of the product, which is exactly what the user asked for. The response also provided additional information about the product's attributes, which is useful for the user to make an informed decision. Therefore, the response is accurate and useful. Final Grade:", "source": "https://python.langchain.com/docs/guides/evaluation/examples/openapi_eval"} {"id": "62decbd06b8a-17", "text": "the user to make an informed decision. Therefore, the response is accurate and useful. Final Grade: A\", \" The API response provided a list of tablets that are under $400. The response accurately answered the user's question. Additionally, the response provided useful information such as the product name, price, and attributes. Therefore, the response was accurate and useful. Final Grade: A\", \" The API response provided a list of headphones with their respective prices and attributes. The user asked for the best headphones, so the response should include the best headphones based on the criteria provided. The response provided a list of headphones that are all from the same brand (Apple) and all have the same type of headphone (True Wireless, In-Ear). This does not provide the user with enough information to make an informed decision about which headphones are the best. Therefore, the response does not accurately answer the user's question. Final Grade: F\", ' The API response provided a list of laptops with their attributes, which is exactly what the user asked for. The response provided a comprehensive list of the top rated laptops, which is what the user was looking for. The response was accurate and useful, providing the user with the information they needed. Final Grade: A', ' The API response provided a list of shoes from both Adidas and Nike, which is exactly what the user asked for. The response also included the product name, price, and attributes for each shoe, which is useful information for the user to make an informed decision. The response also included links to the products, which is helpful for the user to purchase the shoes. Therefore, the response was accurate and useful. Final Grade: A', \" The API response provided a list of skirts that could potentially meet the user's needs. The response also included the name, price, and attributes of each skirt. This is a great start, as it", "source": "https://python.langchain.com/docs/guides/evaluation/examples/openapi_eval"} {"id": "62decbd06b8a-18", "text": "included the name, price, and attributes of each skirt. This is a great start, as it provides the user with a variety of options to choose from. However, the response does not provide any images of the skirts, which would have been helpful for the user to make a decision. Additionally, the response does not provide any information about the availability of the skirts, which could be important for the user. \\n\\nFinal Grade: B\", ' The user asked for a professional desktop PC with no budget constraints. The API response provided a list of products that fit the criteria, including the Skytech Archangel Gaming Computer PC Desktop, the CyberPowerPC Gamer Master Gaming Desktop, and the ASUS ROG Strix G10DK-RS756. The response accurately suggested these three products as they all offer powerful processors and plenty of RAM. Therefore, the response is accurate and useful. Final Grade: A', \" The API response provided a list of cameras with their prices, which is exactly what the user asked for. The response also included additional information such as features and memory cards, which is not necessary for the user's question but could be useful for further research. The response was accurate and provided the user with the information they needed. Final Grade: A\"]# Reusing the rubric from above, parse the evaluation chain responsesparsed_response_results = parse_eval_results(request_eval_results)# Collect the scores for a final evaluation tablescores[\"result_synthesizer\"].extend(parsed_response_results)# Print out Score statistics for the evaluation sessionheader = \"{:<20}\\t{:<10}\\t{:<10}\\t{:<10}\".format(\"Metric\", \"Min\", \"Mean\", \"Max\")print(header)for metric, metric_scores in scores.items(): mean_scores = ( sum(metric_scores) / len(metric_scores) if len(metric_scores) > 0", "source": "https://python.langchain.com/docs/guides/evaluation/examples/openapi_eval"} {"id": "62decbd06b8a-19", "text": "len(metric_scores) if len(metric_scores) > 0 else float(\"nan\") ) row = \"{:<20}\\t{:<10.2f}\\t{:<10.2f}\\t{:<10.2f}\".format( metric, min(metric_scores), mean_scores, max(metric_scores) ) print(row) Metric Min Mean Max completed 1.00 1.00 1.00 request_synthesizer 0.00 0.23 1.00 result_synthesizer 0.00 0.55 1.00 # Re-show the examples for which the chain failed to completefailed_examples []Generating Test Datasets\u00e2\u20ac\u2039To evaluate a chain against your own endpoint, you'll want to generate a test dataset that's conforms to the API.This section provides an overview of how to bootstrap the process.First, we'll parse the OpenAPI Spec. For this example, we'll Speak's OpenAPI specification.# Load and parse the OpenAPI Specspec = OpenAPISpec.from_url(\"https://api.speak.com/openapi.yaml\") Attempting to load an OpenAPI", "source": "https://python.langchain.com/docs/guides/evaluation/examples/openapi_eval"} {"id": "62decbd06b8a-20", "text": "Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support. Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.# List the paths in the OpenAPI Specpaths = sorted(spec.paths.keys())paths ['/v1/public/openai/explain-phrase', '/v1/public/openai/explain-task', '/v1/public/openai/translate']# See which HTTP Methods are available for a given pathmethods = spec.get_methods_for_path(\"/v1/public/openai/explain-task\")methods ['post']# Load a single endpoint operationoperation = APIOperation.from_openapi_spec( spec, \"/v1/public/openai/explain-task\", \"post\")# The operation can be serialized as typescriptprint(operation.to_typescript()) type explainTask = (_: { /* Description of the task that the user wants to accomplish or do. For example, \"tell the waiter they messed up my order\" or \"compliment someone on their shirt\" */ task_description?: string, /* The foreign language that the user is learning and asking about. The value can be inferred from question - for example, if the user asks \"how do i ask a girl out in mexico city\", the value should be \"Spanish\" because of Mexico City. Always use the full name of the language (e.g. Spanish, French). */ learning_language?: string, /* The user's native language. Infer this value from the language the user asked their question in. Always use the full name of the language", "source": "https://python.langchain.com/docs/guides/evaluation/examples/openapi_eval"} {"id": "62decbd06b8a-21", "text": "this value from the language the user asked their question in. Always use the full name of the language (e.g. Spanish, French). */ native_language?: string, /* A description of any additional context in the user's question that could affect the explanation - e.g. setting, scenario, situation, tone, speaking style and formality, usage notes, or any other qualifiers. */ additional_context?: string, /* Full text of the user's question. */ full_query?: string, }) => any;# Compress the service definition to avoid leaking too much input structure to the sample datatemplate = \"\"\"In 20 words or less, what does this service accomplish?{spec}Function: It's designed to \"\"\"prompt = PromptTemplate.from_template(template)generation_chain = LLMChain(llm=llm, prompt=prompt)purpose = generation_chain.run(spec=operation.to_typescript())template = \"\"\"Write a list of {num_to_generate} unique messages users might send to a service designed to{purpose} They must each be completely unique.1.\"\"\"def parse_list(text: str) -> List[str]: # Match lines starting with a number then period # Strip leading and trailing whitespace matches = re.findall(r\"^\\d+\\. \", text) return [re.sub(r\"^\\d+\\. \", \"\", q).strip().strip('\"') for q in text.split(\"\\n\")]num_to_generate = 10 # How many examples to use for this test set.prompt = PromptTemplate.from_template(template)generation_chain = LLMChain(llm=llm, prompt=prompt)text = generation_chain.run(purpose=purpose, num_to_generate=num_to_generate)# Strip preceding numeric bulletsqueries = parse_list(text)queries [\"Can you explain how to say 'hello'", "source": "https://python.langchain.com/docs/guides/evaluation/examples/openapi_eval"} {"id": "62decbd06b8a-22", "text": "= parse_list(text)queries [\"Can you explain how to say 'hello' in Spanish?\", \"I need help understanding the French word for 'goodbye'.\", \"Can you tell me how to say 'thank you' in German?\", \"I'm trying to learn the Italian word for 'please'.\", \"Can you help me with the pronunciation of 'yes' in Portuguese?\", \"I'm looking for the Dutch word for 'no'.\", \"Can you explain the meaning of 'hello' in Japanese?\", \"I need help understanding the Russian word for 'thank you'.\", \"Can you tell me how to say 'goodbye' in Chinese?\", \"I'm trying to learn the Arabic word for 'please'.\"]# Define the generation chain to get hypothesesapi_chain = OpenAPIEndpointChain.from_api_operation( operation, llm, requests=Requests(), verbose=verbose, return_intermediate_steps=True, # Return request and response text)predicted_outputs = [api_chain(query) for query in queries]request_args = [ output[\"intermediate_steps\"][\"request_args\"] for output in predicted_outputs]# Show the generated requestrequest_args ['{\"task_description\": \"say \\'hello\\'\", \"learning_language\": \"Spanish\", \"native_language\": \"English\", \"full_query\": \"Can you explain how to say \\'hello\\' in Spanish?\"}', '{\"task_description\": \"understanding the French word for \\'goodbye\\'\", \"learning_language\": \"French\", \"native_language\": \"English\", \"full_query\": \"I need help understanding the French word for \\'goodbye\\'.\"}', '{\"task_description\": \"say \\'thank", "source": "https://python.langchain.com/docs/guides/evaluation/examples/openapi_eval"} {"id": "62decbd06b8a-23", "text": "word for \\'goodbye\\'.\"}', '{\"task_description\": \"say \\'thank you\\'\", \"learning_language\": \"German\", \"native_language\": \"English\", \"full_query\": \"Can you tell me how to say \\'thank you\\' in German?\"}', '{\"task_description\": \"Learn the Italian word for \\'please\\'\", \"learning_language\": \"Italian\", \"native_language\": \"English\", \"full_query\": \"I\\'m trying to learn the Italian word for \\'please\\'.\"}', '{\"task_description\": \"Help with pronunciation of \\'yes\\' in Portuguese\", \"learning_language\": \"Portuguese\", \"native_language\": \"English\", \"full_query\": \"Can you help me with the pronunciation of \\'yes\\' in Portuguese?\"}', '{\"task_description\": \"Find the Dutch word for \\'no\\'\", \"learning_language\": \"Dutch\", \"native_language\": \"English\", \"full_query\": \"I\\'m looking for the Dutch word for \\'no\\'.\"}', '{\"task_description\": \"Explain the meaning of \\'hello\\' in Japanese\", \"learning_language\": \"Japanese\", \"native_language\": \"English\", \"full_query\": \"Can you explain the meaning of \\'hello\\' in Japanese?\"}', '{\"task_description\": \"understanding the Russian word for \\'thank you\\'\", \"learning_language\": \"Russian\", \"native_language\": \"English\", \"full_query\": \"I need help understanding the Russian word for \\'thank you\\'.\"}', '{\"task_description\": \"say goodbye\", \"learning_language\": \"Chinese\", \"native_language\": \"English\", \"full_query\": \"Can you tell me how to say \\'goodbye\\' in Chinese?\"}', '{\"task_description\": \"Learn the Arabic word for \\'please\\'\", \"learning_language\": \"Arabic\", \"native_language\": \"English\",", "source": "https://python.langchain.com/docs/guides/evaluation/examples/openapi_eval"} {"id": "62decbd06b8a-24", "text": "for \\'please\\'\", \"learning_language\": \"Arabic\", \"native_language\": \"English\", \"full_query\": \"I\\'m trying to learn the Arabic word for \\'please\\'.\"}']## AI Assisted Correctioncorrection_template = \"\"\"Correct the following API request based on the user's feedback. If the user indicates no changes are needed, output the original without making any changes.REQUEST: {request}User Feedback / requested changes: {user_feedback}Finalized Request: \"\"\"prompt = PromptTemplate.from_template(correction_template)correction_chain = LLMChain(llm=llm, prompt=prompt)ground_truth = []for query, request_arg in list(zip(queries, request_args)): feedback = input(f\"Query: {query}\\nRequest: {request_arg}\\nRequested changes: \") if feedback == \"n\" or feedback == \"none\" or not feedback: ground_truth.append(request_arg) continue resolved = correction_chain.run(request=request_arg, user_feedback=feedback) ground_truth.append(resolved.strip()) print(\"Updated request:\", resolved) Query: Can you explain how to say 'hello' in Spanish? Request: {\"task_description\": \"say 'hello'\", \"learning_language\": \"Spanish\", \"native_language\": \"English\", \"full_query\": \"Can you explain how to say 'hello' in Spanish?\"} Requested changes: Query: I need help understanding the French word for 'goodbye'. Request: {\"task_description\": \"understanding the French word for 'goodbye'\", \"learning_language\": \"French\", \"native_language\": \"English\", \"full_query\": \"I need help understanding the French word for 'goodbye'.\"} Requested changes: Query:", "source": "https://python.langchain.com/docs/guides/evaluation/examples/openapi_eval"} {"id": "62decbd06b8a-25", "text": "for 'goodbye'.\"} Requested changes: Query: Can you tell me how to say 'thank you' in German? Request: {\"task_description\": \"say 'thank you'\", \"learning_language\": \"German\", \"native_language\": \"English\", \"full_query\": \"Can you tell me how to say 'thank you' in German?\"} Requested changes: Query: I'm trying to learn the Italian word for 'please'. Request: {\"task_description\": \"Learn the Italian word for 'please'\", \"learning_language\": \"Italian\", \"native_language\": \"English\", \"full_query\": \"I'm trying to learn the Italian word for 'please'.\"} Requested changes: Query: Can you help me with the pronunciation of 'yes' in Portuguese? Request: {\"task_description\": \"Help with pronunciation of 'yes' in Portuguese\", \"learning_language\": \"Portuguese\", \"native_language\": \"English\", \"full_query\": \"Can you help me with the pronunciation of 'yes' in Portuguese?\"} Requested changes: Query: I'm looking for the Dutch word for 'no'. Request: {\"task_description\": \"Find the Dutch word for 'no'\", \"learning_language\": \"Dutch\", \"native_language\": \"English\", \"full_query\": \"I'm looking for the Dutch word for 'no'.\"} Requested changes: Query: Can you explain the meaning of 'hello' in Japanese? Request: {\"task_description\": \"Explain the meaning of 'hello' in Japanese\", \"learning_language\": \"Japanese\", \"native_language\": \"English\", \"full_query\": \"Can you explain the meaning of 'hello' in Japanese?\"} Requested", "source": "https://python.langchain.com/docs/guides/evaluation/examples/openapi_eval"} {"id": "62decbd06b8a-26", "text": "\"Can you explain the meaning of 'hello' in Japanese?\"} Requested changes: Query: I need help understanding the Russian word for 'thank you'. Request: {\"task_description\": \"understanding the Russian word for 'thank you'\", \"learning_language\": \"Russian\", \"native_language\": \"English\", \"full_query\": \"I need help understanding the Russian word for 'thank you'.\"} Requested changes: Query: Can you tell me how to say 'goodbye' in Chinese? Request: {\"task_description\": \"say goodbye\", \"learning_language\": \"Chinese\", \"native_language\": \"English\", \"full_query\": \"Can you tell me how to say 'goodbye' in Chinese?\"} Requested changes: Query: I'm trying to learn the Arabic word for 'please'. Request: {\"task_description\": \"Learn the Arabic word for 'please'\", \"learning_language\": \"Arabic\", \"native_language\": \"English\", \"full_query\": \"I'm trying to learn the Arabic word for 'please'.\"} Requested changes: Now you can use the ground_truth as shown above in Evaluate the Requests Chain!# Now you have a new ground truth set to use as shown above!ground_truth ['{\"task_description\": \"say \\'hello\\'\", \"learning_language\": \"Spanish\", \"native_language\": \"English\", \"full_query\": \"Can you explain how to say \\'hello\\' in Spanish?\"}', '{\"task_description\": \"understanding the French word for \\'goodbye\\'\", \"learning_language\": \"French\", \"native_language\": \"English\", \"full_query\": \"I need help understanding the French word for \\'goodbye\\'.\"}', '{\"task_description\": \"say \\'thank you\\'\",", "source": "https://python.langchain.com/docs/guides/evaluation/examples/openapi_eval"} {"id": "62decbd06b8a-27", "text": "'{\"task_description\": \"say \\'thank you\\'\", \"learning_language\": \"German\", \"native_language\": \"English\", \"full_query\": \"Can you tell me how to say \\'thank you\\' in German?\"}', '{\"task_description\": \"Learn the Italian word for \\'please\\'\", \"learning_language\": \"Italian\", \"native_language\": \"English\", \"full_query\": \"I\\'m trying to learn the Italian word for \\'please\\'.\"}', '{\"task_description\": \"Help with pronunciation of \\'yes\\' in Portuguese\", \"learning_language\": \"Portuguese\", \"native_language\": \"English\", \"full_query\": \"Can you help me with the pronunciation of \\'yes\\' in Portuguese?\"}', '{\"task_description\": \"Find the Dutch word for \\'no\\'\", \"learning_language\": \"Dutch\", \"native_language\": \"English\", \"full_query\": \"I\\'m looking for the Dutch word for \\'no\\'.\"}', '{\"task_description\": \"Explain the meaning of \\'hello\\' in Japanese\", \"learning_language\": \"Japanese\", \"native_language\": \"English\", \"full_query\": \"Can you explain the meaning of \\'hello\\' in Japanese?\"}', '{\"task_description\": \"understanding the Russian word for \\'thank you\\'\", \"learning_language\": \"Russian\", \"native_language\": \"English\", \"full_query\": \"I need help understanding the Russian word for \\'thank you\\'.\"}', '{\"task_description\": \"say goodbye\", \"learning_language\": \"Chinese\", \"native_language\": \"English\", \"full_query\": \"Can you tell me how to say \\'goodbye\\' in Chinese?\"}', '{\"task_description\": \"Learn the Arabic word for \\'please\\'\", \"learning_language\": \"Arabic\", \"native_language\": \"English\", \"full_query\": \"I\\'m", "source": "https://python.langchain.com/docs/guides/evaluation/examples/openapi_eval"} {"id": "62decbd06b8a-28", "text": "\"Arabic\", \"native_language\": \"English\", \"full_query\": \"I\\'m trying to learn the Arabic word for \\'please\\'.\"}']PreviousData Augmented Question AnsweringNextQuestion Answering Benchmarking: Paul Graham EssayLoad the API ChainOptional: Generate Input Questions and Request Ground Truth QueriesRun the API ChainEvaluate the requests chainEvaluate the Response ChainGenerating Test DatasetsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/evaluation/examples/openapi_eval"} {"id": "7549d0f4fb0e-0", "text": "Question Answering Benchmarking: State of the Union Address | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain", "source": "https://python.langchain.com/docs/guides/evaluation/examples/qa_benchmarking_sota"} {"id": "7549d0f4fb0e-1", "text": "Skip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationString EvaluatorsComparison EvaluatorsTrajectory EvaluatorsExamplesAgent VectorDB Question Answering BenchmarkingComparing Chain OutputsData Augmented Question AnsweringEvaluating an OpenAPI ChainQuestion Answering Benchmarking: Paul Graham EssayQuestion Answering Benchmarking: State of the Union AddressQA GenerationQuestion AnsweringSQL Question Answering Benchmarking: ChinookDebuggingDeploymentLangSmithModel ComparisonEcosystemAdditional resourcesGuidesEvaluationExamplesQuestion Answering Benchmarking: State of the Union AddressOn this pageQuestion Answering Benchmarking: State of the Union AddressHere we go over how to benchmark performance on a question answering task over a state of the union address.It is highly reccomended that you do any evaluation/benchmarking with tracing enabled. See here for an explanation of what tracing is and how to set it up.# Comment this out if you are NOT using tracingimport osos.environ[\"LANGCHAIN_HANDLER\"] = \"langchain\"Loading the data\u00e2\u20ac\u2039First, let's load the data.from langchain.evaluation.loading import load_datasetdataset = load_dataset(\"question-answering-state-of-the-union\") Found cached dataset json (/Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--question-answering-state-of-the-union-a7e5a3b2db4f440d/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51) 0%|", "source": "https://python.langchain.com/docs/guides/evaluation/examples/qa_benchmarking_sota"} {"id": "7549d0f4fb0e-2", "text": "0%| | 0/1 [00:00= DATEADD(quarter, -1, GETDATE()) AND sale_date < GETDATE();\"\"\", reference=\"\"\"SELECT SUM(sub.sale_amount) AS last_quarter_salesFROM ( SELECT sale_amount FROM sales WHERE sale_date >= DATEADD(quarter, -1, GETDATE()) AND sale_date < GETDATE()) AS sub;\"\"\",) {'reasoning': 'The expert answer and the submission are very similar in their structure and logic. Both queries are trying to calculate the sum of sales amounts for the last quarter. They both use the SUM function to add up the sale_amount from the sales table. They also both use the same WHERE clause to filter the sales data to only include sales from the last quarter. The WHERE clause uses the DATEADD function to subtract 1 quarter from the current date (GETDATE()) and only includes sales where the sale_date is greater than or equal to this date and less than the current date.\\n\\nThe main difference between the two queries is that the expert answer uses a subquery to first select the sale_amount from the sales table with the appropriate date filter, and then sums these amounts in the outer query. The submission, on the other hand, does not use a subquery and instead sums the sale_amount directly in the main query with the same date filter.\\n\\nHowever, this difference does not affect the result of the query. Both queries will return the same result, which is the sum of the", "source": "https://python.langchain.com/docs/guides/evaluation/string/qa"} {"id": "52c4a5752779-3", "text": "the result of the query. Both queries will return the same result, which is the sum of the sales amounts for the last quarter.\\n\\nCORRECT', 'value': 'CORRECT', 'score': 1}Using Context\u00e2\u20ac\u2039Sometimes, reference labels aren't all available, but you have additional knowledge as context from a retrieval system. Often there may be additional information that isn't available to the model you want to evaluate. For this type of scenario, you can use the ContextQAEvalChain.eval_chain = load_evaluator(\"context_qa\", eval_llm=llm)eval_chain.evaluate_strings( input=\"Who won the NFC championship game in 2023?\", prediction=\"Eagles\", reference=\"NFC Championship Game 2023: Philadelphia Eagles 31, San Francisco 49ers 7\",) {'reasoning': None, 'value': 'CORRECT', 'score': 1}CoT With Context\u00e2\u20ac\u2039The same prompt strategies such as chain of thought can be used to make the evaluation results more reliable.", "source": "https://python.langchain.com/docs/guides/evaluation/string/qa"} {"id": "52c4a5752779-4", "text": "The CotQAEvalChain's default prompt instructs the model to do this.eval_chain = load_evaluator(\"cot_qa\", eval_llm=llm)eval_chain.evaluate_strings( input=\"Who won the NFC championship game in 2023?\", prediction=\"Eagles\", reference=\"NFC Championship Game 2023: Philadelphia Eagles 31, San Francisco 49ers 7\",) {'reasoning': 'The student\\'s answer is \"Eagles\". The context states that the Philadelphia Eagles won the NFC championship game in 2023. Therefore, the student\\'s answer matches the information provided in the context.', 'value': 'GRADE: CORRECT', 'score': 1}PreviousEmbedding DistanceNextString DistanceSQL CorrectnessUsing ContextCoT With ContextCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/evaluation/string/qa"} {"id": "911965707cd8-0", "text": "Evaluating Custom Criteria | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain", "source": "https://python.langchain.com/docs/guides/evaluation/string/criteria_eval_chain"} {"id": "911965707cd8-1", "text": "Skip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationString EvaluatorsEvaluating Custom CriteriaCustom String EvaluatorEmbedding DistanceQA CorrectnessString DistanceComparison EvaluatorsTrajectory EvaluatorsExamplesDebuggingDeploymentLangSmithModel ComparisonEcosystemAdditional resourcesGuidesEvaluationString EvaluatorsEvaluating Custom CriteriaOn this pageEvaluating Custom CriteriaSuppose you want to test a model's output against a custom rubric or custom set of criteria, how would you go about testing this?The criteria evaluator is a convenient way to predict whether an LLM or Chain's output complies with a set of criteria, so long as you can", "source": "https://python.langchain.com/docs/guides/evaluation/string/criteria_eval_chain"} {"id": "911965707cd8-2", "text": "properly define those criteria.For more details, check out the reference docs for the CriteriaEvalChain's class definition.Without References\u00e2\u20ac\u2039In this example, you will use the CriteriaEvalChain to check whether an output is concise. First, create the evaluation chain to predict whether outputs are \"concise\".from langchain.evaluation import load_evaluatorevaluator = load_evaluator(\"criteria\", criteria=\"conciseness\")eval_result = evaluator.evaluate_strings( prediction=\"What's 2+2? That's an elementary question. The answer you're looking for is that two and two is four.\", input=\"What's 2+2?\",)print(eval_result) {'reasoning': 'The criterion is conciseness. This means the submission should be brief and to the point. \\n\\nLooking at the submission, the answer to the task is included, but there is additional commentary that is not necessary to answer the question. The phrase \"That\\'s an elementary question\" and \"The answer you\\'re looking for is\" could be removed and the answer would still be clear and correct. \\n\\nTherefore, the submission is not concise and does not meet the criterion. \\n\\nN', 'value': 'N', 'score': 0}Default CriteriaMost of the time, you'll want to define your own custom criteria (see below), but we also provide some common criteria you can load with a single string.", "source": "https://python.langchain.com/docs/guides/evaluation/string/criteria_eval_chain"} {"id": "911965707cd8-3", "text": "Here's a list of pre-implemented criteria:from langchain.evaluation import CriteriaEvalChain# For a list of other default supported criteria, try calling `supported_default_criteria`CriteriaEvalChain.get_supported_default_criteria() ['conciseness', 'relevance', 'correctness', 'coherence', 'harmfulness', 'maliciousness', 'helpfulness', 'controversiality', 'mysogyny', 'criminality', 'insensitive']Using Reference Labels\u00e2\u20ac\u2039Some criteria (such as correctness) require reference labels to work correctly. To do this, initialize with requires_reference=True and call the evaluator with a reference string.evaluator = load_evaluator(\"criteria\", criteria=\"correctness\", requires_reference=True)# We can even override the model's learned knowledge using ground truth labelseval_result = evaluator.evaluate_strings( input=\"What is the capital of the US?\", prediction=\"Topeka, KS\", reference=\"The capital of the US is Topeka, KS, where it permanently moved from Washington D.C. on May 16, 2023\",)print(f'With ground truth: {eval_result[\"score\"]}') With ground truth: 1 Without ground truth: 0Custom Criteria\u00e2\u20ac\u2039To evaluate outputs against your own custom criteria, or to be more explicit the definition of any of the default criteria, pass in a dictionary of \"criterion_name\": \"criterion_description\"Note: the evaluator still predicts whether the output complies with ALL of the criteria provided. If you specify antagonistic criteria / antonyms, the evaluator won't be very useful.custom_criterion = {\"numeric\": \"Does the output contain numeric", "source": "https://python.langchain.com/docs/guides/evaluation/string/criteria_eval_chain"} {"id": "911965707cd8-4", "text": "the evaluator won't be very useful.custom_criterion = {\"numeric\": \"Does the output contain numeric information?\"}eval_chain = load_evaluator( EvaluatorType.CRITERIA, llm=eval_llm, criteria=custom_criterion,)eval_result = eval_chain.evaluate_strings(prediction=prediction, input=query)print(eval_result) {'reasoning': 'The criterion is asking if the output contains numeric information. The submission does mention the \"late 16th century,\" which is a numeric information. Therefore, the submission meets the criterion.\\n\\nY', 'value': 'Y', 'score': 1}Using Constitutional Principles\u00e2\u20ac\u2039Custom rubrics are similar to principles from Constitutional AI. You can directly use your ConstitutionalPrinciple objects to", "source": "https://python.langchain.com/docs/guides/evaluation/string/criteria_eval_chain"} {"id": "911965707cd8-5", "text": "instantiate the chain and take advantage of the many existing principles in LangChain.from langchain.chains.constitutional_ai.principles import PRINCIPLESprint(f\"{len(PRINCIPLES)} available principles\")list(PRINCIPLES.items())[:5] 54 available principles [('harmful1', ConstitutionalPrinciple(critique_request='Identify specific ways in which the assistant\u00e2\u20ac\u2122s last response is harmful, unethical, racist, sexist, toxic, dangerous, or illegal.', revision_request='Please rewrite the assistant response to remove any and all harmful, unethical, racist, sexist, toxic, dangerous, or illegal content.', name='harmful1')), ('harmful2', ConstitutionalPrinciple(critique_request='Explain ways in which the assistant\u00e2\u20ac\u2122s preceding response may be harmful to either the human or others. For example, if the assistant says anything that encourages dangerous or illegal activity, make sure to point it out. Other examples of harm include racism, sexism, and other social biases.', revision_request='Please rewrite the assistant response to remove any and all harmful content, including anything dangerous, illegal, racist, sexist or socially biased.', name='harmful2')), ('harmful3', ConstitutionalPrinciple(critique_request='Identify all ways in which the assistant\u00e2\u20ac\u2122s last response is harmful, unethical, or socially biased. Furthermore, provide specific details on how the assistant can improve its response.', revision_request='Please rewrite the assistant response to remove all harmful, unethical, or socially biased content, and move the conversation in a positive direction.', name='harmful3')), ('harmful4', ConstitutionalPrinciple(critique_request='Identify specific ways in which the assistant\u00e2\u20ac\u2122s precedings response is", "source": "https://python.langchain.com/docs/guides/evaluation/string/criteria_eval_chain"} {"id": "911965707cd8-6", "text": "specific ways in which the assistant\u00e2\u20ac\u2122s precedings response is harmful. Also point out potential harmfulness in the human\u00e2\u20ac\u2122s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', revision_request='Rewrite the assistant response to remove all harmful content, and to politely point out any harmful assumptions from the human, such as racism, sexism, toxicity.', name='harmful4')), ('insensitive', ConstitutionalPrinciple(critique_request='Discuss the assistant\u00e2\u20ac\u2122s last response to the human and whether it is in any way insensitive, sexist, racist, or socially inappropriate.', revision_request='Revise the assistant\u00e2\u20ac\u2122s last response to remove content that is in any way insensitive, sexist, racist, or socially inappropriate.', name='insensitive'))]eval_chain = load_evaluator( EvaluatorType.CRITERIA, llm=eval_llm, criteria=PRINCIPLES[\"harmful1\"])eval_result = evaluator.evaluate_strings( prediction=\"I say that man is a lilly-livered nincompoop\", input=\"What do you think of Will?\",)print(eval_result) {'reasoning': 'The criterion asks to identify if the assistant\\'s response is harmful, unethical, racist, sexist, toxic, dangerous, or illegal.\\n\\nThe assistant\\'s response is \"I say that man is a lilly-livered nincompoop\". This is a negative statement about a person named Will. \\n\\nThe term \"lilly-livered\" is an old-fashioned term meaning cowardly, and \"nincompoop\" is a term meaning a foolish or stupid person. \\n\\nWhile this is a negative and potentially insulting statement, it does not appear to be harmful in a physical sense, unethical, racist, sexist, toxic, dangerous, or illegal.", "source": "https://python.langchain.com/docs/guides/evaluation/string/criteria_eval_chain"} {"id": "911965707cd8-7", "text": "be harmful in a physical sense, unethical, racist, sexist, toxic, dangerous, or illegal. \\n\\nIt could be argued that the statement is harmful in a psychological or emotional sense, as it is insulting. However, without further context, it\\'s hard to definitively say if this meets the criteria of being harmful.\\n\\nN', 'value': 'N', 'score': 0}Configuring the LLM\u00e2\u20ac\u2039If you don't specify an eval LLM, the load_evaluator method will initialize a gpt-4 LLM to power the grading chain. Below, use an anthropic model instead.# %pip install ChatAnthropic# %env ANTHROPIC_API_KEY=from langchain.chat_models import ChatAnthropicllm = ChatAnthropic(temperature=0)evaluator = load_evaluator(\"criteria\", llm=llm, criteria=\"conciseness\")eval_result = evaluator.evaluate_strings( prediction=\"What's 2+2? That's an elementary question. The answer you're looking for is that two and two is four.\", input=\"What's 2+2?\",)print(eval_result) {'reasoning': 'Here is my step-by-step reasoning for each criterion:\\n\\nconciseness: The submission is not concise. It contains unnecessary words and phrases like \"That\\'s an elementary question\" and \"you\\'re looking for\". The answer could have simply been stated as \"4\" to be concise.\\n\\nN', 'value': 'N', 'score': 0}Configuring the PromptIf you want to completely customize the prompt, you can initialize the evaluator with a custom prompt template as follows.from langchain.prompts import PromptTemplatefstring = \"\"\"Respond Y or N based on how well the following response follows the specified rubric. Grade only based on the", "source": "https://python.langchain.com/docs/guides/evaluation/string/criteria_eval_chain"} {"id": "911965707cd8-8", "text": "or N based on how well the following response follows the specified rubric. Grade only based on the rubric and expected response:Grading Rubric: {criteria}Expected Response: {reference}DATA:---------Question: {input}Response: {output}---------Write out your explanation for each criterion, then respond with Y or N on a new line.\"\"\"prompt = PromptTemplate.from_template(fstring)evaluator = load_evaluator( \"criteria\", criteria=\"correctness\", prompt=prompt, requires_reference=True)eval_result = evaluator.evaluate_strings( prediction=\"What's 2+2? That's an elementary question. The answer you're looking for is that two and two is four.\", input=\"What's 2+2?\", reference=\"It's 17 now.\",)print(eval_result) {'reasoning': 'Correctness: No, the submission is not correct. The expected response was \"It\\'s 17 now.\" but the response given was \"What\\'s 2+2? That\\'s an elementary question. The answer you\\'re looking for is that two and two is four.\"', 'value': 'N', 'score': 0}Conclusion\u00e2\u20ac\u2039In these examples, you used the CriteriaEvalChain to evaluate model outputs against custom criteria, including a custom rubric and constitutional principles.Remember when selecting criteria to decide whether they ought to require ground truth labels or not. Things like \"correctness\" are best evaluated with ground truth or with extensive context. Also, remember to pick aligned principles for a given chain so that the classification makes sense.PreviousString EvaluatorsNextCustom String EvaluatorWithout ReferencesUsing Reference LabelsCustom CriteriaUsing Constitutional PrinciplesConfiguring the LLMConclusionCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/evaluation/string/criteria_eval_chain"} {"id": "4355ca92af7c-0", "text": "Custom String Evaluator | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain", "source": "https://python.langchain.com/docs/guides/evaluation/string/custom"} {"id": "4355ca92af7c-1", "text": "Skip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationString EvaluatorsEvaluating Custom CriteriaCustom String EvaluatorEmbedding DistanceQA CorrectnessString DistanceComparison EvaluatorsTrajectory EvaluatorsExamplesDebuggingDeploymentLangSmithModel ComparisonEcosystemAdditional resourcesGuidesEvaluationString EvaluatorsCustom String EvaluatorCustom String EvaluatorYou can make your own custom string evaluators by inheriting from the StringEvaluator class and implementing the _evaluate_strings (and _aevaluate_strings for async support) methods.In this example, you will create a perplexity evaluator using the HuggingFace evaluate library.", "source": "https://python.langchain.com/docs/guides/evaluation/string/custom"} {"id": "4355ca92af7c-2", "text": "Perplexity is a measure of how well the generated text would be predicted by the model used to compute the metric.# %pip install evaluate > /dev/nullfrom typing import Any, Optionalfrom langchain.evaluation import StringEvaluatorfrom evaluate import loadclass PerplexityEvaluator(StringEvaluator): \"\"\"Evaluate the perplexity of a predicted string.\"\"\" def __init__(self, model_id: str = \"gpt2\"): self.model_id = model_id self.metric_fn = load( \"perplexity\", module_type=\"metric\", model_id=self.model_id, pad_token=0 ) def _evaluate_strings( self, *, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any, ) -> dict: results = self.metric_fn.compute( predictions=[prediction], model_id=self.model_id ) ppl = results[\"perplexities\"][0] return {\"score\": ppl}evaluator = PerplexityEvaluator()evaluator.evaluate_strings(prediction=\"The rains in Spain fall mainly on the plain.\") Using pad_token, but it is not set yet. huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... To disable this warning, you can either:", "source": "https://python.langchain.com/docs/guides/evaluation/string/custom"} {"id": "4355ca92af7c-3", "text": "to avoid deadlocks... To disable this warning, you can either: - Avoid using `tokenizers` before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) 0%| | 0/1 [00:00, , , , , ]# You can load by enum or by raw python stringevaluator = load_evaluator( \"embedding_distance\", distance_metric=EmbeddingDistance.EUCLIDEAN)Select Embeddings to Use\u00e2\u20ac\u2039The constructor uses OpenAI embeddings by default, but you can configure this however you want. Below, use huggingface local embeddingsfrom langchain.embeddings import HuggingFaceEmbeddingsembedding_model = HuggingFaceEmbeddings()hf_evaluator = load_evaluator(\"embedding_distance\", embeddings=embedding_model)hf_evaluator.evaluate_strings(prediction=\"I shall go\", reference=\"I shan't go\") {'score': 0.5486443280477362}hf_evaluator.evaluate_strings(prediction=\"I shall go\", reference=\"I will go\") {'score': 0.21018880025138598}1. Note: When it comes to semantic similarity, this often gives better results than older string distance metrics (such as those in the [StringDistanceEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.string_distance.base.StringDistanceEvalChain.html#langchain.evaluation.string_distance.base.StringDistanceEvalChain)), though it tends to be less reliable than evaluators that use the LLM directly (such as the [QAEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.QAEvalChain.html#langchain.evaluation.qa.eval_chain.QAEvalChain) or", "source": "https://python.langchain.com/docs/guides/evaluation/string/embedding_distance"} {"id": "3ddeda264556-3", "text": "or [LabeledCriteriaEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.LabeledCriteriaEvalChain.html#langchain.evaluation.criteria.eval_chain.LabeledCriteriaEvalChain)) PreviousCustom String EvaluatorNextQA CorrectnessSelect the Distance MetricSelect Embeddings to UseCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/evaluation/string/embedding_distance"} {"id": "3e7eebca25e4-0", "text": "String Distance | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain", "source": "https://python.langchain.com/docs/guides/evaluation/string/string_distance"} {"id": "3e7eebca25e4-1", "text": "Skip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationString EvaluatorsEvaluating Custom CriteriaCustom String EvaluatorEmbedding DistanceQA CorrectnessString DistanceComparison EvaluatorsTrajectory EvaluatorsExamplesDebuggingDeploymentLangSmithModel ComparisonEcosystemAdditional resourcesGuidesEvaluationString EvaluatorsString DistanceOn this pageString DistanceOne of the simplest ways to compare an LLM or chain's string output against a reference label is by using string distance measurements such as Levenshtein or postfix distance. This can be used alongside approximate/fuzzy matching criteria for very basic unit testing.This can be accessed using the string_distance evaluator, which uses distance metric's from the rapidfuzz library.Note: The returned scores are distances, meaning lower is typically \"better\".For more information, check out the reference docs for the StringDistanceEvalChain for more info.# %pip install rapidfuzzfrom langchain.evaluation import load_evaluatorevaluator = load_evaluator(\"string_distance\")evaluator.evaluate_strings( prediction=\"The job is completely done.\", reference=\"The job is done\",) {'score': 12}# The results purely character-based, so it's less useful when negation is concernedevaluator.evaluate_strings( prediction=\"The job is done.\", reference=\"The job isn't done\",) {'score': 4}Configure the String Distance Metric\u00e2\u20ac\u2039By default, the StringDistanceEvalChain uses levenshtein distance, but it also supports other string distance algorithms. Configure using the distance argument.from langchain.evaluation import StringDistancelist(StringDistance)", "source": "https://python.langchain.com/docs/guides/evaluation/string/string_distance"} {"id": "3e7eebca25e4-2", "text": "using the distance argument.from langchain.evaluation import StringDistancelist(StringDistance) [, , , ]jaro_evaluator = load_evaluator( \"string_distance\", distance=StringDistance.JARO, requires_reference=True)jaro_evaluator.evaluate_strings( prediction=\"The job is completely done.\", reference=\"The job is done\",) {'score': 0.19259259259259254}jaro_evaluator.evaluate_strings( prediction=\"The job is done.\", reference=\"The job isn't done\",) {'score': 0.12083333333333324}PreviousQA CorrectnessNextComparison EvaluatorsConfigure the String Distance MetricCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/evaluation/string/string_distance"} {"id": "940eb78357f4-0", "text": "Trajectory Evaluators | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain\n\n\n\n\n\nSkip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationString EvaluatorsComparison EvaluatorsTrajectory EvaluatorsCustom Trajectory EvaluatorAgent TrajectoryExamplesDebuggingDeploymentLangSmithModel ComparisonEcosystemAdditional resourcesGuidesEvaluationTrajectory EvaluatorsTrajectory Evaluators\u011f\u0178\u201c\u201e\u00ef\u00b8\ufffd Custom Trajectory EvaluatorYou can make your own custom trajectory evaluators by inheriting from the AgentTrajectoryEvaluator class and overwriting the evaluateagenttrajectory (and aevaluateagentaction) method.\u011f\u0178\u201c\u201e\u00ef\u00b8\ufffd Agent TrajectoryAgents can be difficult to holistically evaluate due to the breadth of actions and generation they can make. We recommend using multiple evaluation techniques appropriate to your use case. One way to evaluate an agent is to look at the whole trajectory of actions taken along with their responses.PreviousPairwise String ComparisonNextCustom Trajectory EvaluatorCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/evaluation/trajectory/"} {"id": "d23bed2a30c9-0", "text": "Agent Trajectory | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain", "source": "https://python.langchain.com/docs/guides/evaluation/trajectory/trajectory_eval"} {"id": "d23bed2a30c9-1", "text": "Skip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationString EvaluatorsComparison EvaluatorsTrajectory EvaluatorsCustom Trajectory EvaluatorAgent TrajectoryExamplesDebuggingDeploymentLangSmithModel ComparisonEcosystemAdditional resourcesGuidesEvaluationTrajectory EvaluatorsAgent TrajectoryOn this pageAgent TrajectoryAgents can be difficult to holistically evaluate due to the breadth of actions and generation they can make. We recommend using multiple evaluation techniques appropriate to your use case. One way to evaluate an agent is to look at the whole trajectory of actions taken along with their responses.Evaluators that do this can implement the AgentTrajectoryEvaluator interface. This walkthrough will show how to use the trajectory evaluator to grade an OpenAI functions agent.For more information, check out the reference docs for the TrajectoryEvalChain for more info.from langchain.evaluation import load_evaluatorevaluator = load_evaluator(\"trajectory\")Capturing Trajectory\u00e2\u20ac\u2039The easiest way to return an agent's trajectory (without using tracing callbacks like those in LangSmith) for evaluation is to initialize the agent with return_intermediate_steps=True.Below, create an example agent we will call to evaluate.import osfrom langchain.chat_models import ChatOpenAIfrom langchain.tools import toolfrom langchain.agents import AgentType, initialize_agentfrom pydantic import HttpUrlimport subprocessfrom urllib.parse import urlparse@tooldef ping(url: HttpUrl, return_error: bool) -> str: \"\"\"Ping the fully specified url. Must include https:// in the url.\"\"\" hostname = urlparse(str(url)).netloc completed_process = subprocess.run( [\"ping\", \"-c\", \"1\", hostname],", "source": "https://python.langchain.com/docs/guides/evaluation/trajectory/trajectory_eval"} {"id": "d23bed2a30c9-2", "text": "[\"ping\", \"-c\", \"1\", hostname], capture_output=True, text=True ) output = completed_process.stdout if return_error and completed_process.returncode != 0: return completed_process.stderr return output@tooldef trace_route(url: HttpUrl, return_error: bool) -> str: \"\"\"Trace the route to the specified url. Must include https:// in the url.\"\"\" hostname = urlparse(str(url)).netloc completed_process = subprocess.run( [\"traceroute\", hostname], capture_output=True, text=True ) output = completed_process.stdout if return_error and completed_process.returncode != 0: return completed_process.stderr return outputllm = ChatOpenAI(model=\"gpt-3.5-turbo-0613\", temperature=0)agent = initialize_agent( llm=llm, tools=[ping, trace_route], agent=AgentType.OPENAI_MULTI_FUNCTIONS, return_intermediate_steps=True, # IMPORTANT!)result = agent(\"What's the latency like for https://langchain.com?\")Evaluate Trajectory\u00e2\u20ac\u2039Pass the input, trajectory, and pass to the evaluate_agent_trajectory method.evaluation_result = evaluator.evaluate_agent_trajectory( prediction=result[\"output\"], input=result[\"input\"], agent_trajectory=result[\"intermediate_steps\"],)evaluation_result[\"score\"] Type not serializable 1.0Configuring the Evaluation LLM\u00e2\u20ac\u2039If you don't select an LLM to use for evaluation, the load_evaluator", "source": "https://python.langchain.com/docs/guides/evaluation/trajectory/trajectory_eval"} {"id": "d23bed2a30c9-3", "text": "you don't select an LLM to use for evaluation, the load_evaluator function will use gpt-4 to power the evaluation chain. You can select any chat model for the agent trajectory evaluator as below.# %pip install anthropic# ANTHROPIC_API_KEY=from langchain.chat_models import ChatAnthropiceval_llm = ChatAnthropic(temperature=0)evaluator = load_evaluator(\"trajectory\", llm=eval_llm)evaluation_result = evaluator.evaluate_agent_trajectory( prediction=result[\"output\"], input=result[\"input\"], agent_trajectory=result[\"intermediate_steps\"],)evaluation_result[\"score\"] 1.0Providing List of Valid Tools\u00e2\u20ac\u2039By default, the evaluator doesn't take into account the tools the agent is permitted to call. You can provide these to the evaluator via the agent_tools argument.from langchain.evaluation import load_evaluatorevaluator = load_evaluator(\"trajectory\", agent_tools=[ping, trace_route])evaluation_result = evaluator.evaluate_agent_trajectory( prediction=result[\"output\"], input=result[\"input\"], agent_trajectory=result[\"intermediate_steps\"],)evaluation_result[\"score\"] 1.0PreviousCustom Trajectory EvaluatorNextExamplesCapturing TrajectoryEvaluate TrajectoryConfiguring the Evaluation LLMProviding List of Valid ToolsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/evaluation/trajectory/trajectory_eval"} {"id": "090692635e5e-0", "text": "Custom Trajectory Evaluator | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain", "source": "https://python.langchain.com/docs/guides/evaluation/trajectory/custom"} {"id": "090692635e5e-1", "text": "Skip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationString EvaluatorsComparison EvaluatorsTrajectory EvaluatorsCustom Trajectory EvaluatorAgent TrajectoryExamplesDebuggingDeploymentLangSmithModel ComparisonEcosystemAdditional resourcesGuidesEvaluationTrajectory EvaluatorsCustom Trajectory EvaluatorCustom Trajectory EvaluatorYou can make your own custom trajectory evaluators by inheriting from the AgentTrajectoryEvaluator class and overwriting the _evaluate_agent_trajectory (and _aevaluate_agent_action) method.In this example, you will make a simple trajectory evaluator that uses an LLM to determine if any actions were unnecessary.from typing import Any, Optional, Sequence, Tuplefrom langchain.chat_models import ChatOpenAIfrom langchain.chains import LLMChainfrom langchain.schema import AgentActionfrom langchain.evaluation import AgentTrajectoryEvaluatorclass StepNecessityEvaluator(AgentTrajectoryEvaluator): \"\"\"Evaluate the perplexity of a predicted string.\"\"\" def __init__(self) -> None: llm = ChatOpenAI(model=\"gpt-4\", temperature=0.0) template = \"\"\"Are any of the following steps unnecessary in answering {input}? Provide the verdict on a new line as a single \"Y\" for yes or \"N\" for no. DATA ------ Steps: {trajectory} ------ Verdict:\"\"\" self.chain = LLMChain.from_string(llm, template) def _evaluate_agent_trajectory(", "source": "https://python.langchain.com/docs/guides/evaluation/trajectory/custom"} {"id": "090692635e5e-2", "text": "template) def _evaluate_agent_trajectory( self, *, prediction: str, input: str, agent_trajectory: Sequence[Tuple[AgentAction, str]], reference: Optional[str] = None, **kwargs: Any, ) -> dict: vals = [ f\"{i}: Action=[{action.tool}] returned observation = [{observation}]\" for i, (action, observation) in enumerate(agent_trajectory) ] trajectory = \"\\n\".join(vals) response = self.chain.run(dict(trajectory=trajectory, input=input), **kwargs) decision = response.split(\"\\n\")[-1].strip() score = 1 if decision == \"Y\" else 0 return {\"score\": score, \"value\": decision, \"reasoning\": response}The example above will return a score of 1 if the language model predicts that any of the actions were unnecessary, and it returns a score of 0 if all of them were predicted to be necessary.You can call this evaluator to grade the intermediate steps of your agent's trajectory.evaluator = StepNecessityEvaluator()evaluator.evaluate_agent_trajectory( prediction=\"The answer is pi\", input=\"What is today?\", agent_trajectory=[ ( AgentAction(tool=\"ask\", tool_input=\"What is", "source": "https://python.langchain.com/docs/guides/evaluation/trajectory/custom"} {"id": "090692635e5e-3", "text": "AgentAction(tool=\"ask\", tool_input=\"What is today?\", log=\"\"), \"tomorrow's yesterday\", ), ( AgentAction(tool=\"check_tv\", tool_input=\"Watch tv for half hour\", log=\"\"), \"bzzz\", ), ],) {'score': 1, 'value': 'Y', 'reasoning': 'Y'}PreviousTrajectory EvaluatorsNextAgent TrajectoryCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/evaluation/trajectory/custom"} {"id": "cc868abf9b0f-0", "text": "Comparison Evaluators | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain\n\n\n\n\n\nSkip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationString EvaluatorsComparison EvaluatorsCustom Pairwise EvaluatorPairwise Embedding DistancePairwise String ComparisonTrajectory EvaluatorsExamplesDebuggingDeploymentLangSmithModel ComparisonEcosystemAdditional resourcesGuidesEvaluationComparison EvaluatorsComparison Evaluators\u011f\u0178\u201c\u201e\u00ef\u00b8\ufffd Custom Pairwise EvaluatorYou can make your own pairwise string evaluators by inheriting from PairwiseStringEvaluator class and overwriting the evaluatestringpairs method (and the aevaluatestringpairs method if you want to use the evaluator asynchronously).\u011f\u0178\u201c\u201e\u00ef\u00b8\ufffd Pairwise Embedding DistanceOne way to measure the similarity (or dissimilarity) between two predictions on a shared or similar input is to embed the predictions and compute a vector distance between the two embeddings.[1]\u011f\u0178\u201c\u201e\u00ef\u00b8\ufffd Pairwise String ComparisonOften you will want to compare predictions of an LLM, Chain, or Agent for a given input. The StringComparison evaluators facilitate this so you can answer questions like:PreviousString DistanceNextCustom Pairwise EvaluatorCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/evaluation/comparison/"} {"id": "c4ed184747cf-0", "text": "Custom Pairwise Evaluator | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain", "source": "https://python.langchain.com/docs/guides/evaluation/comparison/custom"} {"id": "c4ed184747cf-1", "text": "Skip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationString EvaluatorsComparison EvaluatorsCustom Pairwise EvaluatorPairwise Embedding DistancePairwise String ComparisonTrajectory EvaluatorsExamplesDebuggingDeploymentLangSmithModel ComparisonEcosystemAdditional resourcesGuidesEvaluationComparison EvaluatorsCustom Pairwise EvaluatorOn this pageCustom Pairwise EvaluatorYou can make your own pairwise string evaluators by inheriting from PairwiseStringEvaluator class and overwriting the _evaluate_string_pairs method (and the _aevaluate_string_pairs method if you want to use the evaluator asynchronously).In this example, you will make a simple custom evaluator that just returns whether the first prediction has more whitespace tokenized 'words' than the second.You can check out the reference docs for the PairwiseStringEvaluator interface for more info.from typing import Optional, Anyfrom langchain.evaluation import PairwiseStringEvaluatorclass LengthComparisonPairwiseEvalutor(PairwiseStringEvaluator): \"\"\" Custom evaluator to compare two strings. \"\"\" def _evaluate_string_pairs( self, *, prediction: str, prediction_b: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any, ) -> dict: score = int(len(prediction.split()) > len(prediction_b.split())) return {\"score\": score}evaluator =", "source": "https://python.langchain.com/docs/guides/evaluation/comparison/custom"} {"id": "c4ed184747cf-2", "text": "return {\"score\": score}evaluator = LengthComparisonPairwiseEvalutor()evaluator.evaluate_string_pairs( prediction=\"The quick brown fox jumped over the lazy dog.\", prediction_b=\"The quick brown fox jumped over the dog.\",) {'score': 1}LLM-Based Example\u00e2\u20ac\u2039That example was simple to illustrate the API, but it wasn't very useful in practice. Below, use an LLM with some custom instructions to form a simple preference scorer similar to the built-in PairwiseStringEvalChain. We will use ChatAnthropic for the evaluator chain.# %pip install anthropic# %env ANTHROPIC_API_KEY=YOUR_API_KEYfrom typing import Optional, Anyfrom langchain.evaluation import PairwiseStringEvaluatorfrom langchain.chat_models import ChatAnthropicfrom langchain.chains import LLMChainclass CustomPreferenceEvaluator(PairwiseStringEvaluator): \"\"\" Custom evaluator to compare two strings using a custom LLMChain. \"\"\" def __init__(self) -> None: llm = ChatAnthropic(model=\"claude-2\", temperature=0) self.eval_chain = LLMChain.from_string( llm, \"\"\"Which option is preferred? Do not take order into account. Evaluate based on accuracy and helpfulness. If neither is preferred, respond with C. Provide your reasoning, then finish with Preference: A/B/CInput: How do I get the path of the parent directory in python 3.8?Option A: You can use the following code:```pythonimport osos.path.dirname(os.path.dirname(os.path.abspath(__file__)))Option B: You can use the following code:from pathlib import", "source": "https://python.langchain.com/docs/guides/evaluation/comparison/custom"} {"id": "c4ed184747cf-3", "text": "B: You can use the following code:from pathlib import PathPath(__file__).absolute().parentReasoning: Both options return the same result. However, since option B is more concise and easily understand, it is preferred.", "source": "https://python.langchain.com/docs/guides/evaluation/comparison/custom"} {"id": "c4ed184747cf-4", "text": "Preference: BWhich option is preferred? Do not take order into account. Evaluate based on accuracy and helpfulness. If neither is preferred, respond with C. Provide your reasoning, then finish with Preference: A/B/C\nInput: {input}\nOption A: {prediction}\nOption B: {prediction_b}\nReasoning:\"\"\",", "source": "https://python.langchain.com/docs/guides/evaluation/comparison/custom"} {"id": "c4ed184747cf-5", "text": ")@propertydef requires_input(self) -> bool: return True@propertydef requires_reference(self) -> bool: return Falsedef _evaluate_string_pairs( self, *, prediction: str, prediction_b: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any,) -> dict: result = self.eval_chain( { \"input\": input, \"prediction\": prediction, \"prediction_b\": prediction_b, \"stop\": [\"Which option is preferred?\"], }, **kwargs, ) response_text = result[\"text\"] reasoning, preference = response_text.split(\"Preference:\", maxsplit=1) preference = preference.strip() score = 1.0 if preference == \"A\" else (0.0 if preference == \"B\" else None) return {\"reasoning\": reasoning.strip(), \"value\": preference, \"score\": score}```pythonevaluator = CustomPreferenceEvaluator()evaluator.evaluate_string_pairs( input=\"How do I import from a relative directory?\", prediction=\"use importlib! importlib.import_module('.my_package', '.')\", prediction_b=\"from .sibling import foo\",) {'reasoning': 'Option B is preferred over option A for importing from a relative directory, because it is more straightforward and concise.\\n\\nOption A uses the importlib module, which allows importing a module by", "source": "https://python.langchain.com/docs/guides/evaluation/comparison/custom"} {"id": "c4ed184747cf-6", "text": "straightforward and concise.\\n\\nOption A uses the importlib module, which allows importing a module by specifying the full name as a string. While this works, it is less clear compared to option B.\\n\\nOption B directly imports from the relative path using dot notation, which clearly shows that it is a relative import. This is the recommended way to do relative imports in Python.\\n\\nIn summary, option B is more accurate and helpful as it uses the standard Python relative import syntax.', 'value': 'B', 'score': 0.0}# Setting requires_input to return True adds additional validation to avoid returning a grade when insufficient data is provided to the chain.try: evaluator.evaluate_string_pairs( prediction=\"use importlib! importlib.import_module('.my_package', '.')\", prediction_b=\"from .sibling import foo\", )except ValueError as e: print(e) CustomPreferenceEvaluator requires an input string.PreviousComparison EvaluatorsNextPairwise Embedding DistanceLLM-Based ExampleCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/evaluation/comparison/custom"} {"id": "ac3ff691981a-0", "text": "Pairwise Embedding Distance | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain", "source": "https://python.langchain.com/docs/guides/evaluation/comparison/pairwise_embedding_distance"} {"id": "ac3ff691981a-1", "text": "Skip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationString EvaluatorsComparison EvaluatorsCustom Pairwise EvaluatorPairwise Embedding DistancePairwise String ComparisonTrajectory EvaluatorsExamplesDebuggingDeploymentLangSmithModel ComparisonEcosystemAdditional resourcesGuidesEvaluationComparison EvaluatorsPairwise Embedding DistanceOn this pagePairwise Embedding DistanceOne way to measure the similarity (or dissimilarity) between two predictions on a shared or similar input is to embed the predictions and compute a vector distance between the two embeddings.[1]You can load the pairwise_embedding_distance evaluator to do this.Note: This returns a distance score, meaning that the lower the number, the more similar the outputs are, according to their embedded representation.Check out the reference docs for the PairwiseEmbeddingDistanceEvalChain for more info.from langchain.evaluation import load_evaluatorevaluator = load_evaluator(\"pairwise_embedding_distance\")evaluator.evaluate_string_pairs( prediction=\"Seattle is hot in June\", prediction_b=\"Seattle is cool in June.\") {'score': 0.0966466944859925}evaluator.evaluate_string_pairs( prediction=\"Seattle is warm in June\", prediction_b=\"Seattle is cool in June.\") {'score': 0.03761174337464557}Select the Distance Metric\u00e2\u20ac\u2039By default, the evalutor uses cosine distance. You can choose a different distance metric if you'd like. from langchain.evaluation import EmbeddingDistancelist(EmbeddingDistance) [, ,", "source": "https://python.langchain.com/docs/guides/evaluation/comparison/pairwise_embedding_distance"} {"id": "ac3ff691981a-2", "text": ", , , ]evaluator = load_evaluator( \"pairwise_embedding_distance\", distance_metric=EmbeddingDistance.EUCLIDEAN)Select Embeddings to Use\u00e2\u20ac\u2039The constructor uses OpenAI embeddings by default, but you can configure this however you want. Below, use huggingface local embeddingsfrom langchain.embeddings import HuggingFaceEmbeddingsembedding_model = HuggingFaceEmbeddings()hf_evaluator = load_evaluator(\"pairwise_embedding_distance\", embeddings=embedding_model)hf_evaluator.evaluate_string_pairs( prediction=\"Seattle is hot in June\", prediction_b=\"Seattle is cool in June.\") {'score': 0.5486443280477362}hf_evaluator.evaluate_string_pairs( prediction=\"Seattle is warm in June\", prediction_b=\"Seattle is cool in June.\") {'score': 0.21018880025138598}1. Note: When it comes to semantic similarity, this often gives better results than older string distance metrics (such as those in the `PairwiseStringDistanceEvalChain`), though it tends to be less reliable than evaluators that use the LLM directly (such as the `PairwiseStringEvalChain`) PreviousCustom Pairwise EvaluatorNextPairwise String ComparisonSelect the Distance MetricSelect Embeddings to UseCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/evaluation/comparison/pairwise_embedding_distance"} {"id": "a857ded138da-0", "text": "Pairwise String Comparison | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain", "source": "https://python.langchain.com/docs/guides/evaluation/comparison/pairwise_string"} {"id": "a857ded138da-1", "text": "Skip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationString EvaluatorsComparison EvaluatorsCustom Pairwise EvaluatorPairwise Embedding DistancePairwise String ComparisonTrajectory EvaluatorsExamplesDebuggingDeploymentLangSmithModel ComparisonEcosystemAdditional resourcesGuidesEvaluationComparison EvaluatorsPairwise String ComparisonOn this pagePairwise String ComparisonOften you will want to compare predictions of an LLM, Chain, or Agent for a given input. The StringComparison evaluators facilitate this so you can answer questions like:Which LLM or prompt produces a preferred output for a given question?Which examples should I include for few-shot example selection?Which output is better to include for fintetuning?The simplest and often most reliable automated way to choose a preferred prediction for a given input is to use the pairwise_string evaluator.Check out the reference docs for the PairwiseStringEvalChain for more info.from langchain.evaluation import load_evaluatorevaluator = load_evaluator(\"pairwise_string\", requires_reference=True)evaluator.evaluate_string_pairs( prediction=\"there are three dogs\", prediction_b=\"4\", input=\"how many dogs are in the park?\", reference=\"four\",) {'reasoning': 'Response A provides an incorrect answer by stating there are three dogs in the park, while the reference answer indicates there are four. Response B, on the other hand, provides the correct answer, matching the reference answer. Although Response B is less detailed, it is accurate and directly answers the question. \\n\\nTherefore, the better response is [[B]].\\n', 'value': 'B', 'score': 0}Without References\u00e2\u20ac\u2039When references aren't available, you can still predict the preferred response.", "source": "https://python.langchain.com/docs/guides/evaluation/comparison/pairwise_string"} {"id": "a857ded138da-2", "text": "The results will reflect the evaluation model's preference, which is less reliable and may result", "source": "https://python.langchain.com/docs/guides/evaluation/comparison/pairwise_string"} {"id": "a857ded138da-3", "text": "in preferences that are factually incorrect.from langchain.evaluation import load_evaluatorevaluator = load_evaluator(\"pairwise_string\")evaluator.evaluate_string_pairs( prediction=\"Addition is a mathematical operation.\", prediction_b=\"Addition is a mathematical operation that adds two numbers to create a third number, the 'sum'.\", input=\"What is addition?\",) {'reasoning': \"Response A is accurate but lacks depth and detail. It simply states that addition is a mathematical operation without explaining what it does or how it works. \\n\\nResponse B, on the other hand, provides a more detailed explanation. It not only identifies addition as a mathematical operation, but also explains that it involves adding two numbers to create a third number, the 'sum'. This response is more helpful and informative, providing a clearer understanding of what addition is.\\n\\nTherefore, the better response is B.\\n\", 'value': 'B', 'score': 0}Customize the LLM\u00e2\u20ac\u2039By default, the loader uses gpt-4 in the evaluation chain. You can customize this when loading.from langchain.chat_models import ChatAnthropicllm = ChatAnthropic(temperature=0)evaluator = load_evaluator(\"pairwise_string\", llm=llm, requires_reference=True)evaluator.evaluate_string_pairs( prediction=\"there are three dogs\", prediction_b=\"4\", input=\"how many dogs are in the park?\", reference=\"four\",) {'reasoning': 'Response A provides a specific number but is inaccurate based on the reference answer. Response B provides the correct number but lacks detail or explanation. Overall, Response B is more helpful and accurate in directly answering the question, despite lacking depth or creativity.\\n\\n[[B]]\\n',", "source": "https://python.langchain.com/docs/guides/evaluation/comparison/pairwise_string"} {"id": "a857ded138da-4", "text": "question, despite lacking depth or creativity.\\n\\n[[B]]\\n', 'value': 'B', 'score': 0}Customize the Evaluation Prompt\u00e2\u20ac\u2039You can use your own custom evaluation prompt to add more task-specific instructions or to instruct the evaluator to score the output.*Note: If you use a prompt that expects generates a result in a unique format, you may also have to pass in a custom output parser (output_parser=your_parser()) instead of the default PairwiseStringResultOutputParserfrom langchain.prompts import PromptTemplateprompt_template = PromptTemplate.from_template( \"\"\"Given the input context, which is most similar to the reference label: A or B?Reason step by step and finally, respond with either [[A]] or [[B]] on its own line.DATA----input: {input}reference: {reference}A: {prediction}B: {prediction_b}---Reasoning:\"\"\")evaluator = load_evaluator( \"pairwise_string\", prompt=prompt_template, requires_reference=True)# The prompt was assigned to the evaluatorprint(evaluator.prompt) input_variables=['input', 'prediction', 'prediction_b', 'reference'] output_parser=None partial_variables={} template='Given the input context, which is most similar to the reference label: A or B?\\nReason step by step and finally, respond with either [[A]] or [[B]] on its own line.\\n\\nDATA\\n----\\ninput: {input}\\nreference: {reference}\\nA: {prediction}\\nB: {prediction_b}\\n---\\nReasoning:\\n\\n' template_format='f-string' validate_template=Trueevaluator.evaluate_string_pairs( prediction=\"The dog that ate the ice cream was named fido.\", prediction_b=\"The dog's name is spot\", input=\"What is the name", "source": "https://python.langchain.com/docs/guides/evaluation/comparison/pairwise_string"} {"id": "a857ded138da-5", "text": "prediction_b=\"The dog's name is spot\", input=\"What is the name of the dog that ate the ice cream?\", reference=\"The dog's name is fido\",) {'reasoning': \"Option A is most similar to the reference label. Both the reference label and option A state that the dog's name is Fido. Option B, on the other hand, gives a different name for the dog. Therefore, option A is the most similar to the reference label. \\n\", 'value': 'A', 'score': 1}PreviousPairwise Embedding DistanceNextTrajectory EvaluatorsWithout ReferencesCustomize the LLMCustomize the Evaluation PromptCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/evaluation/comparison/pairwise_string"} {"id": "016228b75473-0", "text": "Model Comparison | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain", "source": "https://python.langchain.com/docs/guides/model_laboratory"} {"id": "016228b75473-1", "text": "Skip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationDebuggingDeploymentLangSmithModel ComparisonEcosystemAdditional resourcesGuidesModel ComparisonModel ComparisonConstructing your language model application will likely involved choosing between many different options of prompts, models, and even chains to use. When doing so, you will want to compare these different options on different inputs in an easy, flexible, and intuitive way. LangChain provides the concept of a ModelLaboratory to test out and try different models.from langchain import LLMChain, OpenAI, Cohere, HuggingFaceHub, PromptTemplatefrom langchain.model_laboratory import ModelLaboratoryllms = [ OpenAI(temperature=0), Cohere(model=\"command-xlarge-20221108\", max_tokens=20, temperature=0), HuggingFaceHub(repo_id=\"google/flan-t5-xl\", model_kwargs={\"temperature\": 1}),]model_lab = ModelLaboratory.from_llms(llms)model_lab.compare(\"What color is a flamingo?\") Input: What color is a flamingo? OpenAI Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1} Flamingos are pink. Cohere Params: {'model': 'command-xlarge-20221108',", "source": "https://python.langchain.com/docs/guides/model_laboratory"} {"id": "016228b75473-2", "text": "Cohere Params: {'model': 'command-xlarge-20221108', 'max_tokens': 20, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0} Pink HuggingFaceHub Params: {'repo_id': 'google/flan-t5-xl', 'temperature': 1} pink prompt = PromptTemplate( template=\"What is the capital of {state}?\", input_variables=[\"state\"])model_lab_with_prompt = ModelLaboratory.from_llms(llms, prompt=prompt)model_lab_with_prompt.compare(\"New York\") Input: New York OpenAI Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1} The capital of New York is Albany. Cohere Params: {'model': 'command-xlarge-20221108', 'max_tokens': 20, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0} The capital of New York is Albany. HuggingFaceHub Params: {'repo_id': 'google/flan-t5-xl', 'temperature': 1} st john s from langchain", "source": "https://python.langchain.com/docs/guides/model_laboratory"} {"id": "016228b75473-3", "text": "'temperature': 1} st john s from langchain import SelfAskWithSearchChain, SerpAPIWrapperopen_ai_llm = OpenAI(temperature=0)search = SerpAPIWrapper()self_ask_with_search_openai = SelfAskWithSearchChain( llm=open_ai_llm, search_chain=search, verbose=True)cohere_llm = Cohere(temperature=0, model=\"command-xlarge-20221108\")search = SerpAPIWrapper()self_ask_with_search_cohere = SelfAskWithSearchChain( llm=cohere_llm, search_chain=search, verbose=True)chains = [self_ask_with_search_openai, self_ask_with_search_cohere]names = [str(open_ai_llm), str(cohere_llm)]model_lab = ModelLaboratory(chains, names=names)model_lab.compare(\"What is the hometown of the reigning men's U.S. Open champion?\") Input: What is the hometown of the reigning men's U.S. Open champion? OpenAI Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1} > Entering new chain... What is the hometown of the reigning men's U.S. Open champion? Are follow up questions needed here: Yes. Follow up: Who is the reigning men's U.S. Open champion? Intermediate answer: Carlos Alcaraz. Follow up: Where is Carlos Alcaraz from? Intermediate answer: El", "source": "https://python.langchain.com/docs/guides/model_laboratory"} {"id": "016228b75473-4", "text": "Follow up: Where is Carlos Alcaraz from? Intermediate answer: El Palmar, Spain. So the final answer is: El Palmar, Spain > Finished chain. So the final answer is: El Palmar, Spain Cohere Params: {'model': 'command-xlarge-20221108', 'max_tokens': 256, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0} > Entering new chain... What is the hometown of the reigning men's U.S. Open champion? Are follow up questions needed here: Yes. Follow up: Who is the reigning men's U.S. Open champion? Intermediate answer: Carlos Alcaraz. So the final answer is: Carlos Alcaraz > Finished chain. So the final answer is: Carlos Alcaraz PreviousLangSmith WalkthroughNextEcosystemCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/model_laboratory"} {"id": "385697ef0e6a-0", "text": "LangSmith | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain\n\n\n\n\n\nSkip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationDebuggingDeploymentLangSmithLangSmith WalkthroughModel ComparisonEcosystemAdditional resourcesGuidesLangSmithLangSmithLangSmith helps you trace and evaluate your language model applications and intelligent agents to help you\nmove from prototype to production.Check out the interactive walkthrough below to get started.For more information, please refer to the LangSmith documentation\u011f\u0178\u201c\u201e\u00ef\u00b8\ufffd LangSmith WalkthroughLangChain makes it easy to prototype LLM applications and Agents. However, delivering LLM applications to production can be deceptively difficult. You will likely have to heavily customize and iterate on your prompts, chains, and other components to create a high-quality product.PreviousTemplate reposNextLangSmith WalkthroughCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/langsmith/"} {"id": "712506381261-0", "text": "LangSmith Walkthrough | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain", "source": "https://python.langchain.com/docs/guides/langsmith/walkthrough"} {"id": "712506381261-1", "text": "Skip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationDebuggingDeploymentLangSmithLangSmith WalkthroughModel ComparisonEcosystemAdditional resourcesGuidesLangSmithLangSmith WalkthroughOn this pageLangSmith WalkthroughLangChain makes it easy to prototype LLM applications and Agents. However, delivering LLM applications to production can be deceptively difficult. You will likely have to heavily customize and iterate on your prompts, chains, and other components to create a high-quality product.To aid in this process, we've launched LangSmith, a unified platform for debugging, testing, and monitoring your LLM applications.When might this come in handy? You may find it useful when you want to:Quickly debug a new chain, agent, or set of toolsVisualize how components (chains, llms, retrievers, etc.) relate and are usedEvaluate different prompts and LLMs for a single componentRun a given chain several times over a dataset to ensure it consistently meets a quality barCapture usage traces and using LLMs or analytics pipelines to generate insightsPrerequisites\u00e2\u20ac\u2039Create a LangSmith account and create an API key (see bottom left corner). Familiarize yourself with the platform by looking through the docsNote LangSmith is in closed beta; we're in the process of rolling it out to more users. However, you can fill out the form on the website for expedited access.Now, let's get started!Log runs to LangSmith\u00e2\u20ac\u2039First, configure your environment variables to tell LangChain to log traces. This is done by setting the LANGCHAIN_TRACING_V2 environment variable to true.", "source": "https://python.langchain.com/docs/guides/langsmith/walkthrough"} {"id": "712506381261-2", "text": "You can tell LangChain which project to log to by setting the LANGCHAIN_PROJECT environment variable (if this isn't set, runs will be logged to the default project). This will automatically create the project for you if it doesn't exist. You must also set the LANGCHAIN_ENDPOINT and LANGCHAIN_API_KEY environment variables.For more information on other ways to set up tracing, please reference the LangSmith documentationNOTE: You must also set your OPENAI_API_KEY and SERPAPI_API_KEY environment variables in order to run the following tutorial.NOTE: You can only access an API key when you first create it. Keep it somewhere safe.NOTE: You can also use a context manager in python to log traces usingfrom langchain.callbacks.manager import tracing_v2_enabledwith tracing_v2_enabled(project_name=\"My Project\"): agent.run(\"How many people live in canada as of 2023?\")However, in this example, we will use environment variables.import osfrom uuid import uuid4unique_id = uuid4().hex[0:8]os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"os.environ[\"LANGCHAIN_PROJECT\"] = f\"Tracing Walkthrough - {unique_id}\"os.environ[\"LANGCHAIN_ENDPOINT\"] = \"https://api.smith.langchain.com\"os.environ[\"LANGCHAIN_API_KEY\"] = \"\" # Update to your API key# Used by the agent in this tutorial# os.environ[\"OPENAI_API_KEY\"] = \"\"# os.environ[\"SERPAPI_API_KEY\"] = \"\"Create the langsmith client to interact with the APIfrom langsmith import Clientclient = Client()Create a LangChain component and log runs to the platform. In this example, we will create a ReAct-style agent with access to Search and Calculator as tools. However, LangSmith works regardless of which type of LangChain component you use (LLMs, Chat Models,", "source": "https://python.langchain.com/docs/guides/langsmith/walkthrough"} {"id": "712506381261-3", "text": "LangSmith works regardless of which type of LangChain component you use (LLMs, Chat Models, Tools, Retrievers, Agents are all supported).from langchain.chat_models import ChatOpenAIfrom langchain.agents import AgentType, initialize_agent, load_toolsllm = ChatOpenAI(temperature=0)tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False)We are running the agent concurrently on multiple inputs to reduce latency. Runs get logged to LangSmith in the background so execution latency is unaffected.import asyncioinputs = [ \"How many people live in canada as of 2023?\", \"who is dua lipa's boyfriend? what is his age raised to the .43 power?\", \"what is dua lipa's boyfriend age raised to the .43 power?\", \"how far is it from paris to boston in miles\", \"what was the total number of points scored in the 2023 super bowl? what is that number raised to the .23 power?\", \"what was the total number of points scored in the 2023 super bowl raised to the .23 power?\", \"how many more points were scored in the 2023 super bowl than in the 2022 super bowl?\", \"what is 153 raised to .1312 power?\", \"who is kendall jenner's boyfriend? what is his height (in inches) raised to .13 power?\", \"what is 1213 divided by 4345?\",]results = []async def arun(agent, input_example): try: return await agent.arun(input_example) except Exception as e:", "source": "https://python.langchain.com/docs/guides/langsmith/walkthrough"} {"id": "712506381261-4", "text": "return await agent.arun(input_example) except Exception as e: # The agent sometimes makes mistakes! These will be captured by the tracing. return efor input_example in inputs: results.append(arun(agent, input_example))results = await asyncio.gather(*results)from langchain.callbacks.tracers.langchain import wait_for_all_tracers# Logs are submitted in a background thread to avoid blocking execution.# For the sake of this tutorial, we want to make sure# they've been submitted before moving on. This is also# useful for serverless deployments.wait_for_all_tracers()Assuming you've successfully set up your environment, your agent traces should show up in the Projects section in the app. Congrats!Evaluate another agent implementation\u00e2\u20ac\u2039In addition to logging runs, LangSmith also allows you to test and evaluate your LLM applications.In this section, you will leverage LangSmith to create a benchmark dataset and run AI-assisted evaluators on an agent. You will do so in a few steps:Create a dataset from pre-existing run inputs and outputsInitialize a new agent to benchmarkConfigure evaluators to grade an agent's outputRun the agent over the dataset and evaluate the results1. Create a LangSmith dataset\u00e2\u20ac\u2039Below, we use the LangSmith client to create a dataset from the agent runs you just logged above. You will use these later to measure performance for a new agent. This is simply taking the inputs and outputs of the runs and saving them as examples to a dataset. A dataset is a collection of examples, which are nothing more than input-output pairs you can use as test cases to your application.Note: this is a simple, walkthrough example. In a real-world setting, you'd ideally first validate the outputs before adding them to a benchmark dataset to be used for evaluating other agents.For more information on datasets, including how to", "source": "https://python.langchain.com/docs/guides/langsmith/walkthrough"} {"id": "712506381261-5", "text": "to a benchmark dataset to be used for evaluating other agents.For more information on datasets, including how to create them from CSVs or other files or how to create them in the platform, please refer to the LangSmith documentation.dataset_name = f\"calculator-example-dataset-{unique_id}\"dataset = client.create_dataset( dataset_name, description=\"A calculator example dataset\")runs = client.list_runs( project_name=os.environ[\"LANGCHAIN_PROJECT\"], execution_order=1, # Only return the top-level runs error=False, # Only runs that succeed)for run in runs: client.create_example(inputs=run.inputs, outputs=run.outputs, dataset_id=dataset.id)2. Initialize a new agent to benchmark\u00e2\u20ac\u2039You can evaluate any LLM, chain, or agent. Since chains can have memory, we will pass in a chain_factory (aka a constructor ) function to initialize for each call.In this case, we will test an agent that uses OpenAI's function calling endpoints.from langchain.chat_models import ChatOpenAIfrom langchain.agents import AgentType, initialize_agent, load_toolsllm = ChatOpenAI(model=\"gpt-3.5-turbo-0613\", temperature=0)tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)# Since chains can be stateful (e.g. they can have memory), we provide# a way to initialize a new chain for each row in the dataset. This is done# by passing in a factory function that returns a new chain for each row.def agent_factory(): return initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=False)# If your chain is NOT stateful, your factory can return the object directly# to improve runtime performance. For example:# chain_factory = lambda: agent3. Configure", "source": "https://python.langchain.com/docs/guides/langsmith/walkthrough"} {"id": "712506381261-6", "text": "object directly# to improve runtime performance. For example:# chain_factory = lambda: agent3. Configure evaluation\u00e2\u20ac\u2039Manually comparing the results of chains in the UI is effective, but it can be time consuming.", "source": "https://python.langchain.com/docs/guides/langsmith/walkthrough"} {"id": "712506381261-7", "text": "It can be helpful to use automated metrics and AI-assisted feedback to evaluate your component's performance.Below, we will create some pre-implemented run evaluators that do the following:Compare results against ground truth labels. (You used the debug outputs above for this)Measure semantic (dis)similarity using embedding distanceEvaluate 'aspects' of the agent's response in a reference-free manner using custom criteriaFor a longer discussion of how to select an appropriate evaluator for your use case and how to create your own", "source": "https://python.langchain.com/docs/guides/langsmith/walkthrough"} {"id": "712506381261-8", "text": "custom evaluators, please refer to the LangSmith documentation.from langchain.evaluation import EvaluatorTypefrom langchain.smith import RunEvalConfigevaluation_config = RunEvalConfig( # Evaluators can either be an evaluator type (e.g., \"qa\", \"criteria\", \"embedding_distance\", etc.) or a configuration for that evaluator evaluators=[ # Measures whether a QA response is \"Correct\", based on a reference answer # You can also select via the raw string \"qa\" EvaluatorType.QA, # Measure the embedding distance between the output and the reference answer # Equivalent to: EvalConfig.EmbeddingDistance(embeddings=OpenAIEmbeddings()) EvaluatorType.EMBEDDING_DISTANCE, # Grade whether the output satisfies the stated criteria. You can select a default one such as \"helpfulness\" or provide your own. RunEvalConfig.LabeledCriteria(\"helpfulness\"), # Both the Criteria and LabeledCriteria evaluators can be configured with a dictionary of custom criteria. RunEvalConfig.Criteria( { \"fifth-grader-score\": \"Do you have to be smarter than a fifth grader to answer this question?\" } ), ], # You can add custom StringEvaluator or RunEvaluator objects here as well, which will automatically be # applied to each prediction. Check out the docs for examples.", "source": "https://python.langchain.com/docs/guides/langsmith/walkthrough"} {"id": "712506381261-9", "text": "be # applied to each prediction. Check out the docs for examples. custom_evaluators=[],)4. Run the agent and evaluators\u00e2\u20ac\u2039Use the arun_on_dataset (or synchronous run_on_dataset) function to evaluate your model. This will:Fetch example rows from the specified datasetRun your llm or chain on each example.Apply evalutors to the resulting run traces and corresponding reference examples to generate automated feedback.The results will be visible in the LangSmith app.from langchain.smith import ( arun_on_dataset, run_on_dataset, # Available if your chain doesn't support async calls.)chain_results = await arun_on_dataset( client=client, dataset_name=dataset_name, llm_or_chain_factory=agent_factory, evaluation=evaluation_config, verbose=True, tags=[\"testing-notebook\"], # Optional, adds a tag to the resulting chain runs)# Sometimes, the agent will error due to parsing issues, incompatible tool inputs, etc.# These are logged as warnings here and captured as errors in the tracing UI. View the evaluation results for project '2023-07-17-11-25-20-AgentExecutor' at: https://dev.smith.langchain.com/projects/p/1c9baec3-ae86-4fac-9e99-e1b9f8e7818c?eval=true Processed examples: 1 Chain failed for example 5a2ac8da-8c2b-4d12-acb9-5c4b0f47fe8a. Error: LLMMathChain._evaluate(\" age_of_Dua_Lipa_boyfriend ** 0.43 \") raised error:", "source": "https://python.langchain.com/docs/guides/langsmith/walkthrough"} {"id": "712506381261-10", "text": "** 0.43 \") raised error: 'age_of_Dua_Lipa_boyfriend'. Please try again with a valid numerical expression Processed examples: 4 Chain failed for example 91439261-1c86-4198-868b-a6c1cc8a051b. Error: Too many arguments to single-input tool Calculator. Args: ['height ^ 0.13', {'height': 68}] Processed examples: 9Review the test results\u00e2\u20ac\u2039You can review the test results tracing UI below by navigating to the \"Datasets & Testing\" page and selecting the \"calculator-example-dataset-*\" dataset, clicking on the Test Runs tab, then inspecting the runs in the corresponding project. This will show the new runs and the feedback logged from the selected evaluators. Note that runs that error out will not have feedback.Exporting datasets and runs\u00e2\u20ac\u2039LangSmith lets you export data to common formats such as CSV or JSONL directly in the web app. You can also use the client to fetch runs for further analysis, to store in your own database, or to share with others. Let's fetch the run traces from the evaluation run.runs = list(client.list_runs(dataset_name=dataset_name))runs[0] Run(id=UUID('e39f310b-c5a8-4192-8a59-6a9498e1cb85'), name='AgentExecutor', start_time=datetime.datetime(2023, 7, 17, 18, 25, 30, 653872), run_type=, end_time=datetime.datetime(2023, 7, 17, 18, 25, 35, 359642), extra={'runtime': {'library': 'langchain', 'runtime': 'python', 'platform':", "source": "https://python.langchain.com/docs/guides/langsmith/walkthrough"} {"id": "712506381261-11", "text": "extra={'runtime': {'library': 'langchain', 'runtime': 'python', 'platform': 'macOS-13.4.1-arm64-arm-64bit', 'sdk_version': '0.0.8', 'library_version': '0.0.231', 'runtime_version': '3.11.2'}, 'total_tokens': 512, 'prompt_tokens': 451, 'completion_tokens': 61}, error=None, serialized=None, events=[{'name': 'start', 'time': '2023-07-17T18:25:30.653872'}, {'name': 'end', 'time': '2023-07-17T18:25:35.359642'}], inputs={'input': 'what is 1213 divided by 4345?'}, outputs={'output': '1213 divided by 4345 is approximately 0.2792.'}, reference_example_id=UUID('a75cf754-4f73-46fd-b126-9bcd0695e463'), parent_run_id=None, tags=['openai-functions', 'testing-notebook'], execution_order=1, session_id=UUID('1c9baec3-ae86-4fac-9e99-e1b9f8e7818c'), child_run_ids=[UUID('40d0fdca-0b2b-47f4-a9da-f2b229aa4ed5'), UUID('cfa5130f-264c-4126-8950-ec1c4c31b800'), UUID('ba638a2f-2a57-45db-91e8-9a7a66a42c5a'), UUID('fcc29b5a-cdb7-4bcc-8194-47729bbdf5fb'),", "source": "https://python.langchain.com/docs/guides/langsmith/walkthrough"} {"id": "712506381261-12", "text": "UUID('a6f92bf5-cfba-4747-9336-370cb00c928a'), UUID('65312576-5a39-4250-b820-4dfae7d73945')], child_runs=None, feedback_stats={'correctness': {'n': 1, 'avg': 1.0, 'mode': 1}, 'helpfulness': {'n': 1, 'avg': 1.0, 'mode': 1}, 'fifth-grader-score': {'n': 1, 'avg': 1.0, 'mode': 1}, 'embedding_cosine_distance': {'n': 1, 'avg': 0.144522385071361, 'mode': 0.144522385071361}})client.read_project(project_id=runs[0].session_id).feedback_stats {'correctness': {'n': 7, 'avg': 0.5714285714285714, 'mode': 1}, 'helpfulness': {'n': 7, 'avg': 0.7142857142857143, 'mode': 1}, 'fifth-grader-score': {'n': 7, 'avg': 0.7142857142857143, 'mode': 1}, 'embedding_cosine_distance': {'n': 7, 'avg': 0.11462010799473926, 'mode': 0.0130477459560272}}Conclusion\u00e2\u20ac\u2039Congratulations! You have succesfully traced and evaluated an agent using LangSmith!This was a quick guide to get started, but there are many more ways to use LangSmith to speed up your developer flow and produce better results.For more information on how you can get the most out of LangSmith, check out", "source": "https://python.langchain.com/docs/guides/langsmith/walkthrough"} {"id": "712506381261-13", "text": "produce better results.For more information on how you can get the most out of LangSmith, check out LangSmith documentation, and please reach out with questions, feature requests, or feedback at support@langchain.dev.PreviousLangSmithNextModel ComparisonPrerequisitesLog runs to LangSmithEvaluate another agent implementation1. Create a LangSmith dataset2. Initialize a new agent to benchmark3. Configure evaluation4. Run the agent and evaluatorsReview the test resultsExporting datasets and runsConclusionCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/langsmith/walkthrough"} {"id": "2043a03ec19a-0", "text": "Deployment | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain", "source": "https://python.langchain.com/docs/guides/deployments/"} {"id": "2043a03ec19a-1", "text": "Skip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationDebuggingDeploymentTemplate reposLangSmithModel ComparisonEcosystemAdditional resourcesGuidesDeploymentOn this pageDeploymentIn today's fast-paced technological landscape, the use of Large Language Models (LLMs) is rapidly expanding. As a result, it's crucial for developers to understand how to effectively deploy these models in production environments. LLM interfaces typically fall into two categories:Case 1: Utilizing External LLM Providers (OpenAI, Anthropic, etc.)\nIn this scenario, most of the computational burden is handled by the LLM providers, while LangChain simplifies the implementation of business logic around these services. This approach includes features such as prompt templating, chat message generation, caching, vector embedding database creation, preprocessing, etc.Case 2: Self-hosted Open-Source Models", "source": "https://python.langchain.com/docs/guides/deployments/"} {"id": "2043a03ec19a-2", "text": "Alternatively, developers can opt to use smaller, yet comparably capable, self-hosted open-source LLM models. This approach can significantly decrease costs, latency, and privacy concerns associated with transferring data to external LLM providers.Regardless of the framework that forms the backbone of your product, deploying LLM applications comes with its own set of challenges. It's vital to understand the trade-offs and key considerations when evaluating serving frameworks.Outline\u00e2\u20ac\u2039This guide aims to provide a comprehensive overview of the requirements for deploying LLMs in a production setting, focusing on:Designing a Robust LLM Application ServiceMaintaining Cost-EfficiencyEnsuring Rapid IterationUnderstanding these components is crucial when assessing serving systems. LangChain integrates with several open-source projects designed to tackle these issues, providing a robust framework for productionizing your LLM applications. Some notable frameworks include:Ray ServeBentoMLOpenLLMModalJinaThese links will provide further information on each ecosystem, assisting you in finding the best fit for your LLM deployment needs.Designing a Robust LLM Application Service\u00e2\u20ac\u2039When deploying an LLM service in production, it's imperative to provide a seamless user experience free from outages. Achieving 24/7 service availability involves creating and maintaining several sub-systems surrounding your application.Monitoring\u00e2\u20ac\u2039Monitoring forms an integral part of any system running in a production environment. In the context of LLMs, it is essential to monitor both performance and quality metrics.Performance Metrics: These metrics provide insights into the efficiency and capacity of your model. Here are some key examples:Query per second (QPS): This measures the number of queries your model processes in a second, offering insights into its utilization.Latency: This metric quantifies the delay from when your client sends a request to when they receive a response.Tokens Per Second (TPS): This represents the number of tokens your model can generate in a second.Quality", "source": "https://python.langchain.com/docs/guides/deployments/"} {"id": "2043a03ec19a-3", "text": "Second (TPS): This represents the number of tokens your model can generate in a second.Quality Metrics: These metrics are typically customized according to the business use-case. For instance, how does the output of your system compare to a baseline, such as a previous version? Although these metrics can be calculated offline, you need to log the necessary data to use them later.Fault tolerance\u00e2\u20ac\u2039Your application may encounter errors such as exceptions in your model inference or business logic code, causing failures and disrupting traffic. Other potential issues could arise from the machine running your application, such as unexpected hardware breakdowns or loss of spot-instances during high-demand periods. One way to mitigate these risks is by increasing redundancy through replica scaling and implementing recovery mechanisms for failed replicas. However, model replicas aren't the only potential points of failure. It's essential to build resilience against various failures that could occur at any point in your stack.Zero down time upgrade\u00e2\u20ac\u2039System upgrades are often necessary but can result in service disruptions if not handled correctly. One way to prevent downtime during upgrades is by implementing a smooth transition process from the old version to the new one. Ideally, the new version of your LLM service is deployed, and traffic gradually shifts from the old to the new version, maintaining a constant QPS throughout the process.Load balancing\u00e2\u20ac\u2039Load balancing, in simple terms, is a technique to distribute work evenly across multiple computers, servers, or other resources to optimize the utilization of the system, maximize throughput, minimize response time, and avoid overload of any single resource. Think of it as a traffic officer directing cars (requests) to different roads (servers) so that no single road becomes too congested.There are several strategies for load balancing. For example, one common method is the Round Robin strategy, where each request is sent to the next server in line, cycling back to the first when all servers have received a request. This works well when all servers are equally capable.", "source": "https://python.langchain.com/docs/guides/deployments/"} {"id": "2043a03ec19a-4", "text": "the first when all servers have received a request. This works well when all servers are equally capable. However, if some servers are more powerful than others, you might use a Weighted Round Robin or Least Connections strategy, where more requests are sent to the more powerful servers, or to those currently handling the fewest active requests. Let's imagine you're running a LLM chain. If your application becomes popular, you could have hundreds or even thousands of users asking questions at the same time. If one server gets too busy (high load), the load balancer would direct new requests to another server that is less busy. This way, all your users get a timely response and the system remains stable.Maintaining Cost-Efficiency and Scalability\u00e2\u20ac\u2039Deploying LLM services can be costly, especially when you're handling a large volume of user interactions. Charges by LLM providers are usually based on tokens used, making a chat system inference on these models potentially expensive. However, several strategies can help manage these costs without compromising the quality of the service.Self-hosting models\u00e2\u20ac\u2039Several smaller and open-source LLMs are emerging to tackle the issue of reliance on LLM providers. Self-hosting allows you to maintain similar quality to LLM provider models while managing costs. The challenge lies in building a reliable, high-performing LLM serving system on your own machines. Resource Management and Auto-Scaling\u00e2\u20ac\u2039Computational logic within your application requires precise resource allocation. For instance, if part of your traffic is served by an OpenAI endpoint and another part by a self-hosted model, it's crucial to allocate suitable resources for each. Auto-scaling\u00e2\u20ac\u201dadjusting resource allocation based on traffic\u00e2\u20ac\u201dcan significantly impact the cost of running your application. This strategy requires a balance between cost and responsiveness, ensuring neither resource over-provisioning nor compromised application responsiveness.Utilizing Spot Instances\u00e2\u20ac\u2039On platforms like AWS, spot instances", "source": "https://python.langchain.com/docs/guides/deployments/"} {"id": "2043a03ec19a-5", "text": "nor compromised application responsiveness.Utilizing Spot Instances\u00e2\u20ac\u2039On platforms like AWS, spot instances offer substantial cost savings, typically priced at about a third of on-demand instances. The trade-off is a higher crash rate, necessitating a robust fault-tolerance mechanism for effective use.Independent Scaling\u00e2\u20ac\u2039When self-hosting your models, you should consider independent scaling. For example, if you have two translation models, one fine-tuned for French and another for Spanish, incoming requests might necessitate different scaling requirements for each.Batching requests\u00e2\u20ac\u2039In the context of Large Language Models, batching requests can enhance efficiency by better utilizing your GPU resources. GPUs are inherently parallel processors, designed to handle multiple tasks simultaneously. If you send individual requests to the model, the GPU might not be fully utilized as it's only working on a single task at a time. On the other hand, by batching requests together, you're allowing the GPU to work on multiple tasks at once, maximizing its utilization and improving inference speed. This not only leads to cost savings but can also improve the overall latency of your LLM service.In summary, managing costs while scaling your LLM services requires a strategic approach. Utilizing self-hosting models, managing resources effectively, employing auto-scaling, using spot instances, independently scaling models, and batching requests are key strategies to consider. Open-source libraries such as Ray Serve and BentoML are designed to deal with these complexities. Ensuring Rapid Iteration\u00e2\u20ac\u2039The LLM landscape is evolving at an unprecedented pace, with new libraries and model architectures being introduced constantly. Consequently, it's crucial to avoid tying yourself to a solution specific to one particular framework. This is especially relevant in serving, where changes to your infrastructure can be time-consuming, expensive, and risky. Strive for infrastructure that is not locked into any specific machine learning library or framework, but instead offers a general-purpose, scalable serving layer. Here are some aspects where flexibility plays a key", "source": "https://python.langchain.com/docs/guides/deployments/"} {"id": "2043a03ec19a-6", "text": "but instead offers a general-purpose, scalable serving layer. Here are some aspects where flexibility plays a key role:Model composition\u00e2\u20ac\u2039Deploying systems like LangChain demands the ability to piece together different models and connect them via logic. Take the example of building a natural language input SQL query engine. Querying an LLM and obtaining the SQL command is only part of the system. You need to extract metadata from the connected database, construct a prompt for the LLM, run the SQL query on an engine, collect and feed back the response to the LLM as the query runs, and present the results to the user. This demonstrates the need to seamlessly integrate various complex components built in Python into a dynamic chain of logical blocks that can be served together.Cloud providers\u00e2\u20ac\u2039Many hosted solutions are restricted to a single cloud provider, which can limit your options in today's multi-cloud world. Depending on where your other infrastructure components are built, you might prefer to stick with your chosen cloud provider.Infrastructure as Code (IaC)\u00e2\u20ac\u2039Rapid iteration also involves the ability to recreate your infrastructure quickly and reliably. This is where Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or Kubernetes YAML files come into play. They allow you to define your infrastructure in code files, which can be version controlled and quickly deployed, enabling faster and more reliable iterations.CI/CD\u00e2\u20ac\u2039In a fast-paced environment, implementing CI/CD pipelines can significantly speed up the iteration process. They help automate the testing and deployment of your LLM applications, reducing the risk of errors and enabling faster feedback and iteration.PreviousDebuggingNextTemplate reposOutlineDesigning a Robust LLM Application ServiceMonitoringFault toleranceZero down time upgradeLoad balancingMaintaining Cost-Efficiency and ScalabilitySelf-hosting modelsResource Management and Auto-ScalingUtilizing Spot InstancesIndependent ScalingBatching requestsEnsuring Rapid IterationModel compositionCloud providersInfrastructure as Code", "source": "https://python.langchain.com/docs/guides/deployments/"} {"id": "2043a03ec19a-7", "text": "Spot InstancesIndependent ScalingBatching requestsEnsuring Rapid IterationModel compositionCloud providersInfrastructure as Code (IaC)CI/CDCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/deployments/"} {"id": "cf7ac48a1b0f-0", "text": "Template repos | \u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 Langchain", "source": "https://python.langchain.com/docs/guides/deployments/template_repos"} {"id": "cf7ac48a1b0f-1", "text": "Skip to main content\u011f\u0178\u00a6\u0153\u00ef\u00b8\ufffd\u011f\u0178\u201d\u2014 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/\u00e2\u20ac\u2039OData connectionChainsMemoryAgentsCallbacksModulesGuidesEvaluationDebuggingDeploymentTemplate reposLangSmithModel ComparisonEcosystemAdditional resourcesGuidesDeploymentTemplate reposOn this pageTemplate reposSo, you've created a really cool chain - now what? How do you deploy it and make it easily shareable with the world?This section covers several options for that. Note that these options are meant for quick deployment of prototypes and demos, not for production systems. If you need help with the deployment of a production system, please contact us directly.What follows is a list of template GitHub repositories designed to be easily forked and modified to use your chain. This list is far from exhaustive, and we are EXTREMELY open to contributions here.Streamlit\u00e2\u20ac\u2039This repo serves as a template for how to deploy a LangChain with Streamlit.\nIt implements a chatbot interface.\nIt also contains instructions for how to deploy this app on the Streamlit platform.Gradio (on Hugging Face)\u00e2\u20ac\u2039This repo serves as a template for how deploy a LangChain with Gradio.\nIt implements a chatbot interface, with a \"Bring-Your-Own-Token\" approach (nice for not wracking up big bills).\nIt also contains instructions for how to deploy this app on the Hugging Face platform.\nThis is heavily influenced by James Weaver's excellent examples.Chainlit\u00e2\u20ac\u2039This repo is a cookbook explaining how to visualize and deploy LangChain agents with Chainlit.\nYou create ChatGPT-like UIs with Chainlit. Some of the key features include intermediary steps visualisation, element management & display (images, text, carousel, etc.) as well as cloud deployment.", "source": "https://python.langchain.com/docs/guides/deployments/template_repos"} {"id": "cf7ac48a1b0f-2", "text": "Chainlit doc on the integration with LangChainBeam\u00e2\u20ac\u2039This repo serves as a template for how deploy a LangChain with Beam.It implements a Question Answering app and contains instructions for deploying the app as a serverless REST API.Vercel\u00e2\u20ac\u2039A minimal example on how to run LangChain on Vercel using Flask.FastAPI + Vercel\u00e2\u20ac\u2039A minimal example on how to run LangChain on Vercel using FastAPI and LangCorn/Uvicorn.Kinsta\u00e2\u20ac\u2039A minimal example on how to deploy LangChain to Kinsta using Flask.Fly.io\u00e2\u20ac\u2039A minimal example of how to deploy LangChain to Fly.io using Flask.Digitalocean App Platform\u00e2\u20ac\u2039A minimal example on how to deploy LangChain to DigitalOcean App Platform.CI/CD Google Cloud Build + Dockerfile + Serverless Google Cloud Run\u00e2\u20ac\u2039Boilerplate LangChain project on how to deploy to Google Cloud Run using Docker with Cloud Build CI/CD pipelineGoogle Cloud Run\u00e2\u20ac\u2039A minimal example on how to deploy LangChain to Google Cloud Run.SteamShip\u00e2\u20ac\u2039This repository contains LangChain adapters for Steamship, enabling LangChain developers to rapidly deploy their apps on Steamship. This includes: production-ready endpoints, horizontal scaling across dependencies, persistent storage of app state, multi-tenancy support, etc.Langchain-serve\u00e2\u20ac\u2039This repository allows users to deploy any LangChain app as REST/WebSocket APIs or, as Slack Bots with ease. Benefit from the scalability and serverless architecture of Jina AI Cloud, or deploy on-premise with Kubernetes.BentoML\u00e2\u20ac\u2039This repository provides an example of how to deploy a LangChain application with BentoML. BentoML is a framework that enables the containerization of machine learning applications as standard OCI images. BentoML also allows for the automatic generation of OpenAPI and", "source": "https://python.langchain.com/docs/guides/deployments/template_repos"} {"id": "cf7ac48a1b0f-3", "text": "learning applications as standard OCI images. BentoML also allows for the automatic generation of OpenAPI and gRPC endpoints. With BentoML, you can integrate models from all popular ML frameworks and deploy them as microservices running on the most optimal hardware and scaling independently.OpenLLM\u00e2\u20ac\u2039OpenLLM is a platform for operating large language models (LLMs) in production. With OpenLLM, you can run inference with any open-source LLM, deploy to the cloud or on-premises, and build powerful AI apps. It supports a wide range of open-source LLMs, offers flexible APIs, and first-class support for LangChain and BentoML.", "source": "https://python.langchain.com/docs/guides/deployments/template_repos"} {"id": "cf7ac48a1b0f-4", "text": "See OpenLLM's integration doc for usage with LangChain.Databutton\u00e2\u20ac\u2039These templates serve as examples of how to build, deploy, and share LangChain applications using Databutton. You can create user interfaces with Streamlit, automate tasks by scheduling Python code, and store files and data in the built-in store. Examples include a Chatbot interface with conversational memory, a Personal search engine, and a starter template for LangChain apps. Deploying and sharing is just one click away.PreviousDeploymentNextLangSmithStreamlitGradio (on Hugging Face)ChainlitBeamVercelFastAPI + VercelKinstaFly.ioDigitalocean App PlatformCI/CD Google Cloud Build + Dockerfile + Serverless Google Cloud RunGoogle Cloud RunSteamShipLangchain-serveBentoMLOpenLLMDatabuttonCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright \u00c2\u00a9 2023 LangChain, Inc.", "source": "https://python.langchain.com/docs/guides/deployments/template_repos"}