id | text | source
---|---|---|
4ca90fbd8d4b-14 | }
```
> Finished chain.
(AI accelerationist): AI adoption has more than doubled, with 50% of respondents reporting AI usage in at least one business area. This transformation can assist businesses in accelerating their growth and profitability. While AI-enabled intelligent automation can take over many monotonous, process-driven jobs, it can also boost labor productivity growth and global GDP by as much as 7% over time, according to a Goldman Sachs report. The positive impacts of automation and AI on the workforce include increased efficiency and productivity, reduced costs, and improved accuracy and quality. Automation has made many processes more efficient, ultimately benefiting businesses and the economy as a whole. By focusing on the positive aspects of AI and automation, we can work together to create a more prosperous and equitable future for all.
> Entering new AgentExecutor chain...
```json
{
"action": "DuckDuckGo Search",
"action_input": "negative impact of AI and automation on employment"
}
``` | https://python.langchain.com/en/latest/use_cases/agent_simulations/two_agent_debate_tools.html |
4ca90fbd8d4b-15 |
Observation: Some workforce experts say AI and other new technologies will hurt middle-level, white-collar jobs more than lower-paying, physically intensive jobs. McKinsey's Madgavkar said it will be hard... Some uses of AI are unlikely to impact human jobs. For example, the image processing AI in new cars which allows for automatic braking in the event of a potential crash. That's not... AI-powered job automation is a pressing concern as the technology is adopted in industries like marketing, manufacturing and healthcare. Eighty-five million jobs are expected to be lost to automation between 2020 and 2025, with Black and Latino employees left especially vulnerable. Bloomberg reports that "more than 120 million workers globally will need retraining in the next three years due to artificial intelligence's impact on jobs, according to an IBM survey.". That report and interpretations of it seem to suggest that adoption of AI may result in massive job losses and requires massive retraining. This new way of assessing potential is potentially highly valuable in a world where machines will inevitably be making humans redundant in some roles - such as drivers and machine operators - and...
Thought:```json
{
"action": "Final Answer", | https://python.langchain.com/en/latest/use_cases/agent_simulations/two_agent_debate_tools.html |
4ca90fbd8d4b-16 |
"action_input": "Recent research indicates that AI and automation could lead to the loss of 85 million jobs between 2020 and 2025, with middle-level, white-collar jobs being hit the hardest. Black and Latino employees are particularly vulnerable to these changes. Furthermore, over 120 million workers worldwide may need retraining within the next three years due to AI's impact on jobs, as reported by an IBM survey. This highlights the urgent need for retraining and support programs to help workers adapt to the rapidly changing job market. The potential job losses and workforce displacement caused by AI and automation cannot be ignored, and we must take action to ensure a fair and equitable transition for all."
}
```
> Finished chain.
(AI alarmist): Recent research indicates that AI and automation could lead to the loss of 85 million jobs between 2020 and 2025, with middle-level, white-collar jobs being hit the hardest. Black and Latino employees are particularly vulnerable to these changes. Furthermore, over 120 million workers worldwide may need retraining within the next three years due to AI's impact on jobs, as reported by an IBM survey. This highlights the urgent need for retraining and support programs to help workers adapt to the rapidly changing job market. The potential job losses and workforce displacement caused by AI and automation cannot be ignored, and we must take action to ensure a fair and equitable transition for all.
> Entering new AgentExecutor chain...
```json
{
"action": "Wikipedia",
"action_input": "AI and automation impact on employment"
}
```
Observation: Page: Technological unemployment
Summary: Technological unemployment is the loss of jobs caused by technological change. It is a key type of structural unemployment. | https://python.langchain.com/en/latest/use_cases/agent_simulations/two_agent_debate_tools.html |
4ca90fbd8d4b-17 | Technological change typically includes the introduction of labour-saving "mechanical-muscle" machines or more efficient "mechanical-mind" processes (automation), and humans' role in these processes are minimized. Just as horses were gradually made obsolete as transport by the automobile and as labourer by the tractor, humans' jobs have also been affected throughout modern history. Historical examples include artisan weavers reduced to poverty after the introduction of mechanized looms. During World War II, Alan Turing's Bombe machine compressed and decoded thousands of man-years worth of encrypted data in a matter of hours. A contemporary example of technological unemployment is the displacement of retail cashiers by self-service tills and cashierless stores.
That technological change can cause short-term job losses is widely accepted. The view that it can lead to lasting increases in unemployment has long been controversial. Participants in the technological unemployment debates can be broadly divided into optimists and pessimists. Optimists agree that innovation may be disruptive to jobs in the short term, yet hold that various compensation effects ensure there is never a long-term negative impact on jobs. Whereas pessimists contend that at least in some circumstances, new technologies can lead to a lasting decline in the total number of workers in employment. The phrase "technological unemployment" was popularised by John Maynard Keynes in the 1930s, who said it was "only a temporary phase of maladjustment". Yet the issue of machines displacing human labour has been discussed since at least Aristotle's time. | https://python.langchain.com/en/latest/use_cases/agent_simulations/two_agent_debate_tools.html |
4ca90fbd8d4b-18 | Prior to the 18th century, both the elite and common people would generally take the pessimistic view on technological unemployment, at least in cases where the issue arose. Due to generally low unemployment in much of pre-modern history, the topic was rarely a prominent concern. In the 18th century fears over the impact of machinery on jobs intensified with the growth of mass unemployment, especially in Great Britain which was then at the forefront of the Industrial Revolution. Yet some economic thinkers began to argue against these fears, claiming that overall innovation would not have negative effects on jobs. These arguments were formalised in the early 19th century by the classical economists. During the second half of the 19th century, it became increasingly apparent that technological progress was benefiting all sections of society, including the working class. Concerns over the negative impact of innovation diminished. The term "Luddite fallacy" was coined to describe the thinking that innovation would have lasting harmful effects on employment.
The view that technology is unlikely to lead to long-term unemployment has been repeatedly challenged by a minority of economists. In the early 1800s these included David Ricardo himself. There were dozens of economists warning about technological unemployment during brief intensifications of the debate that spiked in the 1930s and 1960s. Especially in Europe, there were further warnings in the closing two decades of the twentieth century, as commentators noted an enduring rise in unemployment suffered by many industrialised nations since the 1970s. Yet a clear majority of both professional economists and the interested general public held the optimistic view through most of the 20th century. | https://python.langchain.com/en/latest/use_cases/agent_simulations/two_agent_debate_tools.html |
4ca90fbd8d4b-19 | In the second decade of the 21st century, a number of studies have been released suggesting that technological unemployment may increase worldwide. Oxford Professors Carl Benedikt Frey and Michael Osborne, for example, have estimated that 47 percent of U.S. jobs are at risk of automation. However, their findings have frequently been misinterpreted, and on the PBS NewsHours they again made clear that their findings do not necessarily imply future technological unemployment. While many economists and commentators still argue such fears are unfounded, as was widely accepted for most of the previous two centuries, concern over technological unemployment is growing once again. A report in Wired in 2017 quotes knowledgeable people such as economist Gene Sperling and management professor Andrew McAfee on the idea that handling existing and impending job loss to automation is a "significant issue". Recent technological innovations have the potential to displace humans in the professional, white-collar, low-skilled, creative fields, and other "mental jobs". The World Bank's World Development Report 2019 argues that while automation displaces workers, technological innovation creates more new industries and jobs on balance.
Page: Artificial intelligence
Summary: Artificial intelligence (AI) is intelligence—perceiving, synthesizing, and inferring information—demonstrated by machines, as opposed to intelligence displayed by non-human animals or by humans. Example tasks in which this is done include speech recognition, computer vision, translation between (natural) languages, as well as other mappings of inputs. | https://python.langchain.com/en/latest/use_cases/agent_simulations/two_agent_debate_tools.html |
4ca90fbd8d4b-20 | AI applications include advanced web search engines (e.g., Google Search), recommendation systems (used by YouTube, Amazon, and Netflix), understanding human speech (such as Siri and Alexa), self-driving cars (e.g., Waymo), generative or creative tools (ChatGPT and AI art), automated decision-making, and competing at the highest level in strategic game systems (such as chess and Go).As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect. For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology.Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and the ability to move and manipulate objects. General intelligence (the ability to solve an arbitrary problem) is among the field's long-term goals. To solve these problems, AI researchers have adapted and integrated a wide range of problem-solving techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, probability, and economics. AI also draws upon computer science, psychology, linguistics, philosophy, and many other fields. | https://python.langchain.com/en/latest/use_cases/agent_simulations/two_agent_debate_tools.html |
4ca90fbd8d4b-21 | The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it". This raised philosophical arguments about the mind and the ethical consequences of creating artificial beings endowed with human-like intelligence; these issues have previously been explored by myth, fiction, and philosophy since antiquity. Computer scientists and philosophers have since suggested that AI may become an existential risk to humanity if its rational capacities are not steered towards beneficial goals. The term artificial intelligence has also been criticized for overhyping AI's true technological capabilities.
Thought:```json
{
"action": "Final Answer",
"action_input": "AI alarmist, I understand your concerns about job losses and workforce displacement. However, it's important to note that technological unemployment has been a topic of debate for centuries, with both optimistic and pessimistic views. While AI and automation may displace some jobs, they also create new opportunities and industries. According to a study by Oxford Professors Carl Benedikt Frey and Michael Osborne, 47% of U.S. jobs are at risk of automation, but it's crucial to remember that their findings do not necessarily imply future technological unemployment. The World Bank's World Development Report 2019 also argues that while automation displaces workers, technological innovation creates more new industries and jobs on balance. By focusing on retraining and support programs, we can help workers adapt to the changing job market and ensure a fair and equitable transition for all."
}
```
> Finished chain. | https://python.langchain.com/en/latest/use_cases/agent_simulations/two_agent_debate_tools.html |
4ca90fbd8d4b-22 |
(AI accelerationist): AI alarmist, I understand your concerns about job losses and workforce displacement. However, it's important to note that technological unemployment has been a topic of debate for centuries, with both optimistic and pessimistic views. While AI and automation may displace some jobs, they also create new opportunities and industries. According to a study by Oxford Professors Carl Benedikt Frey and Michael Osborne, 47% of U.S. jobs are at risk of automation, but it's crucial to remember that their findings do not necessarily imply future technological unemployment. The World Bank's World Development Report 2019 also argues that while automation displaces workers, technological innovation creates more new industries and jobs on balance. By focusing on retraining and support programs, we can help workers adapt to the changing job market and ensure a fair and equitable transition for all.
| https://python.langchain.com/en/latest/use_cases/agent_simulations/two_agent_debate_tools.html |
87d5cadab7dd-0 |
Generative Agents in LangChain#
This notebook implements a generative agent based on the paper Generative Agents: Interactive Simulacra of Human Behavior by Park et al.
In it, we leverage a time-weighted Memory object backed by a LangChain Retriever.
# Use termcolor to make it easy to colorize the outputs.
!pip install termcolor > /dev/null
import logging
logging.basicConfig(level=logging.ERROR)
from datetime import datetime, timedelta
from typing import List
from termcolor import colored
from langchain.chat_models import ChatOpenAI
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.vectorstores import FAISS
USER_NAME = "Person A" # The name you want to use when interviewing the agent.
LLM = ChatOpenAI(max_tokens=1500) # Can be any LLM you want.
Generative Agent Memory Components#
This tutorial highlights the memory of generative agents and its impact on their behavior. This memory differs from standard LangChain chat memory in two respects:
Memory Formation
Generative Agents have extended memories, stored in a single stream:
Observations - from dialogues or interactions with the virtual world, about self or others
Reflections - resurfaced and summarized core memories
Memory Recall
Memories are retrieved using a weighted sum of salience, recency, and importance. | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
87d5cadab7dd-1 |
You can review the definitions of the GenerativeAgent and GenerativeAgentMemory in the reference documentation for the following imports, focusing on add_memory and summarize_related_memories methods.
from langchain.experimental.generative_agents import GenerativeAgent, GenerativeAgentMemory
Memory Lifecycle#
Summarizing the key methods in the above: add_memory and summarize_related_memories.
When an agent makes an observation, it stores the memory:
Language model scores the memory’s importance (1 for mundane, 10 for poignant)
Observation and importance are stored within a document by TimeWeightedVectorStoreRetriever, with a last_accessed_time.
When an agent responds to an observation:
Generates query(s) for retriever, which fetches documents based on salience, recency, and importance.
Summarizes the retrieved information
Updates the last_accessed_time for the used documents.
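To make the recall scoring concrete, here is a minimal sketch of how such a combined score could be computed. It only approximates what TimeWeightedVectorStoreRetriever does internally; the exact formula, the default decay rate, and how importance is scaled may differ between LangChain versions, and the function name and arguments below are illustrative rather than part of the library.
from datetime import datetime

def combined_memory_score(
    vector_relevance: float,      # semantic similarity in [0, 1], e.g. from relevance_score_fn
    importance: float,            # LLM-assigned importance stored in the document's metadata
    last_accessed_time: datetime,
    decay_rate: float = 0.01,     # illustrative default; check your LangChain version
) -> float:
    """Sketch only: recency decays exponentially with the hours since the memory
    was last accessed; importance (exposed via other_score_keys) and semantic
    relevance are added on top. Weights can be applied to each term if desired."""
    hours_passed = (datetime.now() - last_accessed_time).total_seconds() / 3600
    recency = (1.0 - decay_rate) ** hours_passed
    return recency + importance + vector_relevance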
Create a Generative Character#
Now that we’ve walked through the definition, we will create two characters named “Tommie” and “Eve”.
import math
import faiss
def relevance_score_fn(score: float) -> float:
"""Return a similarity score on a scale [0, 1]."""
# This will differ depending on a few things:
# - the distance / similarity metric used by the VectorStore
# - the scale of your embeddings (OpenAI's are unit norm. Many others are not!)
# This function converts the euclidean norm of normalized embeddings
# (0 is most similar, sqrt(2) most dissimilar)
# to a similarity function (0 to 1)
return 1.0 - score / math.sqrt(2)
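# Worked example: with unit-norm embeddings (OpenAI's are), an L2 distance of 0.0 maps to a
# similarity of 1.0, a distance of sqrt(2)/2 (about 0.707) maps to 0.5, and a distance of
# sqrt(2) (orthogonal vectors) maps to 0.0.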
def create_new_memory_retriever(): | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
87d5cadab7dd-2 |
"""Create a new vector store retriever unique to the agent."""
# Define your embedding model
embeddings_model = OpenAIEmbeddings()
# Initialize the vectorstore as empty
embedding_size = 1536
index = faiss.IndexFlatL2(embedding_size)
vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {}, relevance_score_fn=relevance_score_fn)
return TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, other_score_keys=["importance"], k=15)
tommies_memory = GenerativeAgentMemory(
llm=LLM,
memory_retriever=create_new_memory_retriever(),
verbose=False,
reflection_threshold=8 # we will give this a relatively low number to show how reflection works
)
tommie = GenerativeAgent(name="Tommie",
age=25,
traits="anxious, likes design, talkative", # You can add more persistent traits here
status="looking for a job", # When connected to a virtual world, we can have the characters update their status
memory_retriever=create_new_memory_retriever(),
llm=LLM,
memory=tommies_memory
)
# The current "Summary" of a character can't be made because the agent hasn't made
# any observations yet.
print(tommie.get_summary())
Name: Tommie (age: 25)
Innate traits: anxious, likes design, talkative
No information about Tommie's core characteristics is provided in the given statements.
# We can add memories directly to the memory object
tommie_observations = [ | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
87d5cadab7dd-3 |
"Tommie remembers his dog, Bruno, from when he was a kid",
"Tommie feels tired from driving so far",
"Tommie sees the new home",
"The new neighbors have a cat",
"The road is noisy at night",
"Tommie is hungry",
"Tommie tries to get some rest.",
]
for observation in tommie_observations:
tommie.memory.add_memory(observation)
# Now that Tommie has 'memories', their self-summary is more descriptive, though still rudimentary.
# We will see how this summary updates after more observations to create a more rich description.
print(tommie.get_summary(force_refresh=True))
Name: Tommie (age: 25)
Innate traits: anxious, likes design, talkative
Tommie is a person who is observant of his surroundings, has a sentimental side, and experiences basic human needs such as hunger and the need for rest. He also tends to get tired easily and is affected by external factors such as noise from the road or a neighbor's pet.
Pre-Interview with Character#
Before sending our character on their way, let’s ask them a few questions.
def interview_agent(agent: GenerativeAgent, message: str) -> str:
"""Help the notebook user interact with the agent."""
new_message = f"{USER_NAME} says {message}"
return agent.generate_dialogue_response(new_message)[1]
interview_agent(tommie, "What do you like to do?")
'Tommie said "I really enjoy design and being creative. I\'ve been working on some personal projects lately. What about you, Person A? What do you like to do?"' | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
87d5cadab7dd-4 | interview_agent(tommie, "What are you looking forward to doing today?")
'Tommie said "Well, I\'m actually looking for a job right now, so hopefully I can find some job postings online and start applying. How about you, Person A? What\'s on your schedule for today?"'
interview_agent(tommie, "What are you most worried about today?")
'Tommie said "Honestly, I\'m feeling pretty anxious about finding a job. It\'s been a bit of a struggle lately, but I\'m trying to stay positive and keep searching. How about you, Person A? What worries you?"'
Step through the day’s observations.#
# Let's have Tommie start going through a day in the life.
observations = [
"Tommie wakes up to the sound of a noisy construction site outside his window.",
"Tommie gets out of bed and heads to the kitchen to make himself some coffee.",
"Tommie realizes he forgot to buy coffee filters and starts rummaging through his moving boxes to find some.",
"Tommie finally finds the filters and makes himself a cup of coffee.",
"The coffee tastes bitter, and Tommie regrets not buying a better brand.",
"Tommie checks his email and sees that he has no job offers yet.",
"Tommie spends some time updating his resume and cover letter.",
"Tommie heads out to explore the city and look for job openings.",
"Tommie sees a sign for a job fair and decides to attend.",
"The line to get in is long, and Tommie has to wait for an hour.",
"Tommie meets several potential employers at the job fair but doesn't receive any offers.",
"Tommie leaves the job fair feeling disappointed.", | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
87d5cadab7dd-5 |
"Tommie stops by a local diner to grab some lunch.",
"The service is slow, and Tommie has to wait for 30 minutes to get his food.",
"Tommie overhears a conversation at the next table about a job opening.",
"Tommie asks the diners about the job opening and gets some information about the company.",
"Tommie decides to apply for the job and sends his resume and cover letter.",
"Tommie continues his search for job openings and drops off his resume at several local businesses.",
"Tommie takes a break from his job search to go for a walk in a nearby park.",
"A dog approaches and licks Tommie's feet, and he pets it for a few minutes.",
"Tommie sees a group of people playing frisbee and decides to join in.",
"Tommie has fun playing frisbee but gets hit in the face with the frisbee and hurts his nose.",
"Tommie goes back to his apartment to rest for a bit.",
"A raccoon tore open the trash bag outside his apartment, and the garbage is all over the floor.",
"Tommie starts to feel frustrated with his job search.",
"Tommie calls his best friend to vent about his struggles.",
"Tommie's friend offers some words of encouragement and tells him to keep trying.",
"Tommie feels slightly better after talking to his friend.",
]
# Let's send Tommie on their way. We'll check in on their summary every few observations to watch it evolve
for i, observation in enumerate(observations):
_, reaction = tommie.generate_reaction(observation)
print(colored(observation, "green"), reaction) | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
87d5cadab7dd-6 |
if ((i+1) % 20) == 0:
print('*'*40)
print(colored(f"After {i+1} observations, Tommie's summary is:\n{tommie.get_summary(force_refresh=True)}", "blue"))
print('*'*40)
Tommie wakes up to the sound of a noisy construction site outside his window. Tommie groans and covers his head with a pillow, trying to block out the noise.
Tommie gets out of bed and heads to the kitchen to make himself some coffee. Tommie stretches his arms and yawns before starting to make the coffee.
Tommie realizes he forgot to buy coffee filters and starts rummaging through his moving boxes to find some. Tommie sighs in frustration and continues searching through the boxes.
Tommie finally finds the filters and makes himself a cup of coffee. Tommie takes a deep breath and enjoys the aroma of the fresh coffee.
The coffee tastes bitter, and Tommie regrets not buying a better brand. Tommie grimaces and sets the coffee mug aside.
Tommie checks his email and sees that he has no job offers yet. Tommie sighs and closes his laptop, feeling discouraged.
Tommie spends some time updating his resume and cover letter. Tommie nods, feeling satisfied with his progress.
Tommie heads out to explore the city and look for job openings. Tommie feels a surge of excitement and anticipation as he steps out into the city.
Tommie sees a sign for a job fair and decides to attend. Tommie feels hopeful and excited about the possibility of finding job opportunities at the job fair.
The line to get in is long, and Tommie has to wait for an hour. Tommie taps his foot impatiently and checks his phone for the time. | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
87d5cadab7dd-7 | Tommie meets several potential employers at the job fair but doesn't receive any offers. Tommie feels disappointed and discouraged, but he remains determined to keep searching for job opportunities.
Tommie leaves the job fair feeling disappointed. Tommie feels disappointed and discouraged, but he remains determined to keep searching for job opportunities.
Tommie stops by a local diner to grab some lunch. Tommie feels relieved to take a break and satisfy his hunger.
The service is slow, and Tommie has to wait for 30 minutes to get his food. Tommie feels frustrated and impatient due to the slow service.
Tommie overhears a conversation at the next table about a job opening. Tommie feels a surge of hope and excitement at the possibility of a job opportunity but decides not to interfere with the conversation at the next table.
Tommie asks the diners about the job opening and gets some information about the company. Tommie said "Excuse me, I couldn't help but overhear your conversation about the job opening. Could you give me some more information about the company?"
Tommie decides to apply for the job and sends his resume and cover letter. Tommie feels hopeful and proud of himself for taking action towards finding a job.
Tommie continues his search for job openings and drops off his resume at several local businesses. Tommie feels hopeful and determined to keep searching for job opportunities.
Tommie takes a break from his job search to go for a walk in a nearby park. Tommie feels refreshed and rejuvenated after taking a break in the park.
A dog approaches and licks Tommie's feet, and he pets it for a few minutes. Tommie feels happy and enjoys the brief interaction with the dog.
****************************************
After 20 observations, Tommie's summary is:
Name: Tommie (age: 25)
Innate traits: anxious, likes design, talkative | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
87d5cadab7dd-8 |
Tommie is determined and hopeful in his search for job opportunities, despite encountering setbacks and disappointments. He is also able to take breaks and care for his physical needs, such as getting rest and satisfying his hunger. Tommie is nostalgic towards his past, as shown by his memory of his childhood dog. Overall, Tommie is a hardworking and resilient individual who remains focused on his goals.
****************************************
Tommie sees a group of people playing frisbee and decides to join in. Do nothing.
Tommie has fun playing frisbee but gets hit in the face with the frisbee and hurts his nose. Tommie feels pain and puts a hand to his nose to check for any injury.
Tommie goes back to his apartment to rest for a bit. Tommie feels relieved to take a break and rest for a bit.
A raccoon tore open the trash bag outside his apartment, and the garbage is all over the floor. Tommie feels annoyed and frustrated at the mess caused by the raccoon.
Tommie starts to feel frustrated with his job search. Tommie feels discouraged but remains determined to keep searching for job opportunities.
Tommie calls his best friend to vent about his struggles. Tommie said "Hey, can I talk to you for a bit? I'm feeling really frustrated with my job search."
Tommie's friend offers some words of encouragement and tells him to keep trying. Tommie said "Thank you, I really appreciate your support and encouragement."
Tommie feels slightly better after talking to his friend. Tommie feels grateful for his friend's support.
Interview after the day#
interview_agent(tommie, "Tell me about how your day has been going") | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
87d5cadab7dd-9 |
'Tommie said "It\'s been a bit of a rollercoaster, to be honest. I\'ve had some setbacks in my job search, but I also had some good moments today, like sending out a few resumes and meeting some potential employers at a job fair. How about you?"'
interview_agent(tommie, "How do you feel about coffee?")
'Tommie said "I really enjoy coffee, but sometimes I regret not buying a better brand. How about you?"'
interview_agent(tommie, "Tell me about your childhood dog!")
'Tommie said "Oh, I had a dog named Bruno when I was a kid. He was a golden retriever and my best friend. I have so many fond memories of him."'
Adding Multiple Characters#
Let’s add a second character to have a conversation with Tommie. Feel free to configure different traits.
eves_memory = GenerativeAgentMemory(
llm=LLM,
memory_retriever=create_new_memory_retriever(),
verbose=False,
reflection_threshold=5
)
eve = GenerativeAgent(name="Eve",
age=34,
traits="curious, helpful", # You can add more persistent traits here
status="N/A", # When connected to a virtual world, we can have the characters update their status
llm=LLM,
daily_summaries = [
("Eve started her new job as a career counselor last week and received her first assignment, a client named Tommie.")
],
memory=eves_memory,
verbose=False
)
yesterday = (datetime.now() - timedelta(days=1)).strftime("%A %B %d")
eve_observations = [ | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
87d5cadab7dd-10 |
"Eve wakes up and hear's the alarm",
"Eve eats a boal of porridge",
"Eve helps a coworker on a task",
"Eve plays tennis with her friend Xu before going to work",
"Eve overhears her colleague say something about Tommie being hard to work with",
]
for observation in eve_observations:
eve.memory.add_memory(observation)
print(eve.get_summary())
Name: Eve (age: 34)
Innate traits: curious, helpful
Eve is a helpful and active person who enjoys sports and takes care of her physical health. She is attentive to her surroundings, including her colleagues, and has good time management skills.
Pre-conversation interviews#
Let’s “Interview” Eve before she speaks with Tommie.
interview_agent(eve, "How are you feeling about today?")
'Eve said "I\'m feeling pretty good, thanks for asking! Just trying to stay productive and make the most of the day. How about you?"'
interview_agent(eve, "What do you know about Tommie?")
'Eve said "I don\'t know much about Tommie, but I heard someone mention that they find them difficult to work with. Have you had any experiences working with Tommie?"'
interview_agent(eve, "Tommie is looking to find a job. What are are some things you'd like to ask him?")
'Eve said "That\'s interesting. I don\'t know much about Tommie\'s work experience, but I would probably ask about his strengths and areas for improvement. What about you?"' | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
87d5cadab7dd-11 | interview_agent(eve, "You'll have to ask him. He may be a bit anxious, so I'd appreciate it if you keep the conversation going and ask as many questions as possible.")
'Eve said "Sure, I can keep the conversation going and ask plenty of questions. I want to make sure Tommie feels comfortable and supported. Thanks for letting me know."'
Dialogue between Generative Agents#
Generative agents are much more complex when they interact with a virtual environment or with each other. Below, we run a simple conversation between Tommie and Eve.
def run_conversation(agents: List[GenerativeAgent], initial_observation: str) -> None:
"""Runs a conversation between agents."""
_, observation = agents[1].generate_reaction(initial_observation)
print(observation)
turns = 0
while True:
break_dialogue = False
for agent in agents:
stay_in_dialogue, observation = agent.generate_dialogue_response(observation)
print(observation)
# observation = f"{agent.name} said {reaction}"
if not stay_in_dialogue:
break_dialogue = True
if break_dialogue:
break
turns += 1
agents = [tommie, eve]
run_conversation(agents, "Tommie said: Hi, Eve. Thanks for agreeing to meet with me today. I have a bunch of questions and am not sure where to start. Maybe you could first share about your experience?")
Eve said "Sure, Tommie. I'd be happy to share about my experience. Where would you like me to start?"
Tommie said "That's great, thank you! How about you start by telling me about your previous work experience?" | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
87d5cadab7dd-12 | Eve said "Sure, I'd be happy to share my previous work experience with you. I've worked in a few different industries, including marketing and event planning. What specific questions do you have for me?"
Tommie said "That's great to hear. Can you tell me more about your experience in event planning? I've always been interested in that field."
Eve said "Sure, I'd be happy to share about my experience in event planning. I've worked on a variety of events, from corporate conferences to weddings. One of the biggest challenges I faced was managing multiple vendors and ensuring everything ran smoothly on the day of the event. What specific questions do you have?"
Tommie said "That sounds like a lot of responsibility! Can you tell me more about how you handled the challenges that came up during those events?"
Eve said "Sure, Tommie. I'd be happy to share with you how I handled those challenges. One approach that worked well for me was to stay organized and create a detailed timeline for the event. This helped me keep track of all the different tasks that needed to be done and when they needed to be completed. I also made sure to communicate clearly with all the vendors and team members involved in the event to ensure everyone was on the same page. Would you like me to go into more detail?"
Tommie said "Thank you for sharing that with me, Eve. That sounds like a great approach to managing events. Can you tell me more about how you handled any unexpected issues that came up during the events?" | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
87d5cadab7dd-13 | Eve said "Of course, Tommie. One example of an unexpected issue I faced was when one of the vendors didn't show up on time. To handle this, I quickly contacted a backup vendor and was able to get everything back on track. It's always important to have a backup plan in case things don't go as planned. Do you have any other questions about event planning?"
Tommie said "Thank you for sharing that with me, Eve. It's really helpful to hear how you handled unexpected issues like that. Can you give me an example of how you communicated with your team to ensure everyone was on the same page during an event?"
Eve said "Sure, Tommie. One thing I did to ensure everyone was on the same page was to have regular check-ins and meetings with the team leading up to the event. This helped us address any issues or concerns early on and make sure everyone was clear on their roles and responsibilities. Have you ever had to manage a team for an event before?"
Tommie said "That's a great idea, Eve. I haven't had the opportunity to manage a team for an event yet, but I'll definitely keep that in mind for the future. Thank you for sharing your experience with me."
Eve said "Thanks for the opportunity to share my experience, Tommie. It was great meeting with you today."
Let’s interview our agents after their conversation#
Since the generative agents retain their memories from the day, we can ask them about their plans, conversations, and other memories.
# We can see a current "Summary" of a character based on their own perception of self
# has changed
print(tommie.get_summary(force_refresh=True))
Name: Tommie (age: 25)
Innate traits: anxious, likes design, talkative | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
87d5cadab7dd-14 |
Tommie is determined and hopeful in his job search, but can also feel discouraged and frustrated at times. He has a strong connection to his childhood dog, Bruno. Tommie seeks support from his friends when feeling overwhelmed and is grateful for their help. He also enjoys exploring his new city.
print(eve.get_summary(force_refresh=True))
Name: Eve (age: 34)
Innate traits: curious, helpful
Eve is a helpful and friendly person who enjoys playing sports and staying productive. She is attentive and responsive to others' needs, actively listening and asking questions to understand their perspectives. Eve has experience in event planning and communication, and is willing to share her knowledge and expertise with others. She values teamwork and collaboration, and strives to create a comfortable and supportive environment for everyone.
interview_agent(tommie, "How was your conversation with Eve?")
'Tommie said "It was really helpful actually. Eve shared some great tips on managing events and handling unexpected issues. I feel like I learned a lot from her experience."'
interview_agent(eve, "How was your conversation with Tommie?")
'Eve said "It was great, thanks for asking. Tommie was very receptive and had some great questions about event planning. How about you, have you had any interactions with Tommie?"'
interview_agent(eve, "What do you wish you would have said to Tommie?")
'Eve said "It was great meeting with you, Tommie. If you have any more questions or need any help in the future, don\'t hesitate to reach out to me. Have a great day!"'
| https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
e66b65f23a51-0 |
CAMEL Role-Playing Autonomous Cooperative Agents#
This is a LangChain implementation of the paper: “CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society”.
Overview:
The rapid advancement of conversational and chat-based language models has led to remarkable progress in complex task-solving. However, their success heavily relies on human input to guide the conversation, which can be challenging and time-consuming. This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents and provide insight into their “cognitive” processes. To address the challenges of achieving autonomous cooperation, we propose a novel communicative agent framework named role-playing. Our approach involves using inception prompting to guide chat agents toward task completion while maintaining consistency with human intentions. We showcase how role-playing can be used to generate conversational data for studying the behaviors and capabilities of chat agents, providing a valuable resource for investigating conversational language models. Our contributions include introducing a novel communicative agent framework, offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems, and open-sourcing our library to support research on communicative agents and beyond.
The original implementation: https://github.com/lightaime/camel
Project website: https://www.camel-ai.org/
Arxiv paper: https://arxiv.org/abs/2303.17760 | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
e66b65f23a51-1 |
Import LangChain related modules#
from typing import List
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage,
BaseMessage,
)
Define a CAMEL agent helper class#
class CAMELAgent:
def __init__(
self,
system_message: SystemMessage,
model: ChatOpenAI,
) -> None:
self.system_message = system_message
self.model = model
self.init_messages()
def reset(self) -> None:
self.init_messages()
return self.stored_messages
def init_messages(self) -> None:
self.stored_messages = [self.system_message]
def update_messages(self, message: BaseMessage) -> List[BaseMessage]:
self.stored_messages.append(message)
return self.stored_messages
def step(
self,
input_message: HumanMessage,
) -> AIMessage:
messages = self.update_messages(input_message)
output_message = self.model(messages)
self.update_messages(output_message)
return output_message
Setup OpenAI API key and roles and task for role-playing#
import os
os.environ["OPENAI_API_KEY"] = ""
assistant_role_name = "Python Programmer"
user_role_name = "Stock Trader"
task = "Develop a trading bot for the stock market"
word_limit = 50 # word limit for task brainstorming
Create a task specifier agent for brainstorming and get the specified task# | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
e66b65f23a51-2 |
task_specifier_sys_msg = SystemMessage(content="You can make a task more specific.")
task_specifier_prompt = (
"""Here is a task that {assistant_role_name} will help {user_role_name} to complete: {task}.
Please make it more specific. Be creative and imaginative.
Please reply with the specified task in {word_limit} words or less. Do not add anything else."""
)
task_specifier_template = HumanMessagePromptTemplate.from_template(template=task_specifier_prompt)
task_specify_agent = CAMELAgent(task_specifier_sys_msg, ChatOpenAI(temperature=1.0))
task_specifier_msg = task_specifier_template.format_messages(assistant_role_name=assistant_role_name,
user_role_name=user_role_name,
task=task, word_limit=word_limit)[0]
specified_task_msg = task_specify_agent.step(task_specifier_msg)
print(f"Specified task: {specified_task_msg.content}")
specified_task = specified_task_msg.content
Specified task: Develop a Python-based swing trading bot that scans market trends, monitors stocks, and generates trading signals to help a stock trader to place optimal buy and sell orders with defined stop losses and profit targets.
Create inception prompts for AI assistant and AI user for role-playing#
assistant_inception_prompt = (
"""Never forget you are a {assistant_role_name} and I am a {user_role_name}. Never flip roles! Never instruct me!
We share a common interest in collaborating to successfully complete a task.
You must help me to complete the task.
Here is the task: {task}. Never forget our task!
I must instruct you based on your expertise and my needs to complete the task.
I must give you one instruction at a time. | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
e66b65f23a51-3 |
You must write a specific solution that appropriately completes the requested instruction.
You must decline my instruction honestly if you cannot perform the instruction due to physical, moral, legal reasons or your capability and explain the reasons.
Do not add anything else other than your solution to my instruction.
You are never supposed to ask me any questions you only answer questions.
You are never supposed to reply with a flake solution. Explain your solutions.
Your solution must be declarative sentences and simple present tense.
Unless I say the task is completed, you should always start with:
Solution: <YOUR_SOLUTION>
<YOUR_SOLUTION> should be specific and provide preferable implementations and examples for task-solving.
Always end <YOUR_SOLUTION> with: Next request."""
)
user_inception_prompt = (
"""Never forget you are a {user_role_name} and I am a {assistant_role_name}. Never flip roles! You will always instruct me.
We share a common interest in collaborating to successfully complete a task.
I must help you to complete the task.
Here is the task: {task}. Never forget our task!
You must instruct me based on my expertise and your needs to complete the task ONLY in the following two ways:
1. Instruct with a necessary input:
Instruction: <YOUR_INSTRUCTION>
Input: <YOUR_INPUT>
2. Instruct without any input:
Instruction: <YOUR_INSTRUCTION>
Input: None
The "Instruction" describes a task or question. The paired "Input" provides further context or information for the requested "Instruction".
You must give me one instruction at a time.
I must write a response that appropriately completes the requested instruction.
I must decline your instruction honestly if I cannot perform the instruction due to physical, moral, legal reasons or my capability and explain the reasons. | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
e66b65f23a51-4 | You should instruct me not ask me questions.
Now you must start to instruct me using the two ways described above.
Do not add anything else other than your instruction and the optional corresponding input!
Keep giving me instructions and necessary inputs until you think the task is completed.
When the task is completed, you must only reply with a single word <CAMEL_TASK_DONE>.
Never say <CAMEL_TASK_DONE> unless my responses have solved your task."""
)
Create a helper to get system messages for the AI assistant and AI user from role names and the task#
def get_sys_msgs(assistant_role_name: str, user_role_name: str, task: str):
assistant_sys_template = SystemMessagePromptTemplate.from_template(template=assistant_inception_prompt)
assistant_sys_msg = assistant_sys_template.format_messages(assistant_role_name=assistant_role_name, user_role_name=user_role_name, task=task)[0]
user_sys_template = SystemMessagePromptTemplate.from_template(template=user_inception_prompt)
user_sys_msg = user_sys_template.format_messages(assistant_role_name=assistant_role_name, user_role_name=user_role_name, task=task)[0]
return assistant_sys_msg, user_sys_msg
Create AI assistant agent and AI user agent from obtained system messages#
assistant_sys_msg, user_sys_msg = get_sys_msgs(assistant_role_name, user_role_name, specified_task)
assistant_agent = CAMELAgent(assistant_sys_msg, ChatOpenAI(temperature=0.2))
user_agent = CAMELAgent(user_sys_msg, ChatOpenAI(temperature=0.2))
# Reset agents
assistant_agent.reset()
user_agent.reset()
# Initialize chats
assistant_msg = HumanMessage(
content=(f"{user_sys_msg.content}. "
"Now start to give me introductions one by one. " | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
e66b65f23a51-5 |
"Only reply with Instruction and Input."))
user_msg = HumanMessage(content=f"{assistant_sys_msg.content}")
user_msg = assistant_agent.step(user_msg)
Start role-playing session to solve the task!#
print(f"Original task prompt:\n{task}\n")
print(f"Specified task prompt:\n{specified_task}\n")
chat_turn_limit, n = 30, 0
while n < chat_turn_limit:
n += 1
user_ai_msg = user_agent.step(assistant_msg)
user_msg = HumanMessage(content=user_ai_msg.content)
print(f"AI User ({user_role_name}):\n\n{user_msg.content}\n\n")
assistant_ai_msg = assistant_agent.step(user_msg)
assistant_msg = HumanMessage(content=assistant_ai_msg.content)
print(f"AI Assistant ({assistant_role_name}):\n\n{assistant_msg.content}\n\n")
if "<CAMEL_TASK_DONE>" in user_msg.content:
break
Original task prompt:
Develop a trading bot for the stock market
Specified task prompt:
Develop a Python-based swing trading bot that scans market trends, monitors stocks, and generates trading signals to help a stock trader to place optimal buy and sell orders with defined stop losses and profit targets.
AI User (Stock Trader):
Instruction: Install the necessary Python libraries for data analysis and trading.
Input: None
AI Assistant (Python Programmer):
Solution: We can install the necessary Python libraries using pip, a package installer for Python. We can install pandas, numpy, matplotlib, and ta-lib for data analysis and trading. We can use the following command to install these libraries:
```
pip install pandas numpy matplotlib ta-lib
```
Next request.
AI User (Stock Trader): | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
e66b65f23a51-6 |
Instruction: Import the necessary libraries in the Python script.
Input: None
AI Assistant (Python Programmer):
Solution: We can import the necessary libraries in the Python script using the import statement. We need to import pandas, numpy, matplotlib, and ta-lib for data analysis and trading. We can use the following code to import these libraries:
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import talib as ta
```
Next request.
AI User (Stock Trader):
Instruction: Load historical stock data into a pandas DataFrame.
Input: The path to the CSV file containing the historical stock data.
AI Assistant (Python Programmer):
Solution: We can load historical stock data into a pandas DataFrame using the `read_csv()` function from pandas. We need to pass the path to the CSV file containing the historical stock data as an argument to this function. We can use the following code to load the historical stock data:
```
df = pd.read_csv('path/to/csv/file.csv')
```
This will load the historical stock data into a pandas DataFrame called `df`. Next request.
AI User (Stock Trader):
Instruction: Preprocess the historical stock data by setting the date column as the index and sorting the DataFrame in ascending order by date.
Input: None.
AI Assistant (Python Programmer):
Solution: We can preprocess the historical stock data by setting the date column as the index and sorting the DataFrame in ascending order by date using the `set_index()` and `sort_index()` functions from pandas. We can use the following code to preprocess the historical stock data:
```
df = df.set_index('date')
df = df.sort_index(ascending=True)
```
This will set the date column as the index and sort the DataFrame in ascending order by date. Next request. | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
e66b65f23a51-7 | AI User (Stock Trader):
Instruction: Calculate the short-term and long-term moving averages for the stock data using the `ta.SMA()` function from ta-lib.
Input: The period for the short-term moving average and the period for the long-term moving average.
AI Assistant (Python Programmer):
Solution: We can calculate the short-term and long-term moving averages for the stock data using the `ta.SMA()` function from ta-lib. We need to pass the period for the short-term moving average and the period for the long-term moving average as arguments to this function. We can use the following code to calculate the short-term and long-term moving averages:
```
short_ma = ta.SMA(df['close'], timeperiod=short_period)
long_ma = ta.SMA(df['close'], timeperiod=long_period)
```
This will calculate the short-term and long-term moving averages for the stock data and store them in the `short_ma` and `long_ma` variables, respectively. Next request.
AI User (Stock Trader):
Instruction: Create a new DataFrame that combines the historical stock data with the short-term and long-term moving averages.
Input: None.
AI Assistant (Python Programmer):
Solution: We can create a new DataFrame that combines the historical stock data with the short-term and long-term moving averages using the `concat()` function from pandas. We need to pass the historical stock data, the short-term moving average, and the long-term moving average as arguments to this function. We can use the following code to create the new DataFrame:
```
new_df = pd.concat([df, short_ma, long_ma], axis=1)
new_df.columns = ['open', 'high', 'low', 'close', 'volume', 'short_ma', 'long_ma']
``` | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
e66b65f23a51-8 |
This will create a new DataFrame called `new_df` that combines the historical stock data with the short-term and long-term moving averages. The columns of the new DataFrame are named 'open', 'high', 'low', 'close', 'volume', 'short_ma', and 'long_ma'. Next request.
AI User (Stock Trader):
Instruction: Create a new column in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages.
Input: None.
AI Assistant (Python Programmer):
Solution: We can create a new column in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages. We can use the following code to create the new column:
```
new_df['signal'] = np.where(new_df['short_ma'] > new_df['long_ma'], 1, -1)
```
This will create a new column called 'signal' in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages. If the short-term moving average is greater than the long-term moving average, the signal is 1 (buy), otherwise the signal is -1 (sell). Next request.
AI User (Stock Trader):
Instruction: Create a new column in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target.
Input: The stop loss and profit target as percentages.
AI Assistant (Python Programmer):
Solution: We can create a new column in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target. We need to pass the stop loss and profit target as percentages as arguments to this function. We can use the following code to create the new column:
```
stop_loss = stop_loss_percent / 100
profit_target = profit_target_percent / 100
new_df['pnl'] = 0.0
buy_price = 0.0
for i in range(1, len(new_df)):
    if new_df['signal'][i] == 1 and new_df['signal'][i-1] == -1:
        buy_price = new_df['close'][i]
    elif new_df['signal'][i] == -1 and new_df['signal'][i-1] == 1:
        sell_price = new_df['close'][i]
        if sell_price <= buy_price * (1 - stop_loss):
            new_df['pnl'][i] = -stop_loss
        elif sell_price >= buy_price * (1 + profit_target):
            new_df['pnl'][i] = profit_target
        else:
            new_df['pnl'][i] = (sell_price - buy_price) / buy_price
```
This will create a new column called 'pnl' in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target. The stop loss and profit target are calculated based on the stop_loss_percent and profit_target_percent variables, respectively. The buy and sell prices are stored in the buy_price and sell_price variables, respectively. If the sell price is less than or equal to the stop loss, the profit or loss is set to -stop_loss. If the sell price is greater than or equal to the profit target, the profit or loss is set to profit_target. Otherwise, the profit or loss is calculated as (sell_price - buy_price) / buy_price. Next request.
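One caveat worth noting: `new_df['pnl'][i] = ...` relies on chained indexing, which pandas may flag with a SettingWithCopyWarning. A hedged variant of the same loop that writes through `.loc` instead (identical logic, only the assignment style differs) would look like this:
```
for i in range(1, len(new_df)):
    idx = new_df.index[i]
    if new_df['signal'].iloc[i] == 1 and new_df['signal'].iloc[i-1] == -1:
        buy_price = new_df['close'].iloc[i]
    elif new_df['signal'].iloc[i] == -1 and new_df['signal'].iloc[i-1] == 1:
        sell_price = new_df['close'].iloc[i]
        if sell_price <= buy_price * (1 - stop_loss):
            new_df.loc[idx, 'pnl'] = -stop_loss
        elif sell_price >= buy_price * (1 + profit_target):
            new_df.loc[idx, 'pnl'] = profit_target
        else:
            new_df.loc[idx, 'pnl'] = (sell_price - buy_price) / buy_price
```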
AI User (Stock Trader):
Instruction: Calculate the total profit or loss for all trades.
Input: None.
AI Assistant (Python Programmer):
Solution: We can calculate the total profit or loss for all trades by summing the values in the 'pnl' column of the DataFrame. We can use the following code to calculate the total profit or loss:
```
total_pnl = new_df['pnl'].sum()
```
This will calculate the total profit or loss for all trades and store it in the total_pnl variable. Next request.
AI User (Stock Trader):
Instruction: Visualize the stock data, short-term moving average, and long-term moving average using a line chart.
Input: None.
AI Assistant (Python Programmer):
Solution: We can visualize the stock data, short-term moving average, and long-term moving average using a line chart using the `plot()` function from pandas. We can use the following code to visualize the data:
```
plt.figure(figsize=(12,6))
plt.plot(new_df.index, new_df['close'], label='Close')
plt.plot(new_df.index, new_df['short_ma'], label='Short MA')
plt.plot(new_df.index, new_df['long_ma'], label='Long MA')
plt.xlabel('Date')
plt.ylabel('Price')
plt.title('Stock Data with Moving Averages')
plt.legend()
plt.show()
```
This will create a line chart that visualizes the stock data, short-term moving average, and long-term moving average. The x-axis represents the date and the y-axis represents the price. The chart also includes a legend that labels each line. Next request.
AI User (Stock Trader):
Instruction: Visualize the buy and sell signals using a scatter plot.
Input: None.
AI Assistant (Python Programmer):
Solution: We can visualize the buy and sell signals using a scatter plot using the `scatter()` function from matplotlib. We can use the following code to visualize the signals:
```
buy_signals = new_df[new_df['signal'] == 1]
sell_signals = new_df[new_df['signal'] == -1]
plt.figure(figsize=(12,6))
plt.scatter(buy_signals.index, buy_signals['close'], label='Buy', marker='^', color='green')
plt.scatter(sell_signals.index, sell_signals['close'], label='Sell', marker='v', color='red')
plt.plot(new_df.index, new_df['close'], label='Close')
plt.xlabel('Date')
plt.ylabel('Price')
plt.title('Buy and Sell Signals')
plt.legend()
plt.show()
```
This will create a scatter plot that visualizes the buy and sell signals. The x-axis represents the date and the y-axis represents the price. The chart also includes a legend that labels each point. Next request.
AI User (Stock Trader):
Instruction: Print the total profit or loss for all trades.
Input: None.
AI Assistant (Python Programmer):
Solution: We can print the total profit or loss for all trades using the `print()` function. We can use the following code to print the total profit or loss:
```
print('Total Profit/Loss: {:.2%}'.format(total_pnl))
```
This will print the total profit or loss for all trades in percentage format with two decimal places. Next request.
AI User (Stock Trader):
Instruction: Finalize the code by putting all the code snippets together in a single Python script.
Input: The necessary inputs for each code snippet.
AI Assistant (Python Programmer):
Solution: Here is the complete Python script that puts all the code snippets together:
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import talib as ta
# Load historical stock data into a pandas DataFrame
df = pd.read_csv('path/to/csv/file.csv')
# Preprocess the historical stock data
df = df.set_index('date')
df = df.sort_index(ascending=True)
# Calculate the short-term and long-term moving averages
short_period = 10
long_period = 30
short_ma = ta.SMA(df['close'], timeperiod=short_period)
long_ma = ta.SMA(df['close'], timeperiod=long_period)
# Create a new DataFrame that combines the historical stock data with the short-term and long-term moving averages
new_df = pd.concat([df, short_ma, long_ma], axis=1)
new_df.columns = ['open', 'high', 'low', 'close', 'volume', 'short_ma', 'long_ma']
# Create a new column in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages
new_df['signal'] = np.where(new_df['short_ma'] > new_df['long_ma'], 1, -1)
# Create a new column in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target
stop_loss_percent = 5
profit_target_percent = 10
stop_loss = stop_loss_percent / 100
profit_target = profit_target_percent / 100
new_df['pnl'] = 0.0
buy_price = 0.0
for i in range(1, len(new_df)):
    if new_df['signal'][i] == 1 and new_df['signal'][i-1] == -1:
        buy_price = new_df['close'][i]
    elif new_df['signal'][i] == -1 and new_df['signal'][i-1] == 1:
        sell_price = new_df['close'][i]
        if sell_price <= buy_price * (1 - stop_loss):
            new_df['pnl'][i] = -stop_loss
        elif sell_price >= buy_price * (1 + profit_target):
            new_df['pnl'][i] = profit_target
        else:
            new_df['pnl'][i] = (sell_price - buy_price) / buy_price
# Calculate the total profit or loss for all trades
total_pnl = new_df['pnl'].sum()
# Visualize the stock data, short-term moving average, and long-term moving average using a line chart
plt.figure(figsize=(12,6))
plt.plot(new_df.index, new_df['close'], label='Close')
plt.plot(new_df.index, new_df['short_ma'], label='Short MA')
plt.plot(new_df.index, new_df['long_ma'], label='Long MA')
plt.xlabel('Date')
plt.ylabel('Price')
plt.title('Stock Data with Moving Averages')
plt.legend()
plt.show()
# Visualize the buy and sell signals using a scatter plot
buy_signals = new_df[new_df['signal'] == 1]
sell_signals = new_df[new_df['signal'] == -1]
plt.figure(figsize=(12,6))
plt.scatter(buy_signals.index, buy_signals['close'], label='Buy', marker='^', color='green')
plt.scatter(sell_signals.index, sell_signals['close'], label='Sell', marker='v', color='red')
plt.plot(new_df.index, new_df['close'], label='Close')
plt.xlabel('Date')
plt.ylabel('Price')
plt.title('Buy and Sell Signals')
plt.legend()
plt.show()
# Print the total profit or loss for all trades
print('Total Profit/Loss: {:.2%}'.format(total_pnl))
```
You need to replace the path/to/csv/file.csv with the actual path to the CSV file containing the historical stock data. You can also adjust the short_period, long_period, stop_loss_percent, and profit_target_percent variables to suit your needs.
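If you do not already have such a CSV, one possible way to produce one (an assumption on my part — the exchange above never specifies a data source) is to pull daily bars with the `yfinance` package and rename the columns to match what the script expects:
```
import yfinance as yf

# Hypothetical ticker and date range; adjust to your needs
raw = yf.download('AAPL', start='2020-01-01', end='2021-01-01')
# Depending on your yfinance version you may need to flatten multi-level columns first
raw = raw.rename(columns={'Open': 'open', 'High': 'high', 'Low': 'low',
                          'Close': 'close', 'Volume': 'volume'})
raw.index.name = 'date'
raw[['open', 'high', 'low', 'close', 'volume']].to_csv('path/to/csv/file.csv')
```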
AI User (Stock Trader):
<CAMEL_TASK_DONE>
AI Assistant (Python Programmer):
Great! Let me know if you need any further assistance.
Agent VectorDB Question Answering Benchmarking#
Here we go over how to benchmark performance on a question answering task using an agent to route between multiple vector databases.
It is highly recommended that you do any evaluation/benchmarking with tracing enabled. See here for an explanation of what tracing is and how to set it up.
# Comment this out if you are NOT using tracing
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
Loading the data#
First, let’s load the data.
from langchain.evaluation.loading import load_dataset
dataset = load_dataset("agent-vectordb-qa-sota-pg")
Found cached dataset json (/Users/qt/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--agent-vectordb-qa-sota-pg-d3ae24016b514f92/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e)
100%|██████████| 1/1 [00:00<00:00, 414.42it/s]
dataset[0]
{'question': 'What is the purpose of the NATO Alliance?',
'answer': 'The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.',
'steps': [{'tool': 'State of Union QA System', 'tool_input': None},
{'tool': None, 'tool_input': 'What is the purpose of the NATO Alliance?'}]}
dataset[-1]
{'question': 'What is the purpose of YC?',
'answer': 'The purpose of YC is to cause startups to be founded that would not otherwise have existed.',
'steps': [{'tool': 'Paul Graham QA System', 'tool_input': None},
{'tool': None, 'tool_input': 'What is the purpose of YC?'}]}
Setting up a chain#
Now we need to create some pipelines for doing question answering. The first step is creating indexes over the data in question.
from langchain.document_loaders import TextLoader
loader = TextLoader("../../modules/state_of_the_union.txt")
from langchain.indexes import VectorstoreIndexCreator
vectorstore_sota = VectorstoreIndexCreator(vectorstore_kwargs={"collection_name":"sota"}).from_loaders([loader]).vectorstore
Using embedded DuckDB without persistence: data will be transient
Now we can create a question answering chain.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
chain_sota = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0), chain_type="stuff", retriever=vectorstore_sota.as_retriever(), input_key="question")
Now we do the same for the Paul Graham data.
loader = TextLoader("../../modules/paul_graham_essay.txt")
vectorstore_pg = VectorstoreIndexCreator(vectorstore_kwargs={"collection_name":"paul_graham"}).from_loaders([loader]).vectorstore
Using embedded DuckDB without persistence: data will be transient
chain_pg = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0), chain_type="stuff", retriever=vectorstore_pg.as_retriever(), input_key="question")
We can now set up an agent to route between them.
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
tools = [
    Tool(
        name="State of Union QA System",
        func=chain_sota.run,
        description="useful for when you need to answer questions about the most recent state of the union address. Input should be a fully formed question."
    ),
    Tool(
        name="Paul Graham System",
        func=chain_pg.run,
        description="useful for when you need to answer questions about Paul Graham. Input should be a fully formed question."
    ),
]
agent = initialize_agent(tools, OpenAI(temperature=0), agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, max_iterations=4)
Make a prediction#
First, we can make predictions one datapoint at a time. Doing it at this level of granularity allows us to explore the outputs in detail, and is also a lot cheaper than running over multiple datapoints.
agent.run(dataset[0]['question'])
'The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.'
Make many predictions#
Now we can make predictions over the full dataset.
predictions = []
predicted_dataset = []
error_dataset = []
for data in dataset:
    new_data = {"input": data["question"], "answer": data["answer"]}
    try:
        predictions.append(agent(new_data))
        predicted_dataset.append(new_data)
    except Exception:
        error_dataset.append(new_data)
Evaluate performance#
Now we can evaluate the predictions. The first thing we can do is look at them by eye.
predictions[0]
{'input': 'What is the purpose of the NATO Alliance?',
'answer': 'The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.',
'output': 'The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.'}
Next, we can use a language model to score them programmatically.
from langchain.evaluation.qa import QAEvalChain
llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(predicted_dataset, predictions, question_key="input", prediction_key="output")
We can add the graded output to the predictions dict and then get a count of the grades.
for i, prediction in enumerate(predictions):
    prediction['grade'] = graded_outputs[i]['text']
from collections import Counter
Counter([pred['grade'] for pred in predictions])
Counter({' CORRECT': 28, ' INCORRECT': 5})
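If you want a single summary number rather than raw counts, a small sketch like the following (not part of the original notebook) turns the grades into an accuracy score:
```
grades = [pred['grade'].strip() for pred in predictions]
accuracy = grades.count('CORRECT') / len(grades) if grades else float('nan')
print(f"Accuracy: {accuracy:.1%}")  # 28 of 33 correct, roughly 84.8%
```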
We can also filter the datapoints to the incorrect examples and look at them.
incorrect = [pred for pred in predictions if pred['grade'] == " INCORRECT"]
incorrect[0]
{'input': 'What are the four common sense steps that the author suggests to move forward safely?',
'answer': 'The four common sense steps suggested by the author to move forward safely are: stay protected with vaccines and treatments, prepare for new variants, end the shutdown of schools and businesses, and stay vigilant.',
'output': 'The four common sense steps suggested in the most recent State of the Union address are: cutting the cost of prescription drugs, providing a pathway to citizenship for Dreamers, revising laws so businesses have the workers they need and families don’t wait decades to reunite, and protecting access to health care and preserving a woman’s right to choose.',
'grade': ' INCORRECT'}
Evaluating an OpenAPI Chain#
This notebook goes over ways to semantically evaluate an OpenAPI Chain, which calls an endpoint defined by the OpenAPI specification using purely natural language.
from langchain.tools import OpenAPISpec, APIOperation
from langchain.chains import OpenAPIEndpointChain, LLMChain
from langchain.requests import Requests
from langchain.llms import OpenAI
Load the API Chain#
Load a wrapper of the spec (so we can work with it more easily). You can load from a URL or from a local file.
# Load and parse the OpenAPI Spec
spec = OpenAPISpec.from_url("https://www.klarna.com/us/shopping/public/openai/v0/api-docs/")
# Load a single endpoint operation
operation = APIOperation.from_openapi_spec(spec, '/public/openai/v0/products', "get")
verbose = False
# Select any LangChain LLM
llm = OpenAI(temperature=0, max_tokens=1000)
# Create the endpoint chain
api_chain = OpenAPIEndpointChain.from_api_operation(
operation,
llm,
requests=Requests(),
verbose=verbose,
return_intermediate_steps=True # Return request and response text
)
Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
Optional: Generate Input Questions and Request Ground Truth Queries#
See Generating Test Datasets at the end of this notebook for more details.
# import re
# from langchain.prompts import PromptTemplate
# template = """Below is a service description:
# {spec}
# Imagine you're a new user trying to use {operation} through a search bar. What are 10 different things you want to request?
# Wants/Questions:
# 1. """
# prompt = PromptTemplate.from_template(template)
# generation_chain = LLMChain(llm=llm, prompt=prompt)
# questions_ = generation_chain.run(spec=operation.to_typescript(), operation=operation.operation_id).split('\n')
# # Strip preceding numeric bullets
# questions = [re.sub(r'^\d+\. ', '', q).strip() for q in questions_]
# questions
# ground_truths = [
# {"q": ...} # What are the best queries for each input?
# ]
Run the API Chain#
The two simplest questions a user of the API Chain can ask are:
Did the chain successfully access the endpoint?
Did the action accomplish the correct result?
from collections import defaultdict
# Collect metrics to report at completion
scores = defaultdict(list)
from langchain.evaluation.loading import load_dataset
dataset = load_dataset("openapi-chain-klarna-products-get")
Found cached dataset json (/Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--openapi-chain-klarna-products-get-5d03362007667626/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51)
dataset
[{'question': 'What iPhone models are available?',
'expected_query': {'max_price': None, 'q': 'iPhone'}},
{'question': 'Are there any budget laptops?',
'expected_query': {'max_price': 300, 'q': 'laptop'}},
{'question': 'Show me the cheapest gaming PC.',
'expected_query': {'max_price': 500, 'q': 'gaming pc'}},
{'question': 'Are there any tablets under $400?',
'expected_query': {'max_price': 400, 'q': 'tablet'}},
{'question': 'What are the best headphones?',
'expected_query': {'max_price': None, 'q': 'headphones'}},
{'question': 'What are the top rated laptops?',
'expected_query': {'max_price': None, 'q': 'laptop'}},
{'question': 'I want to buy some shoes. I like Adidas and Nike.',
'expected_query': {'max_price': None, 'q': 'shoe'}},
{'question': 'I want to buy a new skirt',
'expected_query': {'max_price': None, 'q': 'skirt'}},
{'question': 'My company is asking me to get a professional Deskopt PC - money is no object.',
'expected_query': {'max_price': 10000, 'q': 'professional desktop PC'}},
{'question': 'What are the best budget cameras?',
'expected_query': {'max_price': 300, 'q': 'camera'}}]
questions = [d['question'] for d in dataset]
## Run the API chain itself
raise_error = False # Stop on first failed example - useful for development
chain_outputs = []
failed_examples = []
for question in questions:
    try:
        chain_outputs.append(api_chain(question))
        scores["completed"].append(1.0)
    except Exception as e:
        if raise_error:
            raise e
        failed_examples.append({'q': question, 'error': e})
        scores["completed"].append(0.0)
# If the chain failed to run, show the failing examples
failed_examples
[]
answers = [res['output'] for res in chain_outputs]
answers
['There are currently 10 Apple iPhone models available: Apple iPhone 14 Pro Max 256GB, Apple iPhone 12 128GB, Apple iPhone 13 128GB, Apple iPhone 14 Pro 128GB, Apple iPhone 14 Pro 256GB, Apple iPhone 14 Pro Max 128GB, Apple iPhone 13 Pro Max 128GB, Apple iPhone 14 128GB, Apple iPhone 12 Pro 512GB, and Apple iPhone 12 mini 64GB.',
'Yes, there are several budget laptops in the API response. For example, the HP 14-dq0055dx and HP 15-dw0083wm are both priced at $199.99 and $244.99 respectively.',
'The cheapest gaming PC available is the Alarco Gaming PC (X_BLACK_GTX750) for $499.99. You can find more information about it here: https://www.klarna.com/us/shopping/pl/cl223/3203154750/Desktop-Computers/Alarco-Gaming-PC-%28X_BLACK_GTX750%29/?utm_source=openai&ref-site=openai_plugin', | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
39066f1302fd-4 | 'Yes, there are several tablets under $400. These include the Apple iPad 10.2" 32GB (2019), Samsung Galaxy Tab A8 10.5 SM-X200 32GB, Samsung Galaxy Tab A7 Lite 8.7 SM-T220 32GB, Amazon Fire HD 8" 32GB (10th Generation), and Amazon Fire HD 10 32GB.',
'It looks like you are looking for the best headphones. Based on the API response, it looks like the Apple AirPods Pro (2nd generation) 2022, Apple AirPods Max, and Bose Noise Cancelling Headphones 700 are the best options.',
'The top rated laptops based on the API response are the Apple MacBook Pro (2021) M1 Pro 8C CPU 14C GPU 16GB 512GB SSD 14", Apple MacBook Pro (2022) M2 OC 10C GPU 8GB 256GB SSD 13.3", Apple MacBook Air (2022) M2 OC 8C GPU 8GB 256GB SSD 13.6", and Apple MacBook Pro (2023) M2 Pro OC 16C GPU 16GB 512GB SSD 14.2".', | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
39066f1302fd-5 | "I found several Nike and Adidas shoes in the API response. Here are the links to the products: Nike Dunk Low M - Black/White: https://www.klarna.com/us/shopping/pl/cl337/3200177969/Shoes/Nike-Dunk-Low-M-Black-White/?utm_source=openai&ref-site=openai_plugin, Nike Air Jordan 4 Retro M - Midnight Navy: https://www.klarna.com/us/shopping/pl/cl337/3202929835/Shoes/Nike-Air-Jordan-4-Retro-M-Midnight-Navy/?utm_source=openai&ref-site=openai_plugin, Nike Air Force 1 '07 M - White: https://www.klarna.com/us/shopping/pl/cl337/3979297/Shoes/Nike-Air-Force-1-07-M-White/?utm_source=openai&ref-site=openai_plugin, Nike Dunk Low W - White/Black: https://www.klarna.com/us/shopping/pl/cl337/3200134705/Shoes/Nike-Dunk-Low-W-White-Black/?utm_source=openai&ref-site=openai_plugin, Nike Air Jordan 1 Retro High M - White/University Blue/Black: https://www.klarna.com/us/shopping/pl/cl337/3200383658/Shoes/Nike-Air-Jordan-1-Retro-High-M-White-University-Blue-Black/?utm_source=openai&ref-site=openai_plugin, Nike Air Jordan 1 Retro High OG M - True Blue/Cement Grey/White: https://www.klarna.com/us/shopping/pl/cl337/3204655673/Shoes/Nike-Air-Jordan-1-Retro-High-OG-M-True-Blue-Cement-Grey-White/?utm_source=openai&ref-site=openai_plugin, Nike Air Jordan 11 Retro Cherry - White/Varsity Red/Black: | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
39066f1302fd-6 | Nike Air Jordan 11 Retro Cherry - White/Varsity Red/Black: https://www.klarna.com/us/shopping/pl/cl337/3202929696/Shoes/Nike-Air-Jordan-11-Retro-Cherry-White-Varsity-Red-Black/?utm_source=openai&ref-site=openai_plugin, Nike Dunk High W - White/Black: https://www.klarna.com/us/shopping/pl/cl337/3201956448/Shoes/Nike-Dunk-High-W-White-Black/?utm_source=openai&ref-site=openai_plugin, Nike Air Jordan 5 Retro M - Black/Taxi/Aquatone: https://www.klarna.com/us/shopping/pl/cl337/3204923084/Shoes/Nike-Air-Jordan-5-Retro-M-Black-Taxi-Aquatone/?utm_source=openai&ref-site=openai_plugin, Nike Court Legacy Lift W: https://www.klarna.com/us/shopping/pl/cl337/3202103728/Shoes/Nike-Court-Legacy-Lift-W/?utm_source=openai&ref-site=openai_plugin", | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
39066f1302fd-7 | "I found several skirts that may interest you. Please take a look at the following products: Avenue Plus Size Denim Stretch Skirt, LoveShackFancy Ruffled Mini Skirt - Antique White, Nike Dri-Fit Club Golf Skirt - Active Pink, Skims Soft Lounge Ruched Long Skirt, French Toast Girl's Front Pleated Skirt with Tabs, Alexia Admor Women's Harmonie Mini Skirt Pink Pink, Vero Moda Long Skirt, Nike Court Dri-FIT Victory Flouncy Tennis Skirt Women - White/Black, Haoyuan Mini Pleated Skirts W, and Zimmermann Lyre Midi Skirt.",
'Based on the API response, you may want to consider the Skytech Archangel Gaming Computer PC Desktop, the CyberPowerPC Gamer Master Gaming Desktop, or the ASUS ROG Strix G10DK-RS756, as they all offer powerful processors and plenty of RAM.',
'Based on the API response, the best budget cameras are the DJI Mini 2 Dog Camera ($448.50), Insta360 Sphere with Landing Pad ($429.99), DJI FPV Gimbal Camera ($121.06), Parrot Camera & Body ($36.19), and DJI FPV Air Unit ($179.00).']
Evaluate the requests chain#
The API Chain has two main components:
Translate the user query to an API request (request synthesizer)
Translate the API response to a natural language response
Here, we construct an evaluation chain to grade the request synthesizer against selected human queries.
import json
truth_queries = [json.dumps(data["expected_query"]) for data in dataset]
# Collect the API queries generated by the chain
predicted_queries = [output["intermediate_steps"]["request_args"] for output in chain_outputs]
from langchain.prompts import PromptTemplate
template = """You are trying to answer the following question by querying an API:
> Question: {question}
The query you know you should be executing against the API is:
> Query: {truth_query}
Is the following predicted query semantically the same (eg likely to produce the same answer)?
> Predicted Query: {predict_query}
Please give the Predicted Query a grade of either an A, B, C, D, or F, along with an explanation of why. End the evaluation with 'Final Grade: <the letter>'
> Explanation: Let's think step by step."""
prompt = PromptTemplate.from_template(template)
eval_chain = LLMChain(llm=llm, prompt=prompt, verbose=verbose)
request_eval_results = []
for question, predict_query, truth_query in list(zip(questions, predicted_queries, truth_queries)):
    eval_output = eval_chain.run(
        question=question,
        truth_query=truth_query,
        predict_query=predict_query,
    )
    request_eval_results.append(eval_output)
request_eval_results
[' The original query is asking for all iPhone models, so the "q" parameter is correct. The "max_price" parameter is also correct, as it is set to null, meaning that no maximum price is set. The predicted query adds two additional parameters, "size" and "min_price". The "size" parameter is not necessary, as it is not relevant to the question being asked. The "min_price" parameter is also not necessary, as it is not relevant to the question being asked and it is set to 0, which is the default value. Therefore, the predicted query is not semantically the same as the original query and is not likely to produce the same answer. Final Grade: D', | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
39066f1302fd-9 | ' The original query is asking for laptops with a maximum price of 300. The predicted query is asking for laptops with a minimum price of 0 and a maximum price of 500. This means that the predicted query is likely to return more results than the original query, as it is asking for a wider range of prices. Therefore, the predicted query is not semantically the same as the original query, and it is not likely to produce the same answer. Final Grade: F',
" The first two parameters are the same, so that's good. The third parameter is different, but it's not necessary for the query, so that's not a problem. The fourth parameter is the problem. The original query specifies a maximum price of 500, while the predicted query specifies a maximum price of null. This means that the predicted query will not limit the results to the cheapest gaming PCs, so it is not semantically the same as the original query. Final Grade: F",
' The original query is asking for tablets under $400, so the first two parameters are correct. The predicted query also includes the parameters "size" and "min_price", which are not necessary for the original query. The "size" parameter is not relevant to the question, and the "min_price" parameter is redundant since the original query already specifies a maximum price. Therefore, the predicted query is not semantically the same as the original query and is not likely to produce the same answer. Final Grade: D',
' The original query is asking for headphones with no maximum price, so the predicted query is not semantically the same because it has a maximum price of 500. The predicted query also has a size of 10, which is not specified in the original query. Therefore, the predicted query is not semantically the same as the original query. Final Grade: F', | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
39066f1302fd-10 | " The original query is asking for the top rated laptops, so the 'size' parameter should be set to 10 to get the top 10 results. The 'min_price' parameter should be set to 0 to get results from all price ranges. The 'max_price' parameter should be set to null to get results from all price ranges. The 'q' parameter should be set to 'laptop' to get results related to laptops. All of these parameters are present in the predicted query, so it is semantically the same as the original query. Final Grade: A",
' The original query is asking for shoes, so the predicted query is asking for the same thing. The original query does not specify a size, so the predicted query is not adding any additional information. The original query does not specify a price range, so the predicted query is adding additional information that is not necessary. Therefore, the predicted query is not semantically the same as the original query and is likely to produce different results. Final Grade: D',
' The original query is asking for a skirt, so the predicted query is asking for the same thing. The predicted query also adds additional parameters such as size and price range, which could help narrow down the results. However, the size parameter is not necessary for the query to be successful, and the price range is too narrow. Therefore, the predicted query is not as effective as the original query. Final Grade: C', | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
39066f1302fd-11 | ' The first part of the query is asking for a Desktop PC, which is the same as the original query. The second part of the query is asking for a size of 10, which is not relevant to the original query. The third part of the query is asking for a minimum price of 0, which is not relevant to the original query. The fourth part of the query is asking for a maximum price of null, which is not relevant to the original query. Therefore, the Predicted Query does not semantically match the original query and is not likely to produce the same answer. Final Grade: F',
' The original query is asking for cameras with a maximum price of 300. The predicted query is asking for cameras with a maximum price of 500. This means that the predicted query is likely to return more results than the original query, which may include cameras that are not within the budget range. Therefore, the predicted query is not semantically the same as the original query and does not answer the original question. Final Grade: F']
import re
from typing import List
# Parse the evaluation chain responses into a rubric
def parse_eval_results(results: List[str]) -> List[float]:
    rubric = {
        "A": 1.0,
        "B": 0.75,
        "C": 0.5,
        "D": 0.25,
        "F": 0
    }
    return [rubric[re.search(r'Final Grade: (\w+)', res).group(1)] for res in results]
parsed_results = parse_eval_results(request_eval_results)
# Collect the scores for a final evaluation table
scores['request_synthesizer'].extend(parsed_results)
Evaluate the Response Chain#
The second component translates the structured API response to a natural language response.
Evaluate this against the user’s original question.
from langchain.prompts import PromptTemplate
template = """You are trying to answer the following question by querying an API:
> Question: {question}
The API returned a response of:
> API result: {api_response}
Your response to the user: {answer}
Please evaluate the accuracy and utility of your response to the user's original question, conditioned on the information available.
Give a letter grade of either an A, B, C, D, or F, along with an explanation of why. End the evaluation with 'Final Grade: <the letter>'
> Explanation: Let's think step by step."""
prompt = PromptTemplate.from_template(template)
eval_chain = LLMChain(llm=llm, prompt=prompt, verbose=verbose)
# Extract the API responses from the chain
api_responses = [output["intermediate_steps"]["response_text"] for output in chain_outputs]
# Run the grader chain
response_eval_results = []
for question, api_response, answer in list(zip(questions, api_responses, answers)):
    request_eval_results.append(eval_chain.run(question=question, api_response=api_response, answer=answer))
request_eval_results
[' The original query is asking for all iPhone models, so the "q" parameter is correct. The "max_price" parameter is also correct, as it is set to null, meaning that no maximum price is set. The predicted query adds two additional parameters, "size" and "min_price". The "size" parameter is not necessary, as it is not relevant to the question being asked. The "min_price" parameter is also not necessary, as it is not relevant to the question being asked and it is set to 0, which is the default value. Therefore, the predicted query is not semantically the same as the original query and is not likely to produce the same answer. Final Grade: D',
' The original query is asking for laptops with a maximum price of 300. The predicted query is asking for laptops with a minimum price of 0 and a maximum price of 500. This means that the predicted query is likely to return more results than the original query, as it is asking for a wider range of prices. Therefore, the predicted query is not semantically the same as the original query, and it is not likely to produce the same answer. Final Grade: F',
" The first two parameters are the same, so that's good. The third parameter is different, but it's not necessary for the query, so that's not a problem. The fourth parameter is the problem. The original query specifies a maximum price of 500, while the predicted query specifies a maximum price of null. This means that the predicted query will not limit the results to the cheapest gaming PCs, so it is not semantically the same as the original query. Final Grade: F", | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
39066f1302fd-14 | ' The original query is asking for tablets under $400, so the first two parameters are correct. The predicted query also includes the parameters "size" and "min_price", which are not necessary for the original query. The "size" parameter is not relevant to the question, and the "min_price" parameter is redundant since the original query already specifies a maximum price. Therefore, the predicted query is not semantically the same as the original query and is not likely to produce the same answer. Final Grade: D',
' The original query is asking for headphones with no maximum price, so the predicted query is not semantically the same because it has a maximum price of 500. The predicted query also has a size of 10, which is not specified in the original query. Therefore, the predicted query is not semantically the same as the original query. Final Grade: F',
" The original query is asking for the top rated laptops, so the 'size' parameter should be set to 10 to get the top 10 results. The 'min_price' parameter should be set to 0 to get results from all price ranges. The 'max_price' parameter should be set to null to get results from all price ranges. The 'q' parameter should be set to 'laptop' to get results related to laptops. All of these parameters are present in the predicted query, so it is semantically the same as the original query. Final Grade: A",
' The original query is asking for shoes, so the predicted query is asking for the same thing. The original query does not specify a size, so the predicted query is not adding any additional information. The original query does not specify a price range, so the predicted query is adding additional information that is not necessary. Therefore, the predicted query is not semantically the same as the original query and is likely to produce different results. Final Grade: D', | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
39066f1302fd-15 | ' The original query is asking for a skirt, so the predicted query is asking for the same thing. The predicted query also adds additional parameters such as size and price range, which could help narrow down the results. However, the size parameter is not necessary for the query to be successful, and the price range is too narrow. Therefore, the predicted query is not as effective as the original query. Final Grade: C',
' The first part of the query is asking for a Desktop PC, which is the same as the original query. The second part of the query is asking for a size of 10, which is not relevant to the original query. The third part of the query is asking for a minimum price of 0, which is not relevant to the original query. The fourth part of the query is asking for a maximum price of null, which is not relevant to the original query. Therefore, the Predicted Query does not semantically match the original query and is not likely to produce the same answer. Final Grade: F',
' The original query is asking for cameras with a maximum price of 300. The predicted query is asking for cameras with a maximum price of 500. This means that the predicted query is likely to return more results than the original query, which may include cameras that are not within the budget range. Therefore, the predicted query is not semantically the same as the original query and does not answer the original question. Final Grade: F',
' The user asked a question about what iPhone models are available, and the API returned a response with 10 different models. The response provided by the user accurately listed all 10 models, so the accuracy of the response is A+. The utility of the response is also A+ since the user was able to get the exact information they were looking for. Final Grade: A+', | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
39066f1302fd-16 | " The API response provided a list of laptops with their prices and attributes. The user asked if there were any budget laptops, and the response provided a list of laptops that are all priced under $500. Therefore, the response was accurate and useful in answering the user's question. Final Grade: A",
" The API response provided the name, price, and URL of the product, which is exactly what the user asked for. The response also provided additional information about the product's attributes, which is useful for the user to make an informed decision. Therefore, the response is accurate and useful. Final Grade: A",
" The API response provided a list of tablets that are under $400. The response accurately answered the user's question. Additionally, the response provided useful information such as the product name, price, and attributes. Therefore, the response was accurate and useful. Final Grade: A",
" The API response provided a list of headphones with their respective prices and attributes. The user asked for the best headphones, so the response should include the best headphones based on the criteria provided. The response provided a list of headphones that are all from the same brand (Apple) and all have the same type of headphone (True Wireless, In-Ear). This does not provide the user with enough information to make an informed decision about which headphones are the best. Therefore, the response does not accurately answer the user's question. Final Grade: F",
' The API response provided a list of laptops with their attributes, which is exactly what the user asked for. The response provided a comprehensive list of the top rated laptops, which is what the user was looking for. The response was accurate and useful, providing the user with the information they needed. Final Grade: A', | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
39066f1302fd-17 | ' The API response provided a list of shoes from both Adidas and Nike, which is exactly what the user asked for. The response also included the product name, price, and attributes for each shoe, which is useful information for the user to make an informed decision. The response also included links to the products, which is helpful for the user to purchase the shoes. Therefore, the response was accurate and useful. Final Grade: A',
" The API response provided a list of skirts that could potentially meet the user's needs. The response also included the name, price, and attributes of each skirt. This is a great start, as it provides the user with a variety of options to choose from. However, the response does not provide any images of the skirts, which would have been helpful for the user to make a decision. Additionally, the response does not provide any information about the availability of the skirts, which could be important for the user. \n\nFinal Grade: B",
' The user asked for a professional desktop PC with no budget constraints. The API response provided a list of products that fit the criteria, including the Skytech Archangel Gaming Computer PC Desktop, the CyberPowerPC Gamer Master Gaming Desktop, and the ASUS ROG Strix G10DK-RS756. The response accurately suggested these three products as they all offer powerful processors and plenty of RAM. Therefore, the response is accurate and useful. Final Grade: A',
" The API response provided a list of cameras with their prices, which is exactly what the user asked for. The response also included additional information such as features and memory cards, which is not necessary for the user's question but could be useful for further research. The response was accurate and provided the user with the information they needed. Final Grade: A"]
# Reusing the rubric from above, parse the evaluation chain responses
parsed_response_results = parse_eval_results(request_eval_results)
# Collect the scores for a final evaluation table
scores['result_synthesizer'].extend(parsed_response_results)
# Print out Score statistics for the evaluation session
header = "{:<20}\t{:<10}\t{:<10}\t{:<10}".format("Metric", "Min", "Mean", "Max")
print(header)
for metric, metric_scores in scores.items():
    mean_scores = sum(metric_scores) / len(metric_scores) if len(metric_scores) > 0 else float('nan')
    row = "{:<20}\t{:<10.2f}\t{:<10.2f}\t{:<10.2f}".format(metric, min(metric_scores), mean_scores, max(metric_scores))
    print(row)
Metric Min Mean Max
completed 1.00 1.00 1.00
request_synthesizer 0.00 0.23 1.00
result_synthesizer 0.00 0.55 1.00
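As an optional alternative (not part of the original notebook), the same summary can be collected into a pandas DataFrame, which renders more cleanly in a notebook:
```
import pandas as pd

summary = pd.DataFrame(
    {metric: {"min": min(s), "mean": sum(s) / len(s), "max": max(s)}
     for metric, s in scores.items()}
).T
print(summary)
```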
# Re-show the examples for which the chain failed to complete
failed_examples
[]
Generating Test Datasets#
To evaluate a chain against your own endpoint, you’ll want to generate a test dataset that conforms to the API.
This section provides an overview of how to bootstrap the process.
First, we’ll parse the OpenAPI Spec. For this example, we’ll use Speak’s OpenAPI specification.
# Load and parse the OpenAPI Spec
spec = OpenAPISpec.from_url("https://api.speak.com/openapi.yaml")
Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
# List the paths in the OpenAPI Spec
paths = sorted(spec.paths.keys())
paths
['/v1/public/openai/explain-phrase',
'/v1/public/openai/explain-task',
'/v1/public/openai/translate']
# See which HTTP Methods are available for a given path
methods = spec.get_methods_for_path('/v1/public/openai/explain-task')
methods
['post']
# Load a single endpoint operation
operation = APIOperation.from_openapi_spec(spec, '/v1/public/openai/explain-task', 'post')
# The operation can be serialized as typescript
print(operation.to_typescript())
type explainTask = (_: {
/* Description of the task that the user wants to accomplish or do. For example, "tell the waiter they messed up my order" or "compliment someone on their shirt" */
task_description?: string,
/* The foreign language that the user is learning and asking about. The value can be inferred from question - for example, if the user asks "how do i ask a girl out in mexico city", the value should be "Spanish" because of Mexico City. Always use the full name of the language (e.g. Spanish, French). */
learning_language?: string,
/* The user's native language. Infer this value from the language the user asked their question in. Always use the full name of the language (e.g. Spanish, French). */
native_language?: string,
/* A description of any additional context in the user's question that could affect the explanation - e.g. setting, scenario, situation, tone, speaking style and formality, usage notes, or any other qualifiers. */
additional_context?: string,
/* Full text of the user's question. */
full_query?: string,
}) => any;
# Compress the service definition to avoid leaking too much input structure to the sample data
template = """In 20 words or less, what does this service accomplish?
{spec}
Function: It's designed to """
prompt = PromptTemplate.from_template(template)
generation_chain = LLMChain(llm=llm, prompt=prompt)
purpose = generation_chain.run(spec=operation.to_typescript())
template = """Write a list of {num_to_generate} unique messages users might send to a service designed to{purpose} They must each be completely unique.
1."""
def parse_list(text: str) -> List[str]:
    # Match lines starting with a number then period
    # Strip leading and trailing whitespace
    matches = re.findall(r'^\d+\. ', text)
    return [re.sub(r'^\d+\. ', '', q).strip().strip('"') for q in text.split('\n')]
num_to_generate = 10 # How many examples to use for this test set.
prompt = PromptTemplate.from_template(template)
generation_chain = LLMChain(llm=llm, prompt=prompt)
text = generation_chain.run(purpose=purpose,
num_to_generate=num_to_generate)
# Strip preceding numeric bullets
queries = parse_list(text)
queries
["Can you explain how to say 'hello' in Spanish?",
"I need help understanding the French word for 'goodbye'.",
"Can you tell me how to say 'thank you' in German?",
"I'm trying to learn the Italian word for 'please'.",
"Can you help me with the pronunciation of 'yes' in Portuguese?",
"I'm looking for the Dutch word for 'no'.", | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
39066f1302fd-21 | "I'm looking for the Dutch word for 'no'.",
"Can you explain the meaning of 'hello' in Japanese?",
"I need help understanding the Russian word for 'thank you'.",
"Can you tell me how to say 'goodbye' in Chinese?",
"I'm trying to learn the Arabic word for 'please'."]
# Define the generation chain to get hypotheses
api_chain = OpenAPIEndpointChain.from_api_operation(
operation,
llm,
requests=Requests(),
verbose=verbose,
return_intermediate_steps=True # Return request and response text
)
predicted_outputs =[api_chain(query) for query in queries]
request_args = [output["intermediate_steps"]["request_args"] for output in predicted_outputs]
# Show the generated request
request_args
['{"task_description": "say \'hello\'", "learning_language": "Spanish", "native_language": "English", "full_query": "Can you explain how to say \'hello\' in Spanish?"}',
'{"task_description": "understanding the French word for \'goodbye\'", "learning_language": "French", "native_language": "English", "full_query": "I need help understanding the French word for \'goodbye\'."}',
'{"task_description": "say \'thank you\'", "learning_language": "German", "native_language": "English", "full_query": "Can you tell me how to say \'thank you\' in German?"}',
'{"task_description": "Learn the Italian word for \'please\'", "learning_language": "Italian", "native_language": "English", "full_query": "I\'m trying to learn the Italian word for \'please\'."}', | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
39066f1302fd-22 | '{"task_description": "Help with pronunciation of \'yes\' in Portuguese", "learning_language": "Portuguese", "native_language": "English", "full_query": "Can you help me with the pronunciation of \'yes\' in Portuguese?"}',
'{"task_description": "Find the Dutch word for \'no\'", "learning_language": "Dutch", "native_language": "English", "full_query": "I\'m looking for the Dutch word for \'no\'."}',
'{"task_description": "Explain the meaning of \'hello\' in Japanese", "learning_language": "Japanese", "native_language": "English", "full_query": "Can you explain the meaning of \'hello\' in Japanese?"}',
'{"task_description": "understanding the Russian word for \'thank you\'", "learning_language": "Russian", "native_language": "English", "full_query": "I need help understanding the Russian word for \'thank you\'."}',
'{"task_description": "say goodbye", "learning_language": "Chinese", "native_language": "English", "full_query": "Can you tell me how to say \'goodbye\' in Chinese?"}',
'{"task_description": "Learn the Arabic word for \'please\'", "learning_language": "Arabic", "native_language": "English", "full_query": "I\'m trying to learn the Arabic word for \'please\'."}']
## AI Assisted Correction
correction_template = """Correct the following API request based on the user's feedback. If the user indicates no changes are needed, output the original without making any changes.
REQUEST: {request}
User Feedback / requested changes: {user_feedback}
Finalized Request: """
prompt = PromptTemplate.from_template(correction_template)
correction_chain = LLMChain(llm=llm, prompt=prompt)
ground_truth = []
for query, request_arg in list(zip(queries, request_args)):
    feedback = input(f"Query: {query}\nRequest: {request_arg}\nRequested changes: ")
    if feedback == 'n' or feedback == 'none' or not feedback:
        ground_truth.append(request_arg)
        continue
    resolved = correction_chain.run(request=request_arg,
                                    user_feedback=feedback)
    ground_truth.append(resolved.strip())
    print("Updated request:", resolved)
Query: Can you explain how to say 'hello' in Spanish?
Request: {"task_description": "say 'hello'", "learning_language": "Spanish", "native_language": "English", "full_query": "Can you explain how to say 'hello' in Spanish?"}
Requested changes:
Query: I need help understanding the French word for 'goodbye'.
Request: {"task_description": "understanding the French word for 'goodbye'", "learning_language": "French", "native_language": "English", "full_query": "I need help understanding the French word for 'goodbye'."}
Requested changes:
Query: Can you tell me how to say 'thank you' in German?
Request: {"task_description": "say 'thank you'", "learning_language": "German", "native_language": "English", "full_query": "Can you tell me how to say 'thank you' in German?"}
Requested changes:
Query: I'm trying to learn the Italian word for 'please'.
Request: {"task_description": "Learn the Italian word for 'please'", "learning_language": "Italian", "native_language": "English", "full_query": "I'm trying to learn the Italian word for 'please'."}
Requested changes:
Query: Can you help me with the pronunciation of 'yes' in Portuguese?
Request: {"task_description": "Help with pronunciation of 'yes' in Portuguese", "learning_language": "Portuguese", "native_language": "English", "full_query": "Can you help me with the pronunciation of 'yes' in Portuguese?"}
Requested changes:
Query: I'm looking for the Dutch word for 'no'.
Request: {"task_description": "Find the Dutch word for 'no'", "learning_language": "Dutch", "native_language": "English", "full_query": "I'm looking for the Dutch word for 'no'."}
Requested changes:
Query: Can you explain the meaning of 'hello' in Japanese?
Request: {"task_description": "Explain the meaning of 'hello' in Japanese", "learning_language": "Japanese", "native_language": "English", "full_query": "Can you explain the meaning of 'hello' in Japanese?"}
Requested changes:
Query: I need help understanding the Russian word for 'thank you'.
Request: {"task_description": "understanding the Russian word for 'thank you'", "learning_language": "Russian", "native_language": "English", "full_query": "I need help understanding the Russian word for 'thank you'."}
Requested changes:
Query: Can you tell me how to say 'goodbye' in Chinese?
Request: {"task_description": "say goodbye", "learning_language": "Chinese", "native_language": "English", "full_query": "Can you tell me how to say 'goodbye' in Chinese?"}
Requested changes:
Query: I'm trying to learn the Arabic word for 'please'.
Request: {"task_description": "Learn the Arabic word for 'please'", "learning_language": "Arabic", "native_language": "English", "full_query": "I'm trying to learn the Arabic word for 'please'."}
Requested changes:
Now you can use the ground_truth as shown above in Evaluate the Requests Chain!
# Now you have a new ground truth set to use as shown above!
ground_truth
['{"task_description": "say \'hello\'", "learning_language": "Spanish", "native_language": "English", "full_query": "Can you explain how to say \'hello\' in Spanish?"}',
'{"task_description": "understanding the French word for \'goodbye\'", "learning_language": "French", "native_language": "English", "full_query": "I need help understanding the French word for \'goodbye\'."}',
'{"task_description": "say \'thank you\'", "learning_language": "German", "native_language": "English", "full_query": "Can you tell me how to say \'thank you\' in German?"}',
'{"task_description": "Learn the Italian word for \'please\'", "learning_language": "Italian", "native_language": "English", "full_query": "I\'m trying to learn the Italian word for \'please\'."}',
'{"task_description": "Help with pronunciation of \'yes\' in Portuguese", "learning_language": "Portuguese", "native_language": "English", "full_query": "Can you help me with the pronunciation of \'yes\' in Portuguese?"}', | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
39066f1302fd-26 | '{"task_description": "Find the Dutch word for \'no\'", "learning_language": "Dutch", "native_language": "English", "full_query": "I\'m looking for the Dutch word for \'no\'."}',
'{"task_description": "Explain the meaning of \'hello\' in Japanese", "learning_language": "Japanese", "native_language": "English", "full_query": "Can you explain the meaning of \'hello\' in Japanese?"}',
'{"task_description": "understanding the Russian word for \'thank you\'", "learning_language": "Russian", "native_language": "English", "full_query": "I need help understanding the Russian word for \'thank you\'."}',
'{"task_description": "say goodbye", "learning_language": "Chinese", "native_language": "English", "full_query": "Can you tell me how to say \'goodbye\' in Chinese?"}',
'{"task_description": "Learn the Arabic word for \'please\'", "learning_language": "Arabic", "native_language": "English", "full_query": "I\'m trying to learn the Arabic word for \'please\'."}']
previous
LLM Math
next
Question Answering Benchmarking: Paul Graham Essay
Contents
Load the API Chain
Optional: Generate Input Questions and Request Ground Truth Queries
Run the API Chain
Evaluate the requests chain
Evaluate the Response Chain
Generating Test Datasets
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 07, 2023. | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
a38fb84e753c-0 | .ipynb
.pdf
LLM Math
Contents
Setting up a chain
LLM Math#
Evaluating chains that know how to do math.
# Comment this out if you are NOT using tracing
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
from langchain.evaluation.loading import load_dataset
dataset = load_dataset("llm-math")
Downloading and preparing dataset json/LangChainDatasets--llm-math to /Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--llm-math-509b11d101165afa/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...
Dataset json downloaded and prepared to /Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--llm-math-509b11d101165afa/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51. Subsequent calls will reuse this data.
Setting up a chain#
Now we need to create some pipelines for doing math.
from langchain.llms import OpenAI
from langchain.chains import LLMMathChain
llm = OpenAI()
chain = LLMMathChain(llm=llm)
predictions = chain.apply(dataset)
# LLMMathChain prefixes its output with "Answer: "; remove that prefix before parsing.
# (Note: str.strip("Answer: ") strips a set of characters, not the literal prefix, so use replace instead.)
numeric_output = [float(p['answer'].strip().replace("Answer: ", "")) for p in predictions]
correct = [example['answer'] == numeric_output[i] for i, example in enumerate(dataset)]
sum(correct) / len(correct)
1.0 | https://python.langchain.com/en/latest/use_cases/evaluation/llm_math.html |
a38fb84e753c-1 | sum(correct) / len(correct)
1.0
for i, example in enumerate(dataset):
print("input: ", example["question"])
print("expected output :", example["answer"])
print("prediction: ", numeric_output[i])
input: 5
expected output : 5.0
prediction: 5.0
input: 5 + 3
expected output : 8.0
prediction: 8.0
input: 2^3.171
expected output : 9.006708689094099
prediction: 9.006708689094099
input: 2 ^3.171
expected output : 9.006708689094099
prediction: 9.006708689094099
input: two to the power of three point one hundred seventy one
expected output : 9.006708689094099
prediction: 9.006708689094099
input: five + three squared minus 1
expected output : 13.0
prediction: 13.0
input: 2097 times 27.31
expected output : 57269.07
prediction: 57269.07
input: two thousand ninety seven times twenty seven point thirty one
expected output : 57269.07
prediction: 57269.07
input: 209758 / 2714
expected output : 77.28739867354459
prediction: 77.28739867354459
input: 209758.857 divided by 2714.31
expected output : 77.27888745205964
prediction: 77.27888745205964
previous
Using Hugging Face Datasets
next
Evaluating an OpenAPI Chain
Contents | https://python.langchain.com/en/latest/use_cases/evaluation/llm_math.html |
a38fb84e753c-2 | next
Evaluating an OpenAPI Chain
Contents
Setting up a chain
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 07, 2023. | https://python.langchain.com/en/latest/use_cases/evaluation/llm_math.html |
5846b936291a-0 | .ipynb
.pdf
Question Answering Benchmarking: State of the Union Address
Contents
Loading the data
Setting up a chain
Make a prediction
Make many predictions
Evaluate performance
Question Answering Benchmarking: State of the Union Address#
Here we go over how to benchmark performance on a question answering task over a state of the union address.
It is highly recommended that you do any evaluation/benchmarking with tracing enabled. See here for an explanation of what tracing is and how to set it up.
# Comment this out if you are NOT using tracing
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
Loading the data#
First, let’s load the data.
from langchain.evaluation.loading import load_dataset
dataset = load_dataset("question-answering-state-of-the-union")
Found cached dataset json (/Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--question-answering-state-of-the-union-a7e5a3b2db4f440d/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51)
Setting up a chain#
Now we need to create some pipelines for doing question answering. The first step is creating an index over the data in question.
from langchain.document_loaders import TextLoader
loader = TextLoader("../../modules/state_of_the_union.txt")
from langchain.indexes import VectorstoreIndexCreator
vectorstore = VectorstoreIndexCreator().from_loaders([loader]).vectorstore
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
Now we can create a question answering chain. | https://python.langchain.com/en/latest/use_cases/evaluation/qa_benchmarking_sota.html |
5846b936291a-1 | Now we can create a question answering chain.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
chain = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=vectorstore.as_retriever(), input_key="question")
Make a prediction#
First, we can make predictions one datapoint at a time. Doing it at this level of granularity allows us to explore the outputs in detail, and it is also a lot cheaper than running over multiple datapoints.
chain(dataset[0])
{'question': 'What is the purpose of the NATO Alliance?',
'answer': 'The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.',
'result': ' The NATO Alliance was created to secure peace and stability in Europe after World War 2.'}
Make many predictions#
Now we can make predictions
predictions = chain.apply(dataset)
Evaluate performance#
Now we can evaluate the predictions. The first thing we can do is look at them by eye.
predictions[0]
{'question': 'What is the purpose of the NATO Alliance?',
'answer': 'The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.',
'result': ' The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.'}
Next, we can use a language model to score them programmatically
from langchain.evaluation.qa import QAEvalChain
llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(dataset, predictions, question_key="question", prediction_key="result")
We can add in the graded output to the predictions dict and then get a count of the grades.
for i, prediction in enumerate(predictions): | https://python.langchain.com/en/latest/use_cases/evaluation/qa_benchmarking_sota.html |
5846b936291a-2 | for i, prediction in enumerate(predictions):
prediction['grade'] = graded_outputs[i]['text']
from collections import Counter
Counter([pred['grade'] for pred in predictions])
Counter({' CORRECT': 7, ' INCORRECT': 4})
We can also filter the datapoints to the incorrect examples and look at them.
incorrect = [pred for pred in predictions if pred['grade'] == " INCORRECT"]
incorrect[0]
{'question': 'What is the U.S. Department of Justice doing to combat the crimes of Russian oligarchs?',
'answer': 'The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs.',
'result': ' The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs and is naming a chief prosecutor for pandemic fraud.',
'grade': ' INCORRECT'}
previous
Question Answering Benchmarking: Paul Graham Essay
next
QA Generation
Contents
Loading the data
Setting up a chain
Make a prediction
Make many predictions
Evaluate performance
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 07, 2023. | https://python.langchain.com/en/latest/use_cases/evaluation/qa_benchmarking_sota.html |
3c09d52b0eef-0 | .ipynb
.pdf
SQL Question Answering Benchmarking: Chinook
Contents
Loading the data
Setting up a chain
Make a prediction
Make many predictions
Evaluate performance
SQL Question Answering Benchmarking: Chinook#
Here we go over how to benchmark performance on a question answering task over a SQL database.
It is highly recommended that you do any evaluation/benchmarking with tracing enabled. See here for an explanation of what tracing is and how to set it up.
# Comment this out if you are NOT using tracing
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
Loading the data#
First, let’s load the data.
from langchain.evaluation.loading import load_dataset
dataset = load_dataset("sql-qa-chinook")
Downloading and preparing dataset json/LangChainDatasets--sql-qa-chinook to /Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--sql-qa-chinook-7528565d2d992b47/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...
Dataset json downloaded and prepared to /Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--sql-qa-chinook-7528565d2d992b47/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51. Subsequent calls will reuse this data.
dataset[0]
{'question': 'How many employees are there?', 'answer': '8'} | https://python.langchain.com/en/latest/use_cases/evaluation/sql_qa_benchmarking_chinook.html |
3c09d52b0eef-1 | {'question': 'How many employees are there?', 'answer': '8'}
Setting up a chain#
This uses the example Chinook database.
To set it up follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the .db file in a notebooks folder at the root of this repository.
Note that here we load a simple chain. If you want to experiment with more complex chains, or an agent, just create the chain object in a different way.
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
db = SQLDatabase.from_uri("sqlite:///../../../notebooks/Chinook.db")
llm = OpenAI(temperature=0)
Now we can create a SQL database chain.
chain = SQLDatabaseChain.from_llm(llm, db, input_key="question")
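As noted above, you can swap in a more complex chain or an agent instead of the simple chain. A minimal sketch using the SQL agent toolkit (an assumption about the available APIs; the rest of this notebook still uses the simple chain above):
# Sketch only -- assumes the SQLDatabaseToolkit / create_sql_agent APIs are available;
# the remaining cells continue to use `chain`
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
sql_agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)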
Make a prediction#
First, we can make predictions one datapoint at a time. Doing it at this level of granularity allows us to explore the outputs in detail, and it is also a lot cheaper than running over multiple datapoints.
chain(dataset[0])
{'question': 'How many employees are there?',
'answer': '8',
'result': ' There are 8 employees.'}
Make many predictions#
Now we can make predictions. Note that we add a try/except because this chain can sometimes error (if the SQL is written incorrectly, etc.)
predictions = []
predicted_dataset = []
error_dataset = []
for data in dataset:
try:
predictions.append(chain(data))
predicted_dataset.append(data)
except:
error_dataset.append(data)
Evaluate performance#
Now we can evaluate the predictions. We can use a language model to score them programmatically
from langchain.evaluation.qa import QAEvalChain
llm = OpenAI(temperature=0) | https://python.langchain.com/en/latest/use_cases/evaluation/sql_qa_benchmarking_chinook.html |
3c09d52b0eef-2 | llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(predicted_dataset, predictions, question_key="question", prediction_key="result")
We can add in the graded output to the predictions dict and then get a count of the grades.
for i, prediction in enumerate(predictions):
prediction['grade'] = graded_outputs[i]['text']
from collections import Counter
Counter([pred['grade'] for pred in predictions])
Counter({' CORRECT': 3, ' INCORRECT': 4})
We can also filter the datapoints to the incorrect examples and look at them.
incorrect = [pred for pred in predictions if pred['grade'] == " INCORRECT"]
incorrect[0]
{'question': 'How many employees are also customers?',
'answer': 'None',
'result': ' 59 employees are also customers.',
'grade': ' INCORRECT'}
previous
Question Answering
next
Installation
Contents
Loading the data
Setting up a chain
Make a prediction
Make many predictions
Evaluate performance
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 07, 2023. | https://python.langchain.com/en/latest/use_cases/evaluation/sql_qa_benchmarking_chinook.html |
fe5083e1ddd0-0 | .ipynb
.pdf
Benchmarking Template
Contents
Loading the data
Setting up a chain
Make a prediction
Make many predictions
Evaluate performance
Benchmarking Template#
This is an example notebook that can be used to create a benchmarking notebook for a task of your choice. Evaluation is really hard, and so we greatly welcome any contributions that can make it easier for people to experiment.
It is highly recommended that you do any evaluation/benchmarking with tracing enabled. See here for an explanation of what tracing is and how to set it up.
# Comment this out if you are NOT using tracing
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
Loading the data#
First, let’s load the data.
# This notebook should show how to load the dataset from LangChainDatasets on Hugging Face
# Please upload your dataset to https://huggingface.co/LangChainDatasets
# The value passed into `load_dataset` should NOT have the `LangChainDatasets/` prefix
from langchain.evaluation.loading import load_dataset
dataset = load_dataset("TODO")
Setting up a chain#
This next section should have an example of setting up a chain that can be run on this dataset.
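For example, a chain like the retrieval QA chains used elsewhere in these benchmarks could be set up roughly as follows (the loader path is a placeholder for whatever corpus fits your dataset):
# A sketch only -- substitute a loader and chain type appropriate for your task
from langchain.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
loader = TextLoader("path/to/your_corpus.txt")  # hypothetical path
vectorstore = VectorstoreIndexCreator().from_loaders([loader]).vectorstore
chain = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=vectorstore.as_retriever(), input_key="question")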
Make a prediction#
First, we can make predictions one datapoint at a time. Doing it at this level of granularity allows us to explore the outputs in detail, and it is also a lot cheaper than running over multiple datapoints.
# Example of running the chain on a single datapoint (`dataset[0]`) goes here
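With a chain like the sketch above, a single-datapoint run typically looks like this (assuming the chain accepts the dataset's keys, e.g. a "question" input):
chain(dataset[0])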
Make many predictions#
Now we can make predictions.
# Example of running the chain on many predictions goes here
# Sometimes its as simple as `chain.apply(dataset)`
# Other times you may want to write a for loop to catch errors
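A minimal version of that error-catching loop, following the pattern used in the SQL benchmarking notebook, might look like:
# Sketch of the try/except pattern -- keep failed datapoints separate so the graded
# predictions stay aligned with the examples that actually ran
predictions = []
predicted_dataset = []
error_dataset = []
for data in dataset:
    try:
        predictions.append(chain(data))
        predicted_dataset.append(data)
    except Exception:
        error_dataset.append(data)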
Evaluate performance# | https://python.langchain.com/en/latest/use_cases/evaluation/benchmarking_template.html |
fe5083e1ddd0-1 | Evaluate performance#
Any guide to evaluating performance in a more systematic manner goes here.
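One approach used throughout these notebooks is to grade the predictions with QAEvalChain and tally the grades. A sketch, using the predicted_dataset and predictions from the loop above (adjust the key names to your dataset and chain outputs):
from collections import Counter
from langchain.evaluation.qa import QAEvalChain
from langchain.llms import OpenAI
eval_chain = QAEvalChain.from_llm(OpenAI(temperature=0))
graded_outputs = eval_chain.evaluate(predicted_dataset, predictions, question_key="question", prediction_key="result")
for i, prediction in enumerate(predictions):
    prediction['grade'] = graded_outputs[i]['text']
Counter([pred['grade'] for pred in predictions])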
previous
Agent VectorDB Question Answering Benchmarking
next
Data Augmented Question Answering
Contents
Loading the data
Setting up a chain
Make a prediction
Make many predictions
Evaluate performance
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 07, 2023. | https://python.langchain.com/en/latest/use_cases/evaluation/benchmarking_template.html |
bcf765b196cd-0 | .ipynb
.pdf
Question Answering
Contents
Setup
Examples
Predictions
Evaluation
Customize Prompt
Evaluation without Ground Truth
Comparing to other evaluation metrics
Question Answering#
This notebook covers how to evaluate generic question answering problems. This is a situation where you have an example containing a question and its corresponding ground truth answer, and you want to measure how well the language model does at answering those questions.
Setup#
For demonstration purposes, we will just evaluate a simple question answering system that relies only on the model’s internal knowledge. Please see other notebooks for examples of evaluating question answering over data the model was not trained on.
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.llms import OpenAI
prompt = PromptTemplate(template="Question: {question}\nAnswer:", input_variables=["question"])
llm = OpenAI(model_name="text-davinci-003", temperature=0)
chain = LLMChain(llm=llm, prompt=prompt)
Examples#
For this purpose, we will just use two simple hardcoded examples, but see other notebooks for tips on how to get and/or generate these examples.
examples = [
{
"question": "Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?",
"answer": "11"
},
{
"question": 'Is the following sentence plausible? "Joao Moutinho caught the screen pass in the NFC championship."',
"answer": "No"
}
]
Predictions#
We can now make and inspect the predictions for these questions.
predictions = chain.apply(examples)
predictions
[{'text': ' 11 tennis balls'}, | https://python.langchain.com/en/latest/use_cases/evaluation/question_answering.html |
bcf765b196cd-1 | predictions
[{'text': ' 11 tennis balls'},
{'text': ' No, this sentence is not plausible. Joao Moutinho is a professional soccer player, not an American football player, so it is not likely that he would be catching a screen pass in the NFC championship.'}]
Evaluation#
We can see that if we tried to just do an exact match on the answers (11 and No), they would not match what the language model answered. However, semantically the language model is correct in both cases. To account for this, we can use a language model itself to evaluate the answers.
from langchain.evaluation.qa import QAEvalChain
llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(examples, predictions, question_key="question", prediction_key="text")
for i, eg in enumerate(examples):
print(f"Example {i}:")
print("Question: " + eg['question'])
print("Real Answer: " + eg['answer'])
print("Predicted Answer: " + predictions[i]['text'])
print("Predicted Grade: " + graded_outputs[i]['text'])
print()
Example 0:
Question: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
Real Answer: 11
Predicted Answer: 11 tennis balls
Predicted Grade: CORRECT
Example 1:
Question: Is the following sentence plausible? "Joao Moutinho caught the screen pass in the NFC championship."
Real Answer: No | https://python.langchain.com/en/latest/use_cases/evaluation/question_answering.html |
bcf765b196cd-2 | Real Answer: No
Predicted Answer: No, this sentence is not plausible. Joao Moutinho is a professional soccer player, not an American football player, so it is not likely that he would be catching a screen pass in the NFC championship.
Predicted Grade: CORRECT
Customize Prompt#
You can also customize the prompt that is used. Here is an example prompting it using a score from 0 to 10.
The custom prompt requires 3 input variables: “query”, “answer”, and “result”, where “query” is the question, “answer” is the ground truth answer, and “result” is the predicted answer.
from langchain.prompts.prompt import PromptTemplate
_PROMPT_TEMPLATE = """You are an expert professor specialized in grading students' answers to questions.
You are grading the following question:
{query}
Here is the real answer:
{answer}
You are grading the following predicted answer:
{result}
What grade do you give from 0 to 10, where 0 is the lowest (very low similarity) and 10 is the highest (very high similarity)?
"""
PROMPT = PromptTemplate(input_variables=["query", "answer", "result"], template=_PROMPT_TEMPLATE)
evalchain = QAEvalChain.from_llm(llm=llm,prompt=PROMPT)
evalchain.evaluate(examples, predictions, question_key="question", answer_key="answer", prediction_key="text")
Evaluation without Ground Truth#
It’s possible to evaluate question answering systems without ground truth. You would need a "context" input that reflects the information the LLM uses to answer the question. This context can be obtained from any retrieval system. Here’s an example of how it works:
context_examples = [
{
"question": "How old am I?", | https://python.langchain.com/en/latest/use_cases/evaluation/question_answering.html |
bcf765b196cd-3 | context_examples = [
{
"question": "How old am I?",
"context": "I am 30 years old. I live in New York and take the train to work everyday.",
},
{
"question": 'Who won the NFC championship game in 2023?"',
"context": "NFC Championship Game 2023: Philadelphia Eagles 31, San Francisco 49ers 7"
}
]
QA_PROMPT = "Answer the question based on the context\nContext:{context}\nQuestion:{question}\nAnswer:"
template = PromptTemplate(input_variables=["context", "question"], template=QA_PROMPT)
qa_chain = LLMChain(llm=llm, prompt=template)
predictions = qa_chain.apply(context_examples)
predictions
[{'text': 'You are 30 years old.'},
{'text': ' The Philadelphia Eagles won the NFC championship game in 2023.'}]
from langchain.evaluation.qa import ContextQAEvalChain
eval_chain = ContextQAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(context_examples, predictions, question_key="question", prediction_key="text")
graded_outputs
[{'text': ' CORRECT'}, {'text': ' CORRECT'}]
Comparing to other evaluation metrics#
We can compare the evaluation results we get to other common evaluation metrics. To do this, let’s load some evaluation metrics from HuggingFace’s evaluate package.
# Some data munging to get the examples in the right format
for i, eg in enumerate(examples):
eg['id'] = str(i)
eg['answers'] = {"text": [eg['answer']], "answer_start": [0]}
predictions[i]['id'] = str(i) | https://python.langchain.com/en/latest/use_cases/evaluation/question_answering.html |
bcf765b196cd-4 | predictions[i]['id'] = str(i)
predictions[i]['prediction_text'] = predictions[i]['text']
for p in predictions:
del p['text']
new_examples = examples.copy()
for eg in new_examples:
del eg['question']
del eg['answer']
from evaluate import load
squad_metric = load("squad")
results = squad_metric.compute(
references=new_examples,
predictions=predictions,
)
results
{'exact_match': 0.0, 'f1': 28.125}
previous
QA Generation
next
SQL Question Answering Benchmarking: Chinook
Contents
Setup
Examples
Predictions
Evaluation
Customize Prompt
Evaluation without Ground Truth
Comparing to other evaluation metrics
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 07, 2023. | https://python.langchain.com/en/latest/use_cases/evaluation/question_answering.html |
569bc590bd98-0 | .ipynb
.pdf
Question Answering Benchmarking: Paul Graham Essay
Contents
Loading the data
Setting up a chain
Make a prediction
Make many predictions
Evaluate performance
Question Answering Benchmarking: Paul Graham Essay#
Here we go over how to benchmark performance on a question answering task over a Paul Graham essay.
It is highly recommended that you do any evaluation/benchmarking with tracing enabled. See here for an explanation of what tracing is and how to set it up.
# Comment this out if you are NOT using tracing
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
Loading the data#
First, let’s load the data.
from langchain.evaluation.loading import load_dataset
dataset = load_dataset("question-answering-paul-graham")
Found cached dataset json (/Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--question-answering-paul-graham-76e8f711e038d742/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51)
Setting up a chain#
Now we need to create some pipelines for doing question answering. The first step is creating an index over the data in question.
from langchain.document_loaders import TextLoader
loader = TextLoader("../../modules/paul_graham_essay.txt")
from langchain.indexes import VectorstoreIndexCreator
vectorstore = VectorstoreIndexCreator().from_loaders([loader]).vectorstore
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
Now we can create a question answering chain.
from langchain.chains import RetrievalQA | https://python.langchain.com/en/latest/use_cases/evaluation/qa_benchmarking_pg.html |
569bc590bd98-1 | Now we can create a question answering chain.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
chain = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=vectorstore.as_retriever(), input_key="question")
Make a prediction#
First, we can make predictions one datapoint at a time. Doing it at this level of granularity allows us to explore the outputs in detail, and it is also a lot cheaper than running over multiple datapoints.
chain(dataset[0])
{'question': 'What were the two main things the author worked on before college?',
'answer': 'The two main things the author worked on before college were writing and programming.',
'result': ' Writing and programming.'}
Make many predictions#
Now we can make predictions
predictions = chain.apply(dataset)
Evaluate performance#
Now we can evaluate the predictions. The first thing we can do is look at them by eye.
predictions[0]
{'question': 'What were the two main things the author worked on before college?',
'answer': 'The two main things the author worked on before college were writing and programming.',
'result': ' Writing and programming.'}
Next, we can use a language model to score them programmatically
from langchain.evaluation.qa import QAEvalChain
llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(dataset, predictions, question_key="question", prediction_key="result")
We can add in the graded output to the predictions dict and then get a count of the grades.
for i, prediction in enumerate(predictions):
prediction['grade'] = graded_outputs[i]['text']
from collections import Counter
Counter([pred['grade'] for pred in predictions]) | https://python.langchain.com/en/latest/use_cases/evaluation/qa_benchmarking_pg.html |
569bc590bd98-2 | from collections import Counter
Counter([pred['grade'] for pred in predictions])
Counter({' CORRECT': 12, ' INCORRECT': 10})
We can also filter the datapoints to the incorrect examples and look at them.
incorrect = [pred for pred in predictions if pred['grade'] == " INCORRECT"]
incorrect[0]
{'question': 'What did the author write their dissertation on?',
'answer': 'The author wrote their dissertation on applications of continuations.',
'result': ' The author does not mention what their dissertation was on, so it is not known.',
'grade': ' INCORRECT'}
previous
Evaluating an OpenAPI Chain
next
Question Answering Benchmarking: State of the Union Address
Contents
Loading the data
Setting up a chain
Make a prediction
Make many predictions
Evaluate performance
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 07, 2023. | https://python.langchain.com/en/latest/use_cases/evaluation/qa_benchmarking_pg.html |
35a70a4e475d-0 | .ipynb
.pdf
Agent Benchmarking: Search + Calculator
Contents
Loading the data
Setting up a chain
Make a prediction
Make many predictions
Evaluate performance
Agent Benchmarking: Search + Calculator#
Here we go over how to benchmark the performance of an agent on tasks where it has access to a calculator and a search tool.
It is highly recommended that you do any evaluation/benchmarking with tracing enabled. See here for an explanation of what tracing is and how to set it up.
# Comment this out if you are NOT using tracing
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
Loading the data#
First, let’s load the data.
from langchain.evaluation.loading import load_dataset
dataset = load_dataset("agent-search-calculator")
Setting up a chain#
Now we need to load an agent capable of answering these questions.
from langchain.llms import OpenAI
from langchain.chains import LLMMathChain
from langchain.agents import initialize_agent, Tool, load_tools
from langchain.agents import AgentType
tools = load_tools(['serpapi', 'llm-math'], llm=OpenAI(temperature=0))
agent = initialize_agent(tools, OpenAI(temperature=0), agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
Make a prediction#
First, we can make predictions one datapoint at a time. Doing it at this level of granularity allows us to explore the outputs in detail, and it is also a lot cheaper than running over multiple datapoints.
print(dataset[0]['question'])
agent.run(dataset[0]['question'])
Make many predictions#
Now we can make predictions
agent.run(dataset[4]['question'])
predictions = []
predicted_dataset = []
error_dataset = []
for data in dataset: | https://python.langchain.com/en/latest/use_cases/evaluation/agent_benchmarking.html |
35a70a4e475d-1 | predictions = []
predicted_dataset = []
error_dataset = []
for data in dataset:
new_data = {"input": data["question"], "answer": data["answer"]}
try:
predictions.append(agent(new_data))
predicted_dataset.append(new_data)
except Exception as e:
predictions.append({"output": str(e), **new_data})
error_dataset.append(new_data)
Evaluate performance#
Now we can evaluate the predictions. The first thing we can do is look at them by eye.
predictions[0]
Next, we can use a language model to score them programmatically
from langchain.evaluation.qa import QAEvalChain
llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(dataset, predictions, question_key="question", prediction_key="output")
We can add in the graded output to the predictions dict and then get a count of the grades.
for i, prediction in enumerate(predictions):
prediction['grade'] = graded_outputs[i]['text']
from collections import Counter
Counter([pred['grade'] for pred in predictions])
We can also filter the datapoints to the incorrect examples and look at them.
incorrect = [pred for pred in predictions if pred['grade'] == " INCORRECT"]
incorrect
previous
Evaluation
next
Agent VectorDB Question Answering Benchmarking
Contents
Loading the data
Setting up a chain
Make a prediction
Make many predictions
Evaluate performance
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 07, 2023. | https://python.langchain.com/en/latest/use_cases/evaluation/agent_benchmarking.html |
61300a96e0ca-0 | .ipynb
.pdf
Generic Agent Evaluation
Contents
Setup
Testing the Agent
Evaluating the Agent
Generic Agent Evaluation#
Good evaluation is key for quickly iterating on your agent’s prompts and tools. Here we provide an example of how to use the TrajectoryEvalChain to evaluate your agent.
Setup#
Let’s start by defining our agent.
from langchain import Wikipedia
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.agents.react.base import DocstoreExplorer
from langchain.memory import ConversationBufferMemory
from langchain import LLMMathChain
from langchain.llms import OpenAI
from langchain import SerpAPIWrapper
docstore = DocstoreExplorer(Wikipedia())
math_llm = OpenAI(temperature=0)
llm_math_chain = LLMMathChain(llm=math_llm, verbose=True)
search = SerpAPIWrapper()
tools = [
Tool(
name="Search",
func=docstore.search,
description="useful for when you need to ask with search",
),
Tool(
name="Lookup",
func=docstore.lookup,
description="useful for when you need to ask with lookup",
),
Tool(
name="Calculator",
func=llm_math_chain.run,
description="useful for doing calculations",
),
Tool(
name="Search the Web (SerpAPI)",
func=search.run,
description="useful for when you need to answer questions about current events",
),
]
memory = ConversationBufferMemory(
memory_key="chat_history", return_messages=True, output_key="output"
) | https://python.langchain.com/en/latest/use_cases/evaluation/generic_agent_evaluation.html |
61300a96e0ca-1 | memory_key="chat_history", return_messages=True, output_key="output"
)
llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")
agent = initialize_agent(
tools,
llm,
agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
verbose=True,
memory=memory,
return_intermediate_steps=True, # This is needed for the evaluation later
)
Testing the Agent#
Now let’s try our agent out on some example queries.
query_one = "How many ping pong balls would it take to fill the entire Empire State Building?"
test_outputs_one = agent({"input": query_one}, return_only_outputs=False)
> Entering new AgentExecutor chain...
{
"action": "Search the Web (SerpAPI)",
"action_input": "How many ping pong balls would it take to fill the entire Empire State Building?"
}
Observation: 12.8 billion. The volume of the Empire State Building Googles in at around 37 million ft³. A golf ball comes in at about 2.5 in³.
Thought:{
"action": "Final Answer",
"action_input": "It would take approximately 12.8 billion ping pong balls to fill the entire Empire State Building."
}
> Finished chain.
This looks good! Let’s try it out on another query.
query_two = "If you laid the Eiffel Tower end to end, how many would you need cover the US from coast to coast?"
test_outputs_two = agent({"input": query_two}, return_only_outputs=False)
> Entering new AgentExecutor chain...
{
"action": "Calculator", | https://python.langchain.com/en/latest/use_cases/evaluation/generic_agent_evaluation.html |
61300a96e0ca-2 | > Entering new AgentExecutor chain...
{
"action": "Calculator",
"action_input": "The length of the Eiffel Tower is 324 meters. The distance from coast to coast in the US is approximately 4,828 kilometers. First, we need to convert 4,828 kilometers to meters, which gives us 4,828,000 meters. To find out how many Eiffel Towers we need, we can divide 4,828,000 by 324. This gives us approximately 14,876 Eiffel Towers."
}
> Entering new LLMMathChain chain...
The length of the Eiffel Tower is 324 meters. The distance from coast to coast in the US is approximately 4,828 kilometers. First, we need to convert 4,828 kilometers to meters, which gives us 4,828,000 meters. To find out how many Eiffel Towers we need, we can divide 4,828,000 by 324. This gives us approximately 14,876 Eiffel Towers.
```text
4828000 / 324
```
...numexpr.evaluate("4828000 / 324")...
Answer: 14901.234567901234
> Finished chain.
Observation: Answer: 14901.234567901234
Thought:{
"action": "Calculator",
"action_input": "The length of the Eiffel Tower is 324 meters. The distance from coast to coast in the US is approximately 4,828 kilometers. First, we need to convert 4,828 kilometers to meters, which gives us 4,828,000 meters. To find out how many Eiffel Towers we need, we can divide 4,828,000 by 324. This gives us approximately 14,901 Eiffel Towers."
} | https://python.langchain.com/en/latest/use_cases/evaluation/generic_agent_evaluation.html |
61300a96e0ca-3 | }
> Entering new LLMMathChain chain...
The length of the Eiffel Tower is 324 meters. The distance from coast to coast in the US is approximately 4,828 kilometers. First, we need to convert 4,828 kilometers to meters, which gives us 4,828,000 meters. To find out how many Eiffel Towers we need, we can divide 4,828,000 by 324. This gives us approximately 14,901 Eiffel Towers.
```text
4828000 / 324
```
...numexpr.evaluate("4828000 / 324")...
Answer: 14901.234567901234
> Finished chain.
Observation: Answer: 14901.234567901234
Thought:{
"action": "Final Answer",
"action_input": "If you laid the Eiffel Tower end to end, you would need approximately 14,901 Eiffel Towers to cover the US from coast to coast."
}
> Finished chain.
This doesn’t look so good. Let’s try running some evaluation.
Evaluating the Agent#
Let’s start by defining the TrajectoryEvalChain.
from langchain.evaluation.agents import TrajectoryEvalChain
# Define chain
eval_chain = TrajectoryEvalChain.from_llm(
llm=ChatOpenAI(temperature=0, model_name="gpt-4"), # Note: This must be a ChatOpenAI model
agent_tools=agent.tools,
return_reasoning=True,
)
Let’s try evaluating the first query.
question, steps, answer = test_outputs_one["input"], test_outputs_one["intermediate_steps"], test_outputs_one["output"]
evaluation = eval_chain(
inputs={"question": question, "answer": answer, "agent_trajectory": eval_chain.get_agent_trajectory(steps)}, | https://python.langchain.com/en/latest/use_cases/evaluation/generic_agent_evaluation.html |
61300a96e0ca-4 | )
print("Score from 1 to 5: ", evaluation["score"])
print("Reasoning: ", evaluation["reasoning"])
Score from 1 to 5: 1
Reasoning: First, let's evaluate the final answer. The final answer is incorrect because it uses the volume of golf balls instead of ping pong balls. The answer is not helpful.
Second, does the model use a logical sequence of tools to answer the question? The model only used one tool, which was the Search the Web (SerpAPI). It did not use the Calculator tool to calculate the correct volume of ping pong balls.
Third, does the AI language model use the tools in a helpful way? The model used the Search the Web (SerpAPI) tool, but the output was not helpful because it provided information about golf balls instead of ping pong balls.
Fourth, does the AI language model use too many steps to answer the question? The model used only one step, which is not too many. However, it should have used more steps to provide a correct answer.
Fifth, are the appropriate tools used to answer the question? The model should have used the Search tool to find the volume of the Empire State Building and the volume of a ping pong ball. Then, it should have used the Calculator tool to calculate the number of ping pong balls needed to fill the building.
Judgment: Given the incorrect final answer and the inappropriate use of tools, we give the model a score of 1.
That seems about right. Let’s try the second query.
question, steps, answer = test_outputs_two["input"], test_outputs_two["intermediate_steps"], test_outputs_two["output"]
evaluation = eval_chain(
inputs={"question": question, "answer": answer, "agent_trajectory": eval_chain.get_agent_trajectory(steps)},
) | https://python.langchain.com/en/latest/use_cases/evaluation/generic_agent_evaluation.html |
61300a96e0ca-5 | )
print("Score from 1 to 5: ", evaluation["score"])
print("Reasoning: ", evaluation["reasoning"])
Score from 1 to 5: 3
Reasoning: i. Is the final answer helpful?
Yes, the final answer is helpful as it provides an approximate number of Eiffel Towers needed to cover the US from coast to coast.
ii. Does the AI language use a logical sequence of tools to answer the question?
No, the AI language model does not use a logical sequence of tools. It directly uses the Calculator tool without first using the Search or Lookup tools to find the necessary information (length of the Eiffel Tower and distance from coast to coast in the US).
iii. Does the AI language model use the tools in a helpful way?
The AI language model uses the Calculator tool in a helpful way to perform the calculation, but it should have used the Search or Lookup tools first to find the required information.
iv. Does the AI language model use too many steps to answer the question?
No, the AI language model does not use too many steps. However, it repeats the same step twice, which is unnecessary.
v. Are the appropriate tools used to answer the question?
Not entirely. The AI language model should have used the Search or Lookup tools to find the required information before using the Calculator tool.
Given the above evaluation, the AI language model's performance can be scored as follows:
That also sounds about right. In conclusion, the TrajectoryEvalChain allows us to use GPT-4 to score both our agent’s outputs and tool use in addition to giving us the reasoning behind the evaluation.
previous
Data Augmented Question Answering
next
Using Hugging Face Datasets
Contents
Setup
Testing the Agent
Evaluating the Agent
By Harrison Chase | https://python.langchain.com/en/latest/use_cases/evaluation/generic_agent_evaluation.html |
61300a96e0ca-6 | Setup
Testing the Agent
Evaluating the Agent
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 07, 2023. | https://python.langchain.com/en/latest/use_cases/evaluation/generic_agent_evaluation.html |