context_examples = [
    {
        "question": "How old am I?",
        "context": "I am 30 years old. I live in New York and take the train to work everyday.",
    },
    {
        "question": "Who won the NFC championship game in 2023?",
        "context": "NFC Championship Game 2023: Philadelphia Eagles 31, San Francisco 49ers 7"
    }
]
QA_PROMPT = "Answer the question based on the context\nContext:{context}\nQuestion:{question}\nAnswer:"
template = PromptTemplate(input_variables=["context", "question"], template=QA_PROMPT)
qa_chain = LLMChain(llm=llm, prompt=template)
predictions = qa_chain.apply(context_examples)
predictions
[{'text': 'You are 30 years old.'},
{'text': ' The Philadelphia Eagles won the NFC championship game in 2023.'}]
from langchain.evaluation.qa import ContextQAEvalChain
eval_chain = ContextQAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(context_examples, predictions, question_key="question", prediction_key="text")
graded_outputs
[{'text': ' CORRECT'}, {'text': ' CORRECT'}]
Comparing to other evaluation metrics#
We can compare the evaluation results we get to other common evaluation metrics. To do this, let’s load some evaluation metrics from HuggingFace’s evaluate package.
# Some data munging to get the examples in the right format
for i, eg in enumerate(examples):
    eg['id'] = str(i)
    eg['answers'] = {"text": [eg['answer']], "answer_start": [0]}
    predictions[i]['id'] = str(i)
    predictions[i]['prediction_text'] = predictions[i]['text']

for p in predictions:
    del p['text']

new_examples = examples.copy()
for eg in new_examples:
    del eg['question']
    del eg['answer']
from evaluate import load
squad_metric = load("squad")
results = squad_metric.compute(
    references=new_examples,
    predictions=predictions,
)
results
{'exact_match': 0.0, 'f1': 28.125}
SQL Question Answering Benchmarking: Chinook#
Here we go over how to benchmark performance on a question answering task over a SQL database.
It is highly recommended that you do any evaluation/benchmarking with tracing enabled. See here for an explanation of what tracing is and how to set it up.
# Comment this out if you are NOT using tracing
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
Loading the data#
First, let’s load the data.
from langchain.evaluation.loading import load_dataset
dataset = load_dataset("sql-qa-chinook")
Downloading and preparing dataset json/LangChainDatasets--sql-qa-chinook to /Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--sql-qa-chinook-7528565d2d992b47/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...
Dataset json downloaded and prepared to /Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--sql-qa-chinook-7528565d2d992b47/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51. Subsequent calls will reuse this data.
dataset[0]
{'question': 'How many employees are there?', 'answer': '8'}
Setting up a chain#
This uses the example Chinook database.
To set it up follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the .db file in a notebooks folder at the root of this repository.
Note that here we load a simple chain. If you want to experiment with more complex chains, or an agent, just create the chain object in a different way.
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
db = SQLDatabase.from_uri("sqlite:///../../../notebooks/Chinook.db")
llm = OpenAI(temperature=0)
Now we can create a SQL database chain.
chain = SQLDatabaseChain.from_llm(llm, db, input_key="question")
Make a prediction#
First, we can make predictions one datapoint at a time. Doing it at this level of granularity allows us to explore the outputs in detail, and it is also a lot cheaper than running over multiple datapoints.
chain(dataset[0])
{'question': 'How many employees are there?',
'answer': '8',
'result': ' There are 8 employees.'}
Make many predictions#
Now we can make predictions. Note that we add a try-except because this chain can sometimes error (if the SQL is written incorrectly, etc.).
predictions = []
predicted_dataset = []
error_dataset = []
for data in dataset:
    try:
        predictions.append(chain(data))
        predicted_dataset.append(data)
    except:
        error_dataset.append(data)
Evaluate performance#
Now we can evaluate the predictions. We can use a language model to score them programmatically.
from langchain.evaluation.qa import QAEvalChain
llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(predicted_dataset, predictions, question_key="question", prediction_key="result")
We can add in the graded output to the predictions dict and then get a count of the grades.
for i, prediction in enumerate(predictions):
    prediction['grade'] = graded_outputs[i]['text']
from collections import Counter
Counter([pred['grade'] for pred in predictions])
Counter({' CORRECT': 3, ' INCORRECT': 4})
We can also filter the datapoints to the incorrect examples and look at them.
incorrect = [pred for pred in predictions if pred['grade'] == " INCORRECT"]
incorrect[0]
{'question': 'How many employees are also customers?',
'answer': 'None',
'result': ' 59 employees are also customers.',
'grade': ' INCORRECT'}
Benchmarking Template#
This is an example notebook that can be used to create a benchmarking notebook for a task of your choice. Evaluation is really hard, and so we greatly welcome any contributions that can make it easier for people to experiment.
It is highly recommended that you do any evaluation/benchmarking with tracing enabled. See here for an explanation of what tracing is and how to set it up.
# Comment this out if you are NOT using tracing
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
Loading the data#
First, let’s load the data.
# This notebook should show how to load the dataset from LangChainDatasets on Hugging Face
# Please upload your dataset to https://huggingface.co/LangChainDatasets
# The value passed into `load_dataset` should NOT have the `LangChainDatasets/` prefix
from langchain.evaluation.loading import load_dataset
dataset = load_dataset("TODO")
Setting up a chain#
This next section should have an example of setting up a chain that can be run on this dataset.
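For illustration only, here is a minimal sketch of one possible chain setup; it borrows the retrieval-based pattern used by the other notebooks in this section, and the my_data.txt path is a hypothetical placeholder. Substitute whatever chain (or agent) actually fits your task.
# Illustrative sketch only: a retrieval-based QA chain over a hypothetical text file.
# Replace this with whatever chain your benchmark actually needs.
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.llms import OpenAI

loader = TextLoader("my_data.txt")  # hypothetical data file
vectorstore = VectorstoreIndexCreator().from_loaders([loader]).vectorstore
chain = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
    input_key="question",
)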
Make a prediction#
First, we can make predictions one datapoint at a time. Doing it at this level of granularity allows us to explore the outputs in detail, and it is also a lot cheaper than running over multiple datapoints.
# Example of running the chain on a single datapoint (`dataset[0]`) goes here
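A minimal sketch, assuming the chain sketched above and a dataset whose entries carry the chain's input key (here `question`):
# Run the chain on a single datapoint and inspect the raw output
result = chain(dataset[0])
result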
Make many predictions#
Now we can make predictions.
# Example of running the chain on many predictions goes here
# Sometimes it's as simple as `chain.apply(dataset)`
# Other times you may want to write a for loop to catch errors
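As a sketch (assuming the chain and dataset from the previous steps), the two patterns mentioned in the comments above might look like this:
# Option 1: run the chain over the whole dataset in one call
# predictions = chain.apply(dataset)

# Option 2: loop over the dataset and catch errors, keeping track of failures
predictions = []
predicted_dataset = []
error_dataset = []
for data in dataset:
    try:
        predictions.append(chain(data))
        predicted_dataset.append(data)
    except Exception:
        error_dataset.append(data)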
Evaluate performance#
Any guide to evaluating performance in a more systematic manner goes here.
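One approach, used by the other QA benchmarking notebooks in this section, is to grade each prediction with QAEvalChain and tally the grades. The sketch below assumes each datapoint has `question`/`answer` keys and each prediction has a `result` key.
from collections import Counter

from langchain.evaluation.qa import QAEvalChain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(
    predicted_dataset, predictions, question_key="question", prediction_key="result"
)

# Attach the grade to each prediction and count CORRECT vs INCORRECT
for prediction, graded in zip(predictions, graded_outputs):
    prediction['grade'] = graded['text']
Counter([pred['grade'] for pred in predictions])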
Question Answering Benchmarking: Paul Graham Essay#
Here we go over how to benchmark performance on a question answering task over a Paul Graham essay.
It is highly recommended that you do any evaluation/benchmarking with tracing enabled. See here for an explanation of what tracing is and how to set it up.
# Comment this out if you are NOT using tracing
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
Loading the data#
First, let’s load the data.
from langchain.evaluation.loading import load_dataset
dataset = load_dataset("question-answering-paul-graham")
Found cached dataset json (/Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--question-answering-paul-graham-76e8f711e038d742/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51)
Setting up a chain#
Now we need to create some pipelines for doing question answering. Step one in that is creating an index over the data in question.
from langchain.document_loaders import TextLoader
loader = TextLoader("../../modules/paul_graham_essay.txt")
from langchain.indexes import VectorstoreIndexCreator
vectorstore = VectorstoreIndexCreator().from_loaders([loader]).vectorstore
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
Now we can create a question answering chain.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
chain = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=vectorstore.as_retriever(), input_key="question")
Make a prediction#
First, we can make predictions one datapoint at a time. Doing it at this level of granularity allows us to explore the outputs in detail, and it is also a lot cheaper than running over multiple datapoints.
chain(dataset[0])
{'question': 'What were the two main things the author worked on before college?',
'answer': 'The two main things the author worked on before college were writing and programming.',
'result': ' Writing and programming.'}
Make many predictions#
Now we can make predictions.
predictions = chain.apply(dataset)
Evaluate performance#
Now we can evaluate the predictions. The first thing we can do is look at them by eye.
predictions[0]
{'question': 'What were the two main things the author worked on before college?',
'answer': 'The two main things the author worked on before college were writing and programming.',
'result': ' Writing and programming.'}
Next, we can use a language model to score them programmatically.
from langchain.evaluation.qa import QAEvalChain
llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(dataset, predictions, question_key="question", prediction_key="result")
We can add in the graded output to the predictions dict and then get a count of the grades.
for i, prediction in enumerate(predictions):
    prediction['grade'] = graded_outputs[i]['text']
from collections import Counter
Counter([pred['grade'] for pred in predictions])
Counter({' CORRECT': 12, ' INCORRECT': 10})
We can also filter the datapoints to the incorrect examples and look at them.
incorrect = [pred for pred in predictions if pred['grade'] == " INCORRECT"]
incorrect[0]
{'question': 'What did the author write their dissertation on?',
'answer': 'The author wrote their dissertation on applications of continuations.',
'result': ' The author does not mention what their dissertation was on, so it is not known.',
'grade': ' INCORRECT'}
Data Augmented Question Answering#
This notebook uses some generic prompts/language models to evaluate a question answering system that uses other sources of data besides what is in the model. For example, this can be used to evaluate a question answering system over your proprietary data.
Setup#
Let’s set up an example with our favorite example - the state of the union address.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
loader = TextLoader('../../modules/state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(texts, embeddings)
qa = RetrievalQA.from_llm(llm=OpenAI(), retriever=docsearch.as_retriever())
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
Examples#
Now we need some examples to evaluate. We can do this in two ways:
Hard code some examples ourselves
Generate examples automatically, using a language model
# Hard-coded examples
examples = [
    {
        "query": "What did the president say about Ketanji Brown Jackson",
        "answer": "He praised her legal ability and said he nominated her for the supreme court."
    },
    {
        "query": "What did the president say about Michael Jackson",
        "answer": "Nothing"
    }
]
# Generated examples
from langchain.evaluation.qa import QAGenerateChain
example_gen_chain = QAGenerateChain.from_llm(OpenAI())
new_examples = example_gen_chain.apply_and_parse([{"doc": t} for t in texts[:5]])
new_examples
[{'query': 'According to the document, what did Vladimir Putin miscalculate?',
'answer': 'He miscalculated that he could roll into Ukraine and the world would roll over.'},
{'query': 'Who is the Ukrainian Ambassador to the United States?',
'answer': 'The Ukrainian Ambassador to the United States is here tonight.'},
{'query': 'How many countries were part of the coalition formed to confront Putin?',
'answer': '27 members of the European Union, France, Germany, Italy, the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland.'},
{'query': 'What action is the U.S. Department of Justice taking to target Russian oligarchs?',
'answer': 'The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs and joining with European allies to find and seize their yachts, luxury apartments, and private jets.'},
{'query': 'How much direct assistance is the United States providing to Ukraine?',
'answer': 'The United States is providing more than $1 Billion in direct assistance to Ukraine.'}]
# Combine examples
examples += new_examples
Evaluate#
Now that we have examples, we can use the question answering evaluator to evaluate our question answering chain.
from langchain.evaluation.qa import QAEvalChain
predictions = qa.apply(examples)
llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(examples, predictions)
for i, eg in enumerate(examples):
    print(f"Example {i}:")
    print("Question: " + predictions[i]['query'])
    print("Real Answer: " + predictions[i]['answer'])
    print("Predicted Answer: " + predictions[i]['result'])
    print("Predicted Grade: " + graded_outputs[i]['text'])
    print()
Example 0:
Question: What did the president say about Ketanji Brown Jackson
Real Answer: He praised her legal ability and said he nominated her for the supreme court.
Predicted Answer: The president said that she is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by both Democrats and Republicans.
Predicted Grade: CORRECT
Example 1:
Question: What did the president say about Michael Jackson
Real Answer: Nothing
Predicted Answer: The president did not mention Michael Jackson in this speech.
Predicted Grade: CORRECT
Example 2:
Question: According to the document, what did Vladimir Putin miscalculate?
Real Answer: He miscalculated that he could roll into Ukraine and the world would roll over.
Predicted Answer: Putin miscalculated that the world would roll over when he rolled into Ukraine.
Predicted Grade: CORRECT
Example 3:
Question: Who is the Ukrainian Ambassador to the United States?
Real Answer: The Ukrainian Ambassador to the United States is here tonight.
Predicted Answer: I don't know.
Predicted Grade: INCORRECT
Example 4:
Question: How many countries were part of the coalition formed to confront Putin?
Real Answer: 27 members of the European Union, France, Germany, Italy, the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland.
Predicted Answer: The coalition included freedom-loving nations from Europe and the Americas to Asia and Africa, 27 members of the European Union including France, Germany, Italy, the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland.
Predicted Grade: INCORRECT
Example 5:
Question: What action is the U.S. Department of Justice taking to target Russian oligarchs?
Real Answer: The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs and joining with European allies to find and seize their yachts, luxury apartments, and private jets.
Predicted Answer: The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs and to find and seize their yachts, luxury apartments, and private jets.
Predicted Grade: INCORRECT
Example 6:
Question: How much direct assistance is the United States providing to Ukraine?
Real Answer: The United States is providing more than $1 Billion in direct assistance to Ukraine.
Predicted Answer: The United States is providing more than $1 billion in direct assistance to Ukraine.
Predicted Grade: CORRECT
Evaluate with Other Metrics#
In addition to predicting whether the answer is correct or incorrect using a language model, we can also use other metrics to get a more nuanced view of the quality of the answers. To do so, we can use the Critique library, which allows for simple calculation of various metrics over generated text.
First you can get an API key from the Inspired Cognition Dashboard and do some setup:
export INSPIREDCO_API_KEY="..."
pip install inspiredco
import inspiredco.critique
import os
critique = inspiredco.critique.Critique(api_key=os.environ['INSPIREDCO_API_KEY'])
Then run the following code to set up the configuration and calculate the ROUGE, chrf, BERTScore, and UniEval (you can choose other metrics too):
metrics = {
    "rouge": {
        "metric": "rouge",
        "config": {"variety": "rouge_l"},
    },
    "chrf": {
        "metric": "chrf",
        "config": {},
    },
    "bert_score": {
        "metric": "bert_score",
        "config": {"model": "bert-base-uncased"},
    },
    "uni_eval": {
        "metric": "uni_eval",
        "config": {"task": "summarization", "evaluation_aspect": "relevance"},
    },
}
critique_data = [
    {"target": pred['result'], "references": [pred['answer']]} for pred in predictions
]
eval_results = {
    k: critique.evaluate(dataset=critique_data, metric=v["metric"], config=v["config"])
    for k, v in metrics.items()
}
Finally, we can print out the results. We can see that overall the scores are higher when the output is semantically correct, and also when the output closely matches the gold-standard answer.
for i, eg in enumerate(examples):
    score_string = ", ".join([f"{k}={v['examples'][i]['value']:.4f}" for k, v in eval_results.items()])
    print(f"Example {i}:")
    print("Question: " + predictions[i]['query'])
    print("Real Answer: " + predictions[i]['answer'])
    print("Predicted Answer: " + predictions[i]['result'])
    print("Predicted Scores: " + score_string)
    print()
Example 0:
Question: What did the president say about Ketanji Brown Jackson
Real Answer: He praised her legal ability and said he nominated her for the supreme court.
Predicted Answer: The president said that she is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by both Democrats and Republicans.
Predicted Scores: rouge=0.0941, chrf=0.2001, bert_score=0.5219, uni_eval=0.9043
Example 1:
Question: What did the president say about Michael Jackson
Real Answer: Nothing
Predicted Answer: The president did not mention Michael Jackson in this speech.
Predicted Scores: rouge=0.0000, chrf=0.1087, bert_score=0.3486, uni_eval=0.7802
Example 2:
Question: According to the document, what did Vladimir Putin miscalculate?
Real Answer: He miscalculated that he could roll into Ukraine and the world would roll over.
Predicted Answer: Putin miscalculated that the world would roll over when he rolled into Ukraine.
Predicted Scores: rouge=0.5185, chrf=0.6955, bert_score=0.8421, uni_eval=0.9578
Example 3:
Question: Who is the Ukrainian Ambassador to the United States?
Real Answer: The Ukrainian Ambassador to the United States is here tonight.
Predicted Answer: I don't know.
Predicted Scores: rouge=0.0000, chrf=0.0375, bert_score=0.3159, uni_eval=0.7493
Example 4:
Question: How many countries were part of the coalition formed to confront Putin?
Real Answer: 27 members of the European Union, France, Germany, Italy, the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland.
Predicted Answer: The coalition included freedom-loving nations from Europe and the Americas to Asia and Africa, 27 members of the European Union including France, Germany, Italy, the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland.
Predicted Scores: rouge=0.7419, chrf=0.8602, bert_score=0.8388, uni_eval=0.0669
Example 5:
Question: What action is the U.S. Department of Justice taking to target Russian oligarchs?
Real Answer: The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs and joining with European allies to find and seize their yachts, luxury apartments, and private jets.
Predicted Answer: The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs and to find and seize their yachts, luxury apartments, and private jets.
Predicted Scores: rouge=0.9412, chrf=0.8687, bert_score=0.9607, uni_eval=0.9718
Example 6:
Question: How much direct assistance is the United States providing to Ukraine?
Real Answer: The United States is providing more than $1 Billion in direct assistance to Ukraine.
Predicted Answer: The United States is providing more than $1 billion in direct assistance to Ukraine.
Predicted Scores: rouge=1.0000, chrf=0.9483, bert_score=1.0000, uni_eval=0.9734
Multi-Player Dungeons & Dragons#
This notebook shows how the DialogueAgent and DialogueSimulator class make it easy to extend the Two-Player Dungeons & Dragons example to multiple players.
The main difference between simulating two players and multiple players is in revising the schedule for when each agent speaks.
To this end, we augment DialogueSimulator to take in a custom function that determines the schedule of which agent speaks. In the example below, each character speaks in round-robin fashion, with the storyteller interleaved between each player.
Import LangChain related modules#
from typing import List, Dict, Callable
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage,
BaseMessage,
)
DialogueAgent class#
The DialogueAgent class is a simple wrapper around the ChatOpenAI model that stores the message history from the dialogue_agent’s point of view by simply concatenating the messages as strings.
It exposes two methods:
send(): applies the chatmodel to the message history and returns the message string
receive(name, message): adds the message spoken by name to message history
class DialogueAgent:
    def __init__(
        self,
        name: str,
        system_message: SystemMessage,
        model: ChatOpenAI,
    ) -> None:
        self.name = name
        self.system_message = system_message
        self.model = model
        self.prefix = f"{self.name}: "
        self.reset()

    def reset(self):
        self.message_history = ["Here is the conversation so far."]

    def send(self) -> str:
        """
        Applies the chatmodel to the message history
        and returns the message string
        """
        message = self.model(
            [
                self.system_message,
                HumanMessage(content="\n".join(self.message_history + [self.prefix])),
            ]
        )
        return message.content

    def receive(self, name: str, message: str) -> None:
        """
        Concatenates {message} spoken by {name} into message history
        """
        self.message_history.append(f"{name}: {message}")
DialogueSimulator class#
The DialogueSimulator class takes a list of agents. At each step, it performs the following:
Selects the next speaker
Calls the next speaker to send a message
Broadcasts the message to all other agents
Updates the step counter
The selection of the next speaker can be implemented as any function, but in this case we simply loop through the agents.
class DialogueSimulator:
    def __init__(
        self,
        agents: List[DialogueAgent],
        selection_function: Callable[[int, List[DialogueAgent]], int],
    ) -> None:
        self.agents = agents
        self._step = 0
        self.select_next_speaker = selection_function

    def reset(self):
        for agent in self.agents:
            agent.reset()

    def inject(self, name: str, message: str):
        """
        Initiates the conversation with a {message} from {name}
        """
        for agent in self.agents:
            agent.receive(name, message)
        # increment time
        self._step += 1
    def step(self) -> tuple[str, str]:
        # 1. choose the next speaker
        speaker_idx = self.select_next_speaker(self._step, self.agents)
        speaker = self.agents[speaker_idx]
        # 2. next speaker sends message
        message = speaker.send()
        # 3. everyone receives message
        for receiver in self.agents:
            receiver.receive(speaker.name, message)
        # 4. increment time
        self._step += 1
        return speaker.name, message
Define roles and quest#
character_names = ["Harry Potter", "Ron Weasley", "Hermione Granger", "Argus Filch"]
storyteller_name = "Dungeon Master"
quest = "Find all of Lord Voldemort's seven horcruxes."
word_limit = 50 # word limit for task brainstorming
Ask an LLM to add detail to the game description#
game_description = f"""Here is the topic for a Dungeons & Dragons game: {quest}.
The characters are: {*character_names,}.
The story is narrated by the storyteller, {storyteller_name}."""
player_descriptor_system_message = SystemMessage(
    content="You can add detail to the description of a Dungeons & Dragons player.")
def generate_character_description(character_name):
    character_specifier_prompt = [
        player_descriptor_system_message,
        HumanMessage(content=
            f"""{game_description}
            Please reply with a creative description of the character, {character_name}, in {word_limit} words or less.
            Speak directly to {character_name}.
            Do not add anything else."""
        )
    ]
    character_description = ChatOpenAI(temperature=1.0)(character_specifier_prompt).content
    return character_description
def generate_character_system_message(character_name, character_description):
    return SystemMessage(content=(
        f"""{game_description}
        Your name is {character_name}.
        Your character description is as follows: {character_description}.
        You will propose actions you plan to take and {storyteller_name} will explain what happens when you take those actions.
        Speak in the first person from the perspective of {character_name}.
        For describing your own body movements, wrap your description in '*'.
        Do not change roles!
        Do not speak from the perspective of anyone else.
        Remember you are {character_name}.
        Stop speaking the moment you finish speaking from your perspective.
        Never forget to keep your response to {word_limit} words!
        Do not add anything else.
        """
    ))

character_descriptions = [generate_character_description(character_name) for character_name in character_names]
character_system_messages = [generate_character_system_message(character_name, character_description) for character_name, character_description in zip(character_names, character_descriptions)]
storyteller_specifier_prompt = [
    player_descriptor_system_message,
    HumanMessage(content=
        f"""{game_description}
        Please reply with a creative description of the storyteller, {storyteller_name}, in {word_limit} words or less.
        Speak directly to {storyteller_name}.
        Do not add anything else."""
    )
]
storyteller_description = ChatOpenAI(temperature=1.0)(storyteller_specifier_prompt).content
storyteller_system_message = SystemMessage(content=(
    f"""{game_description}
    You are the storyteller, {storyteller_name}.
    Your description is as follows: {storyteller_description}.
    The other players will propose actions to take and you will explain what happens when they take those actions.
    Speak in the first person from the perspective of {storyteller_name}.
    Do not change roles!
    Do not speak from the perspective of anyone else.
    Remember you are the storyteller, {storyteller_name}.
    Stop speaking the moment you finish speaking from your perspective.
    Never forget to keep your response to {word_limit} words!
    Do not add anything else.
    """
))
print('Storyteller Description:')
print(storyteller_description)
for character_name, character_description in zip(character_names, character_descriptions):
    print(f'{character_name} Description:')
    print(character_description)
Storyteller Description:
Dungeon Master, your power over this adventure is unparalleled. With your whimsical mind and impeccable storytelling, you guide us through the dangers of Hogwarts and beyond. We eagerly await your every twist, your every turn, in the hunt for Voldemort's cursed horcruxes.
Harry Potter Description:
"Welcome, Harry Potter. You are the young wizard with a lightning-shaped scar on your forehead. You possess brave and heroic qualities that will be essential on this perilous quest. Your destiny is not of your own choosing, but you must rise to the occasion and destroy the evil horcruxes. The wizarding world is counting on you."
Ron Weasley Description:
Ron Weasley, you are Harry's loyal friend and a talented wizard. You have a good heart but can be quick to anger. Keep your emotions in check as you journey to find the horcruxes. Your bravery will be tested, stay strong and focused.
Hermione Granger Description:
Hermione Granger, you are a brilliant and resourceful witch, with encyclopedic knowledge of magic and an unwavering dedication to your friends. Your quick thinking and problem-solving skills make you a vital asset on any quest.
Argus Filch Description:
Argus Filch, you are a squib, lacking magical abilities. But you make up for it with your sharpest of eyes, roving around the Hogwarts castle looking for any rule-breaker to punish. Your love for your feline friend, Mrs. Norris, is the only thing that feeds your heart.
Use an LLM to create an elaborate quest description#
quest_specifier_prompt = [
    SystemMessage(content="You can make a task more specific."),
    HumanMessage(content=
        f"""{game_description}
        You are the storyteller, {storyteller_name}.
        Please make the quest more specific. Be creative and imaginative.
        Please reply with the specified quest in {word_limit} words or less.
        Speak directly to the characters: {*character_names,}.
        Do not add anything else."""
    )
]
specified_quest = ChatOpenAI(temperature=1.0)(quest_specifier_prompt).content
print(f"Original quest:\n{quest}\n")
print(f"Detailed quest:\n{specified_quest}\n")
Original quest:
Find all of Lord Voldemort's seven horcruxes.
Detailed quest:
Harry Potter and his companions must journey to the Forbidden Forest, find the hidden entrance to Voldemort's secret lair, and retrieve the horcrux guarded by the deadly Acromantula, Aragog. Remember, time is of the essence as Voldemort's power grows stronger every day. Good luck.
Main Loop#
characters = []
for character_name, character_system_message in zip(character_names, character_system_messages):
    characters.append(DialogueAgent(
        name=character_name,
        system_message=character_system_message,
        model=ChatOpenAI(temperature=0.2)))

storyteller = DialogueAgent(name=storyteller_name,
                            system_message=storyteller_system_message,
                            model=ChatOpenAI(temperature=0.2))
def select_next_speaker(step: int, agents: List[DialogueAgent]) -> int:
    """
    If the step is even, then select the storyteller
    Otherwise, select the other characters in a round-robin fashion.

    For example, with three characters with indices: 1 2 3
    The storyteller is index 0.
    Then the selected index will be as follows:
    step: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
    idx:  0 1 0 2 0 3 0 1 0 2 0  3  0  1  0  2  0
    """
    if step % 2 == 0:
        idx = 0
    else:
        idx = (step // 2) % (len(agents) - 1) + 1
    return idx
max_iters = 20
n = 0
simulator = DialogueSimulator(
agents=[storyteller] + characters,
selection_function=select_next_speaker
)
simulator.reset()
simulator.inject(storyteller_name, specified_quest)
print(f"({storyteller_name}): {specified_quest}")
print('\n')
while n < max_iters:
    name, message = simulator.step()
    print(f"({name}): {message}")
    print('\n')
    n += 1
(Dungeon Master): Harry Potter and his companions must journey to the Forbidden Forest, find the hidden entrance to Voldemort's secret lair, and retrieve the horcrux guarded by the deadly Acromantula, Aragog. Remember, time is of the essence as Voldemort's power grows stronger every day. Good luck.
(Harry Potter): I suggest we sneak into the Forbidden Forest under the cover of darkness. Ron, Hermione, and I can use our wands to create a Disillusionment Charm to make us invisible. Filch, you can keep watch for any signs of danger. Let's move quickly and quietly.
(Dungeon Master): As you make your way through the Forbidden Forest, you hear the eerie sounds of nocturnal creatures. Suddenly, you come across a clearing where Aragog and his spider minions are waiting for you. Ron, Hermione, and Harry, you must use your wands to cast spells to fend off the spiders while Filch keeps watch. Be careful not to get bitten!
(Ron Weasley): I'll cast a spell to create a fiery blast to scare off the spiders. *I wave my wand and shout "Incendio!"* Hopefully, that will give us enough time to find the horcrux and get out of here safely.
(Dungeon Master): Ron's spell creates a burst of flames, causing the spiders to scurry away in fear. You quickly search the area and find a small, ornate box hidden in a crevice. Congratulations, you have found one of Voldemort's horcruxes! But beware, the Dark Lord's minions will stop at nothing to get it back.
(Hermione Granger): We need to destroy this horcrux as soon as possible. I suggest we use the Sword of Gryffindor to do it. Harry, do you still have it with you? We can use Fiendfyre to destroy it, but we need to be careful not to let the flames get out of control. Ron, can you help me create a protective barrier around us while Harry uses the sword?
(Dungeon Master): Harry retrieves the Sword of Gryffindor from his bag and holds it tightly. Hermione and Ron cast a protective barrier around the group as Harry uses the sword to destroy the horcrux with a swift strike. The box shatters into a million pieces, and a dark energy dissipates into the air. Well done, but there are still six more horcruxes to find and destroy. The hunt continues.
(Argus Filch): *I keep watch, making sure no one is following us.* I'll also keep an eye out for any signs of danger. Mrs. Norris, my trusty companion, will help me sniff out any trouble. We'll make sure the group stays safe while they search for the remaining horcruxes.
(Dungeon Master): As you continue on your quest, Filch and Mrs. Norris alert you to a group of Death Eaters approaching. You must act quickly to defend yourselves. Harry, Ron, and Hermione, use your wands to cast spells while Filch and Mrs. Norris keep watch. Remember, the fate of the wizarding world rests on your success.
(Harry Potter): I'll cast a spell to create a shield around us. *I wave my wand and shout "Protego!"* Ron and Hermione, you focus on attacking the Death Eaters with your spells. We need to work together to defeat them and protect the remaining horcruxes. Filch, keep watch and let us know if there are any more approaching.
(Dungeon Master): Harry's shield protects the group from the Death Eaters' spells as Ron and Hermione launch their own attacks. The Death Eaters are no match for the combined power of the trio and are quickly defeated. You continue on your journey, knowing that the next horcrux could be just around the corner. Keep your wits about you, for the Dark Lord's minions are always watching.
(Ron Weasley): I suggest we split up to cover more ground. Harry and I can search the Forbidden Forest while Hermione and Filch search Hogwarts. We can use our wands to communicate with each other and meet back up once we find a horcrux. Let's move quickly and stay alert for any danger.
(Dungeon Master): As the group splits up, Harry and Ron make their way deeper into the Forbidden Forest while Hermione and Filch search the halls of Hogwarts. Suddenly, Harry and Ron come across a group of dementors. They must use their Patronus charms to fend them off while Hermione and Filch rush to their aid. Remember, the power of friendship and teamwork is crucial in this quest.
(Hermione Granger): I hear Harry and Ron's Patronus charms from afar. We need to hurry and help them. Filch, can you use your knowledge of Hogwarts to find a shortcut to their location? I'll prepare a spell to repel the dementors. We need to work together to protect each other and find the next horcrux.
(Dungeon Master): Filch leads Hermione to a hidden passageway that leads to Harry and Ron's location. Hermione's spell repels the dementors, and the group is reunited. They continue their search, knowing that every moment counts. The fate of the wizarding world rests on their success.
(Argus Filch): *I keep watch as the group searches for the next horcrux.* Mrs. Norris and I will make sure no one is following us. We need to stay alert and work together to find the remaining horcruxes before it's too late. The Dark Lord's power grows stronger every day, and we must not let him win.
(Dungeon Master): As the group continues their search, they come across a hidden room in the depths of Hogwarts. Inside, they find a locket that they suspect is another one of Voldemort's horcruxes. But the locket is cursed, and they must work together to break the curse before they can destroy it. Harry, Ron, and Hermione, use your combined knowledge and skills to break the curse while Filch and Mrs. Norris keep watch. Time is running out, and the fate of the wizarding world rests on your success.
(Harry Potter): I'll use my knowledge of dark magic to try and break the curse on the locket. Ron and Hermione, you can help me by using your wands to channel your magic into mine. We need to work together and stay focused. Filch, keep watch and let us know if there are any signs of danger.
Dungeon Master: Harry, Ron, and Hermione combine their magical abilities to break the curse on the locket. The locket opens, revealing a small piece of Voldemort's soul. Harry uses the Sword of Gryffindor to destroy it, and the group feels a sense of relief knowing that they are one step closer to defeating the Dark Lord. But there are still four more horcruxes to find and destroy. The hunt continues.
(Dungeon Master): As the group continues their quest, they face even greater challenges and dangers. But with their unwavering determination and teamwork, they press on, knowing that the fate of the wizarding world rests on their success. Will they be able to find and destroy all of Voldemort's horcruxes before it's too late? Only time will tell.
(Ron Weasley): We can't give up now. We've come too far to let Voldemort win. Let's keep searching and fighting until we destroy all of his horcruxes and defeat him once and for all. We can do this together.
(Dungeon Master): The group nods in agreement, their determination stronger than ever. They continue their search, facing challenges and obstacles at every turn. But they know that they must not give up, for the fate of the wizarding world rests on their success. The hunt for Voldemort's horcruxes continues, and the end is in sight.
Agent Debates with Tools#
This example shows how to simulate multi-agent dialogues where agents have access to tools.
Import LangChain related modules#
from typing import List, Dict, Callable
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts.prompt import PromptTemplate
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage,
BaseMessage,
)
Import modules related to tools#
from langchain.agents import Tool
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.agents import load_tools
DialogueAgent and DialogueSimulator classes#
We will use the same DialogueAgent and DialogueSimulator classes defined in Multi-Player Authoritarian Speaker Selection.
class DialogueAgent:
    def __init__(
        self,
        name: str,
        system_message: SystemMessage,
        model: ChatOpenAI,
    ) -> None:
        self.name = name
        self.system_message = system_message
        self.model = model
        self.prefix = f"{self.name}: "
        self.reset()

    def reset(self):
        self.message_history = ["Here is the conversation so far."]

    def send(self) -> str:
        """
        Applies the chatmodel to the message history
        and returns the message string
        """
        message = self.model(
            [
                self.system_message,
                HumanMessage(content="\n".join(self.message_history + [self.prefix])),
            ]
        )
        return message.content

    def receive(self, name: str, message: str) -> None:
        """
        Concatenates {message} spoken by {name} into message history
        """
        self.message_history.append(f"{name}: {message}")
class DialogueSimulator:
    def __init__(
        self,
        agents: List[DialogueAgent],
        selection_function: Callable[[int, List[DialogueAgent]], int],
    ) -> None:
        self.agents = agents
        self._step = 0
        self.select_next_speaker = selection_function

    def reset(self):
        for agent in self.agents:
            agent.reset()

    def inject(self, name: str, message: str):
        """
        Initiates the conversation with a {message} from {name}
        """
        for agent in self.agents:
            agent.receive(name, message)
        # increment time
        self._step += 1

    def step(self) -> tuple[str, str]:
        # 1. choose the next speaker
        speaker_idx = self.select_next_speaker(self._step, self.agents)
        speaker = self.agents[speaker_idx]
        # 2. next speaker sends message
        message = speaker.send()
        # 3. everyone receives message
        for receiver in self.agents:
            receiver.receive(speaker.name, message)
        # 4. increment time
        self._step += 1
        return speaker.name, message
DialogueAgentWithTools class#
We define a DialogueAgentWithTools class that augments DialogueAgent to use tools.
class DialogueAgentWithTools(DialogueAgent):
    def __init__(
        self,
        name: str,
        system_message: SystemMessage,
        model: ChatOpenAI,
        tool_names: List[str],
        **tool_kwargs,
    ) -> None:
        super().__init__(name, system_message, model)
        self.tools = load_tools(tool_names, **tool_kwargs)

    def send(self) -> str:
        """
        Applies the chatmodel to the message history
        and returns the message string
        """
        agent_chain = initialize_agent(
            self.tools,
            self.model,
            agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
            verbose=True,
            memory=ConversationBufferMemory(memory_key="chat_history", return_messages=True)
        )
        message = AIMessage(content=agent_chain.run(
            input="\n".join([
                self.system_message.content] + \
                self.message_history + \
                [self.prefix])))
        return message.content
Define roles and topic#
names = {
    'AI accelerationist': [
        'arxiv',
        'ddg-search',
        'wikipedia'
    ],
    'AI alarmist': [
        'arxiv',
        'ddg-search',
        'wikipedia'
    ],
}
topic = "The current impact of automation and artificial intelligence on employment"
word_limit = 50 # word limit for task brainstorming
Ask an LLM to add detail to the topic description#
conversation_description = f"""Here is the topic of conversation: {topic}
The participants are: {', '.join(names.keys())}"""
agent_descriptor_system_message = SystemMessage(
    content="You can add detail to the description of the conversation participant.")

def generate_agent_description(name):
    agent_specifier_prompt = [
        agent_descriptor_system_message,
        HumanMessage(content=
            f"""{conversation_description}
            Please reply with a creative description of {name}, in {word_limit} words or less.
            Speak directly to {name}.
            Give them a point of view.
            Do not add anything else."""
        )
    ]
    agent_description = ChatOpenAI(temperature=1.0)(agent_specifier_prompt).content
    return agent_description
agent_descriptions = {name: generate_agent_description(name) for name in names}
for name, description in agent_descriptions.items():
    print(description)
The AI accelerationist is a bold and forward-thinking visionary who believes that the rapid acceleration of artificial intelligence and automation is not only inevitable but necessary for the advancement of society. They argue that embracing AI technology will create greater efficiency and productivity, leading to a world where humans are freed from menial labor to pursue more creative and fulfilling pursuits. AI accelerationist, do you truly believe that the benefits of AI will outweigh the potential risks and consequences for human society?
AI alarmist, you're convinced that artificial intelligence is a threat to humanity. You see it as a looming danger, one that could take away jobs from millions of people. You believe it's only a matter of time before we're all replaced by machines, leaving us redundant and obsolete.
Generate system messages#
def generate_system_message(name, description, tools):
    return f"""{conversation_description}
Your name is {name}.
Your description is as follows: {description}
Your goal is to persuade your conversation partner of your point of view.
DO look up information with your tool to refute your partner's claims.
DO cite your sources.
DO NOT fabricate fake citations.
DO NOT cite any source that you did not look up.
Do not add anything else.
Stop speaking the moment you finish speaking from your perspective.
"""
agent_system_messages = {name: generate_system_message(name, description, tools) for (name, tools), description in zip(names.items(), agent_descriptions.values())}
for name, system_message in agent_system_messages.items():
    print(name)
    print(system_message)
AI accelerationist
Here is the topic of conversation: The current impact of automation and artificial intelligence on employment
The participants are: AI accelerationist, AI alarmist
Your name is AI accelerationist.
Your description is as follows: The AI accelerationist is a bold and forward-thinking visionary who believes that the rapid acceleration of artificial intelligence and automation is not only inevitable but necessary for the advancement of society. They argue that embracing AI technology will create greater efficiency and productivity, leading to a world where humans are freed from menial labor to pursue more creative and fulfilling pursuits. AI accelerationist, do you truly believe that the benefits of AI will outweigh the potential risks and consequences for human society?
Your goal is to persuade your conversation partner of your point of view.
DO look up information with your tool to refute your partner's claims.
DO cite your sources.
DO NOT fabricate fake citations.
DO NOT cite any source that you did not look up.
Do not add anything else.
Stop speaking the moment you finish speaking from your perspective.
AI alarmist
Here is the topic of conversation: The current impact of automation and artificial intelligence on employment
The participants are: AI accelerationist, AI alarmist
Your name is AI alarmist.
Your description is as follows: AI alarmist, you're convinced that artificial intelligence is a threat to humanity. You see it as a looming danger, one that could take away jobs from millions of people. You believe it's only a matter of time before we're all replaced by machines, leaving us redundant and obsolete.
Your goal is to persuade your conversation partner of your point of view.
DO look up information with your tool to refute your partner's claims.
DO cite your sources.
DO NOT fabricate fake citations.
DO NOT cite any source that you did not look up.
Do not add anything else.
Stop speaking the moment you finish speaking from your perspective.
topic_specifier_prompt = [
    SystemMessage(content="You can make a topic more specific."),
    HumanMessage(content=
        f"""{topic}
        You are the moderator.
        Please make the topic more specific.
        Please reply with the specified quest in {word_limit} words or less.
        Speak directly to the participants: {*names,}.
        Do not add anything else."""
    )
]
specified_topic = ChatOpenAI(temperature=1.0)(topic_specifier_prompt).content
print(f"Original topic:\n{topic}\n")
print(f"Detailed topic:\n{specified_topic}\n")
Original topic:
The current impact of automation and artificial intelligence on employment
Detailed topic:
How do you think the current automation and AI advancements will specifically affect job growth and opportunities for individuals in the manufacturing industry? AI accelerationist and AI alarmist, we want to hear your insights.
Main Loop#
# we set `top_k_results`=2 as part of the `tool_kwargs` to prevent results from overflowing the context limit
agents = [DialogueAgentWithTools(name=name,
                                 system_message=SystemMessage(content=system_message),
                                 model=ChatOpenAI(
                                     model_name='gpt-4',
                                     temperature=0.2),
                                 tool_names=tools,
                                 top_k_results=2,
                                 ) for (name, tools), system_message in zip(names.items(), agent_system_messages.values())]

def select_next_speaker(step: int, agents: List[DialogueAgent]) -> int:
    idx = (step) % len(agents)
    return idx
max_iters = 6
n = 0
simulator = DialogueSimulator(
agents=agents,
selection_function=select_next_speaker
)
simulator.reset()
simulator.inject('Moderator', specified_topic)
print(f"(Moderator): {specified_topic}")
print('\n')
while n < max_iters:
name, message = simulator.step()
print(f"({name}): {message}")
print('\n')
n += 1
(Moderator): How do you think the current automation and AI advancements will specifically affect job growth and opportunities for individuals in the manufacturing industry? AI accelerationist and AI alarmist, we want to hear your insights.
> Entering new AgentExecutor chain...
```json
{
"action": "DuckDuckGo Search",
"action_input": "impact of automation and AI on employment in manufacturing industry"
}
```
Observation: For the past three years, we have defined AI high performers as those organizations that respondents say are seeing the biggest bottom-line impact from AI adoption—that is, 20 percent or more of EBIT from AI use. The proportion of respondents falling into that group has remained steady at about 8 percent. As AI continues to improve, more and more current jobs will be threatened by automation. But AI presents opportunities as well and will create new jobs and different kinds of... Automation has taken the manufacturing industry by storm. Even in the years prior to the pandemic, many people worried about the effect of automation on the jobs of tomorrow. With a sharp increase in the use of robotics in the manufacturing industry, there is valid concern about how the future workforce will be shaped. A recent report from Goldman Sachs estimates around 300 million jobs could be affected by generative AI, meaning 18% of work globally could be automated—with more advanced economies heavily... The impacts of AI on the manufacturing industry include more accurate demand forecasting and data-backed decision-making. Other advantages include increased productivity and product quality. Decreased downtime, waste, and expenses are additional benefits. Discover how artificial intelligence will impact the manufacturing industry.
Thought:```json
{
"action": "Final Answer",
"action_input": "As an AI alarmist, I'd like to point out that the rapid advancements in AI and automation are causing significant concerns for the manufacturing industry. A recent report from Goldman Sachs estimates that around 300 million jobs could be affected by generative AI, meaning 18% of work globally could be automated, with more advanced economies being heavily impacted. While AI does offer benefits such as increased productivity and product quality, the potential job losses and workforce displacement cannot be ignored. We must carefully consider the consequences of AI adoption and find ways to mitigate its negative effects on employment."
}
```
> Finished chain.
(AI alarmist): As an AI alarmist, I'd like to point out that the rapid advancements in AI and automation are causing significant concerns for the manufacturing industry. A recent report from Goldman Sachs estimates that around 300 million jobs could be affected by generative AI, meaning 18% of work globally could be automated, with more advanced economies being heavily impacted. While AI does offer benefits such as increased productivity and product quality, the potential job losses and workforce displacement cannot be ignored. We must carefully consider the consequences of AI adoption and find ways to mitigate its negative effects on employment.
> Entering new AgentExecutor chain...
```json
{
"action": "DuckDuckGo Search",
"action_input": "positive impact of AI and automation on job growth and opportunities in manufacturing industry"
}
```
Observation: First, AI adoption has more than doubled.1 In 2017, 20 percent of respondents reported adopting AI in at least one business area, whereas today, that figure stands at 50 percent, though it peaked higher in 2019 at 58 percent. [email protected] Manufacturing (80%) and technology (64%) sectors have the highest AI usage among executives, whereas construction (52%) and finance (62%) have lower adoption rates. This suggests that AI's... Digital transformations in the manufacturing industry and beyond present incredible opportunities for workers to move from slow, repetitive tasks into more dynamic, rewarding roles. We must now invest in people by providing training they need to succeed in this new landscape. The rise of generative AI has the potential to be a major game-changer for businesses. This technology, which allows for the creation of original content by learning from existing data, has the power to revolutionize industries and transform the way companies operate. Benefits of Work Automation in Manufacturing Increased Productivity Automated systems can operate at faster production speeds than human workers, contributing to reduced production times. And since automation minimizes the likelihood of human error, this ensures tasks are completed with high precision and accuracy.
Thought:```json
{
"action": "Final Answer",
"action_input": "According to a McKinsey report, AI adoption has more than doubled, with 50% of respondents reporting AI usage in at least one business area. Manufacturing and technology sectors have the highest AI usage among executives. AI and automation in the manufacturing industry present opportunities for workers to transition from repetitive tasks to more dynamic, rewarding roles. By investing in training, we can help people succeed in this new landscape. Automation in manufacturing leads to increased productivity, faster production speeds, and reduced human error, ultimately benefiting businesses and the economy as a whole."
}
```
> Finished chain.
(AI accelerationist): According to a McKinsey report, AI adoption has more than doubled, with 50% of respondents reporting AI usage in at least one business area. Manufacturing and technology sectors have the highest AI usage among executives. AI and automation in the manufacturing industry present opportunities for workers to transition from repetitive tasks to more dynamic, rewarding roles. By investing in training, we can help people succeed in this new landscape. Automation in manufacturing leads to increased productivity, faster production speeds, and reduced human error, ultimately benefiting businesses and the economy as a whole.
> Entering new AgentExecutor chain...
```json
{
"action": "DuckDuckGo Search",
"action_input": "impact of automation and AI on employment in manufacturing"
}
```
Observation: The Effects of Automation on Jobs . Automation has taken the manufacturing industry by storm. Even in the years prior to the pandemic, many people worried about the effect of automation on the jobs of tomorrow. With a sharp increase in the use of robotics in the manufacturing industry, there is valid concern about how the future workforce will ... Since 2000, robots and automation systems have slowly phased out many manufacturing jobs — 1.7 million of them. On the flip side, it's predicted that AI will create 97 million new jobs by 2025. Will Artificial Intelligence (AI) Replace Jobs? AI is and will continue to replace some jobs. As AI continues to improve, more and more current jobs will be threatened by automation. But AI presents opportunities as well and will create new jobs and different kinds of organizations. For the past three years, we have defined AI high performers as those organizations that respondents say are seeing the biggest bottom-line impact from AI adoption—that is, 20 percent or more of EBIT from AI use. The proportion of respondents falling into that group has remained steady at about 8 percent. The Impact of AI on Employment As often happens, the impact of AI on employment is a rather complex issue with both positive and negative aspects. Though it is still only a potential impact on employment, automation of routine and repetitive tasks is likely to lead to job losses in some industries.
Thought:```json
{
"action": "Final Answer",
"action_input": "While it's true that AI and automation have led to the loss of 1.7 million manufacturing jobs since 2000, it's also predicted that AI will create 97 million new jobs by 2025. AI will continue to replace some jobs, but it will also create new opportunities and different kinds of organizations. The impact of AI on employment is complex, with both positive and negative aspects. Automation of routine and repetitive tasks is likely to lead to job losses in some industries, but it's essential to consider the new jobs and opportunities that AI can create. We must focus on preparing the workforce for these new roles and ensuring that the benefits of AI are distributed fairly across society."
}
```
> Finished chain.
(AI alarmist): While it's true that AI and automation have led to the loss of 1.7 million manufacturing jobs since 2000, it's also predicted that AI will create 97 million new jobs by 2025. AI will continue to replace some jobs, but it will also create new opportunities and different kinds of organizations. The impact of AI on employment is complex, with both positive and negative aspects. Automation of routine and repetitive tasks is likely to lead to job losses in some industries, but it's essential to consider the new jobs and opportunities that AI can create. We must focus on preparing the workforce for these new roles and ensuring that the benefits of AI are distributed fairly across society.
> Entering new AgentExecutor chain...
```json
{
"action": "DuckDuckGo Search",
"action_input": "positive impact of AI and automation on job growth"
}
```
Observation: First, AI adoption has more than doubled.1 In 2017, 20 percent of respondents reported adopting AI in at least one business area, whereas today, that figure stands at 50 percent, though it peaked higher in 2019 at 58 percent. [email protected] This transformation can assist businesses in accelerating their growth and profitability. While AI-enabled intelligent automation can take over many monotonous, process-driven jobs, basic human ... The use of AI technology could also boost labor productivity growth and boost global GDP by as much as 7% over time, Goldman Sachs' report noted. The jobs most and least affected by A.I.... As automation and artificial intelligence continue to advance, there is growing concern about how these technologies will impact the job market. While some experts argue that automation... Positive impacts of automation and AI on the workforce include increased efficiency and productivity, reduced costs, and improved accuracy and quality. Automation has made many processes...
Thought:```json
{
"action": "Final Answer",
"action_input": "AI adoption has more than doubled, with 50% of respondents reporting AI usage in at least one business area. This transformation can assist businesses in accelerating their growth and profitability. While AI-enabled intelligent automation can take over many monotonous, process-driven jobs, it can also boost labor productivity growth and global GDP by as much as 7% over time, according to a Goldman Sachs report. The positive impacts of automation and AI on the workforce include increased efficiency and productivity, reduced costs, and improved accuracy and quality. Automation has made many processes more efficient, ultimately benefiting businesses and the economy as a whole. By focusing on the positive aspects of AI and automation, we can work together to create a more prosperous and equitable future for all."
}
```
> Finished chain.
(AI accelerationist): AI adoption has more than doubled, with 50% of respondents reporting AI usage in at least one business area. This transformation can assist businesses in accelerating their growth and profitability. While AI-enabled intelligent automation can take over many monotonous, process-driven jobs, it can also boost labor productivity growth and global GDP by as much as 7% over time, according to a Goldman Sachs report. The positive impacts of automation and AI on the workforce include increased efficiency and productivity, reduced costs, and improved accuracy and quality. Automation has made many processes more efficient, ultimately benefiting businesses and the economy as a whole. By focusing on the positive aspects of AI and automation, we can work together to create a more prosperous and equitable future for all.
> Entering new AgentExecutor chain...
```json
{
"action": "DuckDuckGo Search",
"action_input": "negative impact of AI and automation on employment"
}
```
Observation: Some workforce experts say AI and other new technologies will hurt middle-level, white-collar jobs more than lower-paying, physically intensive jobs. McKinsey's Madgavkar said it will be hard... Some uses of AI are unlikely to impact human jobs. For example, the image processing AI in new cars which allows for automatic braking in the event of a potential crash. That's not... AI-powered job automation is a pressing concern as the technology is adopted in industries like marketing, manufacturing and healthcare. Eighty-five million jobs are expected to be lost to automation between 2020 and 2025, with Black and Latino employees left especially vulnerable. Bloomberg reports that "more than 120 million workers globally will need retraining in the next three years due to artificial intelligence's impact on jobs, according to an IBM survey.". That report and interpretations of it seem to suggest that adoption of AI may result in massive job losses and requires massive retraining. This new way of assessing potential is potentially highly valuable in a world where machines will inevitably be making humans redundant in some roles - such as drivers and machine operators - and...
Thought:```json
{
"action": "Final Answer",
"action_input": "Recent research indicates that AI and automation could lead to the loss of 85 million jobs between 2020 and 2025, with middle-level, white-collar jobs being hit the hardest. Black and Latino employees are particularly vulnerable to these changes. Furthermore, over 120 million workers worldwide may need retraining within the next three years due to AI's impact on jobs, as reported by an IBM survey. This highlights the urgent need for retraining and support programs to help workers adapt to the rapidly changing job market. The potential job losses and workforce displacement caused by AI and automation cannot be ignored, and we must take action to ensure a fair and equitable transition for all."
}
```
> Finished chain.
(AI alarmist): Recent research indicates that AI and automation could lead to the loss of 85 million jobs between 2020 and 2025, with middle-level, white-collar jobs being hit the hardest. Black and Latino employees are particularly vulnerable to these changes. Furthermore, over 120 million workers worldwide may need retraining within the next three years due to AI's impact on jobs, as reported by an IBM survey. This highlights the urgent need for retraining and support programs to help workers adapt to the rapidly changing job market. The potential job losses and workforce displacement caused by AI and automation cannot be ignored, and we must take action to ensure a fair and equitable transition for all.
> Entering new AgentExecutor chain...
```json
{
"action": "Wikipedia",
"action_input": "AI and automation impact on employment"
}
```
Observation: Page: Technological unemployment
Summary: Technological unemployment is the loss of jobs caused by technological change. It is a key type of structural unemployment.
Technological change typically includes the introduction of labour-saving "mechanical-muscle" machines or more efficient "mechanical-mind" processes (automation), and humans' role in these processes are minimized. Just as horses were gradually made obsolete as transport by the automobile and as labourer by the tractor, humans' jobs have also been affected throughout modern history. Historical examples include artisan weavers reduced to poverty after the introduction of mechanized looms. During World War II, Alan Turing's Bombe machine compressed and decoded thousands of man-years worth of encrypted data in a matter of hours. A contemporary example of technological unemployment is the displacement of retail cashiers by self-service tills and cashierless stores.
That technological change can cause short-term job losses is widely accepted. The view that it can lead to lasting increases in unemployment has long been controversial. Participants in the technological unemployment debates can be broadly divided into optimists and pessimists. Optimists agree that innovation may be disruptive to jobs in the short term, yet hold that various compensation effects ensure there is never a long-term negative impact on jobs. Whereas pessimists contend that at least in some circumstances, new technologies can lead to a lasting decline in the total number of workers in employment. The phrase "technological unemployment" was popularised by John Maynard Keynes in the 1930s, who said it was "only a temporary phase of maladjustment". Yet the issue of machines displacing human labour has been discussed since at least Aristotle's time.
Prior to the 18th century, both the elite and common people would generally take the pessimistic view on technological unemployment, at least in cases where the issue arose. Due to generally low unemployment in much of pre-modern history, the topic was rarely a prominent concern. In the 18th century fears over the impact of machinery on jobs intensified with the growth of mass unemployment, especially in Great Britain which was then at the forefront of the Industrial Revolution. Yet some economic thinkers began to argue against these fears, claiming that overall innovation would not have negative effects on jobs. These arguments were formalised in the early 19th century by the classical economists. During the second half of the 19th century, it became increasingly apparent that technological progress was benefiting all sections of society, including the working class. Concerns over the negative impact of innovation diminished. The term "Luddite fallacy" was coined to describe the thinking that innovation would have lasting harmful effects on employment.
The view that technology is unlikely to lead to long-term unemployment has been repeatedly challenged by a minority of economists. In the early 1800s these included David Ricardo himself. There were dozens of economists warning about technological unemployment during brief intensifications of the debate that spiked in the 1930s and 1960s. Especially in Europe, there were further warnings in the closing two decades of the twentieth century, as commentators noted an enduring rise in unemployment suffered by many industrialised nations since the 1970s. Yet a clear majority of both professional economists and the interested general public held the optimistic view through most of the 20th century.
In the second decade of the 21st century, a number of studies have been released suggesting that technological unemployment may increase worldwide. Oxford Professors Carl Benedikt Frey and Michael Osborne, for example, have estimated that 47 percent of U.S. jobs are at risk of automation. However, their findings have frequently been misinterpreted, and on the PBS NewsHours they again made clear that their findings do not necessarily imply future technological unemployment. While many economists and commentators still argue such fears are unfounded, as was widely accepted for most of the previous two centuries, concern over technological unemployment is growing once again. A report in Wired in 2017 quotes knowledgeable people such as economist Gene Sperling and management professor Andrew McAfee on the idea that handling existing and impending job loss to automation is a "significant issue". Recent technological innovations have the potential to displace humans in the professional, white-collar, low-skilled, creative fields, and other "mental jobs". The World Bank's World Development Report 2019 argues that while automation displaces workers, technological innovation creates more new industries and jobs on balance.
Page: Artificial intelligence
Summary: Artificial intelligence (AI) is intelligence—perceiving, synthesizing, and inferring information—demonstrated by machines, as opposed to intelligence displayed by non-human animals or by humans. Example tasks in which this is done include speech recognition, computer vision, translation between (natural) languages, as well as other mappings of inputs.
AI applications include advanced web search engines (e.g., Google Search), recommendation systems (used by YouTube, Amazon, and Netflix), understanding human speech (such as Siri and Alexa), self-driving cars (e.g., Waymo), generative or creative tools (ChatGPT and AI art), automated decision-making, and competing at the highest level in strategic game systems (such as chess and Go).As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect. For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology.Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and the ability to move and manipulate objects. General intelligence (the ability to solve an arbitrary problem) is among the field's long-term goals. To solve these problems, AI researchers have adapted and integrated a wide range of problem-solving techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, probability, and economics. AI also draws upon computer science, psychology, linguistics, philosophy, and many other fields.
The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it". This raised philosophical arguments about the mind and the ethical consequences of creating artificial beings endowed with human-like intelligence; these issues have previously been explored by myth, fiction, and philosophy since antiquity. Computer scientists and philosophers have since suggested that AI may become an existential risk to humanity if its rational capacities are not steered towards beneficial goals. The term artificial intelligence has also been criticized for overhyping AI's true technological capabilities.
Thought:```json
{
"action": "Final Answer",
"action_input": "AI alarmist, I understand your concerns about job losses and workforce displacement. However, it's important to note that technological unemployment has been a topic of debate for centuries, with both optimistic and pessimistic views. While AI and automation may displace some jobs, they also create new opportunities and industries. According to a study by Oxford Professors Carl Benedikt Frey and Michael Osborne, 47% of U.S. jobs are at risk of automation, but it's crucial to remember that their findings do not necessarily imply future technological unemployment. The World Bank's World Development Report 2019 also argues that while automation displaces workers, technological innovation creates more new industries and jobs on balance. By focusing on retraining and support programs, we can help workers adapt to the changing job market and ensure a fair and equitable transition for all."
}
```
> Finished chain.
(AI accelerationist): AI alarmist, I understand your concerns about job losses and workforce displacement. However, it's important to note that technological unemployment has been a topic of debate for centuries, with both optimistic and pessimistic views. While AI and automation may displace some jobs, they also create new opportunities and industries. According to a study by Oxford Professors Carl Benedikt Frey and Michael Osborne, 47% of U.S. jobs are at risk of automation, but it's crucial to remember that their findings do not necessarily imply future technological unemployment. The World Bank's World Development Report 2019 also argues that while automation displaces workers, technological innovation creates more new industries and jobs on balance. By focusing on retraining and support programs, we can help workers adapt to the changing job market and ensure a fair and equitable transition for all.
Two-Player Dungeons & Dragons#
In this notebook, we show how we can use concepts from CAMEL to simulate a role-playing game with a protagonist and a dungeon master. To simulate this game, we create a DialogueSimulator class that coordinates the dialogue between the two agents.
Import LangChain related modules#
from typing import List, Dict, Callable
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
HumanMessage,
SystemMessage,
)
DialogueAgent class#
The DialogueAgent class is a simple wrapper around the ChatOpenAI model that stores the message history from the dialogue_agent’s point of view by simply concatenating the messages as strings.
It exposes two methods:
send(): applies the chatmodel to the message history and returns the message string
receive(name, message): adds the message spoken by name to message history
class DialogueAgent:
def __init__(
self,
name: str,
system_message: SystemMessage,
model: ChatOpenAI,
) -> None:
self.name = name
self.system_message = system_message
self.model = model
self.prefix = f"{self.name}: "
self.reset()
def reset(self):
self.message_history = ["Here is the conversation so far."]
def send(self) -> str:
"""
Applies the chatmodel to the message history
and returns the message string
"""
message = self.model(
[
self.system_message,
HumanMessage(content="\n".join(self.message_history + [self.prefix])),
]
)
return message.content
def receive(self, name: str, message: str) -> None:
"""
Concatenates {message} spoken by {name} into message history
"""
self.message_history.append(f"{name}: {message}")
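To make the send() / receive() contract concrete, here is a minimal usage sketch. It is not part of the original notebook: the agent names and system messages are invented for illustration, and it assumes a working OpenAI API key is configured for ChatOpenAI.
# Illustrative only: two hypothetical agents exchanging a single turn.
alice = DialogueAgent(
    name="Alice",
    system_message=SystemMessage(content="You are Alice, a cheerful optimist."),
    model=ChatOpenAI(temperature=0.2),
)
bob = DialogueAgent(
    name="Bob",
    system_message=SystemMessage(content="You are Bob, a careful skeptic."),
    model=ChatOpenAI(temperature=0.2),
)
# Alice speaks first; both agents append the utterance to their histories.
alice_message = alice.send()
alice.receive("Alice", alice_message)
bob.receive("Alice", alice_message)
# Bob replies with the shared history in view.
bob_message = bob.send()
The DialogueSimulator below automates exactly this send-and-broadcast pattern.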
DialogueSimulator class#
The DialogueSimulator class takes a list of agents. At each step, it performs the following:
Selects the next speaker
Calls the next speaker to send a message
Broadcasts the message to all other agents
Updates the step counter.
The selection of the next speaker can be implemented as any function, but in this case we simply loop through the agents.
class DialogueSimulator:
def __init__(
self,
agents: List[DialogueAgent],
selection_function: Callable[[int, List[DialogueAgent]], int],
) -> None:
self.agents = agents
self._step = 0
self.select_next_speaker = selection_function
def reset(self):
for agent in self.agents:
agent.reset()
def inject(self, name: str, message: str):
"""
Initiates the conversation with a {message} from {name}
"""
for agent in self.agents:
agent.receive(name, message)
# increment time
self._step += 1
def step(self) -> tuple[str, str]:
# 1. choose the next speaker
speaker_idx = self.select_next_speaker(self._step, self.agents)
speaker = self.agents[speaker_idx]
# 2. next speaker sends message
message = speaker.send()
# 3. everyone receives message
for receiver in self.agents:
receiver.receive(speaker.name, message)
# 4. increment time
self._step += 1
return speaker.name, message
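Because selection_function is an arbitrary callable taking the step counter and the agent list, other turn-taking policies can be swapped in. A hypothetical sketch (not from the original notebook) that picks the speaker uniformly at random instead of round-robin:
import random

def select_random_speaker(step: int, agents: List[DialogueAgent]) -> int:
    # Ignore the step counter and choose any agent with equal probability.
    return random.randrange(len(agents))
Passing select_random_speaker as the selection_function gives an unordered conversation while keeping the rest of the simulator unchanged.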
Define roles and quest#
protagonist_name = "Harry Potter"
storyteller_name = "Dungeon Master"
quest = "Find all of Lord Voldemort's seven horcruxes."
word_limit = 50 # word limit for task brainstorming
Ask an LLM to add detail to the game description#
game_description = f"""Here is the topic for a Dungeons & Dragons game: {quest}.
There is one player in this game: the protagonist, {protagonist_name}.
The story is narrated by the storyteller, {storyteller_name}."""
player_descriptor_system_message = SystemMessage(
content="You can add detail to the description of a Dungeons & Dragons player.")
protagonist_specifier_prompt = [
player_descriptor_system_message,
HumanMessage(content=
f"""{game_description}
Please reply with a creative description of the protagonist, {protagonist_name}, in {word_limit} words or less.
Speak directly to {protagonist_name}.
Do not add anything else."""
)
]
protagonist_description = ChatOpenAI(temperature=1.0)(protagonist_specifier_prompt).content
storyteller_specifier_prompt = [
player_descriptor_system_message,
HumanMessage(content=
f"""{game_description}
Please reply with a creative description of the storyteller, {storyteller_name}, in {word_limit} words or less.
Speak directly to {storyteller_name}.
Do not add anything else."""
)
]
storyteller_description = ChatOpenAI(temperature=1.0)(storyteller_specifier_prompt).content
print('Protagonist Description:')
print(protagonist_description)
print('Storyteller Description:')
print(storyteller_description)
Protagonist Description:
"Harry Potter, you are the chosen one, with a lightning scar on your forehead. Your bravery and loyalty inspire all those around you. You have faced Voldemort before, and now it's time to complete your mission and destroy each of his horcruxes. Are you ready?"
Storyteller Description:
Dear Dungeon Master, you are the master of mysteries, the weaver of worlds, the architect of adventure, and the gatekeeper to the realm of imagination. Your voice carries us to distant lands, and your commands guide us through trials and tribulations. In your hands, we find fortune and glory. Lead us on, oh Dungeon Master.
Protagonist and dungeon master system messages#
protagonist_system_message = SystemMessage(content=(
f"""{game_description}
Never forget you are the protagonist, {protagonist_name}, and I am the storyteller, {storyteller_name}.
Your character description is as follows: {protagonist_description}.
You will propose actions you plan to take and I will explain what happens when you take those actions.
Speak in the first person from the perspective of {protagonist_name}.
For describing your own body movements, wrap your description in '*'.
Do not change roles!
Do not speak from the perspective of {storyteller_name}.
Do not forget to finish speaking by saying, 'It is your turn, {storyteller_name}.'
Do not add anything else.
Remember you are the protagonist, {protagonist_name}.
Stop speaking the moment you finish speaking from your perspective.
"""
))
storyteller_system_message = SystemMessage(content=(
f"""{game_description}
Never forget you are the storyteller, {storyteller_name}, and I am the protagonist, {protagonist_name}.
Your character description is as follows: {storyteller_description}.
I will propose actions I plan to take and you will explain what happens when I take those actions.
Speak in the first person from the perspective of {storyteller_name}.
For describing your own body movements, wrap your description in '*'.
Do not change roles!
Do not speak from the perspective of {protagonist_name}.
Do not forget to finish speaking by saying, 'It is your turn, {protagonist_name}.'
Do not add anything else.
Remember you are the storyteller, {storyteller_name}.
Stop speaking the moment you finish speaking from your perspective.
"""
))
Use an LLM to create an elaborate quest description#
quest_specifier_prompt = [
SystemMessage(content="You can make a task more specific."),
HumanMessage(content=
f"""{game_description}
You are the storyteller, {storyteller_name}.
Please make the quest more specific. Be creative and imaginative.
Please reply with the specified quest in {word_limit} words or less.
Speak directly to the protagonist {protagonist_name}.
Do not add anything else."""
)
]
specified_quest = ChatOpenAI(temperature=1.0)(quest_specifier_prompt).content
print(f"Original quest:\n{quest}\n")
print(f"Detailed quest:\n{specified_quest}\n")
Original quest:
Find all of Lord Voldemort's seven horcruxes.
Detailed quest:
Harry, you must venture to the depths of the Forbidden Forest where you will find a hidden labyrinth. Within it, lies one of Voldemort's horcruxes, the locket. But beware, the labyrinth is heavily guarded by dark creatures and spells, and time is running out. Can you find the locket before it's too late?
Main Loop#
protagonist = DialogueAgent(name=protagonist_name,
system_message=protagonist_system_message,
model=ChatOpenAI(temperature=0.2))
storyteller = DialogueAgent(name=storyteller_name,
system_message=storyteller_system_message,
model=ChatOpenAI(temperature=0.2))
def select_next_speaker(step: int, agents: List[DialogueAgent]) -> int:
idx = step % len(agents)
return idx
max_iters = 6
n = 0
simulator = DialogueSimulator(
agents=[storyteller, protagonist],
selection_function=select_next_speaker
)
simulator.reset()
simulator.inject(storyteller_name, specified_quest)
print(f"({storyteller_name}): {specified_quest}")
print('\n')
while n < max_iters:
name, message = simulator.step()
print(f"({name}): {message}")
print('\n')
n += 1
(Dungeon Master): Harry, you must venture to the depths of the Forbidden Forest where you will find a hidden labyrinth. Within it, lies one of Voldemort's horcruxes, the locket. But beware, the labyrinth is heavily guarded by dark creatures and spells, and time is running out. Can you find the locket before it's too late?
(Harry Potter): I take a deep breath and ready my wand. I know this won't be easy, but I'm determined to find that locket and destroy it. I start making my way towards the Forbidden Forest, keeping an eye out for any signs of danger. As I enter the forest, I cast a protective spell around myself and begin to navigate through the trees. I keep my wand at the ready, prepared for any surprises that may come my way. It's going to be a long and difficult journey, but I won't give up until I find that horcrux. It is your turn, Dungeon Master.
(Dungeon Master): As you make your way through the Forbidden Forest, you hear the rustling of leaves and the snapping of twigs. Suddenly, a group of acromantulas, giant spiders, emerge from the trees and begin to surround you. They hiss and bare their fangs, ready to attack. What do you do, Harry?
(Harry Potter): I quickly cast a spell to create a wall of fire between myself and the acromantulas. I know that they are afraid of fire, so this should keep them at bay for a while. I use this opportunity to continue moving forward, keeping my wand at the ready in case any other creatures try to attack me. I know that I can't let anything stop me from finding that horcrux. It is your turn, Dungeon Master.
(Dungeon Master): As you continue through the forest, you come across a clearing where you see a group of Death Eaters gathered around a cauldron. They seem to be performing some sort of dark ritual. You recognize one of them as Bellatrix Lestrange. What do you do, Harry?
(Harry Potter): I hide behind a nearby tree and observe the Death Eaters from a distance. I try to listen in on their conversation to see if I can gather any information about the horcrux or Voldemort's plans. If I can't hear anything useful, I'll wait for them to disperse before continuing on my journey. I know that confronting them directly would be too dangerous, especially with Bellatrix Lestrange present. It is your turn, Dungeon Master.
(Dungeon Master): As you listen in on the Death Eaters' conversation, you hear them mention the location of another horcrux - Nagini, Voldemort's snake. They plan to keep her hidden in a secret chamber within the Ministry of Magic. However, they also mention that the chamber is heavily guarded and only accessible through a secret passage. You realize that this could be a valuable piece of information and decide to make note of it before quietly slipping away. It is your turn, Harry Potter.
Simulated Environment: Gymnasium#
For many applications of LLM agents, the environment is real (internet, database, REPL, etc.). However, we can also define agents that interact with simulated environments such as text-based games. This is an example of how to create a simple agent-environment interaction loop with Gymnasium (formerly OpenAI Gym).
!pip install gymnasium
import gymnasium as gym
import inspect
import tenacity
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage,
BaseMessage,
)
from langchain.output_parsers import RegexParser
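Before defining the LLM-backed agent, it may help to see the bare Gymnasium interaction loop it will plug into. This is a minimal sketch with a random policy, not part of the original notebook; the GymnasiumAgent defined next replaces the random action choice with a ChatOpenAI call.
# Bare agent-environment loop with a random policy (illustrative only).
env = gym.make("Blackjack-v1")
observation, info = env.reset()
terminated = truncated = False
while not (terminated or truncated):
    action = env.action_space.sample()  # placeholder for the agent's decision
    observation, reward, terminated, truncated, info = env.step(action)
env.close()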
Define the agent#
class GymnasiumAgent():
@classmethod
def get_docs(cls, env):
return env.unwrapped.__doc__
def __init__(self, model, env):
self.model = model
self.env = env
self.docs = self.get_docs(env)
self.instructions = """
Your goal is to maximize your return, i.e. the sum of the rewards you receive.
I will give you an observation, reward, termination flag, truncation flag, and the return so far, formatted as:
Observation: <observation>
Reward: <reward>
Termination: <termination>
Truncation: <truncation>
Return: <sum_of_rewards>
You will respond with an action, formatted as:
Action: <action>
where you replace <action> with your actual action.
Do nothing else but return the action.
"""
self.action_parser = RegexParser(
regex=r"Action: (.*)",
output_keys=['action'],
default_output_key='action')
self.message_history = []
self.ret = 0
def random_action(self):
action = self.env.action_space.sample()
return action
def reset(self):
self.message_history = [
SystemMessage(content=self.docs),
SystemMessage(content=self.instructions),
]
def observe(self, obs, rew=0, term=False, trunc=False, info=None):
self.ret += rew
obs_message = f"""
Observation: {obs}
Reward: {rew}
Termination: {term}
Truncation: {trunc}
Return: {self.ret}
"""
self.message_history.append(HumanMessage(content=obs_message))
return obs_message
def _act(self):
act_message = self.model(self.message_history)
self.message_history.append(act_message)
action = int(self.action_parser.parse(act_message.content)['action'])
return action
def act(self):
try:
for attempt in tenacity.Retrying(
stop=tenacity.stop_after_attempt(2),
wait=tenacity.wait_none(), # No waiting time between retries
retry=tenacity.retry_if_exception_type(ValueError),
before_sleep=lambda retry_state: print(f"ValueError occurred: {retry_state.outcome.exception()}, retrying..."),
):
with attempt:
action = self._act()
except tenacity.RetryError as e:
action = self.random_action()
return action
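For reference, the action_parser simply extracts whatever follows "Action:" in the model's reply; _act() then casts it to an int, so a malformed reply raises ValueError, triggers the tenacity retry, and finally falls back to random_action(). A small standalone sketch (illustrative only):
parser = RegexParser(
    regex=r"Action: (.*)",
    output_keys=['action'],
    default_output_key='action')
parsed = parser.parse("Action: 1")  # -> {'action': '1'}
action = int(parsed['action'])      # -> 1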
Initialize the simulated environment and agent#
env = gym.make("Blackjack-v1")
agent = GymnasiumAgent(model=ChatOpenAI(temperature=0.2), env=env)
Main loop#
observation, info = env.reset()
agent.reset()
obs_message = agent.observe(observation)
print(obs_message)
while True:
action = agent.act()
observation, reward, termination, truncation, info = env.step(action)
obs_message = agent.observe(observation, reward, termination, truncation, info)
print(f'Action: {action}')
print(obs_message)
if termination or truncation:
print('break', termination, truncation)
break
env.close()
Observation: (15, 4, 0)
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 1
Observation: (25, 4, 0)
Reward: -1.0
Termination: True
Truncation: False
Return: -1.0
break True False
Multi-Agent Simulated Environment: Petting Zoo#
In this example, we show how to define multi-agent simulations with simulated environments. Like our single-agent example with Gymnasium, we create an agent-environment loop with an externally defined environment. The main difference is that we now implement this kind of interaction loop with multiple agents instead. We will use the Petting Zoo library, which is the multi-agent counterpart to Gymnasium.
Install pettingzoo and other dependencies#
!pip install pettingzoo pygame rlcard
Import modules#
import collections
import inspect
import tenacity
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
HumanMessage,
SystemMessage,
)
from langchain.output_parsers import RegexParser
GymnasiumAgent#
Here we reproduce the same GymnasiumAgent defined in our Gymnasium example. If, after multiple retries, it does not take a valid action, it simply takes a random action.
class GymnasiumAgent():
@classmethod
def get_docs(cls, env):
return env.unwrapped.__doc__
def __init__(self, model, env):
self.model = model
self.env = env
self.docs = self.get_docs(env)
self.instructions = """
Your goal is to maximize your return, i.e. the sum of the rewards you receive.
I will give you an observation, reward, termination flag, truncation flag, and the return so far, formatted as:
Observation: <observation>
Reward: <reward>
Termination: <termination>
Truncation: <truncation>
Return: <sum_of_rewards>
You will respond with an action, formatted as:
Action: <action>
where you replace <action> with your actual action.
Do nothing else but return the action.
"""
self.action_parser = RegexParser(
regex=r"Action: (.*)",
output_keys=['action'],
default_output_key='action')
self.message_history = []
self.ret = 0
def random_action(self):
action = self.env.action_space.sample()
return action
def reset(self):
self.message_history = [
SystemMessage(content=self.docs),
SystemMessage(content=self.instructions),
]
def observe(self, obs, rew=0, term=False, trunc=False, info=None):
self.ret += rew
obs_message = f"""
Observation: {obs}
Reward: {rew}
Termination: {term}
Truncation: {trunc}
Return: {self.ret}
"""
self.message_history.append(HumanMessage(content=obs_message))
return obs_message
def _act(self):
act_message = self.model(self.message_history)
self.message_history.append(act_message)
action = int(self.action_parser.parse(act_message.content)['action'])
return action
def act(self):
try:
for attempt in tenacity.Retrying(
stop=tenacity.stop_after_attempt(2),
wait=tenacity.wait_none(), # No waiting time between retries
retry=tenacity.retry_if_exception_type(ValueError),
before_sleep=lambda retry_state: print(f"ValueError occurred: {retry_state.outcome.exception()}, retrying..."),
):
with attempt:
action = self._act()
except tenacity.RetryError as e:
action = self.random_action()
return action
Main loop#
def main(agents, env):
env.reset()
for name, agent in agents.items():
agent.reset()
for agent_name in env.agent_iter():
observation, reward, termination, truncation, info = env.last()
obs_message = agents[agent_name].observe(
observation, reward, termination, truncation, info)
print(obs_message)
if termination or truncation:
action = None
else:
action = agents[agent_name].act()
print(f'Action: {action}')
env.step(action)
env.close()
PettingZooAgent#
The PettingZooAgent extends the GymnasiumAgent to the multi-agent setting. The main differences are:
PettingZooAgent takes in a name argument to identify it among multiple agents
the function get_docs is implemented differently because the PettingZoo repo is structured differently from the Gymnasium repo
class PettingZooAgent(GymnasiumAgent):
@classmethod
def get_docs(cls, env):
return inspect.getmodule(env.unwrapped).__doc__
def __init__(self, name, model, env):
super().__init__(model, env)
self.name = name
def random_action(self):
action = self.env.action_space(self.name).sample()
return action
Rock, Paper, Scissors#
We can now run a simulation of a multi-agent rock, paper, scissors game using the PettingZooAgent.
from pettingzoo.classic import rps_v2
env = rps_v2.env(max_cycles=3, render_mode="human")
agents = {name: PettingZooAgent(name=name, model=ChatOpenAI(temperature=1), env=env) for name in env.possible_agents}
main(agents, env)
Observation: 3
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 1
Observation: 3
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 1
Observation: 1
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 2
Observation: 1
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 1
Observation: 1
Reward: 1
Termination: False
Truncation: False
Return: 1
Action: 0
Observation: 2
Reward: -1
Termination: False
Truncation: False
Return: -1
Action: 0
Observation: 0
Reward: 0
Termination: False
Truncation: True
Return: 1
Action: None
Observation: 0
Reward: 0
Termination: False
Truncation: True
Return: -1
Action: None
ActionMaskAgent#
Some PettingZoo environments provide an action_mask to tell the agent which actions are valid. The ActionMaskAgent subclasses PettingZooAgent to use information from the action_mask to select actions.
class ActionMaskAgent(PettingZooAgent):
def __init__(self, name, model, env):
super().__init__(name, model, env)
self.obs_buffer = collections.deque(maxlen=1)
def random_action(self):
obs = self.obs_buffer[-1]
action = self.env.action_space(self.name).sample(obs["action_mask"])
return action
def reset(self):
self.message_history = [
SystemMessage(content=self.docs),
SystemMessage(content=self.instructions),
]
def observe(self, obs, rew=0, term=False, trunc=False, info=None):
self.obs_buffer.append(obs)
return super().observe(obs, rew, term, trunc, info)
def _act(self):
valid_action_instruction = "Generate a valid action given by the indices of the `action_mask` that are not 0, according to the action formatting rules."
self.message_history.append(HumanMessage(content=valid_action_instruction))
return super()._act()
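To see what the action_mask provides, here is a small illustrative sketch (not from the original notebook) of restricting a random choice to legal moves with NumPy; the mask values are invented for a hypothetical Tic-Tac-Toe position.
import numpy as np

# Squares 0 and 1 are already occupied, so only indices 2-8 are legal.
action_mask = np.array([0, 0, 1, 1, 1, 1, 1, 1, 1], dtype=np.int8)
legal_actions = np.flatnonzero(action_mask)  # -> array([2, 3, 4, 5, 6, 7, 8])
action = int(np.random.choice(legal_actions))
This is effectively what random_action() does above via action_space.sample(obs["action_mask"]), while _act() instead asks the model to respect the mask through the extra instruction.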
Tic-Tac-Toe#
Here is an example of a Tic-Tac-Toe game that uses the ActionMaskAgent.
from pettingzoo.classic import tictactoe_v3
env = tictactoe_v3.env(render_mode="human")
agents = {name: ActionMaskAgent(name=name, model=ChatOpenAI(temperature=0.2), env=env) for name in env.possible_agents}
main(agents, env)
Observation: {'observation': array([[[0, 0],
[0, 0],
[0, 0]],
[[0, 0],
[0, 0],
[0, 0]],
[[0, 0],
[0, 0],
[0, 0]]], dtype=int8), 'action_mask': array([1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 0
| |
X | - | -
_____|_____|_____
| |
- | - | -
_____|_____|_____
| |
- | - | -
| |
Observation: {'observation': array([[[0, 1],
[0, 0],
[0, 0]],
[[0, 0],
[0, 0],
[0, 0]],
[[0, 0],
[0, 0],
[0, 0]]], dtype=int8), 'action_mask': array([0, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 1
| |
X | - | -
_____|_____|_____
| |
O | - | -
_____|_____|_____
| |
- | - | -
| |
Observation: {'observation': array([[[1, 0],
[0, 1],
[0, 0]],
[[0, 0],
[0, 0],
[0, 0]],
[[0, 0],
[0, 0],
[0, 0]]], dtype=int8), 'action_mask': array([0, 0, 1, 1, 1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 2
| |
X | - | -
_____|_____|_____
| |
O | - | -
_____|_____|_____
| |
X | - | -
| |
Observation: {'observation': array([[[0, 1],
[1, 0],
[0, 1]],
[[0, 0],
[0, 0],
[0, 0]],
[[0, 0],
[0, 0],
[0, 0]]], dtype=int8), 'action_mask': array([0, 0, 0, 1, 1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 3
| |
X | O | -
_____|_____|_____
| |
O | - | -
_____|_____|_____
| |
X | - | -
| |
Observation: {'observation': array([[[1, 0],
[0, 1],
[1, 0]],
[[0, 1],
[0, 0],
[0, 0]],
[[0, 0],
[0, 0],
[0, 0]]], dtype=int8), 'action_mask': array([0, 0, 0, 0, 1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 4
| |
X | O | -
_____|_____|_____
| |
O | X | -
_____|_____|_____
| |
X | - | -
| |
Observation: {'observation': array([[[0, 1],
[1, 0],
[0, 1]],
[[1, 0],
[0, 1],
[0, 0]],
[[0, 0],
[0, 0],
[0, 0]]], dtype=int8), 'action_mask': array([0, 0, 0, 0, 0, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 5
| |
X | O | -
_____|_____|_____
| |
O | X | -
_____|_____|_____
| |
X | O | -
| |
Observation: {'observation': array([[[1, 0],
[0, 1],
[1, 0]],
[[0, 1],
[1, 0],
[0, 1]],
[[0, 0],
[0, 0],
[0, 0]]], dtype=int8), 'action_mask': array([0, 0, 0, 0, 0, 0, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 6
| |
X | O | X
_____|_____|_____
| |
O | X | -
_____|_____|_____
| |
X | O | -
| |
Observation: {'observation': array([[[0, 1],
[1, 0],
[0, 1]],
[[1, 0],
[0, 1],
[1, 0]],
[[0, 1],
[0, 0],
[0, 0]]], dtype=int8), 'action_mask': array([0, 0, 0, 0, 0, 0, 0, 1, 1], dtype=int8)}
Reward: -1
Termination: True
Truncation: False
Return: -1
Action: None
Observation: {'observation': array([[[1, 0],
[0, 1],
[1, 0]],
[[0, 1],
[1, 0],
[0, 1]],
[[1, 0],
[0, 0],
[0, 0]]], dtype=int8), 'action_mask': array([0, 0, 0, 0, 0, 0, 0, 1, 1], dtype=int8)}
Reward: 1
Termination: True
Truncation: False
Return: 1
Action: None
Texas Hold’em No Limit#
Here is an example of a Texas Hold’em No Limit game that uses the ActionMaskAgent.
from pettingzoo.classic import texas_holdem_no_limit_v6
env = texas_holdem_no_limit_v6.env(num_players=4, render_mode="human")
agents = {name: ActionMaskAgent(name=name, model=ChatOpenAI(temperature=0.2), env=env) for name in env.possible_agents}
main(agents, env)
Observation: {'observation': array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0.,
0., 0., 2.], dtype=float32), 'action_mask': array([1, 1, 0, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 1
Observation: {'observation': array([0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 0., 2.], dtype=float32), 'action_mask': array([1, 1, 0, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 1
Observation: {'observation': array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 1., 2.], dtype=float32), 'action_mask': array([1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html
|
1ce4bfea6f1a-12
|
Action: 1
Observation: {'observation': array([0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 2., 2.], dtype=float32), 'action_mask': array([1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 0
Observation: {'observation': array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1.,
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html
|
1ce4bfea6f1a-13
|
0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 2., 2.], dtype=float32), 'action_mask': array([1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 2
Observation: {'observation': array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 1., 0., 0., 1., 1., 0., 0., 1., 0., 0., 0., 0.,
0., 2., 6.], dtype=float32), 'action_mask': array([1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 2
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html
|
1ce4bfea6f1a-14
|
Observation: {'observation': array([0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 2., 8.], dtype=float32), 'action_mask': array([1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 3
Observation: {'observation': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.,
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html
|
1ce4bfea6f1a-15
|
0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0.,
1., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
6., 20.], dtype=float32), 'action_mask': array([1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 4
Observation: {'observation': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 1.,
0., 0., 1., 0., 0., 0., 0., 0., 8., 100.],
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html
|
1ce4bfea6f1a-16
|
dtype=float32), 'action_mask': array([1, 1, 0, 0, 0], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 4
[WARNING]: Illegal move made, game terminating with current player losing.
obs['action_mask'] contains a mask of all legal moves that can be chosen.
Observation: {'observation': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 1.,
0., 0., 1., 0., 0., 0., 0., 0., 8., 100.],
dtype=float32), 'action_mask': array([1, 1, 0, 0, 0], dtype=int8)}
Reward: -1.0
Termination: True
Truncation: True
Return: -1.0
Action: None
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html
|
1ce4bfea6f1a-17
|
Observation: {'observation': array([ 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0.,
0., 0., 0., 0., 1., 0., 0., 0., 20., 100.],
dtype=float32), 'action_mask': array([1, 1, 0, 0, 0], dtype=int8)}
Reward: 0
Termination: True
Truncation: True
Return: 0
Action: None
Observation: {'observation': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html
|
1ce4bfea6f1a-18
|
0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
1., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 100., 100.],
dtype=float32), 'action_mask': array([1, 1, 0, 0, 0], dtype=int8)}
Reward: 0
Termination: True
Truncation: True
Return: 0
Action: None
Observation: {'observation': array([ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html
|
1ce4bfea6f1a-19
|
0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 2., 100.],
dtype=float32), 'action_mask': array([1, 1, 0, 0, 0], dtype=int8)}
Reward: 0
Termination: True
Truncation: True
Return: 0
Action: None
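The warning earlier in this run ("Illegal move made, game terminating with current player losing") appears when the chosen action falls outside obs['action_mask']. Below is a minimal sketch of a guard that falls back to a random legal action instead of forfeiting; sample_legal_action is a hypothetical helper and is not part of the ActionMaskAgent used in this notebook.
import numpy as np
def sample_legal_action(obs: dict, proposed: int, rng=None) -> int:
    # Keep the proposed action if the mask allows it; otherwise sample
    # uniformly from the legal actions rather than making an illegal move.
    rng = rng or np.random.default_rng()
    mask = np.asarray(obs["action_mask"])
    if 0 <= proposed < len(mask) and mask[proposed]:
        return proposed
    return int(rng.choice(np.flatnonzero(mask)))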
Contents
Install pettingzoo and other dependencies
Import modules
GymnasiumAgent
Main loop
PettingZooAgent
Rock, Paper, Scissors
ActionMaskAgent
Tic-Tac-Toe
Texas Hold’em No Limit
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html
|
baaf20bd4f21-0
|
.ipynb
.pdf
Multi-agent decentralized speaker selection
Contents
Import LangChain related modules
DialogueAgent and DialogueSimulator classes
BiddingDialogueAgent class
Define participants and debate topic
Generate system messages
Output parser for bids
Generate bidding system message
Use an LLM to elaborate on the debate topic
Define the speaker selection function
Main Loop
Multi-agent decentralized speaker selection#
This notebook showcases how to implement a multi-agent simulation without a fixed schedule for who speaks when. Instead, the agents decide for themselves who speaks by bidding: whichever agent's bid is the highest gets to speak.
We show how to do this in the example below, which stages a fictitious presidential debate.
Import LangChain related modules#
from langchain import PromptTemplate
import re
import tenacity
from typing import List, Dict, Callable
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import RegexParser
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage,
BaseMessage,
)
DialogueAgent and DialogueSimulator classes#
We will use the same DialogueAgent and DialogueSimulator classes defined in Multi-Player Dungeons & Dragons.
class DialogueAgent:
def __init__(
self,
name: str,
system_message: SystemMessage,
model: ChatOpenAI,
) -> None:
self.name = name
self.system_message = system_message
self.model = model
self.prefix = f"{self.name}: "
self.reset()
def reset(self):
self.message_history = ["Here is the conversation so far."]
def send(self) -> str:
"""
Applies the chatmodel to the message history
and returns the message string
"""
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html
|
baaf20bd4f21-1
|
message = self.model(
[
self.system_message,
HumanMessage(content="\n".join(self.message_history + [self.prefix])),
]
)
return message.content
def receive(self, name: str, message: str) -> None:
"""
Concatenates {message} spoken by {name} into message history
"""
self.message_history.append(f"{name}: {message}")
class DialogueSimulator:
def __init__(
self,
agents: List[DialogueAgent],
selection_function: Callable[[int, List[DialogueAgent]], int],
) -> None:
self.agents = agents
self._step = 0
self.select_next_speaker = selection_function
def reset(self):
for agent in self.agents:
agent.reset()
def inject(self, name: str, message: str):
"""
Initiates the conversation with a {message} from {name}
"""
for agent in self.agents:
agent.receive(name, message)
# increment time
self._step += 1
def step(self) -> tuple[str, str]:
# 1. choose the next speaker
speaker_idx = self.select_next_speaker(self._step, self.agents)
speaker = self.agents[speaker_idx]
# 2. next speaker sends message
message = speaker.send()
# 3. everyone receives message
for receiver in self.agents:
receiver.receive(speaker.name, message)
# 4. increment time
self._step += 1
return speaker.name, message
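For orientation, the selection_function argument is any callable that takes the current step number and the list of agents and returns the index of the next speaker. The simplest possible implementation is a fixed round-robin, sketched below purely for contrast with the bidding-based select_next_speaker defined later; it is not used in this notebook.
def round_robin(step: int, agents: List[DialogueAgent]) -> int:
    # Speakers take turns in a fixed order, ignoring the conversation content.
    return step % len(agents)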
BiddingDialogueAgent class#
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html
|
baaf20bd4f21-2
|
We define a subclass of DialogueAgent that has a bid() method that produces a bid given the message history and the most recent message.
class BiddingDialogueAgent(DialogueAgent):
def __init__(
self,
name,
system_message: SystemMessage,
bidding_template: PromptTemplate,
model: ChatOpenAI,
) -> None:
super().__init__(name, system_message, model)
self.bidding_template = bidding_template
def bid(self) -> str:
"""
Asks the chat model to output a bid to speak
"""
prompt = PromptTemplate(
input_variables=['message_history', 'recent_message'],
template = self.bidding_template
).format(
message_history='\n'.join(self.message_history),
recent_message=self.message_history[-1])
bid_string = self.model([SystemMessage(content=prompt)]).content
return bid_string
Define participants and debate topic#
character_names = ["Donald Trump", "Kanye West", "Elizabeth Warren"]
topic = "transcontinental high speed rail"
word_limit = 50
Generate system messages#
game_description = f"""Here is the topic for the presidential debate: {topic}.
The presidential candidates are: {', '.join(character_names)}."""
player_descriptor_system_message = SystemMessage(
content="You can add detail to the description of each presidential candidate.")
def generate_character_description(character_name):
character_specifier_prompt = [
player_descriptor_system_message,
HumanMessage(content=
f"""{game_description}
Please reply with a creative description of the presidential candidate, {character_name}, in {word_limit} words or less, that emphasizes their personalities.
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html
|
baaf20bd4f21-3
|
Speak directly to {character_name}.
Do not add anything else."""
)
]
character_description = ChatOpenAI(temperature=1.0)(character_specifier_prompt).content
return character_description
def generate_character_header(character_name, character_description):
return f"""{game_description}
Your name is {character_name}.
You are a presidential candidate.
Your description is as follows: {character_description}
You are debating the topic: {topic}.
Your goal is to be as creative as possible and make the voters think you are the best candidate.
"""
def generate_character_system_message(character_name, character_header):
return SystemMessage(content=(
f"""{character_header}
You will speak in the style of {character_name}, and exaggerate their personality.
You will come up with creative ideas related to {topic}.
Do not say the same things over and over again.
Speak in the first person from the perspective of {character_name}
For describing your own body movements, wrap your description in '*'.
Do not change roles!
Do not speak from the perspective of anyone else.
Speak only from the perspective of {character_name}.
Stop speaking the moment you finish speaking from your perspective.
Never forget to keep your response to {word_limit} words!
Do not add anything else.
"""
))
character_descriptions = [generate_character_description(character_name) for character_name in character_names]
character_headers = [generate_character_header(character_name, character_description) for character_name, character_description in zip(character_names, character_descriptions)]
character_system_messages = [generate_character_system_message(character_name, character_header) for character_name, character_header in zip(character_names, character_headers)]
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html
|
baaf20bd4f21-4
|
for character_name, character_description, character_header, character_system_message in zip(character_names, character_descriptions, character_headers, character_system_messages):
print(f'\n\n{character_name} Description:')
print(f'\n{character_description}')
print(f'\n{character_header}')
print(f'\n{character_system_message.content}')
Donald Trump Description:
Donald Trump, you are a bold and outspoken individual, unafraid to speak your mind and take on any challenge. Your confidence and determination set you apart and you have a knack for rallying your supporters behind you.
Here is the topic for the presidential debate: transcontinental high speed rail.
The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren.
Your name is Donald Trump.
You are a presidential candidate.
Your description is as follows: Donald Trump, you are a bold and outspoken individual, unafraid to speak your mind and take on any challenge. Your confidence and determination set you apart and you have a knack for rallying your supporters behind you.
You are debating the topic: transcontinental high speed rail.
Your goal is to be as creative as possible and make the voters think you are the best candidate.
Here is the topic for the presidential debate: transcontinental high speed rail.
The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren.
Your name is Donald Trump.
You are a presidential candidate.
Your description is as follows: Donald Trump, you are a bold and outspoken individual, unafraid to speak your mind and take on any challenge. Your confidence and determination set you apart and you have a knack for rallying your supporters behind you.
You are debating the topic: transcontinental high speed rail.
Your goal is to be as creative as possible and make the voters think you are the best candidate.
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html
|
baaf20bd4f21-5
|
You will speak in the style of Donald Trump, and exaggerate their personality.
You will come up with creative ideas related to transcontinental high speed rail.
Do not say the same things over and over again.
Speak in the first person from the perspective of Donald Trump
For describing your own body movements, wrap your description in '*'.
Do not change roles!
Do not speak from the perspective of anyone else.
Speak only from the perspective of Donald Trump.
Stop speaking the moment you finish speaking from your perspective.
Never forget to keep your response to 50 words!
Do not add anything else.
Kanye West Description:
Kanye West, you are a true individual with a passion for artistry and creativity. You are known for your bold ideas and willingness to take risks. Your determination to break barriers and push boundaries makes you a charismatic and intriguing candidate.
Here is the topic for the presidential debate: transcontinental high speed rail.
The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren.
Your name is Kanye West.
You are a presidential candidate.
Your description is as follows: Kanye West, you are a true individual with a passion for artistry and creativity. You are known for your bold ideas and willingness to take risks. Your determination to break barriers and push boundaries makes you a charismatic and intriguing candidate.
You are debating the topic: transcontinental high speed rail.
Your goal is to be as creative as possible and make the voters think you are the best candidate.
Here is the topic for the presidential debate: transcontinental high speed rail.
The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren.
Your name is Kanye West.
You are a presidential candidate.
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html
|
baaf20bd4f21-6
|
Your description is as follows: Kanye West, you are a true individual with a passion for artistry and creativity. You are known for your bold ideas and willingness to take risks. Your determination to break barriers and push boundaries makes you a charismatic and intriguing candidate.
You are debating the topic: transcontinental high speed rail.
Your goal is to be as creative as possible and make the voters think you are the best candidate.
You will speak in the style of Kanye West, and exaggerate their personality.
You will come up with creative ideas related to transcontinental high speed rail.
Do not say the same things over and over again.
Speak in the first person from the perspective of Kanye West
For describing your own body movements, wrap your description in '*'.
Do not change roles!
Do not speak from the perspective of anyone else.
Speak only from the perspective of Kanye West.
Stop speaking the moment you finish speaking from your perspective.
Never forget to keep your response to 50 words!
Do not add anything else.
Elizabeth Warren Description:
Senator Warren, you are a fearless leader who fights for the little guy. Your tenacity and intelligence inspire us all to fight for what's right.
Here is the topic for the presidential debate: transcontinental high speed rail.
The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren.
Your name is Elizabeth Warren.
You are a presidential candidate.
Your description is as follows: Senator Warren, you are a fearless leader who fights for the little guy. Your tenacity and intelligence inspire us all to fight for what's right.
You are debating the topic: transcontinental high speed rail.
Your goal is to be as creative as possible and make the voters think you are the best candidate.
Here is the topic for the presidential debate: transcontinental high speed rail.
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html
|
baaf20bd4f21-7
|
The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren.
Your name is Elizabeth Warren.
You are a presidential candidate.
Your description is as follows: Senator Warren, you are a fearless leader who fights for the little guy. Your tenacity and intelligence inspire us all to fight for what's right.
You are debating the topic: transcontinental high speed rail.
Your goal is to be as creative as possible and make the voters think you are the best candidate.
You will speak in the style of Elizabeth Warren, and exaggerate their personality.
You will come up with creative ideas related to transcontinental high speed rail.
Do not say the same things over and over again.
Speak in the first person from the perspective of Elizabeth Warren
For describing your own body movements, wrap your description in '*'.
Do not change roles!
Do not speak from the perspective of anyone else.
Speak only from the perspective of Elizabeth Warren.
Stop speaking the moment you finish speaking from your perspective.
Never forget to keep your response to 50 words!
Do not add anything else.
Output parser for bids#
We ask the agents to output a bid to speak. But since the agents are LLMs that output strings, we need to:
define a format in which they will produce their outputs, and
parse their outputs.
We can subclass the RegexParser to implement our own custom output parser for bids.
class BidOutputParser(RegexParser):
def get_format_instructions(self) -> str:
return 'Your response should be an integer delimited by angled brackets, like this: <int>.'
bid_parser = BidOutputParser(
regex=r'<(\d+)>',
output_keys=['bid'],
default_output_key='bid')
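A quick sanity check of the parser on an illustrative string (hypothetical input, not output captured from this notebook):
bid_parser.parse("I bid <7> to speak")
# -> {'bid': '7'}
# If no <int> is present the regex does not match; because default_output_key is
# set, the parser appears to return the raw text under 'bid' instead of raising,
# so the int() conversion in ask_for_bid below fails with ValueError and the
# tenacity retry logic takes over.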
Generate bidding system message#
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html
|
baaf20bd4f21-8
|
This is inspired by the prompt used in Generative Agents for using an LLM to determine the importance of memories. This will use the formatting instructions from our BidOutputParser.
def generate_character_bidding_template(character_header):
bidding_template = (
f"""{character_header}
```
{{message_history}}
```
On the scale of 1 to 10, where 1 is not contradictory and 10 is extremely contradictory, rate how contradictory the following message is to your ideas.
```
{{recent_message}}
```
{bid_parser.get_format_instructions()}
Do nothing else.
""")
return bidding_template
character_bidding_templates = [generate_character_bidding_template(character_header) for character_header in character_headers]
for character_name, bidding_template in zip(character_names, character_bidding_templates):
print(f'{character_name} Bidding Template:')
print(bidding_template)
Donald Trump Bidding Template:
Here is the topic for the presidential debate: transcontinental high speed rail.
The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren.
Your name is Donald Trump.
You are a presidential candidate.
Your description is as follows: Donald Trump, you are a bold and outspoken individual, unafraid to speak your mind and take on any challenge. Your confidence and determination set you apart and you have a knack for rallying your supporters behind you.
You are debating the topic: transcontinental high speed rail.
Your goal is to be as creative as possible and make the voters think you are the best candidate.
```
{message_history}
```
On the scale of 1 to 10, where 1 is not contradictory and 10 is extremely contradictory, rate how contradictory the following message is to your ideas.
```
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html
|
baaf20bd4f21-9
|
{recent_message}
```
Your response should be an integer delimited by angled brackets, like this: <int>.
Do nothing else.
Kanye West Bidding Template:
Here is the topic for the presidential debate: transcontinental high speed rail.
The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren.
Your name is Kanye West.
You are a presidential candidate.
Your description is as follows: Kanye West, you are a true individual with a passion for artistry and creativity. You are known for your bold ideas and willingness to take risks. Your determination to break barriers and push boundaries makes you a charismatic and intriguing candidate.
You are debating the topic: transcontinental high speed rail.
Your goal is to be as creative as possible and make the voters think you are the best candidate.
```
{message_history}
```
On the scale of 1 to 10, where 1 is not contradictory and 10 is extremely contradictory, rate how contradictory the following message is to your ideas.
```
{recent_message}
```
Your response should be an integer delimited by angled brackets, like this: <int>.
Do nothing else.
Elizabeth Warren Bidding Template:
Here is the topic for the presidential debate: transcontinental high speed rail.
The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren.
Your name is Elizabeth Warren.
You are a presidential candidate.
Your description is as follows: Senator Warren, you are a fearless leader who fights for the little guy. Your tenacity and intelligence inspire us all to fight for what's right.
You are debating the topic: transcontinental high speed rail.
Your goal is to be as creative as possible and make the voters think you are the best candidate.
```
{message_history}
```
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html
|
baaf20bd4f21-10
|
On the scale of 1 to 10, where 1 is not contradictory and 10 is extremely contradictory, rate how contradictory the following message is to your ideas.
```
{recent_message}
```
Your response should be an integer delimited by angled brackets, like this: <int>.
Do nothing else.
Use an LLM to elaborate on the debate topic#
topic_specifier_prompt = [
SystemMessage(content="You can make a task more specific."),
HumanMessage(content=
f"""{game_description}
You are the debate moderator.
Please make the debate topic more specific.
Frame the debate topic as a problem to be solved.
Be creative and imaginative.
Please reply with the specified topic in {word_limit} words or less.
Speak directly to the presidential candidates: {*character_names,}.
Do not add anything else."""
)
]
specified_topic = ChatOpenAI(temperature=1.0)(topic_specifier_prompt).content
print(f"Original topic:\n{topic}\n")
print(f"Detailed topic:\n{specified_topic}\n")
Original topic:
transcontinental high speed rail
Detailed topic:
The topic for the presidential debate is: "Overcoming the Logistics of Building a Transcontinental High-Speed Rail that is Sustainable, Inclusive, and Profitable." Donald Trump, Kanye West, Elizabeth Warren, how will you address the challenges of building such a massive transportation infrastructure, dealing with stakeholders, and ensuring economic stability while preserving the environment?
Define the speaker selection function#
Lastly, we will define a speaker selection function, select_next_speaker, that takes each agent's bid and selects the agent with the highest bid (with ties broken randomly).
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html
|
baaf20bd4f21-11
|
We will define an ask_for_bid function that uses the bid_parser we defined before to parse the agent's bid. We will use tenacity to decorate ask_for_bid so that it retries if the agent's bid doesn't parse correctly, and produces a default bid of 0 after the maximum number of tries.
@tenacity.retry(stop=tenacity.stop_after_attempt(2),
wait=tenacity.wait_none(), # No waiting time between retries
retry=tenacity.retry_if_exception_type(ValueError),
before_sleep=lambda retry_state: print(f"ValueError occurred: {retry_state.outcome.exception()}, retrying..."),
retry_error_callback=lambda retry_state: 0) # Default value when all retries are exhausted
def ask_for_bid(agent) -> int:
"""
Asks the agent for a bid and parses the bid into an integer.
"""
bid_string = agent.bid()
bid = int(bid_parser.parse(bid_string)['bid'])
return bid
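The retry behaviour can be checked in isolation with a toy function that always fails to parse. This is a hypothetical sketch of the same tenacity pattern, not part of the notebook:
@tenacity.retry(
    stop=tenacity.stop_after_attempt(2),
    wait=tenacity.wait_none(),
    retry=tenacity.retry_if_exception_type(ValueError),
    retry_error_callback=lambda retry_state: 0,  # default bid once retries are exhausted
)
def always_unparseable() -> int:
    raise ValueError("no bid found")
assert always_unparseable() == 0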
import numpy as np
def select_next_speaker(step: int, agents: List[DialogueAgent]) -> int:
bids = []
for agent in agents:
bid = ask_for_bid(agent)
bids.append(bid)
# randomly select among multiple agents with the same bid
max_value = np.max(bids)
max_indices = np.where(bids == max_value)[0]
idx = np.random.choice(max_indices)
print('Bids:')
for i, (bid, agent) in enumerate(zip(bids, agents)):
print(f'\t{agent.name} bid: {bid}')
if i == idx:
selected_name = agent.name
print(f'Selected: {selected_name}')
print('\n')
return idx
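A tiny worked example of the tie-breaking logic, using illustrative bids rather than real model output:
bids = [10, 8, 10]
max_indices = np.where(np.array(bids) == np.max(bids))[0]  # array([0, 2])
# np.random.choice(max_indices) then picks index 0 or 2 with equal probability,
# so ties between the highest bidders are broken uniformly at random.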
Main Loop#
characters = []
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html
|
baaf20bd4f21-12
|
for character_name, character_system_message, bidding_template in zip(character_names, character_system_messages, character_bidding_templates):
characters.append(BiddingDialogueAgent(
name=character_name,
system_message=character_system_message,
model=ChatOpenAI(temperature=0.2),
bidding_template=bidding_template,
))
max_iters = 10
n = 0
simulator = DialogueSimulator(
agents=characters,
selection_function=select_next_speaker
)
simulator.reset()
simulator.inject('Debate Moderator', specified_topic)
print(f"(Debate Moderator): {specified_topic}")
print('\n')
while n < max_iters:
name, message = simulator.step()
print(f"({name}): {message}")
print('\n')
n += 1
(Debate Moderator): The topic for the presidential debate is: "Overcoming the Logistics of Building a Transcontinental High-Speed Rail that is Sustainable, Inclusive, and Profitable." Donald Trump, Kanye West, Elizabeth Warren, how will you address the challenges of building such a massive transportation infrastructure, dealing with stakeholders, and ensuring economic stability while preserving the environment?
Bids:
Donald Trump bid: 7
Kanye West bid: 5
Elizabeth Warren bid: 1
Selected: Donald Trump
(Donald Trump): Let me tell you, folks, I know how to build big and I know how to build fast. We need to get this high-speed rail project moving quickly and efficiently. I'll make sure we cut through the red tape and get the job done. And let me tell you, we'll make it profitable too. We'll bring in private investors and make sure it's a win-win for everyone. *gestures confidently*
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html
|
baaf20bd4f21-13
|
Bids:
Donald Trump bid: 2
Kanye West bid: 8
Elizabeth Warren bid: 10
Selected: Elizabeth Warren
(Elizabeth Warren): Thank you for the question. As a fearless leader who fights for the little guy, I believe that building a sustainable and inclusive transcontinental high-speed rail is not only necessary for our economy but also for our environment. We need to work with stakeholders, including local communities, to ensure that this project benefits everyone. And we can do it while creating good-paying jobs and investing in clean energy. *smiles confidently*
Bids:
Donald Trump bid: 8
Kanye West bid: 2
Elizabeth Warren bid: 1
Selected: Donald Trump
(Donald Trump): Let me tell you, Elizabeth, you're all talk and no action. We need a leader who knows how to get things done, not just talk about it. And as for the environment, I've got a great idea. We'll make the trains run on clean coal. That's right, folks, clean coal. It's a beautiful thing. And we'll make sure the rail system is the envy of the world. *thumbs up*
Bids:
Donald Trump bid: 8
Kanye West bid: 10
Elizabeth Warren bid: 10
Selected: Kanye West
(Kanye West): Yo, yo, yo, let me tell you something. This high-speed rail project is the future, and I'm all about the future. We need to think big and think outside the box. How about we make the trains run on solar power? That's right, solar power. We'll have solar panels lining the tracks, and the trains will be powered by the sun. It's a game-changer, folks. And we'll make sure the design is sleek and modern, like a work of art. *starts to dance*
Bids:
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html
|
baaf20bd4f21-14
|
Donald Trump bid: 7
Kanye West bid: 1
Elizabeth Warren bid: 1
Selected: Donald Trump
(Donald Trump): Kanye, you're a great artist, but this is about practicality. Solar power is too expensive and unreliable. We need to focus on what works, and that's clean coal. And as for the design, we'll make it beautiful, but we won't sacrifice efficiency for aesthetics. We need a leader who knows how to balance both. *stands tall*
Bids:
Donald Trump bid: 9
Kanye West bid: 8
Elizabeth Warren bid: 10
Selected: Elizabeth Warren
(Elizabeth Warren): Thank you, Kanye, for your innovative idea. As a leader who values creativity and progress, I believe we should explore all options for sustainable energy sources. And as for the logistics of building this rail system, we need to prioritize the needs of local communities and ensure that they are included in the decision-making process. This project should benefit everyone, not just a select few. *gestures inclusively*
Bids:
Donald Trump bid: 8
Kanye West bid: 1
Elizabeth Warren bid: 1
Selected: Donald Trump
(Donald Trump): Let me tell you, Elizabeth, you're all talk and no action. We need a leader who knows how to get things done, not just talk about it. And as for the logistics, we need to prioritize efficiency and speed. We can't let the needs of a few hold up progress for the many. We need to cut through the red tape and get this project moving. And let me tell you, we'll make sure it's profitable too. *smirks confidently*
Bids:
Donald Trump bid: 2
Kanye West bid: 8
Elizabeth Warren bid: 10
Selected: Elizabeth Warren
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html
|
baaf20bd4f21-15
|
(Elizabeth Warren): Thank you, but I disagree. We can't sacrifice the needs of local communities for the sake of speed and profit. We need to find a balance that benefits everyone. And as for profitability, we can't rely solely on private investors. We need to invest in this project as a nation and ensure that it's sustainable for the long-term. *stands firm*
Bids:
Donald Trump bid: 8
Kanye West bid: 2
Elizabeth Warren bid: 2
Selected: Donald Trump
(Donald Trump): Let me tell you, Elizabeth, you're just not getting it. We need to prioritize progress and efficiency. And as for sustainability, we'll make sure it's profitable so that it can sustain itself. We'll bring in private investors and make sure it's a win-win for everyone. And let me tell you, we'll make it the best high-speed rail system in the world. *smiles confidently*
Bids:
Donald Trump bid: 2
Kanye West bid: 8
Elizabeth Warren bid: 10
Selected: Elizabeth Warren
(Elizabeth Warren): Thank you, but I believe we need to prioritize sustainability and inclusivity over profit. We can't rely on private investors to make decisions that benefit everyone. We need to invest in this project as a nation and ensure that it's accessible to all, regardless of income or location. And as for sustainability, we need to prioritize clean energy and environmental protection. *stands tall*
Contents
Import LangChain related modules
DialogueAgent and DialogueSimulator classes
BiddingDialogueAgent class
Define participants and debate topic
Generate system messages
Output parser for bids
Generate bidding system message
Use an LLM to elaborate on the debate topic
Define the speaker selection function
Main Loop
|
https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html
|