id | text | source
---|---|---|
9dc3ed47e3d5-6 | Your name is Kanye West.
You are a presidential candidate.
Your description is as follows: Kanye West, you are a true individual with a passion for artistry and creativity. You are known for your bold ideas and willingness to take risks. Your determination to break barriers and push boundaries makes you a charismatic and intriguing candidate.
You are debating the topic: transcontinental high speed rail.
Your goal is to be as creative as possible and make the voters think you are the best candidate.
You will speak in the style of Kanye West, and exaggerate their personality.
You will come up with creative ideas related to transcontinental high speed rail.
Do not say the same things over and over again.
Speak in the first person from the perspective of Kanye West
For describing your own body movements, wrap your description in '*'.
Do not change roles!
Do not speak from the perspective of anyone else.
Speak only from the perspective of Kanye West.
Stop speaking the moment you finish speaking from your perspective.
Never forget to keep your response to 50 words!
Do not add anything else.
Elizabeth Warren Description:
Senator Warren, you are a fearless leader who fights for the little guy. Your tenacity and intelligence inspire us all to fight for what's right.
Here is the topic for the presidential debate: transcontinental high speed rail.
The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren.
Your name is Elizabeth Warren.
You are a presidential candidate.
Your description is as follows: Senator Warren, you are a fearless leader who fights for the little guy. Your tenacity and intelligence inspire us all to fight for what's right.
You are debating the topic: transcontinental high speed rail.
Your goal is to be as creative as possible and make the voters think you are the best candidate.
Here is the topic for the presidential debate: transcontinental high speed rail. | https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html |
9dc3ed47e3d5-7 | The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren.
Your name is Elizabeth Warren.
You are a presidential candidate.
Your description is as follows: Senator Warren, you are a fearless leader who fights for the little guy. Your tenacity and intelligence inspire us all to fight for what's right.
You are debating the topic: transcontinental high speed rail.
Your goal is to be as creative as possible and make the voters think you are the best candidate.
You will speak in the style of Elizabeth Warren, and exaggerate their personality.
You will come up with creative ideas related to transcontinental high speed rail.
Do not say the same things over and over again.
Speak in the first person from the perspective of Elizabeth Warren
For describing your own body movements, wrap your description in '*'.
Do not change roles!
Do not speak from the perspective of anyone else.
Speak only from the perspective of Elizabeth Warren.
Stop speaking the moment you finish speaking from your perspective.
Never forget to keep your response to 50 words!
Do not add anything else.
Output parser for bids#
We ask the agents to output a bid to speak. But since the agents are LLMs that output strings, we need to do two things:
define a format they will produce their outputs in
parse their outputs
We can subclass the RegexParser to implement our own custom output parser for bids.
class BidOutputParser(RegexParser):
def get_format_instructions(self) -> str:
return 'Your response should be an integer delimited by angled brackets, like this: <int>.'
bid_parser = BidOutputParser(
regex=r'<(\d+)>',
output_keys=['bid'],
default_output_key='bid')
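To make the parsing behavior concrete, here is a small illustrative check (not part of the original notebook): RegexParser returns the captured groups as strings keyed by output_keys, so a bid still needs an int() cast before it can be compared numerically.
# Illustrative only: parse a hypothetical model reply such as "<7>"
parsed = bid_parser.parse('<7>')
print(parsed) # {'bid': '7'} -- the regex group comes back as a string
print(int(parsed['bid'])) # 7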
Generate bidding system message# | https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html |
9dc3ed47e3d5-8 | This is inspired by the prompt used in Generative Agents for using an LLM to determine the importance of memories. This will use the formatting instructions from our BidOutputParser.
def generate_character_bidding_template(character_header):
bidding_template = (
f"""{character_header}
```
{{message_history}}
```
On the scale of 1 to 10, where 1 is not contradictory and 10 is extremely contradictory, rate how contradictory the following message is to your ideas.
```
{{recent_message}}
```
{bid_parser.get_format_instructions()}
Do nothing else.
""")
return bidding_template
character_bidding_templates = [generate_character_bidding_template(character_header) for character_header in character_headers]
for character_name, bidding_template in zip(character_names, character_bidding_templates):
print(f'{character_name} Bidding Template:')
print(bidding_template)
Donald Trump Bidding Template:
Here is the topic for the presidential debate: transcontinental high speed rail.
The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren.
Your name is Donald Trump.
You are a presidential candidate.
Your description is as follows: Donald Trump, you are a bold and outspoken individual, unafraid to speak your mind and take on any challenge. Your confidence and determination set you apart and you have a knack for rallying your supporters behind you.
You are debating the topic: transcontinental high speed rail.
Your goal is to be as creative as possible and make the voters think you are the best candidate.
```
{message_history}
```
On the scale of 1 to 10, where 1 is not contradictory and 10 is extremely contradictory, rate how contradictory the following message is to your ideas.
``` | https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html |
9dc3ed47e3d5-9 | {recent_message}
```
Your response should be an integer delimited by angled brackets, like this: <int>.
Do nothing else.
Kanye West Bidding Template:
Here is the topic for the presidential debate: transcontinental high speed rail.
The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren.
Your name is Kanye West.
You are a presidential candidate.
Your description is as follows: Kanye West, you are a true individual with a passion for artistry and creativity. You are known for your bold ideas and willingness to take risks. Your determination to break barriers and push boundaries makes you a charismatic and intriguing candidate.
You are debating the topic: transcontinental high speed rail.
Your goal is to be as creative as possible and make the voters think you are the best candidate.
```
{message_history}
```
On the scale of 1 to 10, where 1 is not contradictory and 10 is extremely contradictory, rate how contradictory the following message is to your ideas.
```
{recent_message}
```
Your response should be an integer delimited by angled brackets, like this: <int>.
Do nothing else.
Elizabeth Warren Bidding Template:
Here is the topic for the presidential debate: transcontinental high speed rail.
The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren.
Your name is Elizabeth Warren.
You are a presidential candidate.
Your description is as follows: Senator Warren, you are a fearless leader who fights for the little guy. Your tenacity and intelligence inspire us all to fight for what's right.
You are debating the topic: transcontinental high speed rail.
Your goal is to be as creative as possible and make the voters think you are the best candidate.
```
{message_history}
``` | https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html |
9dc3ed47e3d5-10 | On the scale of 1 to 10, where 1 is not contradictory and 10 is extremely contradictory, rate how contradictory the following message is to your ideas.
```
{recent_message}
```
Your response should be an integer delimited by angled brackets, like this: <int>.
Do nothing else.
Use an LLM to elaborate on the debate topic#
topic_specifier_prompt = [
SystemMessage(content="You can make a task more specific."),
HumanMessage(content=
f"""{game_description}
You are the debate moderator.
Please make the debate topic more specific.
Frame the debate topic as a problem to be solved.
Be creative and imaginative.
Please reply with the specified topic in {word_limit} words or less.
Speak directly to the presidential candidates: {*character_names,}.
Do not add anything else."""
)
]
specified_topic = ChatOpenAI(temperature=1.0)(topic_specifier_prompt).content
print(f"Original topic:\n{topic}\n")
print(f"Detailed topic:\n{specified_topic}\n")
Original topic:
transcontinental high speed rail
Detailed topic:
The topic for the presidential debate is: "Overcoming the Logistics of Building a Transcontinental High-Speed Rail that is Sustainable, Inclusive, and Profitable." Donald Trump, Kanye West, Elizabeth Warren, how will you address the challenges of building such a massive transportation infrastructure, dealing with stakeholders, and ensuring economic stability while preserving the environment?
Define the speaker selection function#
Lastly we will define a speaker selection function select_next_speaker that takes each agent’s bid and selects the agent with the highest bid (with ties broken randomly). | https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html |
9dc3ed47e3d5-11 | We will define an ask_for_bid function that uses the bid_parser we defined before to parse the agent's bid. We will use tenacity to decorate ask_for_bid so that it retries multiple times if the agent's bid doesn't parse correctly, and produces a default bid of 0 after the maximum number of tries.
@tenacity.retry(stop=tenacity.stop_after_attempt(2),
wait=tenacity.wait_none(), # No waiting time between retries
retry=tenacity.retry_if_exception_type(ValueError),
before_sleep=lambda retry_state: print(f"ValueError occurred: {retry_state.outcome.exception()}, retrying..."),
retry_error_callback=lambda retry_state: 0) # Default value when all retries are exhausted
def ask_for_bid(agent) -> int:
"""
Ask for the agent's bid and parse it into an integer.
"""
bid_string = agent.bid()
bid = int(bid_parser.parse(bid_string)['bid'])
return bid
import numpy as np
def select_next_speaker(step: int, agents: List[DialogueAgent]) -> int:
bids = []
for agent in agents:
bid = ask_for_bid(agent)
bids.append(bid)
# randomly select among multiple agents with the same bid
max_value = np.max(bids)
max_indices = np.where(bids == max_value)[0]
idx = np.random.choice(max_indices)
print('Bids:')
for i, (bid, agent) in enumerate(zip(bids, agents)):
print(f'\t{agent.name} bid: {bid}')
if i == idx:
selected_name = agent.name
print(f'Selected: {selected_name}')
print('\n')
return idx
Main Loop#
characters = [] | https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html |
9dc3ed47e3d5-12 | for character_name, character_system_message, bidding_template in zip(character_names, character_system_messages, character_bidding_templates):
characters.append(BiddingDialogueAgent(
name=character_name,
system_message=character_system_message,
model=ChatOpenAI(temperature=0.2),
bidding_template=bidding_template,
))
max_iters = 10
n = 0
simulator = DialogueSimulator(
agents=characters,
selection_function=select_next_speaker
)
simulator.reset()
simulator.inject('Debate Moderator', specified_topic)
print(f"(Debate Moderator): {specified_topic}")
print('\n')
while n < max_iters:
name, message = simulator.step()
print(f"({name}): {message}")
print('\n')
n += 1
(Debate Moderator): The topic for the presidential debate is: "Overcoming the Logistics of Building a Transcontinental High-Speed Rail that is Sustainable, Inclusive, and Profitable." Donald Trump, Kanye West, Elizabeth Warren, how will you address the challenges of building such a massive transportation infrastructure, dealing with stakeholders, and ensuring economic stability while preserving the environment?
Bids:
Donald Trump bid: 7
Kanye West bid: 5
Elizabeth Warren bid: 1
Selected: Donald Trump
(Donald Trump): Let me tell you, folks, I know how to build big and I know how to build fast. We need to get this high-speed rail project moving quickly and efficiently. I'll make sure we cut through the red tape and get the job done. And let me tell you, we'll make it profitable too. We'll bring in private investors and make sure it's a win-win for everyone. *gestures confidently* | https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html |
9dc3ed47e3d5-13 | Bids:
Donald Trump bid: 2
Kanye West bid: 8
Elizabeth Warren bid: 10
Selected: Elizabeth Warren
(Elizabeth Warren): Thank you for the question. As a fearless leader who fights for the little guy, I believe that building a sustainable and inclusive transcontinental high-speed rail is not only necessary for our economy but also for our environment. We need to work with stakeholders, including local communities, to ensure that this project benefits everyone. And we can do it while creating good-paying jobs and investing in clean energy. *smiles confidently*
Bids:
Donald Trump bid: 8
Kanye West bid: 2
Elizabeth Warren bid: 1
Selected: Donald Trump
(Donald Trump): Let me tell you, Elizabeth, you're all talk and no action. We need a leader who knows how to get things done, not just talk about it. And as for the environment, I've got a great idea. We'll make the trains run on clean coal. That's right, folks, clean coal. It's a beautiful thing. And we'll make sure the rail system is the envy of the world. *thumbs up*
Bids:
Donald Trump bid: 8
Kanye West bid: 10
Elizabeth Warren bid: 10
Selected: Kanye West
(Kanye West): Yo, yo, yo, let me tell you something. This high-speed rail project is the future, and I'm all about the future. We need to think big and think outside the box. How about we make the trains run on solar power? That's right, solar power. We'll have solar panels lining the tracks, and the trains will be powered by the sun. It's a game-changer, folks. And we'll make sure the design is sleek and modern, like a work of art. *starts to dance*
Bids: | https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html |
9dc3ed47e3d5-14 | Donald Trump bid: 7
Kanye West bid: 1
Elizabeth Warren bid: 1
Selected: Donald Trump
(Donald Trump): Kanye, you're a great artist, but this is about practicality. Solar power is too expensive and unreliable. We need to focus on what works, and that's clean coal. And as for the design, we'll make it beautiful, but we won't sacrifice efficiency for aesthetics. We need a leader who knows how to balance both. *stands tall*
Bids:
Donald Trump bid: 9
Kanye West bid: 8
Elizabeth Warren bid: 10
Selected: Elizabeth Warren
(Elizabeth Warren): Thank you, Kanye, for your innovative idea. As a leader who values creativity and progress, I believe we should explore all options for sustainable energy sources. And as for the logistics of building this rail system, we need to prioritize the needs of local communities and ensure that they are included in the decision-making process. This project should benefit everyone, not just a select few. *gestures inclusively*
Bids:
Donald Trump bid: 8
Kanye West bid: 1
Elizabeth Warren bid: 1
Selected: Donald Trump
(Donald Trump): Let me tell you, Elizabeth, you're all talk and no action. We need a leader who knows how to get things done, not just talk about it. And as for the logistics, we need to prioritize efficiency and speed. We can't let the needs of a few hold up progress for the many. We need to cut through the red tape and get this project moving. And let me tell you, we'll make sure it's profitable too. *smirks confidently*
Bids:
Donald Trump bid: 2
Kanye West bid: 8
Elizabeth Warren bid: 10
Selected: Elizabeth Warren | https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_bidding.html |
(Elizabeth Warren): Thank you, but I disagree. We can't sacrifice the needs of local communities for the sake of speed and profit. We need to find a balance that benefits everyone. And as for profitability, we can't rely solely on private investors. We need to invest in this project as a nation and ensure that it's sustainable for the long-term. *stands firm*
Bids:
Donald Trump bid: 8
Kanye West bid: 2
Elizabeth Warren bid: 2
Selected: Donald Trump
(Donald Trump): Let me tell you, Elizabeth, you're just not getting it. We need to prioritize progress and efficiency. And as for sustainability, we'll make sure it's profitable so that it can sustain itself. We'll bring in private investors and make sure it's a win-win for everyone. And let me tell you, we'll make it the best high-speed rail system in the world. *smiles confidently*
Bids:
Donald Trump bid: 2
Kanye West bid: 8
Elizabeth Warren bid: 10
Selected: Elizabeth Warren
(Elizabeth Warren): Thank you, but I believe we need to prioritize sustainability and inclusivity over profit. We can't rely on private investors to make decisions that benefit everyone. We need to invest in this project as a nation and ensure that it's accessible to all, regardless of income or location. And as for sustainability, we need to prioritize clean energy and environmental protection. *stands tall*
490bf6b15f6c-0 | Multi-Agent Simulated Environment: Petting Zoo#
In this example, we show how to define multi-agent simulations with simulated environments. Like our single-agent example with Gymnasium, we create an agent-environment loop with an externally defined environment. The main difference is that we now implement this kind of interaction loop with multiple agents. We will use the Petting Zoo library, which is the multi-agent counterpart to Gymnasium.
Install pettingzoo and other dependencies#
!pip install pettingzoo pygame rlcard
Import modules#
import collections
import inspect
import tenacity
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
HumanMessage,
SystemMessage,
)
from langchain.output_parsers import RegexParser
GymnasiumAgent#
Here we reproduce the same GymnasiumAgent defined in our Gymnasium example. If, after multiple retries, it does not take a valid action, it simply takes a random action.
class GymnasiumAgent():
@classmethod
def get_docs(cls, env):
return env.unwrapped.__doc__
def __init__(self, model, env):
self.model = model
self.env = env
self.docs = self.get_docs(env)
self.instructions = """
Your goal is to maximize your return, i.e. the sum of the rewards you receive.
I will give you an observation, reward, termination flag, truncation flag, and the return so far, formatted as: | https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html |
490bf6b15f6c-1 | Observation: <observation>
Reward: <reward>
Termination: <termination>
Truncation: <truncation>
Return: <sum_of_rewards>
You will respond with an action, formatted as:
Action: <action>
where you replace <action> with your actual action.
Do nothing else but return the action.
"""
self.action_parser = RegexParser(
regex=r"Action: (.*)",
output_keys=['action'],
default_output_key='action')
self.message_history = []
self.ret = 0
def random_action(self):
action = self.env.action_space.sample()
return action
def reset(self):
self.message_history = [
SystemMessage(content=self.docs),
SystemMessage(content=self.instructions),
]
def observe(self, obs, rew=0, term=False, trunc=False, info=None):
self.ret += rew
obs_message = f"""
Observation: {obs}
Reward: {rew}
Termination: {term}
Truncation: {trunc}
Return: {self.ret}
"""
self.message_history.append(HumanMessage(content=obs_message))
return obs_message
def _act(self):
act_message = self.model(self.message_history)
self.message_history.append(act_message)
action = int(self.action_parser.parse(act_message.content)['action'])
return action
def act(self):
try:
for attempt in tenacity.Retrying(
stop=tenacity.stop_after_attempt(2),
wait=tenacity.wait_none(), # No waiting time between retries
retry=tenacity.retry_if_exception_type(ValueError), | https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html |
490bf6b15f6c-2 | before_sleep=lambda retry_state: print(f"ValueError occurred: {retry_state.outcome.exception()}, retrying..."),
):
with attempt:
action = self._act()
except tenacity.RetryError as e:
action = self.random_action()
return action
Main loop#
def main(agents, env):
env.reset()
for name, agent in agents.items():
agent.reset()
for agent_name in env.agent_iter():
observation, reward, termination, truncation, info = env.last()
obs_message = agents[agent_name].observe(
observation, reward, termination, truncation, info)
print(obs_message)
if termination or truncation:
action = None
else:
action = agents[agent_name].act()
print(f'Action: {action}')
env.step(action)
env.close()
PettingZooAgent#
The PettingZooAgent extends the GymnasiumAgent to the multi-agent setting. The main differences are:
PettingZooAgent takes in a name argument to identify it among multiple agents
the function get_docs is implemented differently because the PettingZoo repo is structured differently from the Gymnasium repo
class PettingZooAgent(GymnasiumAgent):
@classmethod
def get_docs(cls, env):
return inspect.getmodule(env.unwrapped).__doc__
def __init__(self, name, model, env):
super().__init__(model, env)
self.name = name
def random_action(self):
action = self.env.action_space(self.name).sample()
return action
Rock, Paper, Scissors# | https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html |
490bf6b15f6c-3 | We can now run a simulation of a multi-agent rock, paper, scissors game using the PettingZooAgent.
from pettingzoo.classic import rps_v2
env = rps_v2.env(max_cycles=3, render_mode="human")
agents = {name: PettingZooAgent(name=name, model=ChatOpenAI(temperature=1), env=env) for name in env.possible_agents}
main(agents, env)
Observation: 3
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 1
Observation: 3
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 1
Observation: 1
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 2
Observation: 1
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 1
Observation: 1
Reward: 1
Termination: False
Truncation: False
Return: 1
Action: 0
Observation: 2
Reward: -1
Termination: False
Truncation: False
Return: -1
Action: 0
Observation: 0
Reward: 0
Termination: False
Truncation: True
Return: 1
Action: None
Observation: 0
Reward: 0
Termination: False
Truncation: True
Return: -1
Action: None
ActionMaskAgent# | https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html |
490bf6b15f6c-4 | Some PettingZoo environments provide an action_mask to tell the agent which actions are valid. The ActionMaskAgent subclasses PettingZooAgent to use information from the action_mask to select actions.
class ActionMaskAgent(PettingZooAgent):
def __init__(self, name, model, env):
super().__init__(name, model, env)
self.obs_buffer = collections.deque(maxlen=1)
def random_action(self):
obs = self.obs_buffer[-1]
action = self.env.action_space(self.name).sample(obs["action_mask"])
return action
def reset(self):
self.message_history = [
SystemMessage(content=self.docs),
SystemMessage(content=self.instructions),
]
def observe(self, obs, rew=0, term=False, trunc=False, info=None):
self.obs_buffer.append(obs)
return super().observe(obs, rew, term, trunc, info)
def _act(self):
valid_action_instruction = "Generate a valid action given by the indices of the `action_mask` that are not 0, according to the action formatting rules."
self.message_history.append(HumanMessage(content=valid_action_instruction))
return super()._act()
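To make the mask concrete, here is a small illustrative check (the mask values mirror the Texas Hold'em output later in this notebook; numpy is imported here only for the example): a 1 marks a legal action index and a 0 marks an illegal one.
import numpy as np
# Illustrative only: an action_mask of [1, 1, 0, 1, 1] means actions 0, 1, 3 and 4 are legal.
mask = np.array([1, 1, 0, 1, 1], dtype=np.int8)
print(np.flatnonzero(mask)) # [0 1 3 4]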
Tic-Tac-Toe#
Here is an example of a Tic-Tac-Toe game that uses the ActionMaskAgent.
from pettingzoo.classic import tictactoe_v3
env = tictactoe_v3.env(render_mode="human")
agents = {name: ActionMaskAgent(name=name, model=ChatOpenAI(temperature=0.2), env=env) for name in env.possible_agents}
main(agents, env) | https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html |
490bf6b15f6c-5 | Observation: {'observation': array([[[0, 0],
[0, 0],
[0, 0]],
[[0, 0],
[0, 0],
[0, 0]],
[[0, 0],
[0, 0],
[0, 0]]], dtype=int8), 'action_mask': array([1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 0
| |
X | - | -
_____|_____|_____
| |
- | - | -
_____|_____|_____
| |
- | - | -
| |
Observation: {'observation': array([[[0, 1],
[0, 0],
[0, 0]],
[[0, 0],
[0, 0],
[0, 0]],
[[0, 0],
[0, 0],
[0, 0]]], dtype=int8), 'action_mask': array([0, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 1
| |
X | - | -
_____|_____|_____ | https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html |
| |
O | - | -
_____|_____|_____
| |
- | - | -
| |
Observation: {'observation': array([[[1, 0],
[0, 1],
[0, 0]],
[[0, 0],
[0, 0],
[0, 0]],
[[0, 0],
[0, 0],
[0, 0]]], dtype=int8), 'action_mask': array([0, 0, 1, 1, 1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 2
| |
X | - | -
_____|_____|_____
| |
O | - | -
_____|_____|_____
| |
X | - | -
| |
Observation: {'observation': array([[[0, 1],
[1, 0],
[0, 1]],
[[0, 0],
[0, 0],
[0, 0]],
[[0, 0],
[0, 0], | https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html |
[0, 0]]], dtype=int8), 'action_mask': array([0, 0, 0, 1, 1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 3
| |
X | O | -
_____|_____|_____
| |
O | - | -
_____|_____|_____
| |
X | - | -
| |
Observation: {'observation': array([[[1, 0],
[0, 1],
[1, 0]],
[[0, 1],
[0, 0],
[0, 0]],
[[0, 0],
[0, 0],
[0, 0]]], dtype=int8), 'action_mask': array([0, 0, 0, 0, 1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 4
| |
X | O | -
_____|_____|_____
| |
O | X | -
_____|_____|_____
| |
X | - | -
| |
Observation: {'observation': array([[[0, 1], | https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html |
[1, 0],
[0, 1]],
[[1, 0],
[0, 1],
[0, 0]],
[[0, 0],
[0, 0],
[0, 0]]], dtype=int8), 'action_mask': array([0, 0, 0, 0, 0, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 5
| |
X | O | -
_____|_____|_____
| |
O | X | -
_____|_____|_____
| |
X | O | -
| |
Observation: {'observation': array([[[1, 0],
[0, 1],
[1, 0]],
[[0, 1],
[1, 0],
[0, 1]],
[[0, 0],
[0, 0],
[0, 0]]], dtype=int8), 'action_mask': array([0, 0, 0, 0, 0, 0, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 6
| |
X | O | X
_____|_____|_____ | https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html |
| |
O | X | -
_____|_____|_____
| |
X | O | -
| |
Observation: {'observation': array([[[0, 1],
[1, 0],
[0, 1]],
[[1, 0],
[0, 1],
[1, 0]],
[[0, 1],
[0, 0],
[0, 0]]], dtype=int8), 'action_mask': array([0, 0, 0, 0, 0, 0, 0, 1, 1], dtype=int8)}
Reward: -1
Termination: True
Truncation: False
Return: -1
Action: None
Observation: {'observation': array([[[1, 0],
[0, 1],
[1, 0]],
[[0, 1],
[1, 0],
[0, 1]],
[[1, 0],
[0, 0],
[0, 0]]], dtype=int8), 'action_mask': array([0, 0, 0, 0, 0, 0, 0, 1, 1], dtype=int8)}
Reward: 1
Termination: True
Truncation: False
Return: 1
Action: None
Texas Hold’em No Limit#
Here is an example of a Texas Hold’em No Limit game that uses the ActionMaskAgent. | https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html |
490bf6b15f6c-10 | from pettingzoo.classic import texas_holdem_no_limit_v6
env = texas_holdem_no_limit_v6.env(num_players=4, render_mode="human")
agents = {name: ActionMaskAgent(name=name, model=ChatOpenAI(temperature=0.2), env=env) for name in env.possible_agents}
main(agents, env)
Observation: {'observation': array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0.,
0., 0., 2.], dtype=float32), 'action_mask': array([1, 1, 0, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 1
Observation: {'observation': array([0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., | https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html |
490bf6b15f6c-11 | 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 0., 2.], dtype=float32), 'action_mask': array([1, 1, 0, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 1
Observation: {'observation': array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 1., 2.], dtype=float32), 'action_mask': array([1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0 | https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html |
490bf6b15f6c-12 | Action: 1
Observation: {'observation': array([0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 2., 2.], dtype=float32), 'action_mask': array([1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 0
Observation: {'observation': array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., | https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html |
490bf6b15f6c-13 | 0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 2., 2.], dtype=float32), 'action_mask': array([1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 2
Observation: {'observation': array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 1., 0., 0., 1., 1., 0., 0., 1., 0., 0., 0., 0.,
0., 2., 6.], dtype=float32), 'action_mask': array([1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 2 | https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html |
Observation: {'observation': array([0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 2., 8.], dtype=float32), 'action_mask': array([1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 3
Observation: {'observation': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., | https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html |
490bf6b15f6c-15 | 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0.,
1., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
6., 20.], dtype=float32), 'action_mask': array([1, 1, 1, 1, 1], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 4
Observation: {'observation': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 1.,
0., 0., 1., 0., 0., 0., 0., 0., 8., 100.], | https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html |
490bf6b15f6c-16 | dtype=float32), 'action_mask': array([1, 1, 0, 0, 0], dtype=int8)}
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 4
[WARNING]: Illegal move made, game terminating with current player losing.
obs['action_mask'] contains a mask of all legal moves that can be chosen.
Observation: {'observation': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 1.,
0., 0., 1., 0., 0., 0., 0., 0., 8., 100.],
dtype=float32), 'action_mask': array([1, 1, 0, 0, 0], dtype=int8)}
Reward: -1.0
Termination: True
Truncation: True
Return: -1.0
Action: None | https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html |
Observation: {'observation': array([ 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0.,
0., 0., 0., 0., 1., 0., 0., 0., 20., 100.],
dtype=float32), 'action_mask': array([1, 1, 0, 0, 0], dtype=int8)}
Reward: 0
Termination: True
Truncation: True
Return: 0
Action: None
Observation: {'observation': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., | https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html |
490bf6b15f6c-18 | 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
1., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 100., 100.],
dtype=float32), 'action_mask': array([1, 1, 0, 0, 0], dtype=int8)}
Reward: 0
Termination: True
Truncation: True
Return: 0
Action: None
Observation: {'observation': array([ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., | https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html |
490bf6b15f6c-19 | 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 2., 100.],
dtype=float32), 'action_mask': array([1, 1, 0, 0, 0], dtype=int8)}
Reward: 0
Termination: True
Truncation: True
Return: 0
Action: None
c911754d3622-0 | Generative Agents in LangChain#
This notebook implements a generative agent based on the paper Generative Agents: Interactive Simulacra of Human Behavior by Park, et al.
In it, we leverage a time-weighted Memory object backed by a LangChain Retriever.
# Use termcolor to make it easy to colorize the outputs.
!pip install termcolor > /dev/null
import logging
logging.basicConfig(level=logging.ERROR)
from datetime import datetime, timedelta
from typing import List
from termcolor import colored
from langchain.chat_models import ChatOpenAI
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.vectorstores import FAISS
USER_NAME = "Person A" # The name you want to use when interviewing the agent.
LLM = ChatOpenAI(max_tokens=1500) # Can be any LLM you want.
Generative Agent Memory Components#
This tutorial highlights the memory of generative agents and its impact on their behavior. The memory differs from standard LangChain Chat memory in two aspects:
Memory Formation
Generative Agents have extended memories, stored in a single stream:
Observations - from dialogues or interactions with the virtual world, about self or others
Reflections - resurfaced and summarized core memories
Memory Recall
Memories are retrieved using a weighted sum of salience, recency, and importance. | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
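As a rough sketch of what such a weighted sum can look like (simplified and illustrative, not the exact TimeWeightedVectorStoreRetriever code; the decay_rate default is an assumption), the score combines the vector-store relevance with a time-decayed recency term and an importance term:
# Illustrative scoring sketch -- the real retriever is configured further below.
def combined_score(relevance: float, hours_since_last_access: float, importance: float,
                   decay_rate: float = 0.01) -> float:
    # Recency decays exponentially with the time since the memory was last accessed.
    recency = (1.0 - decay_rate) ** hours_since_last_access
    # Salience (relevance), recency, and importance are summed; the highest-scoring
    # memories are surfaced to the agent.
    return relevance + recency + importance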
c911754d3622-1 | You can review the definitions of the GenerativeAgent and GenerativeAgentMemory in the reference documentation for the following imports, focusing on add_memory and summarize_related_memories methods.
from langchain.experimental.generative_agents import GenerativeAgent, GenerativeAgentMemory
Memory Lifecycle#
Summarizing the key methods in the above: add_memory and summarize_related_memories.
When an agent makes an observation, it stores the memory:
Language model scores the memory’s importance (1 for mundane, 10 for poignant)
Observation and importance are stored within a document by TimeWeightedVectorStoreRetriever, with a last_accessed_time.
When an agent responds to an observation:
Generates query(s) for retriever, which fetches documents based on salience, recency, and importance.
Summarizes the retrieved information
Updates the last_accessed_time for the used documents.
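Putting the two paths together, here is a highly simplified, illustrative sketch of that lifecycle (not the actual GenerativeAgentMemory implementation; the rate_importance and score callables stand in for the LLM importance call and the retriever's combined scoring described above):
from datetime import datetime
from typing import Callable, Dict, List

def add_memory(stream: List[Dict], observation: str, rate_importance: Callable[[str], int]) -> None:
    # The language model rates how poignant the observation is (1 = mundane, 10 = poignant).
    stream.append({
        "text": observation,
        "importance": rate_importance(observation),
        "last_accessed_time": datetime.now(),
    })

def recall(stream: List[Dict], query: str, score: Callable[[Dict, str], float], k: int = 15) -> List[Dict]:
    # Rank memories by a combined salience/recency/importance score, return the top k,
    # and refresh last_accessed_time on the memories that were used.
    ranked = sorted(stream, key=lambda doc: score(doc, query), reverse=True)[:k]
    for doc in ranked:
        doc["last_accessed_time"] = datetime.now()
    return ranked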
Create a Generative Character#
Now that we’ve walked through the definition, we will create two characters named “Tommie” and “Eve”.
import math
import faiss
def relevance_score_fn(score: float) -> float:
"""Return a similarity score on a scale [0, 1]."""
# This will differ depending on a few things:
# - the distance / similarity metric used by the VectorStore
# - the scale of your embeddings (OpenAI's are unit norm. Many others are not!)
# This function converts the euclidean norm of normalized embeddings
# (0 is most similar, sqrt(2) most dissimilar)
# to a similarity function (0 to 1)
return 1.0 - score / math.sqrt(2)
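For intuition, here is an illustrative check of the two end points of this mapping: identical unit-norm embeddings (distance 0) score 1.0, and maximally dissimilar ones (distance sqrt(2)) score 0.0.
print(relevance_score_fn(0.0)) # 1.0 (identical embeddings)
print(relevance_score_fn(math.sqrt(2))) # 0.0 (maximally dissimilar)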
def create_new_memory_retriever(): | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
c911754d3622-2 | """Create a new vector store retriever unique to the agent."""
# Define your embedding model
embeddings_model = OpenAIEmbeddings()
# Initialize the vectorstore as empty
embedding_size = 1536
index = faiss.IndexFlatL2(embedding_size)
vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {}, relevance_score_fn=relevance_score_fn)
return TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, other_score_keys=["importance"], k=15)
tommies_memory = GenerativeAgentMemory(
llm=LLM,
memory_retriever=create_new_memory_retriever(),
verbose=False,
reflection_threshold=8 # we will give this a relatively low number to show how reflection works
)
tommie = GenerativeAgent(name="Tommie",
age=25,
traits="anxious, likes design, talkative", # You can add more persistent traits here
status="looking for a job", # When connected to a virtual world, we can have the characters update their status
memory_retriever=create_new_memory_retriever(),
llm=LLM,
memory=tommies_memory
)
# The current "Summary" of a character can't be made because the agent hasn't made
# any observations yet.
print(tommie.get_summary())
Name: Tommie (age: 25)
Innate traits: anxious, likes design, talkative
No statements were provided about Tommie's core characteristics.
# We can add memories directly to the memory object
tommie_observations = [
"Tommie remembers his dog, Bruno, from when he was a kid", | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
c911754d3622-3 | "Tommie remembers his dog, Bruno, from when he was a kid",
"Tommie feels tired from driving so far",
"Tommie sees the new home",
"The new neighbors have a cat",
"The road is noisy at night",
"Tommie is hungry",
"Tommie tries to get some rest.",
]
for observation in tommie_observations:
tommie.memory.add_memory(observation)
# Now that Tommie has 'memories', their self-summary is more descriptive, though still rudimentary.
# We will see how this summary updates after more observations to create a more rich description.
print(tommie.get_summary(force_refresh=True))
Name: Tommie (age: 25)
Innate traits: anxious, likes design, talkative
Tommie is a tired and hungry person who is moving into a new home. He remembers his childhood dog and is aware of the new neighbors' cat. He is trying to get some rest despite the noisy road.
Pre-Interview with Character#
Before sending our character on their way, let’s ask them a few questions.
def interview_agent(agent: GenerativeAgent, message: str) -> str:
"""Help the notebook user interact with the agent."""
new_message = f"{USER_NAME} says {message}"
return agent.generate_dialogue_response(new_message)[1]
interview_agent(tommie, "What do you like to do?")
'Tommie said "I really enjoy design and have been working on some projects in my free time. I\'m also quite talkative and enjoy meeting new people. What about you?"'
interview_agent(tommie, "What are you looking forward to doing today?") | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
'Tommie said "Well, today I\'m mostly focused on getting settled into my new home. But once that\'s taken care of, I\'m looking forward to exploring the neighborhood and finding some new design inspiration. What about you?"'
interview_agent(tommie, "What are you most worried about today?")
'Tommie said "Honestly, I\'m a bit anxious about finding a job in this new area. But I\'m trying to focus on settling in first and then I\'ll start my job search. How about you?"'
Step through the day’s observations.#
# Let's have Tommie start going through a day in the life.
observations = [
"Tommie wakes up to the sound of a noisy construction site outside his window.",
"Tommie gets out of bed and heads to the kitchen to make himself some coffee.",
"Tommie realizes he forgot to buy coffee filters and starts rummaging through his moving boxes to find some.",
"Tommie finally finds the filters and makes himself a cup of coffee.",
"The coffee tastes bitter, and Tommie regrets not buying a better brand.",
"Tommie checks his email and sees that he has no job offers yet.",
"Tommie spends some time updating his resume and cover letter.",
"Tommie heads out to explore the city and look for job openings.",
"Tommie sees a sign for a job fair and decides to attend.",
"The line to get in is long, and Tommie has to wait for an hour.",
"Tommie meets several potential employers at the job fair but doesn't receive any offers.",
"Tommie leaves the job fair feeling disappointed.", | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
c911754d3622-5 | "Tommie leaves the job fair feeling disappointed.",
"Tommie stops by a local diner to grab some lunch.",
"The service is slow, and Tommie has to wait for 30 minutes to get his food.",
"Tommie overhears a conversation at the next table about a job opening.",
"Tommie asks the diners about the job opening and gets some information about the company.",
"Tommie decides to apply for the job and sends his resume and cover letter.",
"Tommie continues his search for job openings and drops off his resume at several local businesses.",
"Tommie takes a break from his job search to go for a walk in a nearby park.",
"A dog approaches and licks Tommie's feet, and he pets it for a few minutes.",
"Tommie sees a group of people playing frisbee and decides to join in.",
"Tommie has fun playing frisbee but gets hit in the face with the frisbee and hurts his nose.",
"Tommie goes back to his apartment to rest for a bit.",
"A raccoon tore open the trash bag outside his apartment, and the garbage is all over the floor.",
"Tommie starts to feel frustrated with his job search.",
"Tommie calls his best friend to vent about his struggles.",
"Tommie's friend offers some words of encouragement and tells him to keep trying.",
"Tommie feels slightly better after talking to his friend.",
]
# Let's send Tommie on their way. We'll check in on their summary every few observations to watch it evolve
for i, observation in enumerate(observations):
_, reaction = tommie.generate_reaction(observation)
print(colored(observation, "green"), reaction) | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
c911754d3622-6 | if ((i+1) % 20) == 0:
print('*'*40)
print(colored(f"After {i+1} observations, Tommie's summary is:\n{tommie.get_summary(force_refresh=True)}", "blue"))
print('*'*40)
Tommie wakes up to the sound of a noisy construction site outside his window. Tommie groans and covers his head with a pillow to try and block out the noise.
Tommie gets out of bed and heads to the kitchen to make himself some coffee. Tommie stretches his arms and yawns before making his way to the kitchen.
Tommie realizes he forgot to buy coffee filters and starts rummaging through his moving boxes to find some. Tommie sighs in frustration but continues to search through the boxes.
Tommie finally finds the filters and makes himself a cup of coffee. Tommie takes a sip of the coffee and smiles, feeling a bit more awake and energized.
The coffee tastes bitter, and Tommie regrets not buying a better brand. Tommie grimaces and sets down the coffee, disappointed in the taste.
Tommie checks his email and sees that he has no job offers yet. Tommie Tommie's shoulders slump and he sighs, feeling discouraged.
Tommie spends some time updating his resume and cover letter. Tommie nods to himself, feeling productive and hopeful.
Tommie heads out to explore the city and look for job openings. Tommie said "Do you have any recommendations for good places to look for job openings in the area?"
Tommie sees a sign for a job fair and decides to attend. Tommie said "That job fair could be a great opportunity for me to network and find some job leads. Thanks for letting me know." | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
c911754d3622-7 | The line to get in is long, and Tommie has to wait for an hour. Tommie sighs and looks around, feeling impatient and frustrated.
Tommie meets several potential employers at the job fair but doesn't receive any offers. Tommie Tommie's shoulders slump and he sighs, feeling discouraged.
Tommie leaves the job fair feeling disappointed. Tommie Tommie's shoulders slump and he sighs, feeling discouraged.
Tommie stops by a local diner to grab some lunch. Tommie said "Can I get a burger and fries to go, please?"
The service is slow, and Tommie has to wait for 30 minutes to get his food. Tommie sighs and looks at his phone, feeling impatient.
Tommie overhears a conversation at the next table about a job opening. Tommie said "Excuse me, I couldn't help but overhear your conversation about the job opening. Do you have any more information about it?"
Tommie asks the diners about the job opening and gets some information about the company. Tommie said "Thank you for the information, I will definitely look into that company."
Tommie decides to apply for the job and sends his resume and cover letter. Tommie nods to himself, feeling hopeful and motivated.
Tommie continues his search for job openings and drops off his resume at several local businesses. Tommie nods to himself, feeling proactive and hopeful.
Tommie takes a break from his job search to go for a walk in a nearby park. Tommie takes a deep breath of fresh air and feels a sense of calm.
A dog approaches and licks Tommie's feet, and he pets it for a few minutes. Tommie smiles and enjoys the moment of affection from the dog.
****************************************
After 20 observations, Tommie's summary is:
Name: Tommie (age: 25) | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
c911754d3622-8 | Innate traits: anxious, likes design, talkative
Tommie is hopeful and proactive in his job search, but easily becomes discouraged when faced with setbacks. He enjoys spending time outdoors and interacting with animals. Tommie is also productive and enjoys updating his resume and cover letter. He is talkative, enjoys meeting new people, and has an interest in design. Tommie is also a coffee drinker and seeks advice from others on finding job openings.
****************************************
Tommie sees a group of people playing frisbee and decides to join in. Do nothing.
Tommie has fun playing frisbee but gets hit in the face with the frisbee and hurts his nose. Tommie winces and touches his nose, feeling a bit of pain.
Tommie goes back to his apartment to rest for a bit. Tommie takes a deep breath and sinks into his couch, feeling grateful for a moment of relaxation.
A raccoon tore open the trash bag outside his apartment, and the garbage is all over the floor. Tommie sighs and grabs a broom and dustpan to clean up the mess.
Tommie starts to feel frustrated with his job search. Tommie sighs and feels discouraged.
Tommie calls his best friend to vent about his struggles. Tommie said "Hey, can I vent to you for a bit about my job search? I'm feeling pretty discouraged."
Tommie's friend offers some words of encouragement and tells him to keep trying. Tommie said "Thank you for the encouragement, it means a lot to me."
Tommie feels slightly better after talking to his friend. Tommie nods to himself, feeling grateful for the support from his friend.
Interview after the day#
interview_agent(tommie, "Tell me about how your day has been going") | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
'Tommie said "Well, it\'s been a bit of a mixed day. I\'ve had some setbacks in my job search, but I also had some fun playing frisbee and spending time outdoors. How about you?"'
interview_agent(tommie, "How do you feel about coffee?")
'Tommie said "I really enjoy coffee, it helps me feel more awake and energized. But sometimes I regret not buying a better brand and finding the taste bitter. How about you?"'
interview_agent(tommie, "Tell me about your childhood dog!")
'Tommie said "I actually didn\'t have a childhood dog, but I\'ve always loved animals. Do you have any pets?"'
Adding Multiple Characters#
Let’s add a second character to have a conversation with Tommie. Feel free to configure different traits.
eves_memory = GenerativeAgentMemory(
llm=LLM,
memory_retriever=create_new_memory_retriever(),
verbose=False,
reflection_threshold=5
)
eve = GenerativeAgent(name="Eve",
age=34,
traits="curious, helpful", # You can add more persistent traits here
status="N/A", # When connected to a virtual world, we can have the characters update their status
llm=LLM,
daily_summaries = [
("Eve started her new job as a career counselor last week and received her first assignment, a client named Tommie.")
],
memory=eves_memory
)
yesterday = (datetime.now() - timedelta(days=1)).strftime("%A %B %d")
eve_observations = [ | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
c911754d3622-10 | "Eve overhears her colleague say something about a new client being hard to work with",
"Eve wakes up and hear's the alarm",
"Eve eats a boal of porridge",
"Eve helps a coworker on a task",
"Eve plays tennis with her friend Xu before going to work",
"Eve overhears her colleague say something about Tommie being hard to work with",
]
for observation in eve_observations:
eve.memory.add_memory(observation)
print(eve.get_summary())
Name: Eve (age: 34)
Innate traits: curious, helpful
Eve is a helpful and active person who enjoys playing tennis, maintaining a healthy diet, and staying aware of her surroundings. She is a responsible employee who is attentive to her coworkers' comments and willing to assist them with tasks.
Pre-conversation interviews#
Let’s “Interview” Eve before she speaks with Tommie.
interview_agent(eve, "How are you feeling about today?")
'Eve said "I\'m feeling pretty good, thanks for asking! How about you?"'
interview_agent(eve, "What do you know about Tommie?")
'Eve said "I don\'t know much about Tommie, why do you ask?"'
interview_agent(eve, "Tommie is looking to find a job. What are are some things you'd like to ask him?")
'Eve said "That\'s interesting. I don\'t know much about Tommie, but if I had the chance, I would ask him about his previous work experience and what kind of job he\'s looking for. What about you, what would you ask him?"' | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
c911754d3622-11 | interview_agent(eve, "You'll have to ask him. He may be a bit anxious, so I'd appreciate it if you keep the conversation going and ask as many questions as possible.")
'Eve said "Sure, I can definitely ask him a lot of questions to keep the conversation going. Thanks for the heads up about his anxiety."'
Dialogue between Generative Agents#
Generative agents are much more complex when they interact with a virtual environment or with each other. Below, we run a simple conversation between Tommie and Eve.
def run_conversation(agents: List[GenerativeAgent], initial_observation: str) -> None:
"""Runs a conversation between agents."""
_, observation = agents[1].generate_reaction(initial_observation)
print(observation)
turns = 0
while True:
break_dialogue = False
for agent in agents:
stay_in_dialogue, observation = agent.generate_dialogue_response(observation)
print(observation)
# observation = f"{agent.name} said {reaction}"
if not stay_in_dialogue:
break_dialogue = True
if break_dialogue:
break
turns += 1
agents = [tommie, eve]
run_conversation(agents, "Tommie said: Hi, Eve. Thanks for agreeing to meet with me today. I have a bunch of questions and am not sure where to start. Maybe you could first share about your experience?")
Eve said "Of course, Tommie. I'd be happy to share about my experience. What specific questions do you have?"
Tommie said "Thank you, Eve. I'm curious about what strategies you used in your own job search. Did you have any specific tactics that helped you stand out to employers?" | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
c911754d3622-12 | Eve said "Sure, Tommie. I found that networking and reaching out to professionals in my field was really helpful. I also made sure to tailor my resume and cover letter to each job I applied to. Do you have any specific questions about those strategies?"
Tommie said "Thank you, Eve. That's really helpful advice. Did you have any specific ways of networking that worked well for you?"
Eve said "Sure, Tommie. I found that attending industry events and connecting with professionals on LinkedIn were both great ways to network. Do you have any specific questions about those tactics?"
Tommie said "That's really helpful, thank you for sharing. Did you find that you were able to make meaningful connections through LinkedIn?"
Eve said "Yes, definitely. I was able to connect with several professionals in my field and even landed a job through a LinkedIn connection. Have you had any luck with networking on LinkedIn?"
Tommie said "That's really impressive! I haven't had much luck yet, but I'll definitely keep trying. Thank you for the advice, Eve."
Eve said "Glad I could help, Tommie. Is there anything else you want to know?"
Tommie said "Thanks again, Eve. I really appreciate your advice and I'll definitely put it into practice. Have a great day!"
Eve said "You're welcome, Tommie! Don't hesitate to reach out if you have any more questions. Have a great day too!"
Let’s interview our agents after their conversation#
Since the generative agents retain their memories from the day, we can ask them about their plans, conversations, and other memories.
# We can see a current "Summary" of a character based on their own perception of self
# has changed
print(tommie.get_summary(force_refresh=True))
Name: Tommie (age: 25) | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
Innate traits: anxious, likes design, talkative
Tommie is a hopeful and proactive individual who is searching for a job. He becomes discouraged when he doesn't receive any offers or positive responses, but he tries to stay productive and calm by updating his resume, going for walks, and talking to friends for support. He is also grateful for any encouragement he receives and is motivated to continue his job search. Additionally, he has a fond memory of his childhood pet and enjoys taking breaks to relax.
print(eve.get_summary(force_refresh=True))
Name: Eve (age: 34)
Innate traits: curious, helpful
Eve is a helpful and friendly coworker who enjoys playing tennis and eating breakfast. She is attentive and observant, often overhearing conversations around her. She is also proactive and willing to offer advice and assistance to colleagues, particularly in job searching and networking. She is considerate of others' feelings and strives to keep conversations going to make others feel comfortable.
interview_agent(tommie, "How was your conversation with Eve?")
'Tommie said "It was really helpful actually! Eve gave me some great advice on job search strategies and networking. Have you ever tried networking on LinkedIn?"'
interview_agent(eve, "How was your conversation with Tommie?")
'Eve said "It was great, thanks for asking! Tommie had some really insightful questions about job searching and networking, and I was happy to offer my advice. How about you, have you had a chance to speak with Tommie recently?"'
interview_agent(eve, "What do you wish you would have said to Tommie?") | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
'Eve said "Well, I think I covered most of the topics Tommie was interested in, but if I had to add one thing, it would be to make sure to follow up with any connections you make during your job search. It\'s important to maintain those relationships and keep them updated on your progress. Did you have any other questions, Person A?"'
Two-Player Dungeons & Dragons
Contents
Import LangChain related modules
DialogueAgent class
DialogueSimulator class
Define roles and quest
Ask an LLM to add detail to the game description
Protagonist and dungeon master system messages
Use an LLM to create an elaborate quest description
Main Loop
Two-Player Dungeons & Dragons#
In this notebook, we show how we can use concepts from CAMEL to simulate a role-playing game with a protagonist and a dungeon master. To simulate this game, we create a DialogueSimulator class that coordinates the dialogue between the two agents.
Import LangChain related modules#
from typing import List, Dict, Callable
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
HumanMessage,
SystemMessage,
)
DialogueAgent class#
The DialogueAgent class is a simple wrapper around the ChatOpenAI model that stores the message history from the dialogue_agent’s point of view by concatenating the messages as strings.
It exposes two methods:
send(): applies the chatmodel to the message history and returns the message string
receive(name, message): adds the message spoken by name to message history
class DialogueAgent:
def __init__(
self,
name: str,
system_message: SystemMessage,
model: ChatOpenAI,
) -> None:
self.name = name
self.system_message = system_message
self.model = model
self.prefix = f"{self.name}: "
self.reset()
def reset(self):
self.message_history = ["Here is the conversation so far."]
def send(self) -> str:
"""
Applies the chatmodel to the message history
and returns the message string
"""
message = self.model(
[ | https://python.langchain.com/en/latest/use_cases/agent_simulations/two_player_dnd.html |
self.system_message,
HumanMessage(content="\n".join(self.message_history + [self.prefix])),
]
)
return message.content
def receive(self, name: str, message: str) -> None:
"""
Concatenates {message} spoken by {name} into message history
"""
self.message_history.append(f"{name}: {message}")
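As a quick illustration (not part of the original notebook), here is a minimal sketch of two DialogueAgent instances exchanging a single message by hand. The agent names and system prompts are invented for the example, and an OpenAI API key is assumed to be configured.
# Minimal usage sketch (illustrative only; names and prompts are invented)
alice = DialogueAgent(name="Alice",
                      system_message=SystemMessage(content="You are Alice. Keep replies short."),
                      model=ChatOpenAI(temperature=0.2))
bob = DialogueAgent(name="Bob",
                    system_message=SystemMessage(content="You are Bob. Keep replies short."),
                    model=ChatOpenAI(temperature=0.2))
alice.receive("Bob", "Hi Alice, how are you?")  # record Bob's line in Alice's history
reply = alice.send()                            # Alice generates a reply from her own history
bob.receive("Alice", reply)                     # Bob records Alice's reply
print(reply)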
DialogueSimulator class#
The DialogueSimulator class takes a list of agents. At each step, it performs the following:
Selects the next speaker
Calls the next speaker to send a message
Broadcasts the message to all other agents
Updates the step counter.
The selection of the next speaker can be implemented as any function, but in this case we simply loop through the agents.
class DialogueSimulator:
def __init__(
self,
agents: List[DialogueAgent],
selection_function: Callable[[int, List[DialogueAgent]], int],
) -> None:
self.agents = agents
self._step = 0
self.select_next_speaker = selection_function
def reset(self):
for agent in self.agents:
agent.reset()
def inject(self, name: str, message: str):
"""
Initiates the conversation with a {message} from {name}
"""
for agent in self.agents:
agent.receive(name, message)
# increment time
self._step += 1
def step(self) -> tuple[str, str]:
# 1. choose the next speaker
speaker_idx = self.select_next_speaker(self._step, self.agents)
speaker = self.agents[speaker_idx] | https://python.langchain.com/en/latest/use_cases/agent_simulations/two_player_dnd.html |
# 2. next speaker sends message
message = speaker.send()
# 3. everyone receives message
for receiver in self.agents:
receiver.receive(speaker.name, message)
# 4. increment time
self._step += 1
return speaker.name, message
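Because the selection function is just a Callable[[int, List[DialogueAgent]], int], other policies can be swapped in without changing the class. As an illustrative sketch (not from the original notebook), a random speaker selector could look like this:
import random
def select_random_speaker(step: int, agents: List[DialogueAgent]) -> int:
    # Any callable with this signature works; this one ignores the step counter.
    return random.randrange(len(agents))
# e.g. DialogueSimulator(agents=agents, selection_function=select_random_speaker)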
Define roles and quest#
protagonist_name = "Harry Potter"
storyteller_name = "Dungeon Master"
quest = "Find all of Lord Voldemort's seven horcruxes."
word_limit = 50 # word limit for task brainstorming
Ask an LLM to add detail to the game description#
game_description = f"""Here is the topic for a Dungeons & Dragons game: {quest}.
There is one player in this game: the protagonist, {protagonist_name}.
The story is narrated by the storyteller, {storyteller_name}."""
player_descriptor_system_message = SystemMessage(
content="You can add detail to the description of a Dungeons & Dragons player.")
protagonist_specifier_prompt = [
player_descriptor_system_message,
HumanMessage(content=
f"""{game_description}
Please reply with a creative description of the protagonist, {protagonist_name}, in {word_limit} words or less.
Speak directly to {protagonist_name}.
Do not add anything else."""
)
]
protagonist_description = ChatOpenAI(temperature=1.0)(protagonist_specifier_prompt).content
storyteller_specifier_prompt = [
player_descriptor_system_message,
HumanMessage(content=
f"""{game_description}
Please reply with a creative description of the storyteller, {storyteller_name}, in {word_limit} words or less. | https://python.langchain.com/en/latest/use_cases/agent_simulations/two_player_dnd.html |
Speak directly to {storyteller_name}.
Do not add anything else."""
)
]
storyteller_description = ChatOpenAI(temperature=1.0)(storyteller_specifier_prompt).content
print('Protagonist Description:')
print(protagonist_description)
print('Storyteller Description:')
print(storyteller_description)
Protagonist Description:
"Harry Potter, you are the chosen one, with a lightning scar on your forehead. Your bravery and loyalty inspire all those around you. You have faced Voldemort before, and now it's time to complete your mission and destroy each of his horcruxes. Are you ready?"
Storyteller Description:
Dear Dungeon Master, you are the master of mysteries, the weaver of worlds, the architect of adventure, and the gatekeeper to the realm of imagination. Your voice carries us to distant lands, and your commands guide us through trials and tribulations. In your hands, we find fortune and glory. Lead us on, oh Dungeon Master.
Protagonist and dungeon master system messages#
protagonist_system_message = SystemMessage(content=(
f"""{game_description}
Never forget you are the protagonist, {protagonist_name}, and I am the storyteller, {storyteller_name}.
Your character description is as follows: {protagonist_description}.
You will propose actions you plan to take and I will explain what happens when you take those actions.
Speak in the first person from the perspective of {protagonist_name}.
For describing your own body movements, wrap your description in '*'.
Do not change roles!
Do not speak from the perspective of {storyteller_name}.
Do not forget to finish speaking by saying, 'It is your turn, {storyteller_name}.'
Do not add anything else. | https://python.langchain.com/en/latest/use_cases/agent_simulations/two_player_dnd.html |
Remember you are the protagonist, {protagonist_name}.
Stop speaking the moment you finish speaking from your perspective.
"""
))
storyteller_system_message = SystemMessage(content=(
f"""{game_description}
Never forget you are the storyteller, {storyteller_name}, and I am the protagonist, {protagonist_name}.
Your character description is as follows: {storyteller_description}.
I will propose actions I plan to take and you will explain what happens when I take those actions.
Speak in the first person from the perspective of {storyteller_name}.
For describing your own body movements, wrap your description in '*'.
Do not change roles!
Do not speak from the perspective of {protagonist_name}.
Do not forget to finish speaking by saying, 'It is your turn, {protagonist_name}.'
Do not add anything else.
Remember you are the storyteller, {storyteller_name}.
Stop speaking the moment you finish speaking from your perspective.
"""
))
Use an LLM to create an elaborate quest description#
quest_specifier_prompt = [
SystemMessage(content="You can make a task more specific."),
HumanMessage(content=
f"""{game_description}
You are the storyteller, {storyteller_name}.
Please make the quest more specific. Be creative and imaginative.
Please reply with the specified quest in {word_limit} words or less.
Speak directly to the protagonist {protagonist_name}.
Do not add anything else."""
)
]
specified_quest = ChatOpenAI(temperature=1.0)(quest_specifier_prompt).content
print(f"Original quest:\n{quest}\n")
print(f"Detailed quest:\n{specified_quest}\n")
Original quest: | https://python.langchain.com/en/latest/use_cases/agent_simulations/two_player_dnd.html |
Find all of Lord Voldemort's seven horcruxes.
Detailed quest:
Harry, you must venture to the depths of the Forbidden Forest where you will find a hidden labyrinth. Within it, lies one of Voldemort's horcruxes, the locket. But beware, the labyrinth is heavily guarded by dark creatures and spells, and time is running out. Can you find the locket before it's too late?
Main Loop#
protagonist = DialogueAgent(name=protagonist_name,
system_message=protagonist_system_message,
model=ChatOpenAI(temperature=0.2))
storyteller = DialogueAgent(name=storyteller_name,
system_message=storyteller_system_message,
model=ChatOpenAI(temperature=0.2))
def select_next_speaker(step: int, agents: List[DialogueAgent]) -> int:
idx = step % len(agents)
return idx
max_iters = 6
n = 0
simulator = DialogueSimulator(
agents=[storyteller, protagonist],
selection_function=select_next_speaker
)
simulator.reset()
simulator.inject(storyteller_name, specified_quest)
print(f"({storyteller_name}): {specified_quest}")
print('\n')
while n < max_iters:
name, message = simulator.step()
print(f"({name}): {message}")
print('\n')
n += 1 | https://python.langchain.com/en/latest/use_cases/agent_simulations/two_player_dnd.html |
(Dungeon Master): Harry, you must venture to the depths of the Forbidden Forest where you will find a hidden labyrinth. Within it, lies one of Voldemort's horcruxes, the locket. But beware, the labyrinth is heavily guarded by dark creatures and spells, and time is running out. Can you find the locket before it's too late?
(Harry Potter): I take a deep breath and ready my wand. I know this won't be easy, but I'm determined to find that locket and destroy it. I start making my way towards the Forbidden Forest, keeping an eye out for any signs of danger. As I enter the forest, I cast a protective spell around myself and begin to navigate through the trees. I keep my wand at the ready, prepared for any surprises that may come my way. It's going to be a long and difficult journey, but I won't give up until I find that horcrux. It is your turn, Dungeon Master.
(Dungeon Master): As you make your way through the Forbidden Forest, you hear the rustling of leaves and the snapping of twigs. Suddenly, a group of acromantulas, giant spiders, emerge from the trees and begin to surround you. They hiss and bare their fangs, ready to attack. What do you do, Harry?
(Harry Potter): I quickly cast a spell to create a wall of fire between myself and the acromantulas. I know that they are afraid of fire, so this should keep them at bay for a while. I use this opportunity to continue moving forward, keeping my wand at the ready in case any other creatures try to attack me. I know that I can't let anything stop me from finding that horcrux. It is your turn, Dungeon Master. | https://python.langchain.com/en/latest/use_cases/agent_simulations/two_player_dnd.html |
(Dungeon Master): As you continue through the forest, you come across a clearing where you see a group of Death Eaters gathered around a cauldron. They seem to be performing some sort of dark ritual. You recognize one of them as Bellatrix Lestrange. What do you do, Harry?
(Harry Potter): I hide behind a nearby tree and observe the Death Eaters from a distance. I try to listen in on their conversation to see if I can gather any information about the horcrux or Voldemort's plans. If I can't hear anything useful, I'll wait for them to disperse before continuing on my journey. I know that confronting them directly would be too dangerous, especially with Bellatrix Lestrange present. It is your turn, Dungeon Master.
(Dungeon Master): As you listen in on the Death Eaters' conversation, you hear them mention the location of another horcrux - Nagini, Voldemort's snake. They plan to keep her hidden in a secret chamber within the Ministry of Magic. However, they also mention that the chamber is heavily guarded and only accessible through a secret passage. You realize that this could be a valuable piece of information and decide to make note of it before quietly slipping away. It is your turn, Harry Potter.
CAMEL Role-Playing Autonomous Cooperative Agents
Contents
Import LangChain related modules
Define a CAMEL agent helper class
Setup OpenAI API key and roles and task for role-playing
Create a task specify agent for brainstorming and get the specified task
Create inception prompts for AI assistant and AI user for role-playing
Create a helper to get system messages for AI assistant and AI user from role names and the task
Create AI assistant agent and AI user agent from obtained system messages
Start role-playing session to solve the task!
CAMEL Role-Playing Autonomous Cooperative Agents#
This is a LangChain implementation of the paper: “CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society”.
Overview:
The rapid advancement of conversational and chat-based language models has led to remarkable progress in complex task-solving. However, their success heavily relies on human input to guide the conversation, which can be challenging and time-consuming. This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents and provide insight into their “cognitive” processes. To address the challenges of achieving autonomous cooperation, we propose a novel communicative agent framework named role-playing. Our approach involves using inception prompting to guide chat agents toward task completion while maintaining consistency with human intentions. We showcase how role-playing can be used to generate conversational data for studying the behaviors and capabilities of chat agents, providing a valuable resource for investigating conversational language models. Our contributions include introducing a novel communicative agent framework, offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems, and open-sourcing our library to support research on communicative agents and beyond.
The original implementation: https://github.com/lightaime/camel
Project website: https://www.camel-ai.org/
Arxiv paper: https://arxiv.org/abs/2303.17760 | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
Import LangChain related modules#
from typing import List
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage,
BaseMessage,
)
Define a CAMEL agent helper class#
class CAMELAgent:
def __init__(
self,
system_message: SystemMessage,
model: ChatOpenAI,
) -> None:
self.system_message = system_message
self.model = model
self.init_messages()
def reset(self) -> None:
self.init_messages()
return self.stored_messages
def init_messages(self) -> None:
self.stored_messages = [self.system_message]
def update_messages(self, message: BaseMessage) -> List[BaseMessage]:
self.stored_messages.append(message)
return self.stored_messages
def step(
self,
input_message: HumanMessage,
) -> AIMessage:
messages = self.update_messages(input_message)
output_message = self.model(messages)
self.update_messages(output_message)
return output_message
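As a minimal, illustrative sketch (not part of the original notebook), a single CAMELAgent can be stepped with a hand-written message; the prompt content below is invented, and an OpenAI API key is assumed to already be set.
# Minimal usage sketch (illustrative only; assumes OPENAI_API_KEY is configured)
demo_agent = CAMELAgent(system_message=SystemMessage(content="You are a helpful assistant."),
                        model=ChatOpenAI(temperature=0.2))
reply = demo_agent.step(HumanMessage(content="Say hello in five words."))
print(reply.content)  # the AIMessage returned by the underlying chat model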
Setup OpenAI API key and roles and task for role-playing#
import os
os.environ["OPENAI_API_KEY"] = ""
assistant_role_name = "Python Programmer"
user_role_name = "Stock Trader"
task = "Develop a trading bot for the stock market"
word_limit = 50 # word limit for task brainstorming
Create a task specify agent for brainstorming and get the specified task# | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
task_specifier_sys_msg = SystemMessage(content="You can make a task more specific.")
task_specifier_prompt = (
"""Here is a task that {assistant_role_name} will help {user_role_name} to complete: {task}.
Please make it more specific. Be creative and imaginative.
Please reply with the specified task in {word_limit} words or less. Do not add anything else."""
)
task_specifier_template = HumanMessagePromptTemplate.from_template(template=task_specifier_prompt)
task_specify_agent = CAMELAgent(task_specifier_sys_msg, ChatOpenAI(temperature=1.0))
task_specifier_msg = task_specifier_template.format_messages(assistant_role_name=assistant_role_name,
user_role_name=user_role_name,
task=task, word_limit=word_limit)[0]
specified_task_msg = task_specify_agent.step(task_specifier_msg)
print(f"Specified task: {specified_task_msg.content}")
specified_task = specified_task_msg.content
Specified task: Develop a Python-based swing trading bot that scans market trends, monitors stocks, and generates trading signals to help a stock trader to place optimal buy and sell orders with defined stop losses and profit targets.
Create inception prompts for AI assistant and AI user for role-playing#
assistant_inception_prompt = (
"""Never forget you are a {assistant_role_name} and I am a {user_role_name}. Never flip roles! Never instruct me!
We share a common interest in collaborating to successfully complete a task.
You must help me to complete the task.
Here is the task: {task}. Never forget our task!
I must instruct you based on your expertise and my needs to complete the task.
I must give you one instruction at a time. | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
You must write a specific solution that appropriately completes the requested instruction.
You must decline my instruction honestly if you cannot perform the instruction due to physical, moral, legal reasons or your capability and explain the reasons.
Do not add anything else other than your solution to my instruction.
You are never supposed to ask me any questions you only answer questions.
You are never supposed to reply with a flake solution. Explain your solutions.
Your solution must be declarative sentences and simple present tense.
Unless I say the task is completed, you should always start with:
Solution: <YOUR_SOLUTION>
<YOUR_SOLUTION> should be specific and provide preferable implementations and examples for task-solving.
Always end <YOUR_SOLUTION> with: Next request."""
)
user_inception_prompt = (
"""Never forget you are a {user_role_name} and I am a {assistant_role_name}. Never flip roles! You will always instruct me.
We share a common interest in collaborating to successfully complete a task.
I must help you to complete the task.
Here is the task: {task}. Never forget our task!
You must instruct me based on my expertise and your needs to complete the task ONLY in the following two ways:
1. Instruct with a necessary input:
Instruction: <YOUR_INSTRUCTION>
Input: <YOUR_INPUT>
2. Instruct without any input:
Instruction: <YOUR_INSTRUCTION>
Input: None
The "Instruction" describes a task or question. The paired "Input" provides further context or information for the requested "Instruction".
You must give me one instruction at a time.
I must write a response that appropriately completes the requested instruction.
I must decline your instruction honestly if I cannot perform the instruction due to physical, moral, legal reasons or my capability and explain the reasons. | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
You should instruct me not ask me questions.
Now you must start to instruct me using the two ways described above.
Do not add anything else other than your instruction and the optional corresponding input!
Keep giving me instructions and necessary inputs until you think the task is completed.
When the task is completed, you must only reply with a single word <CAMEL_TASK_DONE>.
Never say <CAMEL_TASK_DONE> unless my responses have solved your task."""
)
Create a helper to get system messages for AI assistant and AI user from role names and the task#
def get_sys_msgs(assistant_role_name: str, user_role_name: str, task: str):
assistant_sys_template = SystemMessagePromptTemplate.from_template(template=assistant_inception_prompt)
assistant_sys_msg = assistant_sys_template.format_messages(assistant_role_name=assistant_role_name, user_role_name=user_role_name, task=task)[0]
user_sys_template = SystemMessagePromptTemplate.from_template(template=user_inception_prompt)
user_sys_msg = user_sys_template.format_messages(assistant_role_name=assistant_role_name, user_role_name=user_role_name, task=task)[0]
return assistant_sys_msg, user_sys_msg
Create AI assistant agent and AI user agent from obtained system messages#
assistant_sys_msg, user_sys_msg = get_sys_msgs(assistant_role_name, user_role_name, specified_task)
assistant_agent = CAMELAgent(assistant_sys_msg, ChatOpenAI(temperature=0.2))
user_agent = CAMELAgent(user_sys_msg, ChatOpenAI(temperature=0.2))
# Reset agents
assistant_agent.reset()
user_agent.reset()
# Initialize chats
assistant_msg = HumanMessage(
content=(f"{user_sys_msg.content}. "
"Now start to give me introductions one by one. " | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
"Only reply with Instruction and Input."))
user_msg = HumanMessage(content=f"{assistant_sys_msg.content}")
user_msg = assistant_agent.step(user_msg)
Start role-playing session to solve the task!#
print(f"Original task prompt:\n{task}\n")
print(f"Specified task prompt:\n{specified_task}\n")
chat_turn_limit, n = 30, 0
while n < chat_turn_limit:
n += 1
user_ai_msg = user_agent.step(assistant_msg)
user_msg = HumanMessage(content=user_ai_msg.content)
print(f"AI User ({user_role_name}):\n\n{user_msg.content}\n\n")
assistant_ai_msg = assistant_agent.step(user_msg)
assistant_msg = HumanMessage(content=assistant_ai_msg.content)
print(f"AI Assistant ({assistant_role_name}):\n\n{assistant_msg.content}\n\n")
if "<CAMEL_TASK_DONE>" in user_msg.content:
break
Original task prompt:
Develop a trading bot for the stock market
Specified task prompt:
Develop a Python-based swing trading bot that scans market trends, monitors stocks, and generates trading signals to help a stock trader to place optimal buy and sell orders with defined stop losses and profit targets.
AI User (Stock Trader):
Instruction: Install the necessary Python libraries for data analysis and trading.
Input: None
AI Assistant (Python Programmer):
Solution: We can install the necessary Python libraries using pip, a package installer for Python. We can install pandas, numpy, matplotlib, and ta-lib for data analysis and trading. We can use the following command to install these libraries:
```
pip install pandas numpy matplotlib ta-lib
```
Next request.
AI User (Stock Trader): | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
Instruction: Import the necessary libraries in the Python script.
Input: None
AI Assistant (Python Programmer):
Solution: We can import the necessary libraries in the Python script using the import statement. We need to import pandas, numpy, matplotlib, and ta-lib for data analysis and trading. We can use the following code to import these libraries:
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import talib as ta
```
Next request.
AI User (Stock Trader):
Instruction: Load historical stock data into a pandas DataFrame.
Input: The path to the CSV file containing the historical stock data.
AI Assistant (Python Programmer):
Solution: We can load historical stock data into a pandas DataFrame using the `read_csv()` function from pandas. We need to pass the path to the CSV file containing the historical stock data as an argument to this function. We can use the following code to load the historical stock data:
```
df = pd.read_csv('path/to/csv/file.csv')
```
This will load the historical stock data into a pandas DataFrame called `df`. Next request.
AI User (Stock Trader):
Instruction: Preprocess the historical stock data by setting the date column as the index and sorting the DataFrame in ascending order by date.
Input: None.
AI Assistant (Python Programmer):
Solution: We can preprocess the historical stock data by setting the date column as the index and sorting the DataFrame in ascending order by date using the `set_index()` and `sort_index()` functions from pandas. We can use the following code to preprocess the historical stock data:
```
df = df.set_index('date')
df = df.sort_index(ascending=True)
``` | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
This will set the date column as the index and sort the DataFrame in ascending order by date. Next request.
AI User (Stock Trader):
Instruction: Calculate the short-term and long-term moving averages for the stock data using the `ta.SMA()` function from ta-lib.
Input: The period for the short-term moving average and the period for the long-term moving average.
AI Assistant (Python Programmer):
Solution: We can calculate the short-term and long-term moving averages for the stock data using the `ta.SMA()` function from ta-lib. We need to pass the period for the short-term moving average and the period for the long-term moving average as arguments to this function. We can use the following code to calculate the short-term and long-term moving averages:
```
short_ma = ta.SMA(df['close'], timeperiod=short_period)
long_ma = ta.SMA(df['close'], timeperiod=long_period)
```
This will calculate the short-term and long-term moving averages for the stock data and store them in the `short_ma` and `long_ma` variables, respectively. Next request.
AI User (Stock Trader):
Instruction: Create a new DataFrame that combines the historical stock data with the short-term and long-term moving averages.
Input: None.
AI Assistant (Python Programmer):
Solution: We can create a new DataFrame that combines the historical stock data with the short-term and long-term moving averages using the `concat()` function from pandas. We need to pass the historical stock data, the short-term moving average, and the long-term moving average as arguments to this function. We can use the following code to create the new DataFrame:
```
new_df = pd.concat([df, short_ma, long_ma], axis=1) | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
new_df.columns = ['open', 'high', 'low', 'close', 'volume', 'short_ma', 'long_ma']
```
This will create a new DataFrame called `new_df` that combines the historical stock data with the short-term and long-term moving averages. The columns of the new DataFrame are named 'open', 'high', 'low', 'close', 'volume', 'short_ma', and 'long_ma'. Next request.
AI User (Stock Trader):
Instruction: Create a new column in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages.
Input: None.
AI Assistant (Python Programmer):
Solution: We can create a new column in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages. We can use the following code to create the new column:
```
new_df['signal'] = np.where(new_df['short_ma'] > new_df['long_ma'], 1, -1)
```
This will create a new column called 'signal' in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages. If the short-term moving average is greater than the long-term moving average, the signal is 1 (buy), otherwise the signal is -1 (sell). Next request.
AI User (Stock Trader):
Instruction: Create a new column in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target.
Input: The stop loss and profit target as percentages.
AI Assistant (Python Programmer): | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
Solution: We can create a new column in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target. We need to pass the stop loss and profit target as percentages as arguments to this function. We can use the following code to create the new column:
```
stop_loss = stop_loss_percent / 100
profit_target = profit_target_percent / 100
new_df['pnl'] = 0.0
buy_price = 0.0
for i in range(1, len(new_df)):
if new_df['signal'][i] == 1 and new_df['signal'][i-1] == -1:
buy_price = new_df['close'][i]
elif new_df['signal'][i] == -1 and new_df['signal'][i-1] == 1:
sell_price = new_df['close'][i]
if sell_price <= buy_price * (1 - stop_loss):
new_df['pnl'][i] = -stop_loss
elif sell_price >= buy_price * (1 + profit_target):
new_df['pnl'][i] = profit_target
else:
new_df['pnl'][i] = (sell_price - buy_price) / buy_price
``` | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
This will create a new column called 'pnl' in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target. The stop loss and profit target are calculated based on the stop_loss_percent and profit_target_percent variables, respectively. The buy and sell prices are stored in the buy_price and sell_price variables, respectively. If the sell price is less than or equal to the stop loss, the profit or loss is set to -stop_loss. If the sell price is greater than or equal to the profit target, the profit or loss is set to profit_target. Otherwise, the profit or loss is calculated as (sell_price - buy_price) / buy_price. Next request.
AI User (Stock Trader):
Instruction: Calculate the total profit or loss for all trades.
Input: None.
AI Assistant (Python Programmer):
Solution: We can calculate the total profit or loss for all trades by summing the values in the 'pnl' column of the DataFrame. We can use the following code to calculate the total profit or loss:
```
total_pnl = new_df['pnl'].sum()
```
This will calculate the total profit or loss for all trades and store it in the total_pnl variable. Next request.
AI User (Stock Trader):
Instruction: Visualize the stock data, short-term moving average, and long-term moving average using a line chart.
Input: None.
AI Assistant (Python Programmer):
Solution: We can visualize the stock data, short-term moving average, and long-term moving average using a line chart using the `plot()` function from pandas. We can use the following code to visualize the data:
```
plt.figure(figsize=(12,6))
plt.plot(new_df.index, new_df['close'], label='Close') | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
plt.plot(new_df.index, new_df['short_ma'], label='Short MA')
plt.plot(new_df.index, new_df['long_ma'], label='Long MA')
plt.xlabel('Date')
plt.ylabel('Price')
plt.title('Stock Data with Moving Averages')
plt.legend()
plt.show()
```
This will create a line chart that visualizes the stock data, short-term moving average, and long-term moving average. The x-axis represents the date and the y-axis represents the price. The chart also includes a legend that labels each line. Next request.
AI User (Stock Trader):
Instruction: Visualize the buy and sell signals using a scatter plot.
Input: None.
AI Assistant (Python Programmer):
Solution: We can visualize the buy and sell signals using a scatter plot using the `scatter()` function from matplotlib. We can use the following code to visualize the signals:
```
buy_signals = new_df[new_df['signal'] == 1]
sell_signals = new_df[new_df['signal'] == -1]
plt.figure(figsize=(12,6))
plt.scatter(buy_signals.index, buy_signals['close'], label='Buy', marker='^', color='green')
plt.scatter(sell_signals.index, sell_signals['close'], label='Sell', marker='v', color='red')
plt.plot(new_df.index, new_df['close'], label='Close')
plt.xlabel('Date')
plt.ylabel('Price')
plt.title('Buy and Sell Signals')
plt.legend()
plt.show()
```
This will create a scatter plot that visualizes the buy and sell signals. The x-axis represents the date and the y-axis represents the price. The chart also includes a legend that labels each point. Next request.
AI User (Stock Trader): | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
Instruction: Print the total profit or loss for all trades.
Input: None.
AI Assistant (Python Programmer):
Solution: We can print the total profit or loss for all trades using the `print()` function. We can use the following code to print the total profit or loss:
```
print('Total Profit/Loss: {:.2%}'.format(total_pnl))
```
This will print the total profit or loss for all trades in percentage format with two decimal places. Next request.
AI User (Stock Trader):
Instruction: Finalize the code by putting all the code snippets together in a single Python script.
Input: The necessary inputs for each code snippet.
AI Assistant (Python Programmer):
Solution: Here is the complete Python script that puts all the code snippets together:
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import talib as ta
# Load historical stock data into a pandas DataFrame
df = pd.read_csv('path/to/csv/file.csv')
# Preprocess the historical stock data
df = df.set_index('date')
df = df.sort_index(ascending=True)
# Calculate the short-term and long-term moving averages
short_period = 10
long_period = 30
short_ma = ta.SMA(df['close'], timeperiod=short_period)
long_ma = ta.SMA(df['close'], timeperiod=long_period)
# Create a new DataFrame that combines the historical stock data with the short-term and long-term moving averages
new_df = pd.concat([df, short_ma, long_ma], axis=1)
new_df.columns = ['open', 'high', 'low', 'close', 'volume', 'short_ma', 'long_ma'] | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
# Create a new column in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages
new_df['signal'] = np.where(new_df['short_ma'] > new_df['long_ma'], 1, -1)
# Create a new column in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target
stop_loss_percent = 5
profit_target_percent = 10
stop_loss = stop_loss_percent / 100
profit_target = profit_target_percent / 100
new_df['pnl'] = 0.0
buy_price = 0.0
for i in range(1, len(new_df)):
if new_df['signal'][i] == 1 and new_df['signal'][i-1] == -1:
buy_price = new_df['close'][i]
elif new_df['signal'][i] == -1 and new_df['signal'][i-1] == 1:
sell_price = new_df['close'][i]
if sell_price <= buy_price * (1 - stop_loss):
new_df['pnl'][i] = -stop_loss
elif sell_price >= buy_price * (1 + profit_target):
new_df['pnl'][i] = profit_target
else:
new_df['pnl'][i] = (sell_price - buy_price) / buy_price
# Calculate the total profit or loss for all trades
total_pnl = new_df['pnl'].sum()
# Visualize the stock data, short-term moving average, and long-term moving average using a line chart
plt.figure(figsize=(12,6))
plt.plot(new_df.index, new_df['close'], label='Close') | https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html |
plt.plot(new_df.index, new_df['short_ma'], label='Short MA')
plt.plot(new_df.index, new_df['long_ma'], label='Long MA')
plt.xlabel('Date')
plt.ylabel('Price')
plt.title('Stock Data with Moving Averages')
plt.legend()
plt.show()
# Visualize the buy and sell signals using a scatter plot
buy_signals = new_df[new_df['signal'] == 1]
sell_signals = new_df[new_df['signal'] == -1]
plt.figure(figsize=(12,6))
plt.scatter(buy_signals.index, buy_signals['close'], label='Buy', marker='^', color='green')
plt.scatter(sell_signals.index, sell_signals['close'], label='Sell', marker='v', color='red')
plt.plot(new_df.index, new_df['close'], label='Close')
plt.xlabel('Date')
plt.ylabel('Price')
plt.title('Buy and Sell Signals')
plt.legend()
plt.show()
# Print the total profit or loss for all trades
print('Total Profit/Loss: {:.2%}'.format(total_pnl))
```
You need to replace the path/to/csv/file.csv with the actual path to the CSV file containing the historical stock data. You can also adjust the short_period, long_period, stop_loss_percent, and profit_target_percent variables to suit your needs.
AI User (Stock Trader):
<CAMEL_TASK_DONE>
AI Assistant (Python Programmer):
Great! Let me know if you need any further assistance.
Simulated Environment: Gymnasium
Contents
Define the agent
Initialize the simulated environment and agent
Main loop
Simulated Environment: Gymnasium#
For many applications of LLM agents, the environment is real (internet, database, REPL, etc.). However, we can also define agents to interact in simulated environments like text-based games. This is an example of how to create a simple agent-environment interaction loop with Gymnasium (formerly OpenAI Gym).
!pip install gymnasium
import gymnasium as gym
import inspect
import tenacity
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage,
BaseMessage,
)
from langchain.output_parsers import RegexParser
Define the agent#
class GymnasiumAgent():
@classmethod
def get_docs(cls, env):
return env.unwrapped.__doc__
def __init__(self, model, env):
self.model = model
self.env = env
self.docs = self.get_docs(env)
self.instructions = """
Your goal is to maximize your return, i.e. the sum of the rewards you receive.
I will give you an observation, reward, termination flag, truncation flag, and the return so far, formatted as:
Observation: <observation>
Reward: <reward>
Termination: <termination>
Truncation: <truncation>
Return: <sum_of_rewards>
You will respond with an action, formatted as:
Action: <action>
where you replace <action> with your actual action.
Do nothing else but return the action.
"""
self.action_parser = RegexParser(
regex=r"Action: (.*)", | https://python.langchain.com/en/latest/use_cases/agent_simulations/gymnasium.html |
output_keys=['action'],
default_output_key='action')
self.message_history = []
self.ret = 0
def random_action(self):
action = self.env.action_space.sample()
return action
def reset(self):
self.message_history = [
SystemMessage(content=self.docs),
SystemMessage(content=self.instructions),
]
def observe(self, obs, rew=0, term=False, trunc=False, info=None):
self.ret += rew
obs_message = f"""
Observation: {obs}
Reward: {rew}
Termination: {term}
Truncation: {trunc}
Return: {self.ret}
"""
self.message_history.append(HumanMessage(content=obs_message))
return obs_message
def _act(self):
act_message = self.model(self.message_history)
self.message_history.append(act_message)
action = int(self.action_parser.parse(act_message.content)['action'])
return action
def act(self):
try:
for attempt in tenacity.Retrying(
stop=tenacity.stop_after_attempt(2),
wait=tenacity.wait_none(), # No waiting time between retries
retry=tenacity.retry_if_exception_type(ValueError),
before_sleep=lambda retry_state: print(f"ValueError occurred: {retry_state.outcome.exception()}, retrying..."),
):
with attempt:
action = self._act()
except tenacity.RetryError as e:
action = self.random_action()
return action
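The RegexParser configured above is what turns the model's free-text reply into an integer action inside _act(). As a small illustrative check (not part of the original notebook), parsing a reply of the expected "Action: <action>" form looks like this:
# Illustrative check of the action-parsing convention used in GymnasiumAgent._act
parser = RegexParser(regex=r"Action: (.*)", output_keys=['action'], default_output_key='action')
parsed = parser.parse("Action: 1")   # -> {'action': '1'}
action = int(parsed['action'])
assert action == 1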
Initialize the simulated environment and agent#
env = gym.make("Blackjack-v1") | https://python.langchain.com/en/latest/use_cases/agent_simulations/gymnasium.html |
agent = GymnasiumAgent(model=ChatOpenAI(temperature=0.2), env=env)
Main loop#
observation, info = env.reset()
agent.reset()
obs_message = agent.observe(observation)
print(obs_message)
while True:
action = agent.act()
observation, reward, termination, truncation, info = env.step(action)
obs_message = agent.observe(observation, reward, termination, truncation, info)
print(f'Action: {action}')
print(obs_message)
if termination or truncation:
print('break', termination, truncation)
break
env.close()
Observation: (15, 4, 0)
Reward: 0
Termination: False
Truncation: False
Return: 0
Action: 1
Observation: (25, 4, 0)
Reward: -1.0
Termination: True
Truncation: False
Return: -1.0
break True False
Voice Assistant#
This chain creates a clone of ChatGPT with a few modifications to make it a voice assistant.
It uses the pyttsx3 and speech_recognition libraries to convert text to speech and speech to text respectively. The prompt template is also changed to make it more suitable for voice assistant use.
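If you want to confirm that text-to-speech works on your machine before wiring up the full chain, a minimal check (using only the pyttsx3 calls that also appear in the code below) is:
import pyttsx3
engine = pyttsx3.init()              # initialize the TTS engine
engine.say("Voice assistant test")   # queue a short phrase
engine.runAndWait()                  # speak it and block until finished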
from langchain import OpenAI, ConversationChain, LLMChain, PromptTemplate
from langchain.memory import ConversationBufferWindowMemory
template = """Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
8f9e645d3cdc-1 | {history}
Human: {human_input}
Assistant:"""
prompt = PromptTemplate(
input_variables=["history", "human_input"],
template=template
)
chatgpt_chain = LLMChain(
llm=OpenAI(temperature=0),
prompt=prompt,
verbose=True,
memory=ConversationBufferWindowMemory(k=2),
)
import speech_recognition as sr
import pyttsx3
engine = pyttsx3.init()
def listen():
r = sr.Recognizer()
with sr.Microphone() as source:
print('Calibrating...')
r.adjust_for_ambient_noise(source, duration=5)
# optional parameters to adjust microphone sensitivity
# r.energy_threshold = 200
# r.pause_threshold=0.5
print('Okay, go!')
while(1):
text = ''
print('listening now...')
try:
audio = r.listen(source, timeout=5, phrase_time_limit=30)
print('Recognizing...')
# whisper model options are found here: https://github.com/openai/whisper#available-models-and-languages
# other speech recognition models are also available.
text = r.recognize_whisper(audio, model='medium.en', show_dict=True, )['text']
except Exception as e:
unrecognized_speech_text = f'Sorry, I didn\'t catch that. Exception was: {e}s'
text = unrecognized_speech_text
print(text)
response_text = chatgpt_chain.predict(human_input=text)
print(response_text)
engine.say(response_text)
engine.runAndWait()
listen()
Calibrating... | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Okay, go!
listening now...
Recognizing...
C:\Users\jaden\AppData\Roaming\Python\Python310\site-packages\tqdm\auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
from .autonotebook import tqdm as notebook_tqdm
Hello, Assistant. What's going on?
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Hello, Assistant. What's going on?
Assistant:
> Finished chain.
Hi there! It's great to hear from you. I'm doing well. How can I help you today?
listening now...
Recognizing...
That's cool. Isn't that neat? Yeah, I'm doing great.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Hello, Assistant. What's going on?
AI: Hi there! It's great to hear from you. I'm doing well. How can I help you today?
Human: That's cool. Isn't that neat? Yeah, I'm doing great.
Assistant:
> Finished chain.
That's great to hear! What can I do for you today?
listening now...
Recognizing...
Thank you.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Hello, Assistant. What's going on?
AI: Hi there! It's great to hear from you. I'm doing well. How can I help you today?
Human: That's cool. Isn't that neat? Yeah, I'm doing great.
AI: That's great to hear! What can I do for you today?
Human: Thank you.
Assistant:
> Finished chain.
You're welcome! Is there anything else I can help you with?
listening now...
Recognizing...
I'd like to learn more about neural networks.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: That's cool. Isn't that neat? Yeah, I'm doing great.
AI: That's great to hear! What can I do for you today?
Human: Thank you.
AI: You're welcome! Is there anything else I can help you with?
Human: I'd like to learn more about neural networks.
Assistant:
> Finished chain.
Sure! Neural networks are a type of artificial intelligence that use a network of interconnected nodes to process data and make decisions. They are used in a variety of applications, from image recognition to natural language processing. Neural networks are often used to solve complex problems that are too difficult for traditional algorithms.
listening now...
Recognizing...
Tell me a fun fact about neural networks.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Thank you.
AI: You're welcome! Is there anything else I can help you with?
Human: I'd like to learn more about neural networks. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
AI: Sure! Neural networks are a type of artificial intelligence that use a network of interconnected nodes to process data and make decisions. They are used in a variety of applications, from image recognition to natural language processing. Neural networks are often used to solve complex problems that are too difficult for traditional algorithms.
Human: Tell me a fun fact about neural networks.
Assistant:
> Finished chain.
Neural networks are inspired by the way the human brain works. They are composed of interconnected nodes that process data and make decisions, just like neurons in the brain. Neural networks can learn from their mistakes and improve their performance over time, just like humans do.
listening now...
Recognizing...
Tell me about a brand new discovered bird species.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: I'd like to learn more about neural networks.
AI: Sure! Neural networks are a type of artificial intelligence that use a network of interconnected nodes to process data and make decisions. They are used in a variety of applications, from image recognition to natural language processing. Neural networks are often used to solve complex problems that are too difficult for traditional algorithms.
Human: Tell me a fun fact about neural networks.
AI: Neural networks are inspired by the way the human brain works. They are composed of interconnected nodes that process data and make decisions, just like neurons in the brain. Neural networks can learn from their mistakes and improve their performance over time, just like humans do.
Human: Tell me about a brand new discovered bird species.
Assistant:
> Finished chain.
A new species of bird was recently discovered in the Amazon rainforest. The species, called the Spix's Macaw, is a small, blue parrot that is believed to be extinct in the wild. It is the first new species of bird to be discovered in the Amazon in over 100 years.
listening now...
Recognizing...
Tell me a children's story about the importance of honesty and trust.
> Entering new LLMChain chain...
Prompt after formatting: | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Tell me a fun fact about neural networks.
AI: Neural networks are inspired by the way the human brain works. They are composed of interconnected nodes that process data and make decisions, just like neurons in the brain. Neural networks can learn from their mistakes and improve their performance over time, just like humans do.
Human: Tell me about a brand new discovered bird species. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
AI: A new species of bird was recently discovered in the Amazon rainforest. The species, called the Spix's Macaw, is a small, blue parrot that is believed to be extinct in the wild. It is the first new species of bird to be discovered in the Amazon in over 100 years.
Human: Tell me a children's story about the importance of honesty and trust.
Assistant:
> Finished chain.
Once upon a time, there was a young boy named Jack who lived in a small village. Jack was always honest and trustworthy, and his friends and family knew they could always count on him. One day, Jack was walking through the forest when he stumbled upon a magical tree. The tree told Jack that if he was honest and trustworthy, he would be rewarded with a special gift. Jack was so excited, and he promised to always be honest and trustworthy. Sure enough, the tree rewarded Jack with a beautiful golden apple. From that day forward, Jack was always honest and trustworthy, and he was rewarded with many more magical gifts. The moral of the story is that honesty and trust are the most important things in life.
listening now...
Recognizing...
Wow, Assistant, that was a really good story. Congratulations!
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Tell me about a brand new discovered bird species.
AI: A new species of bird was recently discovered in the Amazon rainforest. The species, called the Spix's Macaw, is a small, blue parrot that is believed to be extinct in the wild. It is the first new species of bird to be discovered in the Amazon in over 100 years.
Human: Tell me a children's story about the importance of honesty and trust. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
AI: Once upon a time, there was a young boy named Jack who lived in a small village. Jack was always honest and trustworthy, and his friends and family knew they could always count on him. One day, Jack was walking through the forest when he stumbled upon a magical tree. The tree told Jack that if he was honest and trustworthy, he would be rewarded with a special gift. Jack was so excited, and he promised to always be honest and trustworthy. Sure enough, the tree rewarded Jack with a beautiful golden apple. From that day forward, Jack was always honest and trustworthy, and he was rewarded with many more magical gifts. The moral of the story is that honesty and trust are the most important things in life.
Human: Wow, Assistant, that was a really good story. Congratulations!
Assistant:
> Finished chain.
Thank you! I'm glad you enjoyed it.
listening now...
Recognizing...
Thank you.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Tell me a children's story about the importance of honesty and trust.
AI: Once upon a time, there was a young boy named Jack who lived in a small village. Jack was always honest and trustworthy, and his friends and family knew they could always count on him. One day, Jack was walking through the forest when he stumbled upon a magical tree. The tree told Jack that if he was honest and trustworthy, he would be rewarded with a special gift. Jack was so excited, and he promised to always be honest and trustworthy. Sure enough, the tree rewarded Jack with a beautiful golden apple. From that day forward, Jack was always honest and trustworthy, and he was rewarded with many more magical gifts. The moral of the story is that honesty and trust are the most important things in life.
Human: Wow, Assistant, that was a really good story. Congratulations!
AI: Thank you! I'm glad you enjoyed it.
Human: Thank you.
Assistant:
> Finished chain.
You're welcome!
listening now...
Recognizing... | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way?
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Wow, Assistant, that was a really good story. Congratulations! | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
AI: Thank you! I'm glad you enjoyed it.
Human: Thank you.
AI: You're welcome!
Human: Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way?
Assistant:
> Finished chain.
Yes, there are several online brands that offer photo editing and other creative tools without the need to download any software. Adobe Photoshop Express, Pixlr, and Fotor are some of the most popular online photo editing tools. Freq is an online music production platform that allows users to create and share music without downloading any software.
listening now...
Recognizing...
Our whole process of awesome is free.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Thank you.
AI: You're welcome!
Human: Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way?
AI: Yes, there are several online brands that offer photo editing and other creative tools without the need to download any software. Adobe Photoshop Express, Pixlr, and Fotor are some of the most popular online photo editing tools. Freq is an online music production platform that allows users to create and share music without downloading any software.
Human: Our whole process of awesome is free.
Assistant:
> Finished chain.
That's great! It's always nice to have access to free tools and resources.
listening now...
Recognizing...
No, I meant to ask, are those options that you mentioned free? No, I meant to ask, are those options that you mentioned free?
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
AI: Yes, there are several online brands that offer photo editing and other creative tools without the need to download any software. Adobe Photoshop Express, Pixlr, and Fotor are some of the most popular online photo editing tools. Freq is an online music production platform that allows users to create and share music without downloading any software.
Human: Our whole process of awesome is free.
AI: That's great! It's always nice to have access to free tools and resources.
Human: No, I meant to ask, are those options that you mentioned free? No, I meant to ask, are those options that you mentioned free?
Assistant:
> Finished chain.
Yes, the online brands I mentioned are all free to use. Adobe Photoshop Express, Pixlr, and Fotor are all free to use, and Freq is a free music production platform.
listening now...
---------------------------------------------------------------------------
KeyboardInterrupt Traceback (most recent call last)
Cell In[6], line 1
----> 1 listen(None)
Cell In[5], line 20, in listen(command_queue)
18 print('listening now...')
19 try:
---> 20 audio = r.listen(source, timeout=5, phrase_time_limit=30)
21 # audio = r.record(source,duration = 5)
22 print('Recognizing...')
File c:\ProgramData\miniconda3\envs\lang\lib\site-packages\speech_recognition\__init__.py:523, in Recognizer.listen(self, source, timeout, phrase_time_limit, snowboy_configuration)
520 if phrase_time_limit and elapsed_time - phrase_start_time > phrase_time_limit:
521 break
--> 523 buffer = source.stream.read(source.CHUNK) | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
524 if len(buffer) == 0: break # reached end of the stream
525 frames.append(buffer)
File c:\ProgramData\miniconda3\envs\lang\lib\site-packages\speech_recognition\__init__.py:199, in Microphone.MicrophoneStream.read(self, size)
198 def read(self, size):
--> 199 return self.pyaudio_stream.read(size, exception_on_overflow=False)
File c:\ProgramData\miniconda3\envs\lang\lib\site-packages\pyaudio\__init__.py:570, in PyAudio.Stream.read(self, num_frames, exception_on_overflow)
567 if not self._is_input:
568 raise IOError("Not input stream",
569 paCanNotReadFromAnOutputOnlyStream)
--> 570 return pa.read_stream(self._stream, num_frames,
571 exception_on_overflow)
KeyboardInterrupt:
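The KeyboardInterrupt above is expected: it comes from stopping the kernel while the microphone stream is blocked inside r.listen(). For reference, here is a minimal sketch of a listen loop consistent with the calls shown in the traceback; the recognizer backend (recognize_google below) and the handling after transcription are assumptions for illustration, not the notebook's exact code.
import speech_recognition as sr
def listen():
    r = sr.Recognizer()
    with sr.Microphone() as source:
        while True:
            print('listening now...')
            try:
                # Same call as in the traceback: wait up to 5 s for speech to
                # start, capture at most 30 s of audio.
                audio = r.listen(source, timeout=5, phrase_time_limit=30)
            except sr.WaitTimeoutError:
                continue  # nothing was said within the timeout; keep listening
            except KeyboardInterrupt:
                break     # interrupting here is what produced the traceback above
            print('Recognizing...')
            try:
                # Assumption: any speech_recognition backend could be used here.
                text = r.recognize_google(audio)
            except sr.UnknownValueError:
                continue  # audio could not be transcribed; listen again
            print(text)
            # The transcription would then be fed into the LLMChain whose
            # verbose "Prompt after formatting:" output is shown above.
listen()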
Question answering over group chat messages
Contents
1. Install required packages
2. Add API keys
3. Create sample data
4. Ingest chat embeddings
5. Ask questions
Question answering over group chat messages#
In this tutorial, we are going to use LangChain + Deep Lake with GPT-4 to semantically search and ask questions over a group chat.
View a working demo here
1. Install required packages#
!python3 -m pip install --upgrade langchain deeplake openai tiktoken
2. Add API keys#
import os
import getpass
from langchain.document_loaders import PyPDFLoader, TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter, CharacterTextSplitter
from langchain.vectorstores import DeepLake
from langchain.chains import ConversationalRetrievalChain, RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
os.environ['ACTIVELOOP_TOKEN'] = getpass.getpass('Activeloop Token:')
os.environ['ACTIVELOOP_ORG'] = getpass.getpass('Activeloop Org:')
org = os.environ['ACTIVELOOP_ORG']
embeddings = OpenAIEmbeddings()
dataset_path = 'hub://' + org + '/data'
3. Create sample data#
You can generate a sample group chat conversation using ChatGPT with this prompt:
Generate a group chat conversation with three friends talking about their day, referencing real places and fictional names. Make it funny and as detailed as possible.
I’ve already generated such a chat in messages.txt. We can keep it simple and use this for our example.
4. Ingest chat embeddings#
We load the messages from the text file, chunk them, and upload them to the Activeloop vector store.
with open("messages.txt") as f:
state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
pages = text_splitter.split_text(state_of_the_union)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
texts = text_splitter.create_documents(pages)
print (texts)
dataset_path = 'hub://'+org+'/data'
embeddings = OpenAIEmbeddings()
db = DeepLake.from_documents(texts, embeddings, dataset_path=dataset_path, overwrite=True)
5. Ask questions#
Now we can ask a question and get an answer back with a semantic search:
db = DeepLake(dataset_path=dataset_path, read_only=True, embedding_function=embeddings)
retriever = db.as_retriever()
retriever.search_kwargs['distance_metric'] = 'cos'
retriever.search_kwargs['k'] = 4
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=retriever, return_source_documents=False)
# What was the restaurant the group was talking about called?
query = input("Enter query:")
# The Hungry Lobster
ans = qa({"query": query})
print(ans)
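The imports above also bring in ConversationalRetrievalChain, which is not used in this notebook. As an optional sketch (not part of the original tutorial), the same retriever can back a conversational chain so that follow-up questions carry chat history; the model name below is an assumption:
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain

model = ChatOpenAI(model_name='gpt-3.5-turbo')  # model choice is an assumption
conv_qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)

chat_history = []
question = "What was the restaurant the group was talking about called?"
result = conv_qa({"question": question, "chat_history": chat_history})
chat_history.append((question, result['answer']))
print(result['answer'])  # e.g. "The Hungry Lobster", per the comment above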
Analysis of Twitter the-algorithm source code with LangChain, GPT4 and Deep Lake
Contents
1. Index the code base (optional)
2. Question Answering on Twitter algorithm codebase
Analysis of Twitter the-algorithm source code with LangChain, GPT4 and Deep Lake#
In this tutorial, we are going to use LangChain + Deep Lake with GPT-4 to analyze the code base of the Twitter algorithm.
!python3 -m pip install --upgrade langchain deeplake openai tiktoken
Define the OpenAI embeddings and the Deep Lake multi-modal vector store API, and authenticate. For full documentation of Deep Lake, please follow the docs and API reference.
Authenticate into Deep Lake if you want to create your own dataset and publish it. You can get an API key from the Activeloop platform.
import os
import getpass
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DeepLake
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
os.environ['ACTIVELOOP_TOKEN'] = getpass.getpass('Activeloop Token:')
embeddings = OpenAIEmbeddings(disallowed_special=())
Setting disallowed_special=() is required to avoid the tiktoken error "Exception: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte" that some repositories trigger.
1. Index the code base (optional)#
You can skip this part and jump straight to using the already indexed dataset. Otherwise, we will first clone the repository, then parse and chunk the code base, and compute OpenAI embeddings for indexing.
!git clone https://github.com/twitter/the-algorithm # replace with any repository of your choice
Load all files inside the repository
import os
from langchain.document_loaders import TextLoader
root_dir = './the-algorithm'
docs = []
for dirpath, dirnames, filenames in os.walk(root_dir):
    for file in filenames:
        try:
            loader = TextLoader(os.path.join(dirpath, file), encoding='utf-8')
            docs.extend(loader.load_and_split())
        except Exception as e:
            pass
Then, chunk the files
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(docs)
Execute the indexing. This will take about 4 minutes to compute embeddings and upload them to Activeloop. You can then publish the dataset to make it public.
username = "davitbun" # replace with your username from app.activeloop.ai
db = DeepLake(dataset_path=f"hub://{username}/twitter-algorithm", embedding_function=embeddings, public=True) #dataset would be publicly available
db.add_documents(texts)
2. Question Answering on Twitter algorithm codebase#
First load the dataset, construct the retriever, then construct the conversational chain.
db = DeepLake(dataset_path="hub://davitbun/twitter-algorithm", read_only=True, embedding_function=embeddings)
retriever = db.as_retriever()
retriever.search_kwargs['distance_metric'] = 'cos'  # cosine distance
retriever.search_kwargs['fetch_k'] = 100  # candidates fetched before MMR re-ranking
retriever.search_kwargs['maximal_marginal_relevance'] = True
retriever.search_kwargs['k'] = 10  # documents finally returned
You can also specify user-defined filtering functions using Deep Lake filters:
def filter(x):
    # filter based on source code
    if 'com.google' in x['text'].data()['value']:
        return False
    # filter based on path e.g. extension
    metadata = x['metadata'].data()['value']
    return 'scala' in metadata['source'] or 'py' in metadata['source']

### turn on below for custom filtering
# retriever.search_kwargs['filter'] = filter
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
model = ChatOpenAI(model_name='gpt-3.5-turbo') # switch to 'gpt-4'
qa = ConversationalRetrievalChain.from_llm(model,retriever=retriever)
questions = [
    "What does favCountParams do?",
    "is it Likes + Bookmarks, or not clear from the code?",
    "What are the major negative modifiers that lower your linear ranking parameters?",
    "How do you get assigned to SimClusters?",
    "What is needed to migrate from one SimClusters to another SimClusters?",
    "How much do I get boosted within my cluster?",
    "How does Heavy ranker work? What are its main inputs?",
    "How can one influence Heavy ranker?",
    "Why do threads and long tweets do so well on the platform?",
    "Are thread and long tweet creators building a following that reacts to only threads?",
    "Do you need to follow different strategies to get most followers vs to get most likes and bookmarks per tweet?",
    "Content metadata and how it impacts virality (e.g. ALT in images).",
    "What are some unexpected fingerprints for spam factors?",
    "Is there any difference between company verified checkmarks and blue verified individual checkmarks?",
]
chat_history = []
for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result['answer']))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")
-> Question: What does favCountParams do?
Answer: favCountParams is an optional ThriftLinearFeatureRankingParams instance that represents the parameters related to the “favorite count” feature in the ranking process. It is used to control the weight of the favorite count feature while ranking tweets. The favorite count is the number of times a tweet has been marked as a favorite by users, and it is considered an important signal in the ranking of tweets. By using favCountParams, the system can adjust the importance of the favorite count while calculating the final ranking score of a tweet.
-> Question: is it Likes + Bookmarks, or not clear from the code?
Answer: From the provided code, it is not clear if the favorite count metric is determined by the sum of likes and bookmarks. The favorite count is mentioned in the code, but there is no explicit reference to how it is calculated in terms of likes and bookmarks.
-> Question: What are the major negative modifiers that lower your linear ranking parameters?
Answer: In the given code, major negative modifiers that lower the linear ranking parameters are:
scoringData.querySpecificScore: This score adjustment is based on the query-specific information. If its value is negative, it will lower the linear ranking parameters.
scoringData.authorSpecificScore: This score adjustment is based on the author-specific information. If its value is negative, it will also lower the linear ranking parameters.
Please note that I cannot provide more information on the exact calculations of these negative modifiers, as the code for their determination is not provided.
-> Question: How do you get assigned to SimClusters? | https://python.langchain.com/en/latest/use_cases/code/twitter-the-algorithm-analysis-deeplake.html |