Upload 14 files
- generative-agents-main/LICENSE +21 -0
- generative-agents-main/README.md +29 -0
- generative-agents-main/SECURITY.md +21 -0
- generative-agents-main/game_simulation/agents/__init__.py +0 -0
- generative-agents-main/game_simulation/agents/agent.py +235 -0
- generative-agents-main/game_simulation/locations/__init__.py +0 -0
- generative-agents-main/game_simulation/locations/locations.py +62 -0
- generative-agents-main/game_simulation/main.py +145 -0
- generative-agents-main/game_simulation/simulation_config.json +33 -0
- generative-agents-main/game_simulation/simulation_log.txt +65 -0
- generative-agents-main/game_simulation/utils/__init__.py +0 -0
- generative-agents-main/game_simulation/utils/text_generation.py +71 -0
- generative-agents-main/notebook/Release/generative_model_simple.ipynb +0 -0
- generative-agents-main/notebook/WIP/demo_01_phandalin_simulation.ipynb +0 -0
generative-agents-main/LICENSE
ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2023 M. K. Turkcan

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
generative-agents-main/README.md
ADDED
@@ -0,0 +1,29 @@
# Generative Large Language Models for Human-Like Behavior

This repository includes a working version of the type of model described in "Generative Agents: Interactive Simulacra of Human Behavior".

## Setup

The models are distributed as notebooks that are easy to run locally or on Google Colab. We recommend using JupyterLab if running locally. The notebook(s) should work as-is on Google Colab.

## How to Use

* The most stable model is available at https://github.com/mkturkcan/generative-agents/tree/main/notebook/Release.
* WIP models with the latest features will be available at https://github.com/mkturkcan/generative-agents/tree/main/notebook/WIP.
* A WIP library is available under https://github.com/mkturkcan/generative-agents/tree/main/game_simulation.

## Model

The current model is a simulation of the town of Phandalin from an introductory D&D 5e adventure. This setting was chosen because it is much more free-form than the simple scenario described in the original paper.

## Limitations

The approach described in the paper requires access to a very high-quality instruction-following model such as GPT-3, and it also needs many high-context queries to work, which makes it expensive to run. In this work we therefore use low-parameter, locally runnable models instead.

We expect that the model in this repository will perform better as the next generation of instruction-tuned models becomes available.

## Future Steps

* Summarize agent decisions as emojis. (WIP)
* Create a family of questions to compress agent contexts better.
* Check whether the agent contexts are compressed well with another layer of prompts.
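Note on the game_simulation library referenced above: the sketch below mirrors what game_simulation/main.py does. The class and method signatures come from agent.py and locations.py in this upload; the specific prompt template and area descriptions used here are illustrative. Run it from the game_simulation directory with OPENAI_API_KEY set in a .env file (or pass use_openai=False to use the local Hugging Face model).

```python
# Minimal sketch: drive the game_simulation classes directly (mirrors main.py).
import networkx as nx
from agents.agent import Agent
from locations.locations import Locations

prompt_meta = '### Instruction:\n{}\n### Response:'  # same template main.py uses

locations = Locations()
locations.add_location("Stonehill Inn", "A roadhouse in the center of town.")
locations.add_location("Phandalin Town Square", "Town square of the town of Phandalin.")

# Two-node world graph; main.py builds a cycle over all configured areas.
world_graph = nx.Graph()
world_graph.add_edge("Stonehill Inn", "Phandalin Town Square")

toblen = Agent("Toblen Stonehill", "Owns a trading post.", "Stonehill Inn",
               world_graph, use_openai=True)
toblen.plan(global_time=8, prompt_meta=prompt_meta)
print(toblen.plans)
```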
generative-agents-main/SECURITY.md
ADDED
@@ -0,0 +1,21 @@
# Security Policy

## Supported Versions

Use this section to tell people about which versions of your project are
currently being supported with security updates.

| Version | Supported          |
| ------- | ------------------ |
| 5.1.x   | :white_check_mark: |
| 5.0.x   | :x:                |
| 4.0.x   | :white_check_mark: |
| < 4.0   | :x:                |

## Reporting a Vulnerability

Use this section to tell people how to report a vulnerability.

Tell them where to go, how often they can expect to get an update on a
reported vulnerability, what to expect if the vulnerability is accepted or
declined, etc.
generative-agents-main/game_simulation/agents/__init__.py
ADDED
File without changes
generative-agents-main/game_simulation/agents/agent.py
ADDED
@@ -0,0 +1,235 @@
import random
from utils.text_generation import generate, get_rating
import networkx as nx

class Agent:
    """
    A class to represent an individual agent in a simulation similar to The Sims.

    Attributes:
    -----------
    name : str
        The name of the agent.
    description : str
        A brief description of the agent.
    location : str
        The current location of the agent in the simulated environment.
    memories : list
        A list of memories the agent has about their interactions.
    compressed_memories : list
        A list of compressed memories that summarize the agent's experiences.
    plans : str
        The agent's daily plans, generated at the beginning of each day.

    Methods:
    --------
    plan(global_time, prompt_meta):
        Generates the agent's daily plan.

    execute_action(other_agents, location, global_time, town_areas, prompt_meta):
        Executes the agent's action based on their current situation and interactions with other agents.

    update_memories(other_agents, global_time, action_results):
        Updates the agent's memories based on their interactions with other agents.

    compress_memories(global_time, MEMORY_LIMIT=10):
        Compresses the agent's memories to a more manageable and relevant set.

    rate_memories(locations, global_time, prompt_meta):
        Rates the agent's memories based on their relevance and importance.

    rate_locations(locations, global_time, prompt_meta):
        Rates different locations in the simulated environment based on the agent's preferences and experiences.
    """

    def __init__(self, name, description, starting_location, world_graph, use_openai):
        self.name = name
        self.description = description
        self.location = starting_location
        self.memory_ratings = []
        self.memories = []
        self.compressed_memories = []
        self.plans = ""
        self.world_graph = world_graph
        self.use_openai = use_openai

    def __repr__(self):
        return f"Agent({self.name}, {self.description}, {self.location})"

    def plan(self, global_time, prompt_meta):
        """
        Generates the agent's daily plan.

        Parameters:
        -----------
        global_time : int
            The current time in the simulation.
        prompt_meta : str
            The prompt used to generate the plan.
        """

        prompt = "You are {}. The following is your description: {} You just woke up. What is your goal for today? Write it down on an hourly basis, starting at {}:00. Write only one or two very short sentences. Be very brief. Use at most 50 words.".format(self.name, self.description, str(global_time))
        self.plans = generate(prompt_meta.format(prompt), self.use_openai)

    def execute_action(self, other_agents, location, global_time, town_areas, prompt_meta):
        """Executes the agent's action based on their current situation and interactions with other agents.

        Parameters:
        -----------
        other_agents : list
            A list of other Agent objects in the simulation.
        location : Location
            The current Location object where the agent is located.
        global_time : int
            The current time in the simulation.
        town_areas : dict
            A dictionary of Location objects representing different areas in the simulated environment.
        prompt_meta : str
            The prompt used to generate the action.

        Returns:
        --------
        action : str
            The action executed by the agent.
        """

        # Compare against the location's name: `location` is a Location object,
        # while agent.location is stored as a string.
        people = [agent.name for agent in other_agents if agent.location == location.name]

        prompt = "You are {}. Your plans are: {}. You are currently in {} with the following description: {}. It is currently {}:00. The following people are in this area: {}. You can interact with them.".format(self.name, self.plans, location.name, town_areas[location.name], str(global_time), ', '.join(people))

        people_description = [f"{agent.name}: {agent.description}" for agent in other_agents if agent.location == location.name]
        prompt += ' You know the following about people: ' + '. '.join(people_description)

        prompt += " What do you do in the next hour? Use at most 10 words to explain."
        action = generate(prompt_meta.format(prompt), self.use_openai)
        return action

    def update_memories(self, other_agents, global_time, action_results):
        """
        Updates the agent's memories based on their interactions with other agents.

        Parameters:
        -----------
        other_agents : list
            A list of other Agent objects in the simulation.
        global_time : int
            The current time in the simulation.
        action_results : dict
            A dictionary of the results of each agent's action.
        """

        for agent in other_agents:
            if agent.location == self.location:
                self.memories.append('[Time: {}. Person: {}. Memory: {}]\n'.format(str(global_time), agent.name, action_results[agent.name]))

    def compress_memories(self, global_time, MEMORY_LIMIT=10):
        """
        Compresses the agent's memories to a more manageable and relevant set.

        Parameters:
        -----------
        global_time : int
            The current time in the simulation.
        MEMORY_LIMIT : int, optional
            The maximum number of memories to compress. Default is 10.

        Returns:
        --------
        memory_string : str
            The compressed memory string.
        """

        memories_sorted = sorted(self.memory_ratings, key=lambda x: x[1], reverse=True)
        relevant_memories = memories_sorted[:MEMORY_LIMIT]
        memory_string_to_compress = '.'.join([a[0] for a in relevant_memories])
        return '[Recollection at Time {}:00: {}]'.format(str(global_time), memory_string_to_compress)

    def rate_memories(self, locations, global_time, prompt_meta):
        """
        Rates the agent's memories based on their relevance and importance.

        Parameters:
        -----------
        locations : Locations
            The Locations object representing different areas in the simulated environment.
        global_time : int
            The current time in the simulation.
        prompt_meta : str
            The prompt used to rate the memories.

        Returns:
        --------
        memory_ratings : list
            A list of tuples representing the memory, its rating, and the generated response.
        """

        memory_ratings = []
        for memory in self.memories:
            prompt = "You are {}. Your plans are: {}. You are currently in {}. It is currently {}:00. You observe the following: {}. Give a rating, between 1 and 5, to how much you care about this.".format(self.name, self.plans, locations.get_location(self.location), str(global_time), memory)
            res = generate(prompt_meta.format(prompt), self.use_openai)
            rating = get_rating(res)
            max_attempts = 2
            current_attempt = 0
            while rating is None and current_attempt < max_attempts:
                # Re-query the model before re-parsing; parsing the same reply again cannot yield a rating.
                res = generate(prompt_meta.format(prompt), self.use_openai)
                rating = get_rating(res)
                current_attempt += 1
            if rating is None:
                rating = 0
            memory_ratings.append((memory, rating, res))
        self.memory_ratings = memory_ratings
        return memory_ratings

    def rate_locations(self, locations, global_time, prompt_meta):
        """
        Rates different locations in the simulated environment based on the agent's preferences and experiences.

        Parameters:
        -----------
        locations : Locations
            The Locations object representing different areas in the simulated environment.
        global_time : int
            The current time in the simulation.
        prompt_meta : str
            The prompt used to rate the locations.

        Returns:
        --------
        place_ratings : list
            A list of tuples representing the location, its rating, and the generated response.
        """

        place_ratings = []
        for location in locations.locations.values():
            prompt = "You are {}. Your plans are: {}. It is currently {}:00. You are currently at {}. How likely are you to go to {} next?".format(self.name, self.plans, str(global_time), locations.get_location(self.location), location.name)
            res = generate(prompt_meta.format(prompt), self.use_openai)
            rating = get_rating(res)
            max_attempts = 2
            current_attempt = 0
            while rating is None and current_attempt < max_attempts:
                # Re-query the model before re-parsing; parsing the same reply again cannot yield a rating.
                res = generate(prompt_meta.format(prompt), self.use_openai)
                rating = get_rating(res)
                current_attempt += 1
            if rating is None:
                rating = 0
            place_ratings.append((location.name, rating, res))
        self.place_ratings = place_ratings
        return sorted(place_ratings, key=lambda x: x[1], reverse=True)

    def move(self, new_location_name):

        if new_location_name == self.location:
            return self.location

        try:
            # Raises NetworkXNoPath if the target is unreachable; the path itself is not used.
            path = nx.shortest_path(self.world_graph, source=self.location, target=new_location_name)
            self.location = new_location_name
        except nx.NetworkXNoPath:
            print(f"No path found between {self.location} and {new_location_name}")
            return self.location

        return self.location
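Note: Agent.move above only relies on the world graph, so it can be exercised without any LLM calls. A small sketch (run from the game_simulation directory so the utils package imports resolve; area names taken from simulation_config.json):

```python
import networkx as nx
from agents.agent import Agent

# A three-area world connected in a line.
g = nx.Graph()
g.add_edges_from([("Stonehill Inn", "Phandalin Town Square"),
                  ("Phandalin Town Square", "Barthen's Provisions")])

toblen = Agent("Toblen Stonehill", "Owns a trading post.", "Stonehill Inn", g, use_openai=False)
print(toblen.move("Barthen's Provisions"))  # path exists -> location updates
print(toblen.move("Barthen's Provisions"))  # already there -> returns unchanged location
```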
generative-agents-main/game_simulation/locations/__init__.py
ADDED
File without changes
generative-agents-main/game_simulation/locations/locations.py
ADDED
@@ -0,0 +1,62 @@
class Location:
    """
    A class to represent a location in the simulated environment.

    Attributes:
    ----------
    name : str
        The name of the location.
    description : str
        A brief description of the location.

    Methods:
    -------
    describe():
        Prints the description of the location.
    """

    def __init__(self, name, description):
        self.name = name
        self.description = description

    def __str__(self):
        return self.name

    def describe(self):
        print(self.description)

class Locations:
    """
    A class to represent a collection of locations in the simulated environment.

    Attributes:
    ----------
    locations : dict
        A dictionary of locations, with keys as the location names and values as Location objects.

    Methods:
    -------
    add_location(name, description):
        Adds a new location to the collection.

    get_location(name):
        Returns the Location object with the given name.

    __str__():
        Returns a string representation of the collection of locations.
    """

    def __init__(self):
        self.locations = {}

    def add_location(self, name, description):
        self.locations[name] = Location(name, description)

    def get_location(self, name):
        return self.locations.get(name)

    def __str__(self):
        return '\n'.join([str(location) for location in self.locations.values()])
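Note: a short usage sketch of the two classes above (the area name and description are taken from simulation_config.json):

```python
from locations.locations import Locations

locs = Locations()
locs.add_location("Edermath Orchard", "A tidy little cottage beside an apple orchard.")

orchard = locs.get_location("Edermath Orchard")  # Location object, or None if missing
orchard.describe()                               # prints the description
print(str(locs))                                 # one location name per line
```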
generative-agents-main/game_simulation/main.py
ADDED
@@ -0,0 +1,145 @@
import json
import networkx as nx
from agents.agent import Agent
from locations.locations import Locations
from utils.text_generation import summarize_simulation

# Set default value for prompt_meta if not defined elsewhere
prompt_meta = '### Instruction:\n{}\n### Response:'

# Initialize global time and simulation variables
global_time = 0
repeats = 5

log_locations = False
log_actions = True
log_plans = False
log_ratings = False
log_memories = False

print_locations = True
print_actions = True
print_plans = True
print_ratings = True
print_memories = False

use_openai = True

# Start simulation loop
whole_simulation_output = ""

# Load town areas and people from JSON file
with open('simulation_config.json', 'r') as f:
    town_data = json.load(f)

town_people = town_data['town_people']
town_areas = town_data['town_areas']

# Create world_graph
world_graph = nx.Graph()
last_town_area = None
for town_area in town_areas.keys():
    world_graph.add_node(town_area)
    world_graph.add_edge(town_area, town_area)  # Add an edge to itself
    if last_town_area is not None:
        world_graph.add_edge(town_area, last_town_area)
    last_town_area = town_area

# Add the edge between the first and the last town areas to complete the cycle
world_graph.add_edge(list(town_areas.keys())[0], last_town_area)

# Initialize agents and locations
agents = []
locations = Locations()

for name, description in town_people.items():
    starting_location = description['starting_location']
    agents.append(Agent(name, description['description'], starting_location, world_graph, use_openai))

for name, description in town_areas.items():
    locations.add_location(name, description)

for repeat in range(repeats):
    # log_output for one repeat
    log_output = ""

    print(f"====================== REPEAT {repeat} ======================\n")
    log_output += f"====================== REPEAT {repeat} ======================\n"
    if log_locations:
        log_output += f"=== LOCATIONS AT START OF REPEAT {repeat} ===\n"
        log_output += str(locations) + "\n"
    if print_locations:
        print(f"=== LOCATIONS AT START OF REPEAT {repeat} ===")
        print(str(locations) + "\n")

    # Plan actions for each agent
    for agent in agents:
        agent.plan(global_time, prompt_meta)
        if log_plans:
            log_output += f"{agent.name} plans: {agent.plans}\n"
        if print_plans:
            print(f"{agent.name} plans: {agent.plans}")

    # Execute planned actions and update memories
    for agent in agents:
        # Execute action
        action = agent.execute_action(agents, locations.get_location(agent.location), global_time, town_areas, prompt_meta)
        if log_actions:
            log_output += f"{agent.name} action: {action}\n"
        if print_actions:
            print(f"{agent.name} action: {action}")

        # Update memories
        for other_agent in agents:
            if other_agent != agent:
                memory = f'[Time: {global_time}. Person: {agent.name}. Memory: {action}]'
                other_agent.memories.append(memory)
                if log_memories:
                    log_output += f"{other_agent.name} remembers: {memory}\n"
                if print_memories:
                    print(f"{other_agent.name} remembers: {memory}")

    # Compress and rate memories for each agent
    for agent in agents:
        agent.compress_memories(global_time)
        agent.rate_memories(locations, global_time, prompt_meta)
        if log_ratings:
            log_output += f"{agent.name} memory ratings: {agent.memory_ratings}\n"
        if print_ratings:
            print(f"{agent.name} memory ratings: {agent.memory_ratings}")

    # Rate locations and determine where agents will go next
    for agent in agents:
        place_ratings = agent.rate_locations(locations, global_time, prompt_meta)
        if log_ratings:
            log_output += f"=== UPDATED LOCATION RATINGS {global_time} FOR {agent.name}===\n"
            log_output += f"{agent.name} location ratings: {place_ratings}\n"
        if print_ratings:
            print(f"=== UPDATED LOCATION RATINGS {global_time} FOR {agent.name}===\n")
            print(f"{agent.name} location ratings: {place_ratings}\n")

        old_location = agent.location

        new_location_name = place_ratings[0][0]
        agent.move(new_location_name)

        if print_locations:
            log_output += f"=== UPDATED LOCATIONS AT TIME {global_time} FOR {agent.name}===\n"
            log_output += f"{agent.name} moved from {old_location} to {new_location_name}\n"
        if print_ratings:
            print(f"=== UPDATED LOCATIONS AT TIME {global_time} FOR {agent.name}===\n")
            print(f"{agent.name} moved from {old_location} to {new_location_name}\n")

    print(f"----------------------- SUMMARY FOR REPEAT {repeat} -----------------------")

    print(summarize_simulation(log_output=log_output))

    whole_simulation_output += log_output

    # Increment time
    global_time += 1

# Write log output to file
with open('simulation_log.txt', 'w') as f:
    f.write(whole_simulation_output)
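Note on the world-graph construction above: the loop links each town area to the previous one and the final add_edge closes the ring, so the areas form a cycle (plus a self-loop on each node, which Agent.move never relies on since it returns early when the target equals the current location). A compact equivalent, offered only as a sketch:

```python
import json
import networkx as nx

with open('simulation_config.json', 'r') as f:
    town_areas = json.load(f)['town_areas']

# Same ring topology as main.py, minus the per-node self-loops.
world_graph = nx.cycle_graph(list(town_areas.keys()))
print(sorted(world_graph.edges()))
```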
generative-agents-main/game_simulation/simulation_config.json
ADDED
@@ -0,0 +1,33 @@
{
    "general": {
        "global_time_limit": 24,
        "max_attempts": 2,
        "memory_limit": 10,
        "prompt_meta": "### Instruction:\n{}\n### Response:"
    },
    "town_areas": {
        "Phandalin Town Square": "Town square of the town of Phandalin.",
        "Stonehill Inn": "In the center of town stands a large, newly built roadhouse of fieldstone and rough-hewn timbers. The common room is filled with locals nursing mugs of ale or cider, all of them eyeing you with curiosity.",
        "Barthen's Provisions": "Barthen’s is the biggest trading post in Phandalin. Its shelves stock most ordinary goods and supplies, including backpacks, bedrolls, rope, and rations. The place is open from sunup to sundown.",
        "Edermath Orchard": "A tidy little cottage beside an apple orchard."
    },
    "town_people": {
        "Toblen Stonehill": {
            "description": "Owns a trading post.",
            "starting_location": "Stonehill Inn"
        },
        "Daran Edermath": {
            "description": "Daran Edermath is a retired adventurer who lives in a tidy little cottage beside an apple orchard. A fit, silver-haired half-elf well over a hundred years old, Daran is a fighter who served as a marshal and herald for many years in the lands of the Dragon Coast, far to the southeast. Upon retiring, he returned to the Neverwinter region, his original home.",
            "starting_location": "Edermath Orchard"
        },
        "Linene Graywind": {
            "description": "Runs a trading post.",
            "starting_location": "Barthen's Provisions"
        },
        "Halia Thornton": {
            "description": "An ambitious and calculating human woman. She is the guildmaster of Phandalin Miner’s Exchange, a trading post where local miners have their valuable finds weighed, measured, and paid out. In her attempts to establish the Miner's Exchange as the closest thing the town has to a governing authority, she acts as more than a simple merchant.",
            "starting_location": "Phandalin Town Square"
        }
    }
}
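Note: main.py opens this file with a relative path, so it must be run from the game_simulation directory. New villagers can be added with the same two keys per entry; the name and description below are purely illustrative (not part of this upload), and starting_location must match an existing town_areas key:

```python
import json

with open('simulation_config.json', 'r') as f:
    config = json.load(f)

# Hypothetical example entry -- this NPC is not in the shipped config.
config['town_people']['Example Villager'] = {
    "description": "A farmer who sells produce in the town square.",
    "starting_location": "Phandalin Town Square",
}

with open('simulation_config.json', 'w') as f:
    json.dump(config, f, indent=2)
```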
generative-agents-main/game_simulation/simulation_log.txt
ADDED
@@ -0,0 +1,65 @@
====================== REPEAT 0 ======================
Toblen Stonehill action: I get out of bed and have breakfast. Then, I go to the market to buy supplies for the trading post. After that, I arrive at the trading post and unload the supplies. Finally, I start trading.
Daran Edermath action: Daran wakes up and starts his day by making breakfast, then going for a walk in the apple orchard. He works on a new painting for a while before making lunch and writing a letter to an old friend. In the afternoon, he takes a walk in the woods before making dinner and reading a book before bed.
Linene Graywind action: I get out of bed and make breakfast. Then I go to the trading post and start trading.
Halia Thornton action: I will wake up and get dressed. I will have breakfast. I will go to the Miner's Exchange. I will work on establishing the Exchange as a governing authority. I will have dinner. I will go to bed.
=== UPDATED LOCATIONS AT TIME 0 FOR Toblen Stonehill===
Toblen Stonehill moved from Stonehill Inn to Stonehill Inn
=== UPDATED LOCATIONS AT TIME 0 FOR Daran Edermath===
Daran Edermath moved from Edermath Orchard to Edermath Orchard
=== UPDATED LOCATIONS AT TIME 0 FOR Linene Graywind===
Linene Graywind moved from Barthen's Provisions to Barthen's Provisions
=== UPDATED LOCATIONS AT TIME 0 FOR Halia Thornton===
Halia Thornton moved from Phandalin Town Square to Phandalin Town Square
====================== REPEAT 1 ======================
Toblen Stonehill action: I wake up, get dressed, and go to my trading post.
Daran Edermath action: I wake up and make breakfast. I then go for a walk in the woods and work on a jigsaw puzzle. I make dinner and watch the sunset. I read a book and then go to bed.
Linene Graywind action: I wake up at 1:00 and get out of bed. I take a shower at 2:00 and get dressed at 3:00. I eat breakfast at 4:00 and then go to work at 5:00. I open the store at 6:00 and help customers until 9:00. I do paperwork until 10:00 and then close the store. I go home at 11:00 and eat lunch at 12:00. I take a nap at 1:00 and then get up at 2:00. I make dinner at 3:00 and eat dinner at 4:00. I watch TV at 5:00 and then go to bed at 6:00.
Halia Thornton action: I will go to the Miner's Exchange and work on establishing it as the governing authority of Phandalin.
=== UPDATED LOCATIONS AT TIME 1 FOR Toblen Stonehill===
Toblen Stonehill moved from Stonehill Inn to Stonehill Inn
=== UPDATED LOCATIONS AT TIME 1 FOR Daran Edermath===
Daran Edermath moved from Edermath Orchard to Edermath Orchard
=== UPDATED LOCATIONS AT TIME 1 FOR Linene Graywind===
Linene Graywind moved from Barthen's Provisions to Barthen's Provisions
=== UPDATED LOCATIONS AT TIME 1 FOR Halia Thornton===
Halia Thornton moved from Phandalin Town Square to Phandalin Town Square
====================== REPEAT 2 ======================
Toblen Stonehill action: I wake up and get breakfast at the tavern. I go to the stables and check on the animals. I go to the market and see what's new. I go back to the trading post and start working. I take a break for dinner. I finish up work for the day. I go to the tavern for a drink. I go to bed.
Daran Edermath action: Daran Edermath wakes up at 2:00 and eats breakfast at 2:30. He goes for a walk in the apple orchard at 3:00 and works on some gardening at 4:00. He makes dinner at 5:00 and relaxes and reads a book at 6:00. He goes to bed at 7:00.
Linene Graywind action: I wake up at 2:00 and get dressed at 2:30. I eat breakfast at 3:00 and start work at 4:00. I check inventory at 5:00 and help customers at 6:00. I close up shop at 7:00.
Halia Thornton action: Get out of bed, have breakfast, go over today's goals, meet with Sildar, train with sword, check on miners, eat dinner, go over today's events, go to bed.
=== UPDATED LOCATIONS AT TIME 2 FOR Toblen Stonehill===
Toblen Stonehill moved from Stonehill Inn to Phandalin Town Square
=== UPDATED LOCATIONS AT TIME 2 FOR Daran Edermath===
Daran Edermath moved from Edermath Orchard to Edermath Orchard
=== UPDATED LOCATIONS AT TIME 2 FOR Linene Graywind===
Linene Graywind moved from Barthen's Provisions to Barthen's Provisions
=== UPDATED LOCATIONS AT TIME 2 FOR Halia Thornton===
Halia Thornton moved from Phandalin Town Square to Stonehill Inn
====================== REPEAT 3 ======================
Toblen Stonehill action: I wake up, eat breakfast, open the trading post, trade with customers, close the trading post, eat dinner, and go to bed.
Daran Edermath action: Daran Edermath gets out of bed and eats breakfast. He then goes for a walk in the apple orchard, checking on the apple trees. After lunch, he relaxes for a bit before going to bed.
Linene Graywind action: I get out of bed and get breakfast. I start work at 4:00 and take a break at 5:00. I eat dinner at 6:00 and relax at 7:00.
Halia Thornton action: I wake up and get dressed. I arrive at the Miner's Exchange at 4:00 and begin work.
=== UPDATED LOCATIONS AT TIME 3 FOR Toblen Stonehill===
Toblen Stonehill moved from Phandalin Town Square to Phandalin Town Square
=== UPDATED LOCATIONS AT TIME 3 FOR Daran Edermath===
Daran Edermath moved from Edermath Orchard to Edermath Orchard
=== UPDATED LOCATIONS AT TIME 3 FOR Linene Graywind===
Linene Graywind moved from Barthen's Provisions to Stonehill Inn
=== UPDATED LOCATIONS AT TIME 3 FOR Halia Thornton===
Halia Thornton moved from Stonehill Inn to Phandalin Town Square
====================== REPEAT 4 ======================
Toblen Stonehill action: I look for signs of the bandits.
Daran Edermath action: I wake up at 4:00 AM and eat breakfast at 5:00 AM. I go for a walk at 6:00 AM and check the traps at 7:00 AM. I go to town at 8:00 AM and buy groceries at 9:00 AM. I talk to the villagers at 10:00 AM and go home at 11:00 AM. I make lunch at 12:00 PM and eat lunch at 1:00 PM. I take a nap at 2:00 PM and wake up at 3:00 PM. I check the traps at 4:00 PM, go for a walk at 5:00 PM, and go to town at 6:00 PM. I talk to the villagers at 7:00 PM and go home at 8:00 PM. I make dinner at 9:00 PM and eat dinner at 10:00 PM. I go to bed at 11:00 PM.
Linene Graywind action: I wake up and get dressed. I make breakfast and then open the trading post. I start trading with customers and take a break at 9:00. I continue trading and then close the trading post at 11:00. I eat lunch and then rest for an hour. I prepare for tomorrow and then go to bed.
Halia Thornton action: I go to the Miner's Exchange.
=== UPDATED LOCATIONS AT TIME 4 FOR Toblen Stonehill===
Toblen Stonehill moved from Phandalin Town Square to Stonehill Inn
=== UPDATED LOCATIONS AT TIME 4 FOR Daran Edermath===
Daran Edermath moved from Edermath Orchard to Barthen's Provisions
=== UPDATED LOCATIONS AT TIME 4 FOR Linene Graywind===
Linene Graywind moved from Stonehill Inn to Stonehill Inn
=== UPDATED LOCATIONS AT TIME 4 FOR Halia Thornton===
Halia Thornton moved from Phandalin Town Square to Phandalin Town Square
generative-agents-main/game_simulation/utils/__init__.py
ADDED
File without changes
generative-agents-main/game_simulation/utils/text_generation.py
ADDED
@@ -0,0 +1,71 @@
import openai
import re
import os
from dotenv import load_dotenv
from transformers import pipeline

# Load environment variables from .env file
load_dotenv()

# Get OpenAI API key from environment variables
openai.api_key = os.getenv("OPENAI_API_KEY")


def generate(prompt, use_openai=True):
    """
    Generates a text completion for a given prompt using either the OpenAI API
    (text-davinci-002) or a local Hugging Face model (EleutherAI/gpt-neo-1.3B).

    Args:
    - prompt (str): The text prompt to generate a completion for.
    - use_openai (bool): A boolean flag indicating whether to use the OpenAI API (True) or the local Hugging Face model (False).

    Returns:
    - str: The generated text completion.
    """
    if use_openai:
        model_engine = "text-davinci-002"
        response = openai.Completion.create(
            engine=model_engine,
            prompt=prompt,
            max_tokens=1024,
            n=1,
            stop=None,
            temperature=0.5,
        )

        message = response.choices[0].text
        return message.strip()

    else:
        hf_generator = pipeline('text-generation', model='EleutherAI/gpt-neo-1.3B', device=0)
        output = hf_generator(prompt, max_length=len(prompt)+128, do_sample=True)
        out = output[0]['generated_text']
        if '### Response:' in out:
            out = out.split('### Response:')[1]
        if '### Instruction:' in out:
            out = out.split('### Instruction:')[0]
        return out.strip()


def get_rating(x):
    """
    Extracts a rating from a string.

    Args:
    - x (str): The string to extract the rating from.

    Returns:
    - int: The rating extracted from the string, or None if no rating is found.
    """
    nums = [int(i) for i in re.findall(r'\d+', x)]
    if len(nums) > 0:
        return min(nums)
    else:
        return None

# Summarize the simulation loop using the generate() helper above
def summarize_simulation(log_output):
    prompt = f"Summarize the simulation loop:\n\n{log_output}"
    response = generate(prompt)
    return response
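Note: get_rating above keeps the smallest integer found in the reply, so a response like "4 out of 5" parses to 4, while a reply that mentions the scale first ("on a 1-5 scale, I'd say 5") parses to 1; the callers in agent.py fall back to a rating of 0 when nothing parses at all. For example:

```python
from utils.text_generation import get_rating

print(get_rating("I would rate this 4 out of 5."))     # -> 4 (min of 4 and 5)
print(get_rating("I do not care about this at all."))  # -> None (no digits found)
```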
generative-agents-main/notebook/Release/generative_model_simple.ipynb
ADDED
The diff for this file is too large to render.
generative-agents-main/notebook/WIP/demo_01_phandalin_simulation.ipynb
ADDED
The diff for this file is too large to render.