|
# data |
|
|
|
If you are looking for our intermediate-labeling version, please refer to [mango-ttic/data-intermediate](https://huggingface.co/datasets/mango-ttic/data-intermediate).
|
|
|
Find out more about us at [mango.ttic.edu](https://mango.ttic.edu).
|
|
|
## Folder Structure |
|
|
|
Each folder inside `data` contains the cleaned-up files used during LLM inference and result evaluation. Here is the tree structure for the game `night` (`data/night`):
|
|
|
```bash |
|
data/night/ |
|
├── night.actions.json # list of actions mentioned in the walkthrough
|
├── night.all2all.jsonl # all simple paths between any 2 locations |
|
├── night.all_pairs.jsonl # all connectivity between any 2 locations |
|
├── night.edges.json # list of all edges |
|
├── night.locations.json # list of all locations |
|
└── night.walkthrough # enriched walkthrough exported from the Jericho simulator
|
``` |
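A quick way to inspect these files from the shell, assuming `python` is available (any JSON pretty-printer such as `jq` works just as well):

```bash
# count records in the JSONL files (one JSON object per line)
wc -l data/night/night.all2all.jsonl data/night/night.all_pairs.jsonl

# pretty-print the first record of a JSONL file to see its fields
head -n 1 data/night/night.all2all.jsonl | python -m json.tool

# pretty-print one of the JSON list files
python -m json.tool data/night/night.locations.json
```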
|
|
|
## Variations |
|
|
|
### 70-step vs. all-step versions
|
|
|
In our paper, we benchmark using the first 70 steps of the walkthrough from each game. We also provide all-step versions of both the `data` and `data-intermediate` collections.
|
|
|
* **70-step** `data-70steps.tar.zst`: contains the first 70 steps of each walkthrough. If the complete walkthrough is shorter than 70 steps, then all steps are used. |
|
|
|
* **All-step** `data.tar.zst`: contains all steps of each walkthrough. |
|
|
|
### Word-only & Word+ID |
|
|
|
* **Word-only** `data.tar.zst`: Nodes are annotated with additional descriptive text to distinguish different locations with similar names.
|
|
|
* **Word + Object ID** `data-objid.tar.zst`: a variation of the word-only version, where nodes are labeled using minimally fixed names plus the object ID from the Jericho simulator.
|
|
|
* **Word + Random ID** `data-randid.tar.zst`: a variation of the word + object ID version, where the Jericho object ID is replaced with a randomly generated integer.
|
|
|
We primarily rely on the **word-only** version as the benchmark, but also provide the word+ID versions for diverse benchmark settings.
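As a sketch of how the variations relate, the snippet below extracts the word-only and word + object ID archives into separate directories and diffs the location labels for one game. It assumes both archives unpack to a top-level `data/` directory as in the tree above; adjust the paths if the `objid` archive uses a different top-level name.

```bash
# extract the two variations side by side (paths are assumptions, see note above)
mkdir -p wordonly objid
tar -I 'zstd -d' -xf data.tar.zst       -C wordonly
tar -I 'zstd -d' -xf data-objid.tar.zst -C objid

# compare the location labels of the two variations for the game "night"
diff wordonly/data/night/night.locations.json objid/data/night/night.locations.json
```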
|
|
|
## How to use |
|
|
|
We use `data.tar.zst` as an example here. |
|
|
|
### 1. download from Hugging Face
|
|
|
#### by direct download
|
|
|
You can selectively download the variation of your choice.
|
![](direct_download_data.png) |
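If you prefer the command line, the `huggingface_hub` CLI can also fetch a single archive; this is not part of the original instructions, just a sketch using the 70-step variant as an example:

```bash
# requires huggingface_hub: pip install -U huggingface_hub
huggingface-cli download mango-ttic/data data-70steps.tar.zst \
    --repo-type dataset --local-dir .
```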
|
|
|
#### by git |
|
|
|
Make sure you have [git-lfs](https://git-lfs.com) installed.
|
|
|
```bash |
|
git lfs install |
|
git clone https://huggingface.co/datasets/mango-ttic/data |
|
|
|
# or, use hf-mirror if your connection to huggingface.co is slow |
|
# git clone https://hf-mirror.com/datasets/mango-ttic/data |
|
``` |
|
|
|
If you want to clone only the pointer files, without downloading the large files themselves:
|
|
|
```bash |
|
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/mango-ttic/data |
|
|
|
# or, use hf-mirror if your connection to huggingface.co is slow |
|
# GIT_LFS_SKIP_SMUDGE=1 git clone https://hf-mirror.com/datasets/mango-ttic/data |
|
``` |
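With a pointer-only clone, you can fetch the actual content of a specific archive later via git-lfs; the `--include` pattern below is just an example:

```bash
cd data
# download only the archive(s) matching the pattern
git lfs pull --include="data.tar.zst"
```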
|
|
|
### 2. decompress |
|
|
|
Because some JSON files are huge, we use `tar.zst` to package the data efficiently.
|
|
|
Silently decompress:
|
|
|
```bash |
|
tar -I 'zstd -d' -xf data.tar.zst |
|
``` |
|
|
|
Or, verbosely decompress:
|
|
|
```bash |
|
zstd -d -c data.tar.zst | tar -xvf - |
|
``` |
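You can also list an archive's contents without extracting it, using the same zstd filter:

```bash
# list the files inside the archive without unpacking them
tar -I 'zstd -d' -tf data.tar.zst | head
```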
|
|