---
license: mit
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: completion
    dtype: string
  splits:
  - name: train
    num_bytes: 13449668588
    num_examples: 500000
  download_size: 3251708048
  dataset_size: 13449668588
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-generation
tags:
- nethack
- interactive decision-making
- llm agents
- imitation learning
- behavioral cloning
---
|
# LangHack |
|
|
|
LangHack is a dataset of [diff history](https://diffhistory.github.io/) demonstration data for the roguelike video game [NetHack](https://github.com/facebookresearch/nle), generated using the symbolic [AutoAscend bot](https://github.com/maciej-sypetkowski/autoascend), which boasts state-of-the-art performance in the game (as of 07/22/2024).
|
|
|
This dataset was created by sub-sampling 10,000 full NetHack games played by AutoAscend into contiguous "chunks" of 64 timesteps, and converting the agent's game state observations into natural-language text using the [NetHack Language Wrapper](https://github.com/ngoodger/nle-language-wrapper). Sub-sampling was performed uniformly at random over all recorded game data.
|
|
|
LangHack prompts correspond to a full game state observation at one timestep of AutoAscend gameplay, while completions consist of the interleaved sequence of subsequent bot actions and the resulting text deltas in the world state.
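
For example, a record can be inspected directly with the `datasets` library. This is a minimal sketch; the repo id below is a placeholder for this dataset's actual id on the Hugging Face Hub:

```python
from datasets import load_dataset

# Stream the train split to avoid downloading the full archive up front.
# NOTE: "<hub-org>/LangHack" is a placeholder -- substitute this dataset's actual Hub id.
ds = load_dataset("<hub-org>/LangHack", split="train", streaming=True)

example = next(iter(ds))
print(example["prompt"][:500])      # language-wrapped game state at one timestep
print(example["completion"][:500])  # interleaved bot actions and text deltas
```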
|
|
|
A detailed report of NetHack agent performance achieved by fine-tuning a tiny LLM ([GPT2-127M](https://huggingface.co/openai-community/gpt2)) on LangHack is provided [here](https://arxiv.org/abs/2312.07540).
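
As an illustration only (not the training recipe used in the paper), a basic behavioral-cloning run on LangHack can be sketched with the `transformers` Trainer by concatenating each prompt with its completion into a single causal-LM sequence. The repo id, context length, and hyperparameters below are assumptions, and no prompt-token masking is applied:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

# NOTE: "<hub-org>/LangHack" is a placeholder for this dataset's actual Hub id.
ds = load_dataset("<hub-org>/LangHack", split="train")

def tokenize(batch):
    # Concatenate prompt and completion into one sequence for next-token prediction.
    texts = [p + c for p, c in zip(batch["prompt"], batch["completion"])]
    return tokenizer(texts, truncation=True, max_length=1024)

tokenized = ds.map(tokenize, batched=True, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-langhack",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```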