license: cc-by-4.0
tags:
  - synthetic
  - emergent communication
  - linguistics
pretty_name: Emergent Language Corpus Collection
size_categories:
  - 10M<n<100M

ELCC

The Emergent Language Corpus Collection (ELCC) is a collection of corpora and metadata from a variety of emergent communication simulations.

Using ELCC

You can clone this repository with git LFS and use the data directly or load the data via the mlcroissant library. To install the mlcroissant library and necessary dependencies, see the conda environment at util/environment.yml. Below we show an example of loading ELCC's data via mlcroissant.

import mlcroissant as mlc

cr_url = "https://huggingface.co/datasets/bboldt/elcc/raw/main/croissant.json"
dataset = mlc.Dataset(jsonld=cr_url)

# A raw corpus of integer arrays; the corpora are named based on their paths;
# e.g., "systems/babyai-sr/data/GoToObj/corpus.json" becomes
# "babyai-sr/GoToObj".
records = dataset.records(record_set="babyai-sr/GoToObj")
# System-level metadata
records = dataset.records(record_set="system-metadata")
# Raw JSON string for system metadata; some fields aren't handled well by
# Croissant, so you can access them here if need be.
records = dataset.records(record_set="system-metadata-raw")
# Corpus metadata, specifically metrics generated by ELCC's analyses
records = dataset.records(record_set="corpus-metadata")
# Raw corpus metadata
records = dataset.records(record_set="corpus-metadata-raw")

# `records` can now be iterated through to access the individual elements.
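
For instance, each record is yielded as a Python dict keyed by the field names defined in croissant.json. The following sketch continues from the example above (it assumes `records` still refers to the last record set selected) and previews a few records:

import itertools

# Preview the first few records; the available field names depend on the
# record set's definition in croissant.json.
for record in itertools.islice(records, 3):
    print(record)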

Developing

Running individual EC systems

For each emergent language entry, we provide wrapper code (in systems/*/code/) to create a reproducible environment and run the emergent language-generating code. Environments are specified precisely in the environment.yml file; if you wish to edit the dependencies manually, it may be easier to start with environment.editable.yml instead, if it exists. Then run (or inspect) run.sh or run.py to see the commands necessary to produce the corpora.
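
As a rough sketch of that workflow (the placeholders in angle brackets are hypothetical; use the actual system directory and the environment name from its environment.yml):

# Build the pinned environment for one system and generate its corpora.
cd systems/<system>/code
conda env create -f environment.yml
conda activate <env-name>
bash run.sh  # or: python run.py, depending on the system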

Git submodules

This project uses git submodules to manage external dependencies. Submodules do not always operate in an intuitive way, so we provide a brief explanation of how to use them here. By default, submodules are not "init-ed", which means they will be empty after you clone the project. To populate a submodule (i.e., the directory pointing to another repo) so you can see or use its code, first run git submodule init path/to/submodule to mark it as init-ed; then run git submodule update to populate all init-ed submodules. Run git submodule deinit -f path/to/submodule to make the submodule empty again.
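
For example, to fetch and later clear a single submodule (the path here is a placeholder):

git submodule init path/to/submodule        # mark it as init-ed
git submodule update                        # populate all init-ed submodules
git submodule deinit -f path/to/submodule   # empty it again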

Paper

The work in this repository is associated with the paper ELCC: the Emergent Language Corpus Collection. The analyses in the paper use code provided at https://github.com/brendon-boldt/elcc-analysis.

Citation

If you use this code or data in academic work, please cite:

@article{boldt2024elcc,
  title={{ELCC}: the {E}mergent {L}anguage {C}orpus {C}ollection}, 
  author={Brendon Boldt and David Mortensen},
  year={2024},
  eprint={2407.04158},
  volume={2407.04158},
  archivePrefix={arXiv},
  journal={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2407.04158}, 
}