---
license: cc-by-4.0
tags:
- synthetic
- emergent communication
- linguistics
pretty_name: Emergent Language Corpus Collection
size_categories:
- 10M<n<100M
---
# ELCC

The Emergent Language Corpus Collection (ELCC) is a collection of corpora and
metadata from a variety of emergent communication simulations.


## Using ELCC

You can clone this repository with Git LFS and use the data directly, or load
the data via the mlcroissant library.  To install the mlcroissant library and
necessary dependencies, see the conda environment at `util/environment.yml`.
Below we show an example of loading ELCC's data via mlcroissant.
```python
import mlcroissant as mlc

cr_url = "https://huggingface.co/datasets/bboldt/elcc/raw/main/croissant.json"
dataset = mlc.Dataset(jsonld=cr_url)

# A raw corpus of integer arrays; the corpora are named based on their paths;
# e.g., "systems/babyai-sr/data/GoToObj/corpus.json" becomes
# "babyai-sr/GoToObj".
records = dataset.records(record_set="babyai-sr/GoToObj")
# System-level metadata
records = dataset.records(record_set="system-metadata")
# Raw JSON string for system metadata; some fields aren't handled well by
# Croissant, so you can access them here if need be.
records = dataset.records(record_set="system-metadata-raw")
# Corpus metadata, specifically metrics generated by ELCC's analyses
records = dataset.records(record_set="corpus-metadata")
# Raw corpus metadata
records = dataset.records(record_set="corpus-metadata-raw")

# `records` can now be iterated through to access the individual elements.
```
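To make the raw corpus format concrete, here is a minimal sketch of computing simple statistics over a corpus of integer-array utterances. The structure (a JSON array of token-ID lists) is assumed from the record-set description above, and the literal values are made up for illustration:

```python
import json

# Hypothetical corpus: a JSON array of utterances, each a list of integer
# token IDs.  (Structure assumed from the raw-corpus description above;
# these particular values are illustrative only.)
corpus = json.loads("[[1, 5, 3], [2, 2], [4, 1, 1, 0]]")

num_utterances = len(corpus)
vocab_size = len({tok for utt in corpus for tok in utt})
print(num_utterances, vocab_size)  # prints "3 6"
```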


## Developing

### Running individual EC systems
For each emergent language entry, we provide wrapper code (in
`systems/*/code/`) to create a reproducible environment and run the emergent
language-generating code.  Environments are specified precisely in the
`environment.yml` file; if you wish to edit the dependencies manually, it may
be easier to start with `environment.editable.yml` instead, if it exists.
Next, either run or look at `run.sh` or `run.py` to see the commands necessary
to produce the corpora.

### Git submodules
This project uses git submodules to manage external dependencies.  Submodules
do not always behave intuitively, so we provide a brief explanation of how to
use them here.  By default, submodules are not "init-ed", meaning they will be
empty after you clone the project.  If you would like to populate a submodule
(i.e., the directory pointing to another repo) to see or use its code, first
run `git submodule init path/to/submodule` to mark it as init-ed.  Then, run
`git submodule update` to populate the init-ed submodules.  Run `git submodule
deinit -f path/to/submodule` to make the submodule empty again.


## Paper
The work in this repository is associated with [ELCC: the Emergent Language
Corpus Collection](https://arxiv.org/abs/2407.04158).  The analyses in the
paper use the code provided at https://github.com/brendon-boldt/elcc-analysis.


### Citation
If you use this code or data in academic work, please cite:

    @article{boldt2024elcc,
      title={{ELCC}: the {E}mergent {L}anguage {C}orpus {C}ollection},
      author={Brendon Boldt and David Mortensen},
      year={2024},
      eprint={2407.04158},
      archivePrefix={arXiv},
      journal={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.04158},
    }