---
license: mit
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: val
    path: data/val-*
  - split: val_dense
    path: data/val_dense-*
  - split: val_sparse
    path: data/val_sparse-*
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 825600000
    num_examples: 1600000
  - name: val
    num_bytes: 8256000
    num_examples: 16000
  - name: val_dense
    num_bytes: 2064000
    num_examples: 4000
  - name: val_sparse
    num_bytes: 82560000
    num_examples: 160000
  download_size: 354675733
  dataset_size: 918480000
---

Data for [**Flip-Flop Language Modeling**](https://arxiv.org/abs/2306.00946). The task is to correctly execute the sequential operations of a 1-bit register (a reference sketch of these semantics appears at the end of this card). Despite appearing tailor-made for this operation, the Transformer architecture makes sporadic extrapolation errors (*attention glitches*). An open challenge is to fix these without recourse to long-tailed data or a recurrent architecture.

Splits reflect the FFLM setup from the paper, where FFL(p) denotes sequences whose instructions are `ignore` with probability p and `read`/`write` with probability (1-p)/2 each:

- `train`: 1.6M sequences from FFL(0.8) *(256 instructions, 80% ignore, 10% read, 10% write)*.
- `val`: 16K sequences from FFL(0.8).
- `val_dense`: 4K sequences from FFL(0.1).
- `val_sparse`: 160K sequences from FFL(0.98).

Usage
---

```python
import torch
import datasets

dataset = datasets.load_dataset('synthseq/flipflop')
dataset['train'][0] # {'text': 'w1i1w0i0 ...

def tokenize_batch(batch):
    mapping = {'w': 0, 'r': 1, 'i': 2, '0': 3, '1': 4}
    tokenized_batch = [[mapping[char] for char in s] for s in batch['text']]
    return {'tokens': torch.tensor(tokenized_batch, dtype=torch.int64)}

dataset.set_transform(tokenize_batch)
dataset['train'][0] # {'tokens': tensor([0, 4, 2, 4, 0, 3, 2, 3, 2 ...
```

Citation
---

```
@article{liu2023exposing,
  title={Exposing Attention Glitches with Flip-Flop Language Modeling},
  author={Liu, Bingbin and Ash, Jordan T and Goel, Surbhi and Krishnamurthy, Akshay and Zhang, Cyril},
  journal={arXiv preprint arXiv:2306.00946},
  year={2023}
}
```
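
Reference semantics
---

A minimal sketch of the 1-bit register semantics, for readers who want to score a model's predictions at read positions. The helper name `flipflop_targets` is illustrative and not shipped with the dataset; it assumes the sequence begins with a write, as in the example shown above, so the register is initialized before the first read.

```python
def flipflop_targets(text: str) -> list[int]:
    """Return the expected bit for each 'r' (read) instruction in `text`.

    `text` alternates instruction characters ('w', 'r', 'i') with bit
    characters ('0', '1'): 'w' stores its bit in the register, 'r' must
    echo the last stored bit, and 'i' is a distractor with no effect.
    """
    register = None
    targets = []
    for op, bit in zip(text[0::2], text[1::2]):
        if op == 'w':        # write: update the register
            register = int(bit)
        elif op == 'r':      # read: the correct continuation is the stored bit
            targets.append(register)
        # 'i' (ignore): leaves the register unchanged
    return targets

# After 'w1' a read must return 1; after 'w0' a read must return 0.
assert flipflop_targets('w1i0r1w0i1r0') == [1, 0]
```

In well-formed FFL sequences the bit following each `r` should already equal this target, so the same function can double as a sanity check on generated or perturbed data.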