---
dataset_info:
  features:
  - name: x
    sequence: float64
  - name: 'y'
    dtype: int64
  splits:
  - name: train
    num_bytes: 1328000
    num_examples: 4000
  - name: test
    num_bytes: 332000
    num_examples: 1000
  download_size: 2009200
  dataset_size: 1660000
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
license: apache-2.0
pretty_name: The MNIST-1D Dataset
size_categories:
- 1K<n<10K
---
> [!NOTE]
> This dataset card is based on the README file of the authors' GitHub repository: https://github.com/greydanus/mnist1d
>
# The MNIST-1D Dataset
Most machine learning models reach roughly the same ~99% test accuracy on MNIST. Our dataset, MNIST-1D, is 100x smaller (default sample size: 4000+1000 examples; dimensionality: 40) and does a better job of separating models with and without nonlinearity, and models with and without spatial inductive biases.
MNIST-1D is a core teaching dataset in Simon Prince's [Understanding Deep Learning](https://udlbook.github.io/udlbook/) textbook.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/5e70f6048ce3c604d78fe133/VhgTkDsRQ24LVCsup9oMX.png)
## Comparing MNIST and MNIST-1D
| Dataset | Logistic Regression | MLP | CNN | GRU* | Human Expert |
|:----------------------|:---------------------|:------|:------|:------|:--------------|
| MNIST | 92% | 99+% | 99+% | 99+% | 99+% |
| MNIST-1D | 32% | 68% | 94% | 91% | 96% |
| MNIST-1D (shuffle**) | 32% | 68% | 56% | 57% | ~30% |
*Training the GRU takes at least 10x the walltime of the CNN.
**The term "shuffle" refers to shuffling the spatial dimension of the dataset, as in [Zhang et al. (2017)](https://arxiv.org/abs/1611.03530).
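As a rough illustration of the shuffle variant, the sketch below permutes the 40 spatial positions of every example with a single fixed permutation; the seed and permutation here are assumptions for illustration, not the authors' exact procedure.
```python
import numpy as np

# Sketch of the label-preserving spatial shuffle: one fixed permutation of the
# 40 positions, applied identically to every example (seed is an assumption).
rng = np.random.default_rng(0)
perm = rng.permutation(40)

def shuffle_spatial(x):
    """Apply the same spatial permutation to every example in x (shape: [n, 40])."""
    return np.asarray(x)[:, perm]
```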
## Dataset Creation
This version of the dataset was created from the pickle file provided by the dataset authors in the original repository, [mnist1d_data.pkl](https://github.com/greydanus/mnist1d/blob/master/mnist1d_data.pkl), and was generated as follows:
```python
import sys ; sys.path.append('..') # useful if you're running locally
import mnist1d
from datasets import Dataset, DatasetDict
# Load the data using the mnist1d library
args = mnist1d.get_dataset_args()
data = mnist1d.get_dataset(args, path='./mnist1d_data.pkl', download=True) # This is the default setting
# Load the data into a Hugging Face dataset and push it to the hub
train = Dataset.from_dict({"x": data["x"], "y": data["y"]})
test = Dataset.from_dict({"x": data["x_test"], "y": data["y_test"]})
DatasetDict({"train":train, "test":test}).push_to_hub("christopher/mnist1d")
```
The original data generation code is available in the authors' GitHub repository linked above.
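If you prefer not to depend on the `mnist1d` package, the pickle file can also be read directly; this is a minimal sketch assuming the same local file path and the dictionary keys used in the snippet above.
```python
import pickle

# Read the authors' pickle file directly (path and keys as in the snippet above)
with open("./mnist1d_data.pkl", "rb") as f:
    data = pickle.load(f)

x_train, y_train = data["x"], data["y"]
x_test, y_test = data["x_test"], data["y_test"]
```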
## Dataset Usage
Using the `datasets` library:
```python
from datasets import load_dataset
train = load_dataset("christopher/mnist1d", split="train")
test = load_dataset("christopher/mnist1d", split="test")
train_test = load_dataset("christopher/mnist1d", split="train+test")
```
Then to get the data as numpy arrays:
```python
train.set_format("numpy")
x = train["x"]
y = train["y"]
```
## Citation
```bibtex
@inproceedings{greydanus2024scaling,
title={Scaling down deep learning with {MNIST}-{1D}},
author={Greydanus, Sam and Kobak, Dmitry},
booktitle={Proceedings of the 41st International Conference on Machine Learning},
year={2024}
}
``` |