---
dataset_info:
  features:
    - name: x
      sequence: float64
    - name: 'y'
      dtype: int64
  splits:
    - name: train
      num_bytes: 1328000
      num_examples: 4000
    - name: test
      num_bytes: 332000
      num_examples: 1000
  download_size: 2009200
  dataset_size: 1660000
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
license: apache-2.0
pretty_name: The MNIST-1D Dataset
size_categories:
  - 1K<n<10K
---

This dataset card is based on the README file of the authors' GitHub repository: https://github.com/greydanus/mnist1d

# The MNIST-1D Dataset

Most machine learning models reach roughly the same ~99% test accuracy on MNIST. Our dataset, MNIST-1D, is 100x smaller (default sample size: 4000 train + 1000 test examples; dimensionality: 40) and does a better job of separating models with/without nonlinearity and models with/without spatial inductive biases.

MNIST-1D is a core teaching dataset in Simon Prince's *Understanding Deep Learning* textbook.
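
Each example is a length-40 `float64` sequence `x` with an integer digit label `y`. For orientation, here is a minimal sketch (assuming the `datasets` library is installed) that loads the train split and checks these shapes:

```python
from datasets import load_dataset

# Load the train split from the Hub and inspect one example
train = load_dataset("christopher/mnist1d", split="train")
print(len(train))          # 4000 training examples
print(len(train[0]["x"]))  # each signal has 40 points
print(train[0]["y"])       # integer digit label
```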


## Comparing MNIST and MNIST-1D

| Dataset | Logistic Regression | MLP | CNN | GRU* | Human Expert |
|---|---|---|---|---|---|
| MNIST | 92% | 99+% | 99+% | 99+% | 99+% |
| MNIST-1D | 32% | 68% | 94% | 91% | 96% |
| MNIST-1D (shuffle**) | 32% | 68% | 56% | 57% | ~30% |
*Training the GRU takes at least 10x the walltime of the CNN.

**The term "shuffle" refers to shuffling the spatial dimension of the dataset, as in Zhang et al. (2017).
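
For illustration, the shuffle draws one fixed permutation of the 40 spatial positions and applies it identically to every example, destroying local spatial structure while leaving the feature values intact. A minimal NumPy sketch (the seed and function name are ours, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(0)  # any fixed seed; the same permutation must be reused
perm = rng.permutation(40)      # one permutation of the 40 spatial positions

def shuffle_spatial(x):
    """Apply the shared spatial permutation to a signal or a batch of signals."""
    return np.asarray(x)[..., perm]
```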

## Dataset Creation

This version of the dataset was created from the pickle file provided by the dataset authors in the original repository, `mnist1d_data.pkl`, and was generated as follows:

```python
import sys; sys.path.append('..')  # useful if you're running the mnist1d repo locally
import mnist1d
from datasets import Dataset, DatasetDict

# Load the data using the mnist1d library (default settings)
args = mnist1d.get_dataset_args()
data = mnist1d.get_dataset(args, path='./mnist1d_data.pkl', download=True)

# Wrap the arrays in a Hugging Face dataset and push it to the Hub
train = Dataset.from_dict({"x": data["x"], "y": data["y"]})
test = Dataset.from_dict({"x": data["x_test"], "y": data["y_test"]})
DatasetDict({"train": train, "test": test}).push_to_hub("christopher/mnist1d")
```

The original pickle file, `mnist1d_data.pkl`, is available in the authors' GitHub repository.

## Dataset Usage

Using the `datasets` library:

```python
from datasets import load_dataset

train = load_dataset("christopher/mnist1d", split="train")
test = load_dataset("christopher/mnist1d", split="test")
train_test = load_dataset("christopher/mnist1d", split="train+test")
```

Then to get the data as numpy arrays:

```python
train.set_format("numpy")
x = train["x"]  # NumPy array of shape (4000, 40)
y = train["y"]  # NumPy array of shape (4000,)
```
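
From here, a plain linear baseline should land in the ballpark of the logistic-regression row in the table above. A minimal sketch, assuming scikit-learn is installed:

```python
from datasets import load_dataset
from sklearn.linear_model import LogisticRegression

train = load_dataset("christopher/mnist1d", split="train")
test = load_dataset("christopher/mnist1d", split="test")
train.set_format("numpy")
test.set_format("numpy")

# Fit logistic regression on the raw 40-dimensional signals
clf = LogisticRegression(max_iter=1000)
clf.fit(train["x"], train["y"])
print(clf.score(test["x"], test["y"]))  # roughly the ~32% reported above
```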

## Citation

```bibtex
@inproceedings{greydanus2024scaling,
  title={Scaling down deep learning with {MNIST}-{1D}},
  author={Greydanus, Sam and Kobak, Dmitry},
  booktitle={Proceedings of the 41st International Conference on Machine Learning},
  year={2024}
}
```