---
dataset_info:
  features:
    - name: x
      sequence: float64
    - name: 'y'
      dtype: int64
  splits:
    - name: train
      num_bytes: 1328000
      num_examples: 4000
    - name: test
      num_bytes: 332000
      num_examples: 1000
  download_size: 2009200
  dataset_size: 1660000
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
license: apache-2.0
pretty_name: The MNIST-1D Dataset
size_categories:
  - 1K<n<10K
---

The following is taken from the authors' GitHub repository: https://github.com/greydanus/mnist1d

# The MNIST-1D Dataset

Most machine learning models achieve roughly the same ~99% test accuracy on MNIST. Our dataset, MNIST-1D, is 100x smaller (default sample size: 4000+1000; dimensionality: 40) and does a better job of separating models with/without nonlinearity and models with/without spatial inductive biases.

## Dataset Creation

This version of the dataset was created from the pickle file provided by the dataset authors in the original repository (`mnist1d_data.pkl`) and was generated as follows:

```python
import sys; sys.path.append('..')  # useful if you're running locally
import mnist1d
from datasets import Dataset, DatasetDict

# Load the data using the mnist1d library
args = mnist1d.get_dataset_args()
data = mnist1d.get_dataset(args, path='./mnist1d_data.pkl', download=True)  # default settings

# Load the data into a Hugging Face dataset and push it to the hub
train = Dataset.from_dict({"x": data["x"], "y": data["y"]})
test = Dataset.from_dict({"x": data["x_test"], "y": data["y_test"]})
DatasetDict({"train": train, "test": test}).push_to_hub("christopher/mnist1d")
```

## Dataset Usage

Using the `datasets` library:

```python
from datasets import load_dataset

train = load_dataset("christopher/mnist1d", split="train")
test = load_dataset("christopher/mnist1d", split="test")
full = load_dataset("christopher/mnist1d", split="train+test")
```
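Per the schema above, each example's `x` is a length-40 float sequence and `y` is an integer class label. As a minimal sketch of working with arrays of this shape, here is a nearest-centroid baseline; it uses random synthetic stand-in data with the same shapes as the real splits (4000 train / 1000 test), so it runs without a network download. Swap in the arrays from `load_dataset` for real results.

```python
import numpy as np

# Synthetic stand-in with the dataset's shapes: 40-dimensional float
# features and integer class labels in [0, 10).
rng = np.random.default_rng(0)
x_train = rng.standard_normal((4000, 40))
y_train = rng.integers(0, 10, size=4000)
x_test = rng.standard_normal((1000, 40))
y_test = rng.integers(0, 10, size=1000)

# Nearest-centroid baseline: label each test point with the class
# whose training-set mean is closest in Euclidean distance.
centroids = np.stack([x_train[y_train == c].mean(axis=0) for c in range(10)])
dists = np.linalg.norm(x_test[:, None, :] - centroids[None, :, :], axis=2)
preds = dists.argmin(axis=1)
accuracy = float((preds == y_test).mean())
print(preds.shape, accuracy)
```

On this random placeholder data the accuracy hovers around chance (~10%); the point is only the array handling, not the score.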

## Citation

```bibtex
@inproceedings{greydanus2024scaling,
  title={Scaling down deep learning with {MNIST}-{1D}},
  author={Greydanus, Sam and Kobak, Dmitry},
  booktitle={Proceedings of the 41st International Conference on Machine Learning},
  year={2024}
}
```