---
license: cc-by-sa-3.0
task_categories:
  - image-classification
language:
  - en
pretty_name: mnist_ambiguous
size_categories:
  - 10K<n<100K
source_datasets:
  - extended|mnist
annotations_creators:
  - machine-generated
---

# Mnist-Ambiguous

This dataset contains mnist-like images with an unclear ground truth: for each image, there are two classes which could both be considered correct. Robust and uncertainty-aware DNNs should detect and flag this ambiguity.

## Features

As in mnist, each record of the supervised dataset has an image (28x28 int array) and a label (int).

Additionally, the following features are exposed for your convenience:

- `text_label` (str): A textual representation of the probabilistic label, e.g. `p(Pullover)=0.54, p(Shirt)=0.46`
- `p_label` (list of floats): Ground-truth probabilities for each class (two nonzero values for our ambiguous images)
- `is_ambiguous` (bool): Flag indicating whether this is one of our ambiguous images (see 'Splits' below)
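The relation between these features can be illustrated with a minimal sketch. The helper and formatting below are our illustration of how `text_label` and `is_ambiguous` relate to `p_label`, not the dataset's actual generation code:

```python
# Illustrative sketch: derive a textual label and an ambiguity flag
# from a 10-element probabilistic label vector (`p_label`).
# The class names and string formatting are assumptions mirroring the
# README's example, not the dataset's exact implementation.

CLASS_NAMES = [str(d) for d in range(10)]  # mnist classes are the digits 0-9

def describe_p_label(p_label):
    """Return (text_label, is_ambiguous) for a probability vector."""
    nonzero = [(CLASS_NAMES[i], p) for i, p in enumerate(p_label) if p > 0]
    text_label = ", ".join(f"p({name})={p:.2f}" for name, p in nonzero)
    is_ambiguous = len(nonzero) > 1  # ambiguous images have two nonzero classes
    return text_label, is_ambiguous

# An ambiguous image split between the digits 3 and 8:
text, ambiguous = describe_p_label([0, 0, 0, 0.54, 0, 0, 0, 0, 0.46, 0])
print(text)       # p(3)=0.54, p(8)=0.46
print(ambiguous)  # True
```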

## Splits

We provide four splits:

- `test`: 10'000 ambiguous images
- `train`: 10'000 ambiguous images; adding ambiguous images to the training set ensures that test-time ambiguous images are in-distribution.
- `test_mixed`: 20'000 images, consisting of the (shuffled) concatenation of our ambiguous test set and the nominal mnist test set by LeCun et al.
- `train_mixed`: 70'000 images, consisting of the (shuffled) concatenation of our ambiguous train set and the nominal mnist training set.
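A mixed split is simply a shuffled concatenation of an ambiguous split and the corresponding nominal mnist split. A minimal sketch with placeholder records (the field names and seed below are illustrative):

```python
import random

# Placeholder records standing in for the real images:
# 10'000 ambiguous train images + 60'000 nominal mnist train images.
ambiguous_train = [{"id": i, "is_ambiguous": True} for i in range(10_000)]
nominal_train = [{"id": i, "is_ambiguous": False} for i in range(60_000)]

# A 'mixed' split is the shuffled concatenation of the two.
train_mixed = ambiguous_train + nominal_train
random.Random(0).shuffle(train_mixed)  # fixed seed for reproducibility

print(len(train_mixed))  # 70000
```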

For research explicitly targeting aleatoric uncertainty, we recommend training the model on `train_mixed`; otherwise, our test set will trigger both epistemic and aleatoric uncertainty. Note that in the related literature, these 'mixed' splits are sometimes denoted as dirty splits.
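Training against the probabilistic labels requires a loss that accepts soft targets. A minimal sketch of soft-label cross-entropy (the function name and implementation are our illustration, not part of the dataset):

```python
import math

def soft_cross_entropy(p_label, predicted_probs, eps=1e-12):
    """Cross-entropy H(p, q) between a probabilistic ground truth `p_label`
    and a model's predicted distribution `predicted_probs`."""
    return -sum(p * math.log(q + eps) for p, q in zip(p_label, predicted_probs))

# A model that exactly reproduces the ambiguous ground truth attains the
# entropy of the label itself -- the irreducible (aleatoric) part of the loss.
p = [0.54, 0.46]
minimum = soft_cross_entropy(p, p)
overconfident = soft_cross_entropy(p, [0.9, 0.1])
print(minimum < overconfident)  # True
```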

## Assessment and Validity

For a brief discussion of the strengths and weaknesses of this dataset, including a quantitative comparison to the (only) other ambiguous dataset available in the literature, we refer to our paper.

## Paper

Pre-print here: https://arxiv.org/abs/2207.10495

Citation:

```bibtex
@misc{https://doi.org/10.48550/arxiv.2207.10495,
  doi = {10.48550/ARXIV.2207.10495},
  url = {https://arxiv.org/abs/2207.10495},
  author = {Weiss, Michael and Gómez, André García and Tonella, Paolo},
  title = {A Forgotten Danger in DNN Supervision Testing: Generating and Detecting True Ambiguity},
  publisher = {arXiv},
  year = {2022}
}
```

## License

As this is a derivative work of mnist, which is CC-BY-SA 3.0 licensed, our dataset is released using the same license.