Dataset Card for imagenet_augmented

This dataset provides an augmented version of a subset of ImageNet, used to benchmark how classical and synthetic augmentations impact large-scale image classification models.

All training data is organized by augmentation method, while the test/ split remains clean and unmodified. The dataset is distributed as a single .zip archive and must be extracted before use.

📥 Download & Extract

wget https://huggingface.co/datasets/ianisdev/imagenet_augmented/resolve/main/imagenet.zip
unzip imagenet.zip

πŸ“ Dataset Structure

imagenet/
├── test/                         # Clean test images (unaltered)
└── train/
    ├── traditional/             # Color jitter, rotation, flip
    ├── mixup/                   # Interpolated image pairs
    ├── miamix/                  # Color-affine blend
    ├── auto/                    # AutoAugment (torchvision)
    ├── lsb/                     # LSB-level bit noise
    ├── gan/                     # BigGAN class-conditional samples
    ├── vqvae/                   # VQ-VAE reconstructions
    └── fusion/                  # Pairwise blended jittered samples

Each split follows the torchvision ImageFolder layout:

train/{augmentation}/{imagenet_class}/image.jpg
test/{imagenet_class}/image.jpg
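
Because each split follows the ImageFolder layout, it can be loaded directly with torchvision. Below is a minimal loading sketch, assuming the archive was extracted to ./imagenet as shown above; the preprocessing values and batch size are illustrative, not prescribed by the dataset.

from torchvision import datasets, transforms
from torch.utils.data import DataLoader

# Standard ImageNet-style preprocessing; exact values are an assumption.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# Pick one augmentation subset for training, e.g. "traditional".
train_set = datasets.ImageFolder("imagenet/train/traditional", transform=preprocess)
test_set = datasets.ImageFolder("imagenet/test", transform=preprocess)

train_loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)
test_loader = DataLoader(test_set, batch_size=64, shuffle=False, num_workers=4)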

Uses

Direct Use

  • Large-scale model training with controlled augmentation types
  • Evaluating deep learning robustness at ImageNet-level complexity
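
The sketch below illustrates this workflow: train on one augmentation subset, then evaluate on the clean test/ split. It reuses train_set, train_loader, and test_loader from the loading sketch above; the ResNet-50 backbone, optimizer, and single-epoch loop are arbitrary illustrative choices, not part of the dataset.

import torch
from torch import nn, optim
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(num_classes=len(train_set.classes)).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# One pass over the chosen augmentation subset.
model.train()
for images, labels in train_loader:
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

# Evaluate on the clean, unaugmented test split.
model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in test_loader:
        preds = model(images.to(device)).argmax(dim=1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.size(0)
print(f"clean test accuracy: {correct / total:.4f}")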

Out-of-Scope Use

  • Not designed for exact ImageNet benchmarking (subset only)
  • Not recommended for production model training without validation on original ImageNet

Dataset Creation

Curation Rationale

To study how augmentation types affect generalization in large, fine-grained image classification tasks.

Source Data

A compressed ImageNet subset was augmented using multiple synthetic and classical pipelines.

Data Collection and Processing

  • Traditional: Flip, rotate, color jitter
  • Auto: AutoAugment (ImageNet policy)
  • Mixup, MIA Mix, Fusion: Pairwise augmentations with affine/jitter
  • GAN: Used pretrained BigGAN-deep-256
  • VQ-VAE: Reconstructed using a trained encoder-decoder model
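
The classical branches roughly correspond to standard torchvision transforms. The sketch below shows how the traditional, auto, and mixup pipelines could be approximated; the exact jitter, rotation, and mixing parameters used to build this dataset are not documented here, so the values below are assumptions for illustration only.

import torch
from torchvision import transforms
from torchvision.transforms import AutoAugment, AutoAugmentPolicy

# "traditional/": flip, rotation, color jitter (parameter values assumed).
traditional = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    transforms.ToTensor(),
])

# "auto/": torchvision AutoAugment with the ImageNet policy named above.
auto = transforms.Compose([
    AutoAugment(policy=AutoAugmentPolicy.IMAGENET),
    transforms.ToTensor(),
])

# "mixup/": interpolate a pair of image tensors and their one-hot labels.
def mixup(x1, x2, y1, y2, alpha=0.2):
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2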

Who are the source data producers?

Original ImageNet images are from the official ILSVRC dataset. Augmented samples were generated by Muhammad Anis Ur Rahman.

Bias, Risks, and Limitations

  • Some classes may contain visually distorted samples
  • GAN/VQ-VAE samples can introduce low-fidelity noise
  • Dataset may not reflect full ImageNet diversity

Recommendations

  • Use the clean test/ split for consistent evaluation across augmentation methods
  • Measure class-level confusion and error propagation
  • Evaluate robustness on real-world samples
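
For the class-level confusion analysis suggested above, here is a minimal sketch using scikit-learn. It assumes integer predictions and labels (all_preds, all_labels) were collected during evaluation on test/, and that test_set is the ImageFolder from the loading sketch; both names are illustrative.

import numpy as np
from sklearn.metrics import confusion_matrix

# all_preds / all_labels: lists of integer class indices gathered on test/.
cm = confusion_matrix(all_labels, all_preds)
per_class_acc = cm.diagonal() / cm.sum(axis=1)   # per-class recall
worst = np.argsort(per_class_acc)[:10]           # ten most-confused classes
for idx in worst:
    print(f"{test_set.classes[idx]}: {per_class_acc[idx]:.3f}")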

Citation

BibTeX:

@misc{rahman2025imagenetaug,
  author = {Muhammad Anis Ur Rahman},
  title = {Augmented ImageNet Dataset for Image Classification},
  year = {2025},
  url = {https://huggingface.co/datasets/ianisdev/imagenet_augmented}
}

APA:

Rahman, M. A. U. (2025). Augmented ImageNet Dataset for Image Classification. Hugging Face. https://huggingface.co/datasets/ianisdev/imagenet_augmented
