---
license: mit
pretty_name: Trains and Trams
tags:
  - image
  - computer-vision
  - trains
  - trams
task_categories:
  - image-classification
language:
  - en
configs:
  - config_name: default
    data_files: train/**/*.arrow
    features:
      - name: image
        dtype: image
      - name: unique_id
        dtype: string
      - name: width
        dtype: int32
      - name: height
        dtype: int32
      - name: image_mode_on_disk
        dtype: string
      - name: original_file_format
        dtype: string
  - config_name: preview
    data_files: preview/**/*.arrow
    features:
      - name: image
        dtype: image
      - name: unique_id
        dtype: string
      - name: width
        dtype: int32
      - name: height
        dtype: int32
      - name: original_file_format
        dtype: string
      - name: image_mode_on_disk
        dtype: string
---

# Trains and Trams

A high-resolution image subset of the Aesthetic-Train-V2 dataset containing a mixture of trains and trams. CLIP's perception of the concepts "train" and "tram" is subtly misaligned during coarse searches, so I have included both.

## Dataset Details

- Curator: Roscosmos
- Version: 1.0.0
- Total Images: 650
- Average Image Size (on disk): ~5.5 MB compressed
- Primary Content: Trains and Trams
- Standardization: All images are standardized to RGB mode and saved at 95% quality for consistency (a sketch of this step follows below).
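A minimal sketch of the standardization step described above, assuming Pillow and JPEG output (the exact conversion script is not published here):

```python
from PIL import Image

def standardize(src_path: str, dst_path: str) -> None:
    """Convert an image to RGB and re-save it at 95% quality (assumed JPEG)."""
    img = Image.open(src_path)
    if img.mode != "RGB":  # e.g., RGBA, palette, or grayscale inputs
        img = img.convert("RGB")
    img.save(dst_path, format="JPEG", quality=95)
```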

## Dataset Creation & Provenance

### 1. Original Master Dataset

This dataset is a subset derived from [zhang0jhon/Aesthetic-Train-V2](https://huggingface.co/datasets/zhang0jhon/Aesthetic-Train-V2).

### 2. Iterative Curation Methodology

Candidate images were retrieved from the master dataset via CLIP-based similarity search and then manually curated in iterative passes.
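For illustration, here is a minimal sketch of this kind of CLIP retrieval using the `transformers` CLIP implementation; the model checkpoint, prompts, and thresholding are assumptions, not the exact pipeline used:

```python
# Hypothetical sketch of CLIP-based retrieval, not the original curation script.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["a photo of a train", "a photo of a tram"]

def clip_score(image) -> float:
    """Return the image's best similarity against the train/tram prompts."""
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (1, len(prompts))
    return logits.max().item()

# Candidates scoring above a chosen threshold are then reviewed manually.
```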

## Dataset Structure & Content

This dataset offers the following configurations/subsets:

- Default (full train data) configuration: Contains the full, high-resolution image data and associated metadata. This is the recommended configuration for model training and full data analysis. The default split for this configuration is `train`.
- Preview configuration: Contains a viewer-compatible version of each image with the same metadata fields, intended for quick browsing rather than training.

Each example (row) in the dataset contains the following fields:

- `image`: The actual image data. In the default (full) configuration this is the full-resolution image; in the preview configuration it is a viewer-compatible version.
- `unique_id`: A unique identifier assigned to each image.
- `width`: The width of the image in pixels (from the full-resolution image).
- `height`: The height of the image in pixels (from the full-resolution image).
- `image_mode_on_disk`: The image mode (e.g., RGB) in which the file is stored on disk.
- `original_file_format`: The file format of the original source image.
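Because `width` and `height` are stored as plain metadata columns, examples can be filtered without decoding any pixels. A small sketch (the 2048-pixel threshold is an arbitrary example):

```python
from datasets import load_dataset

ds = load_dataset("ROSCOSMOS/Trains_and_Trams", "default", split="train")

# Filter on the metadata columns only, so no image bytes are decoded.
large = ds.filter(
    lambda w, h: w >= 2048 and h >= 2048,
    input_columns=["width", "height"],
)
print(f"{len(large)} of {len(ds)} images are at least 2048x2048")
```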

## Usage

To download and load this dataset from the Hugging Face Hub:

```python
from datasets import load_dataset, Dataset, DatasetDict

# Login using e.g. `huggingface-cli login` to access this dataset

# To load the full, high-resolution dataset (recommended for training):
# This will load the 'default' configuration's 'train' split.
ds_main = load_dataset("ROSCOSMOS/Trains_and_Trams", "default")

print("Main Dataset (default config) loaded successfully!")
print(ds_main)
print(f"Type of loaded object: {type(ds_main)}")

if isinstance(ds_main, Dataset):
    print(f"Number of samples: {len(ds_main)}")
    print(f"Features: {ds_main.features}")
elif isinstance(ds_main, DatasetDict):
    print(f"Available splits: {list(ds_main.keys())}")
    for split_name, dataset_obj in ds_main.items():
        print(f"  Split '{split_name}': {len(dataset_obj)} samples")
        print(f"  Features of '{split_name}': {dataset_obj.features}")

# The 'image' column will contain PIL Image objects.
```
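Once loaded, each row behaves like a dictionary. A quick inspection sketch, assuming the `ds_main` object from the snippet above:

```python
# Inspect the first training example.
sample = ds_main["train"][0]

img = sample["image"]  # a PIL.Image.Image instance
print(sample["unique_id"], f'{sample["width"]}x{sample["height"]}')
print(sample["original_file_format"], sample["image_mode_on_disk"])

img.thumbnail((512, 512))  # downscale in place for a quick look
```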

## Citation

```bibtex
@inproceedings{zhang2025diffusion4k,
    title={Diffusion-4K: Ultra-High-Resolution Image Synthesis with Latent Diffusion Models},
    author={Zhang, Jinjin and Huang, Qiuyu and Liu, Junjie and Guo, Xiefan and Huang, Di},
    year={2025},
    booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
}

@misc{zhang2025ultrahighresolutionimagesynthesis,
    title={Ultra-High-Resolution Image Synthesis: Data, Method and Evaluation},
    author={Zhang, Jinjin and Huang, Qiuyu and Liu, Junjie and Guo, Xiefan and Huang, Di},
    year={2025},
    note={arXiv:2506.01331},
}
```

## Disclaimer and Bias Considerations

Please consider any biases inherent in the original master dataset, as well as those potentially introduced by the automated CLIP-based filtering and the manual curation process.

## Contact

N/A