---
license: mit
pretty_name: "Bridges"
tags: ["image", "computer-vision", "bridge", "bridges", "landmarks", "high-resolution"]
task_categories: ["image-classification"]
language: ["en"]
configs:
  - config_name: default
    data_files: "train/**/*.arrow"
features:
  - name: image
    dtype: image
  - name: unique_id
    dtype: string
  - name: width
    dtype: int32
  - name: height
    dtype: int32
  - name: image_mode_on_disk
    dtype: string
  - name: original_file_format
    dtype: string
---

# Bridges

A high-resolution image subset of the Aesthetic-Train-V2 dataset, containing bridges from various parts of the world, including many iconic landmark bridges.

## Dataset Details

* **Curator:** Roscosmos
* **Version:** 1.0.0
* **Total Images:** 760
* **Average Image Size (on disk):** ~5.7 MB compressed
* **Primary Content:** Bridges
* **Standardization:** All images are standardized to RGB mode and saved at 95% quality for consistency (see the illustrative sketch below).

## Dataset Creation & Provenance

### 1. Original Master Dataset

This dataset is a subset derived from: **`zhang0jhon/Aesthetic-Train-V2`**

* **Link:** https://huggingface.co/datasets/zhang0jhon/Aesthetic-Train-V2
* **Provenance:** A large-scale, high-resolution image dataset; refer to its original dataset card for full details.
* **Original License:** MIT

### 2. Iterative Curation Methodology

Images were selected from the master dataset via CLIP retrieval followed by manual curation.
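The exact curation pipeline is not published with this card. The sketch below only illustrates what CLIP-based retrieval combined with the RGB/95%-quality standardization described above might look like; the checkpoint name, prompt, similarity threshold, and folder paths are illustrative assumptions, not the curator's actual settings.

```python
# Minimal sketch of CLIP-based retrieval + standardization (illustrative only).
# Model, prompt, threshold, and paths below are assumptions, not the actual pipeline.
from pathlib import Path

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

prompt = "a photo of a bridge"   # hypothetical retrieval query
threshold = 25.0                 # hypothetical similarity cutoff, tuned by inspection in practice
out_dir = Path("curated_bridges")
out_dir.mkdir(exist_ok=True)

for path in Path("candidate_images").glob("*.jpg"):  # hypothetical input folder
    image = Image.open(path)
    inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        score = model(**inputs).logits_per_image.item()  # image-text similarity logit
    if score < threshold:
        continue  # low-scoring candidates are dropped or left for manual review
    # Standardization step from "Dataset Details": convert to RGB, save at 95% quality.
    image.convert("RGB").save(out_dir / path.name, quality=95)
```

In practice, automated filtering of this kind is followed by the manual review step noted above, so the threshold only needs to be loose enough to keep recall high.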
## Dataset Structure & Content

This dataset offers the following configurations/subsets:

* **Default (full `train` data) configuration:** Contains the full, high-resolution image data and associated metadata. This is the recommended configuration for model training and full data analysis. The default split for this configuration is `train`.

Each example (row) in the dataset contains the following fields:

* `image`: The actual image data, at full resolution in the default configuration.
* `unique_id`: A unique identifier assigned to each image.
* `width`: The width of the image in pixels (from the full-resolution image).
* `height`: The height of the image in pixels (from the full-resolution image).
* `image_mode_on_disk`: The image mode of the file as stored on disk.
* `original_file_format`: The file format of the original source image.

## Usage

To download and load this dataset from the Hugging Face Hub:

```python
from datasets import load_dataset, Dataset, DatasetDict

# Login using e.g. `huggingface-cli login` to access this dataset

# To load the full, high-resolution dataset (recommended for training):
# This will load the 'default' configuration's 'train' split.
ds_main = load_dataset("ROSCOSMOS/Bridges", "default")

print("Main Dataset (default config) loaded successfully!")
print(ds_main)
print(f"Type of loaded object: {type(ds_main)}")

if isinstance(ds_main, Dataset):
    print(f"Number of samples: {len(ds_main)}")
    print(f"Features: {ds_main.features}")
elif isinstance(ds_main, DatasetDict):
    print(f"Available splits: {list(ds_main.keys())}")
    for split_name, dataset_obj in ds_main.items():
        print(f"  Split '{split_name}': {len(dataset_obj)} samples")
        print(f"  Features of '{split_name}': {dataset_obj.features}")
```

## Citation

```bibtex
@inproceedings{zhang2025diffusion4k,
  title={Diffusion-4K: Ultra-High-Resolution Image Synthesis with Latent Diffusion Models},
  author={Zhang, Jinjin and Huang, Qiuyu and Liu, Junjie and Guo, Xiefan and Huang, Di},
  year={2025},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
}

@misc{zhang2025ultrahighresolutionimagesynthesis,
  title={Ultra-High-Resolution Image Synthesis: Data, Method and Evaluation},
  author={Zhang, Jinjin and Huang, Qiuyu and Liu, Junjie and Guo, Xiefan and Huang, Di},
  year={2025},
  note={arXiv:2506.01331},
}
```

## Disclaimer and Bias Considerations

Please consider any inherent biases from the original dataset, as well as those potentially introduced by the automated filtering (e.g., CLIP's biases) and the manual curation process.

## Contact

N/A