---
license: apache-2.0
task_categories:
- zero-shot-classification
- image-classification
- image-to-text
language:
- en
tags:
- remote-sensing
- image-classification
- multimodal
pretty_name: Sentinel-2 Land-cover Captioning Dataset
size_categories:
- 1K<n<10K
---
# Sentinel-2 Land-cover Captioning Dataset
The **Sentinel-2 Land-cover Captioning Dataset** (**S2LCD**) is a newly proposed dataset specifically designed for deep learning research on remote sensing image captioning. It comprises **1533** image patches, each of size **224 × 224** pixels, derived from Sentinel-2 L2A images. The dataset ensures a diverse representation of land cover and land use types in temperate regions, including forests, mountains, agricultural lands, and urban areas, each one with varying degrees of human influence.
Each image patch is accompanied by five captions exported in COCO format, resulting in a total of **7665** captions. These captions employ a broad vocabulary that combines natural language and the EAGLES lexicon, ensuring meticulous attention to detail.
The creation of this dataset involved three key activities:
1. **Definition of the Reference Taxonomy:** Establishing a language for describing scenes extracted from satellite images. This lexicon balances the use of natural language for describing image content, ensuring caption variety, with the incorporation of specific land use and land cover domain terms. The reference taxonomy is defined and proposed by the EAGLE group (EIONET Action Group on Land Monitoring in Europe), which serves as a standard lexicon in Europe for land monitoring analyses based on satellite data.
2. **Extraction Procedure for Individual Images:** Sentinel-2 satellite images are distributed in multi-layer JP2 format, with one file per spectral band and typically very large dimensions. A critical step in the dataset construction is therefore to extract individual **224 × 224** RGB image patches from raw Sentinel-2 data. A custom Python tool was developed to this end.
3. **Caption Creation:** Annotating each extracted **224 × 224** RGB image patch with five captions, ensuring linguistic variety and the use of domain-specific terms from the EAGLE lexicon.
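The tiling part of step 2 can be sketched as follows. This is only an illustrative NumPy sketch, not the actual extraction tool: reading and compositing the JP2 spectral bands into an RGB array (e.g. with a raster library) is assumed to have already happened, and the helper name `extract_patches` is hypothetical.

```python
import numpy as np

PATCH_SIZE = 224  # patch side length used by S2LCD


def extract_patches(rgb: np.ndarray, size: int = PATCH_SIZE) -> list:
    """Tile an (H, W, 3) RGB array into non-overlapping size x size patches,
    discarding incomplete tiles at the right and bottom edges."""
    h, w, _ = rgb.shape
    patches = []
    for row in range(0, h - size + 1, size):
        for col in range(0, w - size + 1, size):
            patches.append(rgb[row:row + size, col:col + size])
    return patches


# Example: a dummy 500 x 700 "scene" yields 2 x 3 = 6 complete patches.
scene = np.zeros((500, 700, 3), dtype=np.uint8)
patches = extract_patches(scene)
```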
This dataset was introduced in our **paper/repository**: [RSDiX: Lightweight and Data-Efficient VLMs for Remote Sensing through Self-Distillation](https://github.com/NeuRoNeLab/RSDiX-CLIP). Please refer to it for further details on data collection, captioning methodology, and use cases.
<table>
<tr>
<td align="center">
<img src="S2LCD_images/S2A_L1C_20150925_N0204R065_160.tif" width="200"><br>
<strong>References:</strong><br>
<em>
Coastal hamlet near forest and water body.<br>
Water body surrounded by buildings and forest.<br>
Small coastal built-up area.<br>
Dense forestry coverage and water body.<br>
Abiotic artificial surface and forest coverage.
</em>
</td>
<td align="center">
<img src="S2LCD_images/S2A_L1C_20150925_N0204R065_751.tif" width="200"><br>
<strong>References:</strong><br>
<em>
Rocky areas with spread vegetation and sporadic snow coverage.<br>
Abiotic natural surface with permanent herbaceous and forest areas near snow coverage.<br>
Dense forest and bare soil.<br>
Wood and herbaceous coverage in mountain context.<br>
Forest area next to grasslands and snow coverage.
</em>
</td>
<td align="center">
<img src="S2LCD_images/S2A_L1C_20150925_N0204R065_363.tif" width="200"><br>
<strong>References:</strong><br>
<em>
Lake surrounded by herbaceous coverage and sporadic trees.<br>
Permanent herbaceous coverage surrounding a water body, also close to artificial abiotic surfaces.<br>
Woody areas among the grasslands near the town.<br>
Small areas of agricultural fields between grasslands with trees in line.<br>
Residential area with trees in line near grasslands and woody areas.
</em>
</td>
<td align="center">
<img src="S2LCD_images/S2A_L1C_20150925_N0204R065_2381.tif" width="200"><br>
<strong>References:</strong><br>
<em>
Airport areas and a small water body near extensive and intensive cultivated fields with the presence of scattered and isolated buildings and a low density hamlet.<br>
Isolated built-up surrounded by cultivated fields near an airport.<br>
Small water body next to an airport.<br>
Scattered buildings and a small hamlet surrounded by cultivated fields.
</em>
</td>
</tr>
</table>
## Dataset Structure
The `dataset_s2lcd.json` file contains caption annotations for Sentinel-2 image patches. Each entry in the `images` array corresponds to a single image patch and includes:
- **`filename`**: The name of the image file (e.g., `"S2A_L1C_20150925_N0204R065_2329.tif"`).
- **`imgid`**: A unique integer identifier for the image.
- **`sentences`**: A list of five captions describing the image. Each caption is stored in a dictionary under the key `raw`, containing a natural language description of land use and land cover features visible in the image.
Example structure:
```json
{
"images": [
{
"filename": "example_image.tif",
"imgid": 0,
"sentences": [
{ "raw": "Caption 1" },
{ "raw": "Caption 2" },
...
]
},
...
]
}
```
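The annotations can be loaded with a few lines of standard-library Python. The sketch below parses an inline sample following the structure above (the helper name `captions_by_image` is hypothetical); with the real file you would pass the result of `json.load(open("dataset_s2lcd.json"))` instead.

```python
import json

# Inline sample matching the structure of dataset_s2lcd.json.
sample_json = """
{
  "images": [
    {
      "filename": "example_image.tif",
      "imgid": 0,
      "sentences": [
        { "raw": "Caption 1" },
        { "raw": "Caption 2" }
      ]
    }
  ]
}
"""


def captions_by_image(data: dict) -> dict:
    """Map each image filename to its list of raw caption strings."""
    return {
        img["filename"]: [s["raw"] for s in img["sentences"]]
        for img in data["images"]
    }


data = json.loads(sample_json)
caps = captions_by_image(data)
```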
## Other Sources
You can also access this dataset on [Kaggle](https://www.kaggle.com/datasets/angelonazzaro/sentinel-2-land-cover-captioning-dataset) and on our [GitHub repository](https://github.com/NeuRoNeLab/RSDiX-CLIP).
## License
This dataset is released under the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.