size_categories:
- 1K<n<10K
---

# Sentinel-2 Land-cover Captioning Dataset

The **Sentinel-2 Land-cover Captioning Dataset** (**S2LCD**) is a newly proposed dataset designed specifically for deep learning research on remote sensing image captioning. It comprises **1533** image patches of **224 × 224** pixels each, derived from Sentinel-2 L2A images. The patches cover a diverse range of land-cover and land-use types in temperate regions, including forests, mountains, agricultural land, and urban areas, each with varying degrees of human influence.
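
As a rough usage sketch, the snippet below shows one way the patches and their captions could be inspected with the Hugging Face `datasets` library; the repository id and the `image`/`captions` column names are placeholders rather than the dataset's confirmed schema.

```python
# A minimal sketch, assuming the dataset loads through the Hugging Face
# `datasets` library. The repository id and the "image"/"captions" column
# names are placeholders, not confirmed by this card.
from datasets import load_dataset

ds = load_dataset("username/S2LCD", split="train")  # hypothetical repository id

sample = ds[0]
print(sample["image"].size)      # each patch should be 224 x 224 pixels
print(len(sample["captions"]))   # five captions are provided per patch
```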

Each image patch is accompanied by five captions exported in COCO format, for a total of **7665** captions. The captions draw on a broad vocabulary that combines free-form natural language with the EAGLES lexicon, allowing fine-grained descriptions of land-cover detail.
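
In the standard COCO captions layout, an `images` list and an `annotations` list are linked through `image_id`, so the five captions of a patch can be grouped as sketched below; the `captions.json` file name is a placeholder, and the field names assume the usual COCO captioning schema rather than anything specific to the S2LCD export.

```python
import json
from collections import defaultdict

# Minimal sketch of reading COCO-style caption annotations.
# "captions.json" is a placeholder name; the keys below follow the
# standard COCO captions schema, which the S2LCD export is assumed to use.
with open("captions.json") as f:
    coco = json.load(f)

# Map each image id to its file name, then group the captions per image.
file_names = {img["id"]: img["file_name"] for img in coco["images"]}
captions = defaultdict(list)
for ann in coco["annotations"]:
    captions[ann["image_id"]].append(ann["caption"])

# Each S2LCD patch should end up with exactly five captions.
for image_id, caps in list(captions.items())[:3]:
    print(file_names[image_id], len(caps))
```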