# Convnextv2_base pretrained on BigEarthNet v2.0 using Sentinel-1 & Sentinel-2 bands
> **_NOTE:_** This version of the model was trained with a band order that is not compatible with newer model versions and does not match the order specified in the Sentinel-2 technical documentation.
>
> The following bands (in the specified order) were used to train the models with version 0.1.1:
> - For models using Sentinel-1 only: Sentinel-1 bands `["VH", "VV"]`
> - For models using Sentinel-2 only: Sentinel-2 10m bands and 20m bands `["B02", "B03", "B04", "B08", "B05", "B06", "B07", "B11", "B12", "B8A"]`
> - For models using Sentinel-1 and Sentinel-2: Sentinel-2 10m bands and 20m bands followed by Sentinel-1 bands `["B02", "B03", "B04", "B08", "B05", "B06", "B07", "B11", "B12", "B8A", "VH", "VV"]` (a band-stacking sketch for this order follows the note)
>
> Newer models are compatible with the order in the technical documentation of Sentinel-2 and were trained with the following band order:
> - For models using Sentinel-1 only: Sentinel-1 bands `["VV", "VH"]`
> - For models using Sentinel-2 only: Sentinel-2 10m bands and 20m bands `["B02", "B03", "B04", "B05", "B06", "B07", "B08", "B8A", "B11", "B12"]`
> - For models using Sentinel-1 and Sentinel-2: Sentinel-1 bands and Sentinel-2 10m bands and 20m bands `["VV", "VH", "B02", "B03", "B04", "B05", "B06", "B07", "B08", "B8A", "B11", "B12"]`
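For clarity, the sketch below is a minimal, hypothetical example of assembling an input in the v0.1.1 band order used by this model; the `stack_bands` helper and the `bands` dictionary are illustrative only (not part of the reBEN code base) and assume all bands have already been resampled to a common pixel grid.
```python
import numpy as np

# Hypothetical helper: assemble an input cube in the v0.1.1 band order used
# by this Sentinel-1 & Sentinel-2 model. `bands` is assumed to map band
# names to 2D arrays already resampled to the same spatial grid.
V0_1_1_BAND_ORDER = [
    "B02", "B03", "B04", "B08", "B05", "B06", "B07", "B11", "B12", "B8A",  # Sentinel-2
    "VH", "VV",                                                            # Sentinel-1
]

def stack_bands(bands: dict) -> np.ndarray:
    """Stack per-band 2D arrays into a (channels, height, width) array."""
    return np.stack([bands[name] for name in V0_1_1_BAND_ORDER], axis=0)
```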
This model was trained on the BigEarthNet v2.0 (also known as reBEN) dataset using
the Sentinel-1 & Sentinel-2 bands.
It was trained using the following parameters:
- Number of epochs: up to 100 (early stopping after 5 epochs without improvement in macro average precision on the validation set)
- Batch size: 512
- Learning rate: 0.001
- Dropout rate: 0.15
- Drop Path rate: 0.15
- Learning rate scheduler: LinearWarmupCosineAnnealing with 1000 warmup steps (see the sketch after this list)
- Optimizer: AdamW
- Seed: 42
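As a rough illustration of the optimizer and schedule above, the following sketch approximates LinearWarmupCosineAnnealing with plain PyTorch schedulers; the actual reBEN training scripts may implement it differently, and `total_steps` and the stand-in module are assumed placeholders.
```python
import torch

# Minimal sketch (assumptions: plain PyTorch schedulers approximate the
# LinearWarmupCosineAnnealing schedule; `total_steps` is a placeholder).
model = torch.nn.Linear(10, 19)  # stand-in module for the actual classifier
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

warmup_steps = 1_000
total_steps = 100_000  # assumed total number of optimizer steps
warmup = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1e-3, end_factor=1.0, total_iters=warmup_steps
)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=total_steps - warmup_steps
)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, cosine], milestones=[warmup_steps]
)
```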
The weights published in this model card were obtained after 15 training epochs.
For more information, please visit the [official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts), where you can find the training scripts.
![[BigEarthNet](http://bigearth.net/)](https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/combined_2000_600_2020_0_wide.jpg)
The model was evaluated on the test set of the BigEarthNet v2.0 dataset with the following results:
| Metric | Macro | Micro |
|:------------------|------------------:|------------------:|
| Average Precision | 0.692033 | 0.857302 |
| F1 Score | 0.626945 | 0.759608 |
| Precision | 0.692033 | 0.857302 |
# Example
| A Sentinel-2 image (true color representation) |
|:---------------------------------------------------:|
| ![[BigEarthNet](http://bigearth.net/)](example.png) |
| Class labels          | Predicted scores |
|:----------------------|-----------------:|
| Agro-forestry areas   |         0.028380 |
| Arable land           |         0.569226 |
| Beaches, dunes, sands |         0.148004 |
| ...                   |              ... |
| Urban fabric          |         0.016203 |
To use the model, download the code that defines the model architecture from the
[official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts) and load the model using the
code below. Note that you have to install [`configilm`](https://pypi.org/project/configilm/) to use the provided code.
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained("path_to/huggingface_model_folder")
```
e.g.
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained(
    "BIFOLD-BigEarthNetv2-0/convnextv2_base-all-v0.1.1"
)
```
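A minimal usage sketch, assuming the loaded classifier accepts a `(batch, channels, height, width)` tensor with the 12-channel v0.1.1 band order and the 120x120 pixel BigEarthNet v2.0 patch size; the random input is purely illustrative.
```python
import torch

# Illustrative only: a random tensor stands in for a real Sentinel-1 +
# Sentinel-2 patch in the v0.1.1 band order (12 channels, 120x120 pixels).
model.eval()
dummy_patch = torch.rand(1, 12, 120, 120)
with torch.no_grad():
    logits = model(dummy_patch)        # one logit per BigEarthNet class
    scores = torch.sigmoid(logits)     # multi-label probabilities
```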
If you use this model or the provided code in your research, please cite the following papers:
```bibtex
@article{clasen2024refinedbigearthnet,
title={reBEN: Refined BigEarthNet Dataset for Remote Sensing Image Analysis},
author={Clasen, Kai Norman and Hackel, Leonard and Burgert, Tom and Sumbul, Gencer and Demir, Beg{\"u}m and Markl, Volker},
year={2024},
eprint={2407.03653},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2407.03653},
}
```
```bibtex
@article{hackel2024configilm,
title={ConfigILM: A general purpose configurable library for combining image and language models for visual question answering},
author={Hackel, Leonard and Clasen, Kai Norman and Demir, Beg{\"u}m},
journal={SoftwareX},
volume={26},
pages={101731},
year={2024},
publisher={Elsevier}
}
```