# Resnet18 pretrained on BigEarthNet v2.0 using Sentinel-1 & Sentinel-2 bands
This model was trained on the BigEarthNet v2.0 (also known as reBEN) dataset using the Sentinel-1 & Sentinel-2 bands.
It was trained using the following parameters:
- Number of epochs: up to 100 (with early stopping after 5 epochs of no improvement in macro-averaged validation average precision)
- Batch size: 512
- Learning rate: 0.001
- Dropout rate: 0.375
- Drop Path rate: 0.0
- Learning rate scheduler: LinearWarmupCosineAnnealing for 10_000 warmup steps
- Optimizer: AdamW
- Seed: 42
The weights published in this model card were obtained after 2 training epochs.
For more information, please visit the [official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts), where you can find the training scripts.
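The optimizer and learning-rate schedule listed above can be reproduced with standard PyTorch components. The following is a minimal sketch, assuming a linear-warmup cosine-annealing schedule implemented via `LambdaLR` and a stand-in model; it is not the actual training script, which is available in the repository linked above.

```python
import math

import torch

# Hyperparameters from the list above
LEARNING_RATE = 0.001
WARMUP_STEPS = 10_000
TOTAL_STEPS = 100_000  # placeholder; depends on dataset size, batch size, and epochs


def warmup_cosine(step: int) -> float:
    """Linear warmup followed by cosine annealing, as a multiplicative LR factor."""
    if step < WARMUP_STEPS:
        return step / max(1, WARMUP_STEPS)
    progress = min(1.0, (step - WARMUP_STEPS) / max(1, TOTAL_STEPS - WARMUP_STEPS))
    return 0.5 * (1.0 + math.cos(math.pi * progress))


model = torch.nn.Linear(10, 19)  # stand-in for the actual ResNet-18 classifier
optimizer = torch.optim.AdamW(model.parameters(), lr=LEARNING_RATE)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup_cosine)
# Call scheduler.step() once per optimization step inside the training loop.
```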
![[BigEarthNet](http://bigearth.net/)](https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/combined_2000_600_2020_0_wide.jpg)
The model was evaluated on the test set of the BigEarthNet v2.0 dataset with the following results:
| Metric | Macro | Micro |
|:------------------|------------------:|------------------:|
| Average Precision | 0.256528 | 0.255071 |
| F1 Score | 0.187317 | 0.345789 |
| Precision | 0.143085 | 0.246815 |
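Macro- and micro-averaged multi-label metrics of this kind can be computed with [`torchmetrics`](https://torchmetrics.readthedocs.io/). The snippet below is a sketch with random dummy data, assuming the 19-class BigEarthNet nomenclature and sigmoid scores as predictions; it is not the exact evaluation code used to produce the numbers above.

```python
import torch
from torchmetrics.classification import (
    MultilabelAveragePrecision,
    MultilabelF1Score,
    MultilabelPrecision,
)

NUM_CLASSES = 19  # BigEarthNet v2.0 uses the 19-class nomenclature

# Dummy sigmoid scores and multi-hot targets, for illustration only
preds = torch.rand(8, NUM_CLASSES)
targets = torch.randint(0, 2, (8, NUM_CLASSES))

for average in ("macro", "micro"):
    ap = MultilabelAveragePrecision(num_labels=NUM_CLASSES, average=average)
    f1 = MultilabelF1Score(num_labels=NUM_CLASSES, average=average)
    precision = MultilabelPrecision(num_labels=NUM_CLASSES, average=average)
    print(average, ap(preds, targets), f1(preds, targets), precision(preds, targets))
```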
# Example
| A Sentinel-2 image (true color representation) |
|:---------------------------------------------------:|
| ![[BigEarthNet](http://bigearth.net/)](example.png) |
| Class labels | Predicted scores |
|:--------------------------------------------------------------------------|--------------------------------------------------------------------------:|
| <p> Agro-forestry areas <br> Arable land <br> Beaches, dunes, sands <br> ... <br> Urban fabric </p> | <p> 0.000000 <br> 1.000000 <br> 1.000000 <br> ... <br> 0.998973 </p> |
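A score vector from the model can be turned into a label list like the one above with a simple decision threshold. The sketch below uses a hypothetical, truncated `CLASS_NAMES` list and a 0.5 threshold; the real class ordering is defined by the dataset and model configuration in the repository.

```python
import torch

# Hypothetical, truncated label list; the real ordering comes from the model configuration.
CLASS_NAMES = ["Agro-forestry areas", "Arable land", "Beaches, dunes, sands", "Urban fabric"]

scores = torch.tensor([0.000000, 1.000000, 1.000000, 0.998973])  # sigmoid outputs

predicted = [name for name, score in zip(CLASS_NAMES, scores) if score > 0.5]
print(predicted)  # ['Arable land', 'Beaches, dunes, sands', 'Urban fabric']
```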
To use the model, download the code that defines the model architecture from the
[official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts) and load the model using the
code below. Note that you have to install [`configilm`](https://pypi.org/project/configilm/) to use the provided code.
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained("path_to/huggingface_model_folder")
```
For example:
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained(
    "BIFOLD-BigEarthNetv2-0/resnet18-all-v0.1.1"
)
```
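Once loaded, the model can be applied to a tensor of stacked Sentinel-1 and Sentinel-2 bands. The sketch below is an illustration only: the channel count (here 12, i.e. 10 Sentinel-2 plus 2 Sentinel-1 bands) and the 120×120 patch size are assumptions and should be checked against the configuration in the official repository; sigmoid is applied because this is a multi-label task.

```python
import torch

from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier

model = BigEarthNetv2_0_ImageClassifier.from_pretrained(
    "BIFOLD-BigEarthNetv2-0/resnet18-all-v0.1.1"
)
model.eval()

# Placeholder input: batch of 1, 12 channels, 120x120 pixels.
# Channel count and patch size are assumptions; take the real values from the repository config.
x = torch.rand(1, 12, 120, 120)

with torch.no_grad():
    logits = model(x)               # per-class logits
    scores = torch.sigmoid(logits)  # multi-label scores in [0, 1]

print(scores.shape)
```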
If you use this model or the provided code in your research, please cite the following papers:
```bibtex
CITATION FOR DATASET PAPER
```
```bibtex
@article{hackel2024configilm,
title={ConfigILM: A general purpose configurable library for combining image and language models for visual question answering},
author={Hackel, Leonard and Clasen, Kai Norman and Demir, Beg{\"u}m},
journal={SoftwareX},
volume={26},
pages={101731},
year={2024},
publisher={Elsevier}
}
```