hackelle committed on
Commit b3406cb
1 Parent(s): b3b2c10

Upload mixer_b16_224-all-v0.1.1

Files changed (2)
  1. README.md +5 -109
  2. model.safetensors +1 -1
README.md CHANGED
@@ -1,113 +1,9 @@
 ---
- thumbnail: "https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png"
 tags:
- - mixer_b16_224
- - BigEarthNet v2.0
- - Remote Sensing
- - Classification
- - image-classification
- - Multispectral
- library_name: configilm
- license: mit
- widget:
- - src: example.png
-   example_title: Example
-   output:
-   - label: Agro-forestry areas
-     score: 0.485380
-   - label: Arable land
-     score: 0.609600
-   - label: Beaches, dunes, sands
-     score: 0.064967
-   - label: Broad-leaved forest
-     score: 0.581914
-   - label: Coastal wetlands
-     score: 0.082138
 ---

- [TU Berlin](https://www.tu.berlin/) | [RSiM](https://rsim.berlin/) | [DIMA](https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/) | [BigEarth](http://www.bigearth.eu/) | [BIFOLD](https://bifold.berlin/)
- :---:|:---:|:---:|:---:|:---:
- <a href="https://www.tu.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/tu-berlin-logo-long-red.svg" style="font-size: 1rem; height: 2em; width: auto" alt="TU Berlin Logo"/> | <a href="https://rsim.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png" style="font-size: 1rem; height: 2em; width: auto" alt="RSiM Logo"> | <a href="https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/DIMA.png" style="font-size: 1rem; height: 2em; width: auto" alt="DIMA Logo"> | <a href="http://www.bigearth.eu/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BigEarth.png" style="font-size: 1rem; height: 2em; width: auto" alt="BigEarth Logo"> | <a href="https://bifold.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BIFOLD_Logo_farbig.png" style="font-size: 1rem; height: 2em; width: auto; margin-right: 1em" alt="BIFOLD Logo">
-
- # Mixer_b16_224 pretrained on BigEarthNet v2.0 using Sentinel-1 & Sentinel-2 bands
-
- <!-- Optional images -->
- <!--
- [Sentinel-1](https://sentinel.esa.int/web/sentinel/missions/sentinel-1) | [Sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2)
- :---:|:---:
- <a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-1"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_2.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-2 Satellite"/> | <a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-2"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_1.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-1 Satellite"/>
- -->
-
- This model was trained on the BigEarthNet v2.0 (also known as reBEN) dataset using the Sentinel-1 & Sentinel-2 bands.
- It was trained using the following parameters:
- - Number of epochs: up to 100
-   - with early stopping
-   - after 5 epochs of no improvement
-   - based on validation average precision (macro)
-   - the weights published in this model card were obtained after 14 training epochs
- - Batch size: 512
- - Learning rate: 0.001
- - Dropout rate: 0.15
- - Drop Path rate: 0.15
- - Learning rate scheduler: LinearWarmupCosineAnnealing for 1000 warmup steps
- - Optimizer: AdamW
- - Seed: 42
-
- The model was trained using the training script of the
- [official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts).
- See that repository for more details on how to train the model with the parameters given above.
-
- ![[BigEarthNet](http://bigearth.net/)](https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/combined_2000_600_2020_0_wide.jpg)
-
- The model was evaluated on the test set of the BigEarthNet v2.0 dataset with the following results:
-
- | Metric | Value Macro | Value Micro |
- |:------------------|------------------:|------------------:|
- | Average Precision | 0.678691 | 0.849189 |
- | F1 Score | 0.633736 | 0.755143 |
- | Precision | 0.684609 | 0.777681 |
-
- # Example
- | Example Input (only RGB bands from Sentinel-2) |
- |:---------------------------------------------------:|
- | ![[BigEarthNet](http://bigearth.net/)](example.png) |
-
- | Example Output - Labels | Example Output - Scores |
- |:--------------------------------------------------------------------------|--------------------------------------------------------------------------:|
- | <p> Agro-forestry areas <br> Arable land <br> Beaches, dunes, sands <br> ... <br> Urban fabric </p> | <p> 0.485380 <br> 0.609600 <br> 0.064967 <br> ... <br> 0.230927 </p> |
-
-
- To use the model, download the code that defines the model architecture from the
- [official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts) and load the model using the
- code below. Note that you have to install `configilm` to use the provided code.
-
- ```python
- from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
-
- model = BigEarthNetv2_0_ImageClassifier.from_pretrained("path_to/huggingface_model_folder")
- ```
-
- e.g.
-
- ```python
- from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
-
- model = BigEarthNetv2_0_ImageClassifier.from_pretrained(
-     "BIFOLD-BigEarthNetv2-0/BENv2-mixer_b16_224-all-v0.1.1")
- ```
-
- If you use this model in your research or the provided code, please cite the following papers:
- ```bibtex
- CITATION FOR DATASET PAPER
- ```
- ```bibtex
- @article{hackel2024configilm,
-   title={ConfigILM: A general purpose configurable library for combining image and language models for visual question answering},
-   author={Hackel, Leonard and Clasen, Kai Norman and Demir, Beg{\"u}m},
-   journal={SoftwareX},
-   volume={26},
-   pages={101731},
-   year={2024},
-   publisher={Elsevier}
- }
- ```
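
The removed card describes the optimizer and learning-rate schedule only in prose. As a rough sketch of that configuration (AdamW, learning rate 0.001, linear warmup for 1000 steps followed by cosine annealing), the snippet below uses a plain PyTorch `LambdaLR`; it is not the training code from the reben repository, and the model and total step count are placeholders.

```python
import math

import torch

# Placeholder module; the actual model is the mixer_b16_224 classifier from the reben repository.
model = torch.nn.Linear(120, 19)

# AdamW with lr=0.001, as listed in the removed card.
optimizer = torch.optim.AdamW(model.parameters(), lr=0.001)

warmup_steps = 1000   # "LinearWarmupCosineAnnealing for 1000 warmup steps"
total_steps = 20_000  # placeholder; the real value depends on dataset size and batch size

def warmup_then_cosine(step: int) -> float:
    """Linear warmup to the base learning rate, then cosine annealing towards zero."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup_then_cosine)
# In a training loop, call optimizer.step() and then scheduler.step() once per batch.
```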
 
 ---
 tags:
+ - model_hub_mixin
+ - pytorch_model_hub_mixin
 ---

+ This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
+ - Library: [More Information Needed]
+ - Docs: [More Information Needed]
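
Although the uploaded card replaces the usage instructions, the checkpoint in this commit can still be loaded as described in the removed card, since the classifier exposes `from_pretrained` through the PyTorchModelHubMixin integration. A minimal sketch, assuming `configilm` is installed and the model definition from the reben-training-scripts repository is importable:

```python
# Class name and repository id are taken from the removed card above.
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier

model = BigEarthNetv2_0_ImageClassifier.from_pretrained(
    "BIFOLD-BigEarthNetv2-0/BENv2-mixer_b16_224-all-v0.1.1"
)
model.eval()  # switch to inference mode
```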
 
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:7b6a12485d61a109d82b9f28aab5b0c9583baa095f9888fe379352737ad29ec9
 size 238173812

 version https://git-lfs.github.com/spec/v1
+ oid sha256:0ffd187cef1f35aca4a037c80c0d8d9d1c8f71437985b2d488dd24295b3f9065
 size 238173812
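
The safetensors diff above only swaps the LFS object id; the file size is unchanged. To confirm that a downloaded `model.safetensors` matches this commit, one option (the local file path is an assumption) is to compare its SHA-256 digest with the new oid:

```python
import hashlib

# New oid from the LFS pointer in this commit.
EXPECTED_SHA256 = "0ffd187cef1f35aca4a037c80c0d8d9d1c8f71437985b2d488dd24295b3f9065"

sha = hashlib.sha256()
with open("model.safetensors", "rb") as f:  # assumed path to the downloaded checkpoint
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)

assert sha.hexdigest() == EXPECTED_SHA256, "model.safetensors does not match this commit"
```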