---
license: mit
---

# OmniSat: Self-Supervised Modality Fusion for Earth Observation (ECCV 2024)

[Guillaume Astruc](https://gastruc.github.io/), [Nicolas Gonthier](https://ngonthier.github.io/), [Clement Mallet](https://www.umr-lastig.fr/clement-mallet/), [Loic Landrieu](https://loiclandrieu.com/)

Official models for [_OmniSat: Self-Supervised Modality Fusion for Earth Observation_](https://arxiv.org/pdf/2404.08351.pdf)

## Abstract

We introduce OmniSat, a novel architecture that exploits the spatial alignment between multiple EO modalities to learn expressive multimodal representations without labels. We demonstrate the advantages of combining modalities of different natures across three downstream tasks (forestry, land cover classification, and crop mapping), and propose two augmented datasets with new modalities: PASTIS-HD and TreeSatAI-TS.
For more details and results, please check out our [GitHub repository](https://github.com/gastruc/OmniSat) and [project page](https://gastruc.github.io/projects/omnisat.html).

<p align="center">
  <img src="https://github.com/gastruc/OmniSat/assets/1902679/9fc20951-1cac-4891-b67f-53ed5e0675ad" width="800" height="400">
</p>

## Datasets

| Dataset name | Modalities | Labels | Link |
| ------------- | ---------------------------------------- | ------------------- | ------------------- |
| PASTIS-HD | **SPOT 6-7 (1m)** + S1/S2 (30-140 / year) | Crop mapping (0.2m) | [huggingface](https://huggingface.co/datasets/IGNF/PASTIS-HD) or [zenodo](https://zenodo.org/records/10908628) |
| TreeSatAI-TS | Aerial (0.2m) + **S1/S2 (10-70 / year)** | Forestry (60m) | [huggingface](https://huggingface.co/datasets/IGNF/TreeSatAI-Time-Series) |
| FLAIR | Aerial (0.2m) + S2 (20-114 / year) | Land cover (0.2m) | [huggingface](https://huggingface.co/datasets/IGNF/FLAIR) |

<p align="center">
  <img src="https://github.com/user-attachments/assets/18acbb19-6c90-4c9a-be05-0af24ded2052" width="800" height="400">
</p>
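
The datasets can be fetched directly from the Hugging Face Hub. As a minimal sketch (assuming `huggingface_hub` is installed; the local path is arbitrary):

```python
from huggingface_hub import snapshot_download

# Download the PASTIS-HD dataset repository to a local folder
local_dir = snapshot_download(
    repo_id="IGNF/PASTIS-HD",
    repo_type="dataset",
    local_dir="./data/PASTIS-HD",
)
```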

### Inference 🔥

To load our pretrained models, you can run:

```python
from models.huggingface import AnySat

# Load the pretrained weights; "small" and "tiny" sizes also exist
model = AnySat(size="base", pretrained=True)
```

To get features from a single observation or a batch of observations, you need to provide the model with a dictionary whose keys come from the following list:

- "aerial": Single date tensor (Bx4xHxW) with 4 channels (RGB NiR), 0.2m resolution
- "aerial-flair": Single date tensor (Bx5xHxW) with 5 channels (RGB NiR Elevation), 0.2m resolution
- "spot": Single date tensor (Bx3xHxW) with 3 channels (RGB), 1m resolution
- "naip": Single date tensor (Bx4xHxW) with 4 channels (RGB NiR), 1.25m resolution
- "s2": Time series tensor (BxTx10xHxW) with 10 channels (B2-B8, B8a, B11, B12), 10m resolution
- "s1-asc": Time series tensor (BxTx2xHxW) with 2 channels (VV VH), 10m resolution
- "s1": Time series tensor (BxTx3xHxW) with 3 channels, 10m resolution
- "alos": Time series tensor (BxTx3xHxW) with 3 channels, 30m resolution
- "l7": Time series tensor (BxTx6xHxW) with 6 channels, 30m resolution
- "l8": Time series tensor (BxTx11xHxW) with 11 channels, rescaled to 10m resolution
- "modis": Time series tensor (BxTx7xHxW) with 7 channels, 250m resolution

Each time series key requires a matching "{key}_dates" tensor (for example "s2_dates") of size BxT, whose integer values give the day of the year of each observation.
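
As a minimal sketch of such an input (PyTorch is assumed; the batch size, number of dates, and the 60m tile extent are illustrative assumptions, and values are random):

```python
import torch

B, T = 2, 12  # batch size and number of acquisitions (illustrative)

# Hypothetical batch mixing one single-date and one time series modality.
# Spatial sizes are chosen so both cover the same 60m extent:
# 300 px at 0.2m and 6 px at 10m.
data = {
    "aerial": torch.rand(B, 4, 300, 300),       # RGB NiR at 0.2m
    "s2": torch.rand(B, T, 10, 6, 6),           # 10 S2 bands at 10m
    "s2_dates": torch.randint(0, 365, (B, T)),  # day of year per acquisition
}
```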

Then, you can run:

```python
# Call the model instance loaded above to extract features
features = model(data)
```

You can then apply those features to your desired downstream task.
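
For example, a hypothetical linear probe for classification (the feature dimension `D` and number of classes are assumptions, not part of the released API):

```python
import torch.nn as nn

D, num_classes = 768, 15  # assumed feature dimension and label count
probe = nn.Linear(D, num_classes)
logits = probe(features)  # valid if features has shape (B, D)
```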

To reproduce results, add new modalities, or run more experiments, see the full code on [GitHub](https://github.com/gastruc/AnySat).

### Citing 💫

```bibtex
@article{astruc2024omnisat,
  title={OmniSat: Self-Supervised Modality Fusion for Earth Observation},
  author={Astruc, Guillaume and Gonthier, Nicolas and Mallet, Clement and Landrieu, Loic},
  journal={arXiv preprint arXiv:2404.08351},
  year={2024}
}
```