---
license: mit
---
# AnySat: An Earth Observation Model for Any Resolutions, Scales, and Modalities (ArXiv 2024)
[Guillaume Astruc](https://gastruc.github.io/), [Nicolas Gonthier](https://ngonthier.github.io/), [Clement Mallet](https://www.umr-lastig.fr/clement-mallet/), [Loic Landrieu](https://loiclandrieu.com/)
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/662b7fba68ed7bbf40bfb0df/Jh9eOnMePFiL84TOzhe86.png" alt="image/png" width="600" height="300">
</p>
## Abstract
We introduce AnySat: a JEPA-based multimodal Earth Observation model trained simultaneously on diverse datasets with different scales, resolutions (spatial, spectral, temporal), and modality combinations.
For more details and results, please check out our [github](https://github.com/gastruc/AnySat) and [project page](https://gastruc.github.io/projects/omnisat.html).
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/662b7fba68ed7bbf40bfb0df/2tc0cFdOF2V0_KgptA-qV.png" alt="image/png" width="400" height="200">
</p>
### Inference 🔥
To load a pretrained model, run:
```python
from models.huggingface import AnySat

# Load pretrained weights; "small" and "tiny" variants are also available
model = AnySat(size="base", pretrained=True)
```
To get features from an observation or a batch of observations, provide the model with a dictionary whose keys come from the list below:
| Key          | Description        | Tensor size | Channels                                     | Resolution |
|--------------|--------------------|-------------|----------------------------------------------|------------|
| aerial       | Single-date tensor | Bx4xHxW     | RGB, NiR                                     | 0.2m       |
| aerial-flair | Single-date tensor | Bx5xHxW     | RGB, NiR, Elevation                          | 0.2m       |
| spot         | Single-date tensor | Bx3xHxW     | RGB                                          | 1m         |
| naip         | Single-date tensor | Bx4xHxW     | RGB, NiR                                     | 1.25m      |
| s2           | Time series tensor | BxTx10xHxW  | B2, B3, B4, B5, B6, B7, B8, B8a, B11, B12    | 10m        |
| s1-asc       | Time series tensor | BxTx2xHxW   | VV, VH                                       | 10m        |
| s1           | Time series tensor | BxTx3xHxW   | VV, VH, Ratio                                | 10m        |
| alos         | Time series tensor | BxTx3xHxW   | HH, HV, Ratio                                | 30m        |
| l7           | Time series tensor | BxTx6xHxW   | B1, B2, B3, B4, B5, B7                       | 30m        |
| l8           | Time series tensor | BxTx11xHxW  | B8, B1, B2, B3, B4, B5, B6, B7, B9, B10, B11 | 10m        |
| modis        | Time series tensor | BxTx7xHxW   | B1, B2, B3, B4, B5, B6, B7                   | 250m       |
Each time series key requires a corresponding "{key}_dates" tensor (for example "s2_dates") of size BxT, whose integer values give the day of the year of each observation.
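For example, here is a minimal sketch of a valid input batch combining aerial imagery and a Sentinel-2 time series (random tensors stand in for real observations, and the 60m tile extent is just an illustration):
```python
import torch

# One 60 m x 60 m tile observed by two modalities (random stand-in data)
data = {
    # aerial: B x 4 x H x W at 0.2 m/pixel, so 300 x 300 pixels cover 60 m
    "aerial": torch.randn(1, 4, 300, 300),
    # s2: B x T x 10 x H x W at 10 m/pixel, so 6 x 6 pixels cover 60 m
    "s2": torch.randn(1, 12, 10, 6, 6),
    # s2_dates: B x T integers giving the day of the year of each acquisition
    "s2_dates": torch.randint(0, 365, (1, 12)),
}
```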
Then, choose the scale at which to produce features. The scale argument is in meters and gives the side length of the desired patches.
Outputs consist of the concatenation of a class token and a flattened feature map in which each feature encodes a scale x scale zone.
The scale must be a multiple of 10 and must divide the spatial extent of all modalities: for example, a 60m tile admits scales of 10, 20, 30, or 60.
Then, you can run:
```python
features = model(data, scale=scale)  # scale: patch size in meters
```
And then you can apply those features to the desired downstream task!
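For instance, continuing the sketch above with scale=20 on a 60m tile, the feature map has (60/20)² = 9 patch tokens plus the class token; a linear probe on the class token is a minimal downstream head (the indexing assumes the class token comes first, and the 10-class probe is hypothetical):
```python
import torch

with torch.no_grad():
    features = model(data, scale=20)  # expected shape: B x (1 + 9) x D

cls_token = features[:, 0]  # class token, B x D (assumes it comes first)
probe = torch.nn.Linear(cls_token.shape[-1], 10)  # hypothetical 10-class probe
logits = probe(cls_token)
```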
If you want a feature map at the density of a specific modality, you can specify:
```python
features = model(data, scale=scale, keep_subpatch=True, modality_keep=modality)  # modality: name of the desired modality
```
Note that these features will have size 2*D. If several modalities share the desired resolution, pick the most informative one (or modify the code to also concatenate the other modalities).
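As a hedged sketch of how these denser features could feed a dense prediction head (the exact layout returned with keep_subpatch is defined in the repository; here we only assume the last dimension of the output is 2*D, and the 5-class head is hypothetical):
```python
import torch

with torch.no_grad():
    dense = model(data, scale=20, keep_subpatch=True, modality_keep="s2")

# Per-feature classifier over the 2*D subpatch vectors; reshaping the
# result into an H x W map depends on the layout returned by the model.
seg_head = torch.nn.Linear(dense.shape[-1], 5)
logits = seg_head(dense)
```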
Example use of AnySat:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/662b7fba68ed7bbf40bfb0df/_x2ng-3c0jvLIP3R5WEwA.png)
To reproduce results, add new modalities, or run further experiments, see the full code on [github](https://github.com/gastruc/AnySat).
### Citing 💫
```bibtex
@article{astruc2024anysat,
  title={AnySat: An Earth Observation Model for Any Resolutions, Scales, and Modalities},
  author={Astruc, Guillaume and Gonthier, Nicolas and Mallet, Clement and Landrieu, Loic},
  journal={arXiv preprint},
  year={2024}
}
```