|
# Dronescapes Experts dataset |
|
|
|
This dataset is an extension of the original [dronescapes dataset](https://huggingface.co/datasets/Meehai/dronescapes) with new modalities generated 100% from scratch using VRE (i.e. pretrained experts). The only data that VRE cannot generate is the ground truth: semantic segmentation (human annotated) and depth & normals (SfM), which are inherited from the original dataset for evaluation purposes only.
|
|
|
 |
|
|
|
# 1. Downloading the data |
|
|
|
## Option 1. Download the pre-processed dataset from HuggingFace repository |
|
|
|
```bash
|
git lfs install # Make sure you have git-lfs installed (https://git-lfs.com) |
|
git clone https://huggingface.co/datasets/Meehai/dronescapes |
|
``` |
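
If you only need part of the dataset (for example a single split), a partial download avoids pulling every modality. Below is a minimal sketch using the `huggingface_hub` Python library; the `allow_patterns` value is an illustrative assumption, adjust it to the folders you actually need.

```python
# Sketch: download only a subset of the dataset instead of cloning the full repository.
# Requires: pip install huggingface_hub
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Meehai/dronescapes",
    repo_type="dataset",
    local_dir="dronescapes",
    allow_patterns=["data/test_set_annotated_only/*"],  # assumed folder; adjust as needed
)
```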
|
|
|
## Option 2. Generate all the modalities from raw videos |
|
|
|
Follow the instructions in [this file](./vre_dronescapes/commands.txt).
|
|
|
Note: you can generate all the data except `semantic_segprop8` (human annotated) and `depth_sfm_manual202204` / `normals_sfm_manual202204` (produced with an SfM tool).
|
|
|
# 2. Using the data
|
|
|
The data follows the split defined in the paper:
|
|
|
<img src="split.png" width="500px">
|
|
|
The data lives under `data/*` if you used `git clone`; the layout is identical if you downloaded it directly from HuggingFace.
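
For a quick sanity check without extra tooling, a small script along the lines below can list the available modalities and inspect a stored frame. The split name, the directory layout (`data/<split>/<modality>/*.npz`) and the `.npz` keys are assumptions based on the original dronescapes dataset; the viewer notebook in the next section is the supported way to read the data.

```python
# Sketch: inspect one split on disk (assumed layout: data/<split>/<modality>/*.npz).
from pathlib import Path
import numpy as np

split_dir = Path("data/test_set_annotated_only")  # assumed split name; check data/ for the real ones
modalities = sorted(p.name for p in split_dir.iterdir() if p.is_dir())
print("modalities:", modalities)

# Load the first frame of the first modality. The key inside the .npz is not guaranteed,
# so list the available keys and take the first one.
first_file = sorted((split_dir / modalities[0]).glob("*.npz"))[0]
npz = np.load(first_file, allow_pickle=True)
print("keys:", npz.files)
arr = npz[npz.files[0]]
print(first_file.name, arr.shape, arr.dtype)
```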
|
|
|
## 2.1 Using the provided viewer |
|
|
|
The simplest way to explore the data is to use the [provided notebook](scripts/dronescapes_viewer/dronescapes_viewer.ipynb). Upon running it, you should get a collage with all the default tasks, like the picture at the top.
|
|
|
For a CLI-only method, you can use the VRE reader as well: |
|
|
|
```bash |
|
vre_reader data/test_set_annotated_only/ --config_path vre_dronescapes/cfg.yaml -I vre_dronescapes/semantic_mapper.py:get_new_semantic_mapped_tasks |
|
``` |
|
|
|
# 3. Evaluation
|
|
|
See the original [dronescapes evaluation description & benchmark](https://huggingface.co/datasets/Meehai/dronescapes#3-evaluation-for-semantic-segmentation) for details.
|
|