# Dataset for Aerial Vision-and-Dialog Navigation (AVDN)
This repository contains the Aerial Vision-and-Dialog Navigation (AVDN) dataset, proposed in the paper "[Aerial Vision-and-Dialog Navigation](https://arxiv.org/pdf/2205.12219)." The dataset and associated tasks enable training and evaluation of models designed for navigation tasks guided by visual and dialog-based cues.
- **Paper**: [Aerial Vision-and-Dialog Navigation](https://arxiv.org/pdf/2205.12219)
- **Project Webpage**: [Aerial Vision-and-Dialog Navigation Project](https://sites.google.com/view/aerial-vision-and-dialog/home)
## Tasks
We introduce two tasks within the AVDN framework:
### ANDH task
The **ANDH task** involves **sub-trajectory** data, where navigation occurs over smaller segments of the overall route. Instructions for downloading the dataset related to this task are available on the project webpage.
### ANDH-Full task
The **ANDH-Full task** uses the **full trajectory** data, covering the navigation route from start to destination; **this data is provided in this repo**.
## Dataset format
The data in the CSV files provided in this repository represents the **full trajectory** data before it is split by dialog turns. The CSV format includes detailed information about each trajectory, including navigation instructions, GPS data, and path-related information.
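For a quick look at what a split's CSV holds before conversion, here is a sketch using a made-up one-row stand-in; the column names follow the field list documented below, and real files contain all of the documented fields, including the GPS and path columns:

```python
import io
import pandas as pd

# A one-row stand-in for a split CSV, using a subset of the documented
# column names (all values here are made up for illustration).
csv_text = (
    "map_name,route_index,instructions,angle,destination\n"
    '"example_map.tif",0,"Fly northeast toward the white building.",90.0,"[30.005, 120.005]"\n'
)

df = pd.read_csv(io.StringIO(csv_text))
print(list(df.columns))  # the columns available before the JSON conversion
```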
To use this data for training and inference, you'll need to convert it from CSV format to JSON format. The script below performs this conversion.
## CSV to JSON conversion script
```python
import pandas as pd
import json

# Process each CSV file split
for split in ['train', 'val_seen', 'val_unseen', 'test_unseen']:
    # Load the CSV data
    # ... (the middle of the script is elided in this diff view: it reads the
    # split's CSV and builds an `entry` dict for each row) ...
        json_data.append(entry)

    # Save the data to a JSON file
    json_output_path = f'./{split}_full_data_from_csv.json'
    with open(json_output_path, 'w') as f:
        json.dump(json_data, f, indent=4)
```
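For reference, the round trip the script sets up can be sketched with a single hypothetical entry; the field names follow the list below, and every value here is made up for illustration:

```python
import json
import os
import tempfile

# Hypothetical entry mirroring the documented fields (illustrative values only)
entry = {
    "map_name": "example_map.tif",
    "route_index": 0,
    "instructions": "Fly northeast toward the white building.",
    "gps_botm_left": [30.000, 120.000],  # [Latitude, Longitude]
    "gps_top_right": [30.010, 120.010],
    "lng_ratio": 1e-5,
    "lat_ratio": 1e-5,
    "angle": 90.0,                       # 90 degrees = North
    "destination": [30.005, 120.005],
}

# Dump exactly as the conversion script does, then load for training/inference
path = os.path.join(tempfile.mkdtemp(), "train_full_data_from_csv.json")
with open(path, "w") as f:
    json.dump([entry], f, indent=4)

with open(path) as f:
    data = json.load(f)
```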
## Explanations of the key fields in the dataset
Below are explanations of the key fields in the dataset, with all coordinates provided in the format [Latitude, Longitude]:
- **`map_name`**: The name of the satellite image file corresponding to the environment in which the navigation occurs.
- **`route_index`**: The index of the specific trajectory being followed by the agent within the map.
- **`instructions`**: Step-by-step natural language instructions guiding the agent along the route.
- **`gps_botm_left`**: GPS coordinates representing the bottom-left corner of the map's bounding box.
- **`gps_top_right`**: GPS coordinates representing the top-right corner of the map's bounding box.
- **`lng_ratio`**: The ratio used to scale the map's longitude (horizontal) distance to the corresponding pixel dimensions in the image.
- **`lat_ratio`**: The ratio used to scale the map's latitude (vertical) distance to the corresponding pixel dimensions in the image.
- **`angle`**: The initial heading direction of the drone at the start of the route. This is expressed in degrees, with 0° being East, 90° being North, 180° being West, and 270° being South.
- **`destination`**: The final destination of the trajectory, represented as a coordinate or location within the map.
- **`attention_list`**: A list of human-annotated areas that were the focus during data collection. Each area is represented by a coordinate (latitude, longitude) and a radius (in meters) defining a circular region of interest.
- **`gt_path_corners`**: A list of four corner points that represent the boundary of each view area along the route. The corners are listed in the following order: forward left corner, forward right corner, backward right corner, and backward left corner. This provides the ground truth view area at each step of the navigation.
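As a worked example of how these fields fit together (a sketch, not code from the dataset toolkit): assuming `lng_ratio` and `lat_ratio` are degrees per pixel and that pixel y grows downward from the top edge of the image, a [Latitude, Longitude] point can be mapped to pixel coordinates, and `angle` to a unit heading vector under the documented 0° = East convention:

```python
import math

def gps_to_pixel(lat, lng, gps_botm_left, gps_top_right, lat_ratio, lng_ratio):
    """Map a [lat, lng] point to (x, y) pixel coordinates on the satellite image.

    Assumes the ratios are degrees per pixel; this convention is an assumption
    and should be checked against the dataset toolkit.
    """
    x = (lng - gps_botm_left[1]) / lng_ratio  # offset from the left edge
    y = (gps_top_right[0] - lat) / lat_ratio  # offset from the top edge
    return x, y

def heading_vector(angle_deg):
    """Unit (east, north) vector for the documented convention:
    0 deg = East, 90 deg = North, 180 deg = West, 270 deg = South."""
    rad = math.radians(angle_deg)
    return math.cos(rad), math.sin(rad)

# Illustrative bounding box and ratio (values made up for the example)
botm_left = [30.000, 120.000]  # [Latitude, Longitude]
top_right = [30.010, 120.010]
ratio = 1e-5                   # hypothetical degrees per pixel

x, y = gps_to_pixel(30.005, 120.005, botm_left, top_right, ratio, ratio)
east, north = heading_vector(90.0)  # drone initially facing North
```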