# Dataset for Aerial Vision-and-Dialog Navigation (AVDN)

This repository contains the Aerial Vision-and-Dialog Navigation (AVDN) dataset, proposed in the paper "[Aerial Vision-and-Dialog Navigation](https://arxiv.org/pdf/2205.12219)." The dataset and its associated tasks support training and evaluating models for aerial navigation guided by visual observations and natural-language dialog.

- **Paper**: [Aerial Vision-and-Dialog Navigation](https://arxiv.org/pdf/2205.12219)
- **Project Webpage**: [Aerial Vision-and-Dialog Navigation Project](https://sites.google.com/view/aerial-vision-and-dialog/home)

## Tasks

We introduce two tasks within the AVDN framework:

### ANDH task
The **ANDH task** involves **sub-trajectory** data, where navigation occurs over smaller segments of the overall route. Instructions for downloading the dataset related to this task are available on the project webpage.

### ANDH-Full task
The **ANDH-Full task** uses the **full trajectory** data, covering the navigation route from start to destination. This is the data provided in this repository.

## Dataset format

The CSV files provided in this repository contain the **full trajectory** data before it is split by dialog turns. Each row describes one trajectory, including its navigation instructions, GPS information, and path-related fields.

To use this data for training and inference as currently intended, you will need to convert it from CSV to JSON format. The script below performs this conversion.

## CSV to JSON conversion script

```python
import pandas as pd
import json


def parse_field(value):
    # Fields stored as JSON strings in the CSV are decoded back into Python objects;
    # values that are already parsed are passed through unchanged.
    return json.loads(value) if isinstance(value, str) else value


# Process each CSV file split
for split in ['train', 'val_seen', 'val_unseen', 'test_unseen']:
    # Load the CSV data for this split
    df = pd.read_csv(f'./{split}_full_data.csv')

    # Collect one dictionary (JSON object) per trajectory
    json_data = []
    for _, row in df.iterrows():
        entry = {
            'pre_dialogs': [],  # placeholder for preceding dialog turns
            'map_name': row['map_name'],
            'route_index': int(row['route_index']),   # cast NumPy scalars so json.dump can serialize them
            'instructions': row['instructions'],
            'gps_botm_left': parse_field(row['gps_botm_left']),
            'gps_top_right': parse_field(row['gps_top_right']),
            'lng_ratio': float(row['lng_ratio']),
            'lat_ratio': float(row['lat_ratio']),
            'angle': float(row['angle']),
            'destination': parse_field(row['destination']),
            'attention_list': parse_field(row['attention_list']),
            'gt_path_corners': parse_field(row['gt_path_corners']),
        }
        json_data.append(entry)

    # Save the converted split to a JSON file
    json_output_path = f'./{split}_full_data_from_csv.json'
    with open(json_output_path, 'w') as f:
        json.dump(json_data, f, indent=4)
```
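After running the script, each converted split can be loaded back with the standard `json` module. A quick sanity check, using the output filename produced by the script above:

```python
import json

# Load one converted split and inspect its structure.
with open('./train_full_data_from_csv.json') as f:
    data = json.load(f)

print(f'{len(data)} full trajectories in the train split')
print(sorted(data[0].keys()))  # field names of a single trajectory entry
```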

## Explanations of the key fields in the dataset

Below are explanations of the key fields in the dataset, with all coordinates given in the format [Latitude, Longitude]; a short coordinate-conversion sketch follows the list:

- **`map_name`**: The name of the satellite image file corresponding to the environment in which the navigation occurs.
- **`route_index`**: The index of the specific trajectory being followed by the agent within the map.
- **`instructions`**: Step-by-step natural language instructions guiding the agent along the route.
- **`gps_botm_left`**: GPS coordinates representing the bottom-left corner of the map's bounding box.
- **`gps_top_right`**: GPS coordinates representing the top-right corner of the map's bounding box.
- **`lng_ratio`**: The ratio used to scale the map's longitude (horizontal) distance to the corresponding pixel dimensions in the image.
- **`lat_ratio`**: The ratio used to scale the map's latitude (vertical) distance to the corresponding pixel dimensions in the image.
- **`angle`**: The initial heading direction of the drone at the start of the route. This is expressed in degrees, with 0° being East, 90° being North, 180° being West, and 270° being South.
- **`destination`**: The final destination of the trajectory, represented as a coordinate or location within the map.
- **`attention_list`**: A list of human-annotated areas that were the focus during data collection. Each area is represented by a coordinate (latitude, longitude) and a radius (in meters) defining a circular region of interest.
- **`gt_path_corners`**: A list of four corner points that represent the boundary of each view area along the route. The corners are listed in the following order: forward left corner, forward right corner, backward right corner, and backward left corner. This provides the ground truth view area at each step of the navigation.
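The bounding-box corners (`gps_botm_left`, `gps_top_right`), the two ratios, and the `angle` field together relate GPS coordinates and headings to the satellite image. The sketch below is one possible interpretation, not part of the released tooling: it assumes the ratios express degrees per pixel, places pixel (0, 0) at the top-left corner of the bounding box, and uses illustrative helper names; verify the exact convention against the released images before relying on it.

```python
import math


def gps_to_pixel(lat, lng, gps_botm_left, gps_top_right, lat_ratio, lng_ratio):
    """Map a [latitude, longitude] point to (row, col) pixel coordinates.

    Assumption: lat_ratio / lng_ratio are degrees per pixel and pixel (0, 0) is the
    top-left corner of the bounding box (gps_top_right latitude, gps_botm_left longitude).
    """
    row = (gps_top_right[0] - lat) / lat_ratio   # rows grow southward
    col = (lng - gps_botm_left[1]) / lng_ratio   # columns grow eastward
    return row, col


def heading_to_unit_vector(angle_deg):
    """Convert the `angle` field (0° = East, 90° = North) to an (east, north) unit vector."""
    rad = math.radians(angle_deg)
    return math.cos(rad), math.sin(rad)
```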