zl3466 committed: Update README.md
Commit e6eae55 • 1 parent: 60f953a
Files changed (1): README.md (+50 -11)

README.md CHANGED
@@ -4,37 +4,76 @@ tags:
  - Autonomous Driving
  - Computer Vision
  ---
- # Open MARS Dataset
+
+ # Multiagent Multitraversal Multimodal Self-Driving: Open MARS Dataset
+ [Yiming Li](https://roboticsyimingli.github.io/),
+ [Zhiheng Li](https://zl3466.github.io/),
+ [Nuo Chen](),
+ [Moonjun Gong](https://moonjungong.github.io/),
+ [Zonglin Lyu](),
+ [Zehong Wang](),
+ [Peili Jiang](),
+ [Chen Feng](https://engineering.nyu.edu/faculty/chen-feng)
+
+ [Paper](https://arxiv.org/abs/2406.09383)
+
+ [Tutorial](#tutorial)
+
+ Check out our [project website](https://ai4ce.github.io/MARS/) for demo videos.
+ Code to reproduce the videos is available in the `/visualize` folder of the `main` branch.

  ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/66651bd4e4be2069a695e5a1/ooi8v0KOUhWYDbqbfLkVG.jpeg)

  <br/>

+ # News
+
+ - [2024/06] Both Multiagent and Multitraversal subsets are now available for download on [huggingface](https://huggingface.co/datasets/ai4ce/MARS).
+
+ - [2024/06] The preprint is available on [arXiv](https://arxiv.org/abs/2406.09383).
+
+ - [2024/02] Our paper has been accepted at CVPR 2024 🎉🎉🎉
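
The subsets can be fetched with the official `huggingface_hub` client. Below is a minimal sketch, not the authors' documented procedure; the `allow_patterns` filter is a hypothetical illustration, so check the repository file listing for the actual folder layout.

```python
# Minimal download sketch using huggingface_hub (pip install huggingface_hub).
# repo_id comes from the dataset URL above; the allow_patterns value is
# hypothetical -- inspect the repo file listing for the real folder names.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="ai4ce/MARS",
    repo_type="dataset",              # dataset repo, not a model repo
    allow_patterns=["Multiagent/*"],  # hypothetical filter; omit to download everything
)
print(f"Downloaded to: {local_path}")
```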
- ## Welcome to the tutorial of Open MARS Dataset!

- Our paper has been accepted on CVPR 2024 🎉🎉🎉

- Checkout our [project website](https://ai4ce.github.io/MARS/) for demo videos.
- Codes to reproduce the videos are available in `/visualize` folder of our [github repo](https://github.com/ai4ce/MARS).

  <br/>

- ## Intro
- ### The MARS dataset is collected with a fleet of autonomous vehicles from [MayMobility](https://maymobility.com/).
+ # Abstract
+ In collaboration with the self-driving company [May Mobility](https://maymobility.com/), we present the MARS dataset, which unifies scenarios that enable multiagent, multitraversal, and multimodal autonomous vehicle research.

- Our dataset uses the same structure as the [NuScenes](https://www.nuscenes.org/nuscenes) Dataset:
+ MARS is collected with a fleet of autonomous vehicles driving within a certain geographical area. Each vehicle has its own route, and different vehicles may appear at nearby locations. Each vehicle is equipped with a LiDAR and surround-view RGB cameras.
+
+ We curate two subsets in MARS: one facilitates collaborative driving with multiple vehicles simultaneously present at the same location, and the other enables memory retrospection through asynchronous traversals of the same location by multiple vehicles. We conduct experiments in place recognition and neural reconstruction. More importantly, MARS introduces new research opportunities and challenges such as multitraversal 3D reconstruction, multiagent perception, and unsupervised object discovery.
+
+ #### Our dataset uses the same structure as the [NuScenes](https://www.nuscenes.org/nuscenes) Dataset:

  - Multitraversal: each location is saved as one NuScenes object, and each traversal is one scene.
  - Multiagent: the whole set is a NuScenes object, and each multiagent encounter is one scene.
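
As a concrete illustration of this layout, here is a hedged sketch using the standard `nuscenes-devkit`; the `version` string and `dataroot` paths are placeholders, not the dataset's documented values.

```python
# Sketch of how the two subsets map onto NuScenes objects (nuscenes-devkit).
# The version and dataroot values are placeholders -- substitute the folder
# names that ship with the subset you downloaded.
from nuscenes.nuscenes import NuScenes

# Multitraversal: one NuScenes object per location; each scene is one traversal.
location = NuScenes(version="v1.0", dataroot="/path/to/Multitraversal/location_01", verbose=True)
for scene in location.scene:  # each record describes one traversal of this location
    print(scene["name"], scene["nbr_samples"])

# Multiagent: a single NuScenes object for the whole subset;
# each scene is one multiagent encounter.
encounters = NuScenes(version="v1.0", dataroot="/path/to/Multiagent", verbose=True)
print(f"{len(encounters.scene)} multiagent encounters")
```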

  <br/>

- ## Download
- Both Multiagent and Multitraversal subsets are now available for [download on huggingface](https://huggingface.co/datasets/ai4ce/MARS).
+ # License
+ [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
+
+ <br/>
+
+ # Bibtex
+
+ ```
+ @InProceedings{Li_2024_CVPR,
+     author    = {Li, Yiming and Li, Zhiheng and Chen, Nuo and Gong, Moonjun and Lyu, Zonglin and Wang, Zehong and Jiang, Peili and Feng, Chen},
+     title     = {Multiagent Multitraversal Multimodal Self-Driving: Open MARS Dataset},
+     booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
+     month     = {June},
+     year      = {2024},
+     pages     = {22041-22051}
+ }
+ ```

  <br/>

- ## Overview
+ # Tutorial
  This tutorial explains how the NuScenes structure works in our dataset, including how you may access a scene and query its samples of sensor data.
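
As a preview of the access pattern covered below, here is a hedged sketch with the `nuscenes-devkit`; paths and the sensor channel name are placeholders, while the `scene`/`sample`/`sample_data` tables are the standard NuScenes schema.

```python
# Preview sketch: open one scene and walk its keyframe samples (nuscenes-devkit).
# version/dataroot are placeholders; "LIDAR_TOP" is the stock NuScenes channel
# name, and the actual channel names in MARS may differ.
from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version="v1.0", dataroot="/path/to/subset", verbose=True)

scene = nusc.scene[0]                  # one traversal or one multiagent encounter
token = scene["first_sample_token"]
while token:                           # samples form a linked list of keyframes
    sample = nusc.get("sample", token)
    lidar = nusc.get("sample_data", sample["data"]["LIDAR_TOP"])
    print(sample["timestamp"], lidar["filename"])
    token = sample["next"]             # empty string after the last sample
```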

  - [Devkit Initialization](#initialization)