---
license: cc-by-nc-nd-4.0
tags:
- Autonomous Driving
- Computer Vision
---
# Dataset Tutorial

### The MARS dataset follows the same structure as the NuScenes Dataset.

Multitraversal: each location is saved as one NuScenes object, and each traversal is one scene.

Multiagent: the whole set is one NuScenes object, and each multi-agent encounter is one scene.

<br/>

## Initialization
First, install `nuscenes-devkit` following the NuScenes repo tutorial, [Devkit setup section](https://github.com/nutonomy/nuscenes-devkit?tab=readme-ov-file#devkit-setup). The easiest way is to install via pip:
```
pip install nuscenes-devkit
```

## Usage
Import the NuScenes devkit:
```
from nuscenes.nuscenes import NuScenes
```

Multitraversal example: loading the data of location 10:
```
# The "version" variable is the name of the folder holding all .json metadata tables.
location = 10
mars_10 = NuScenes(version='v1.0', dataroot=f'/MARS_multitraversal/{location}', verbose=True)
```

Multiagent example: loading the full set:
```
mars_multiagent = NuScenes(version='v1.0', dataroot='/MARS_multiagent', verbose=True)
```
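
Since each Multitraversal location is an independent NuScenes object, several locations can be loaded in a loop. A small sketch (the location indices below are illustrative):
```
# Load several Multitraversal locations into a dict keyed by location index.
locations = [10, 11, 12]  # hypothetical location indices
mars = {loc: NuScenes(version='v1.0', dataroot=f'/MARS_multitraversal/{loc}', verbose=False)
        for loc in locations}
```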

<br/>

## Scene
To see all scenes in one set (one location of the Multitraversal set, or the whole Multiagent set), print the `scene` attribute of the loaded NuScenes object (below, `nusc` stands for whichever object you loaded, e.g. `mars_10`):
```
print(nusc.scene)
```
Output:
```
[{'token': '97hitl8ya1335v8zkixvsj3q69tgx801', 'nbr_samples': 611, 'first_sample_token': 'udrq868482482o88p9r2n8b86li7cfxx', 'last_sample_token': '7s5ogk8m9id7apixkqoh3rep0s9113xu', 'name': '2023_10_04_scene_3_maisy', 'intersection': 10, 'err_max': 20068.00981996727},
{'token': 'o858jv3a464383gk9mm8at71ai994d3n', 'nbr_samples': 542, 'first_sample_token': '933ho5988jo3hu848b54749x10gd7u14', 'last_sample_token': 'os54se39x1px2ve12x3r1b87e0d7l1gn', 'name': '2023_10_04_scene_4_maisy', 'intersection': 10, 'err_max': 23959.357933579337},
{'token': 'xv2jkx6m0o3t044bazyz9nwbe5d5i7yy', 'nbr_samples': 702, 'first_sample_token': '8rqb40c919d6n5cd553c3j01v178k28m', 'last_sample_token': 'skr79z433oyi6jljr4nx7ft8c42549nn', 'name': '2023_10_04_scene_6_mike', 'intersection': 10, 'err_max': 27593.048433048432},
{'token': '48e90c7dx401j97391g6549zmljbg0hk', 'nbr_samples': 702, 'first_sample_token': 'ui8631xb2in5la133319c5301wvx1fib', 'last_sample_token': 'xrns1rpma4p00hf39305ckol3p91x59w', 'name': '2023_10_04_scene_9_mike', 'intersection': 10, 'err_max': 24777.237891737892},
...
]
```

The scenes can then be retrieved by indexing:
```
num_of_scenes = len(nusc.scene)
my_scene = nusc.scene[0]  # scene at index 0, i.e. the first scene of this location
print(my_scene)
```
Output:
```
{'token': '97hitl8ya1335v8zkixvsj3q69tgx801',
'nbr_samples': 611,
'first_sample_token': 'udrq868482482o88p9r2n8b86li7cfxx',
'last_sample_token': '7s5ogk8m9id7apixkqoh3rep0s9113xu',
'name': '2023_10_04_scene_3_maisy',
'intersection': 10,
'err_max': 20068.00981996727}
```
- `nbr_samples`: number of samples (frames) in this scene.
- `name`: name of the scene, including the date and the vehicle it was recorded by (in this example, Oct. 4th 2023, vehicle maisy).
- `intersection`: location index.
- `err_max`: maximum time difference (in milliseconds) between camera images of the same frame in this scene.
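
As a quick illustration of these fields, here is a minimal sketch (relying only on the naming convention described above) that filters the scenes of a location by vehicle:
```
# List every traversal recorded by vehicle "maisy" at this location.
maisy_scenes = [s for s in nusc.scene if s['name'].endswith('maisy')]
for s in maisy_scenes:
    print(s['name'], '-', s['nbr_samples'], 'frames, err_max =', s['err_max'], 'ms')
```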

<br/>

## Sample
Get the first sample (frame) of one scene:
```
first_sample_token = my_scene['first_sample_token']  # get sample token
my_sample = nusc.get('sample', first_sample_token)   # get sample metadata
print(my_sample)
```

Output:
```
{'token': 'udrq868482482o88p9r2n8b86li7cfxx',
'timestamp': 1696454482883182,
'prev': '',
'next': 'v15b2l4iaq1x0abxr45jn6bi08j72i01',
'scene_token': '97hitl8ya1335v8zkixvsj3q69tgx801',
'data': {
'CAM_FRONT_CENTER': 'q9e0pgk3wiot983g4ha8178zrnr37m50',
'CAM_FRONT_LEFT': 'c13nf903o913k30rrz33b0jq4f0z7y2d',
'CAM_FRONT_RIGHT': '67ydh75sam2dtk67r8m3bk07ba0lz3ib',
'CAM_BACK_CENTER': '1n09qfm9vw65xpohjqgji2g58459gfuq',
'CAM_SIDE_LEFT': '14up588181925s8bqe3pe44d60316ey0',
'CAM_SIDE_RIGHT': 'x95k7rvhmxkndcj8mc2821c1cs8d46y5',
'LIDAR_FRONT_CENTER': '13y90okaf208cqqy1v54z87cpv88k2qy',
'IMU_TOP': 'to711a9v6yltyvxn5653cth9w2o493z4'
},
'anns': []}
```
- `prev`: token of the previous sample.
- `next`: token of the next sample.
- `data`: dict of data tokens for this sample's sensor data.
- `anns`: empty, as we do not provide annotation data at this moment.
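
Because each sample carries `prev` and `next` tokens, a scene can be walked frame by frame. A minimal sketch:
```
# Follow the `next` tokens from the first sample to the last one.
token = my_scene['first_sample_token']
while token:
    sample = nusc.get('sample', token)
    print(sample['timestamp'])
    token = sample['next']  # empty string at the last sample, which ends the loop
```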

<br/>

## Sample Data
Our sensor names differ from NuScenes' sensor names, so it is important to use the correct name when querying sensor data. Our sensor names are:
```
['CAM_FRONT_CENTER',
'CAM_FRONT_LEFT',
'CAM_FRONT_RIGHT',
'CAM_BACK_CENTER',
'CAM_SIDE_LEFT',
'CAM_SIDE_RIGHT',
'LIDAR_FRONT_CENTER',
'IMU_TOP']
```
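
For instance, a small sketch that queries the `sample_data` record of every sensor in the current sample:
```
# Fetch each sensor's sample_data record for this sample.
for channel, data_token in my_sample['data'].items():
    sd = nusc.get('sample_data', data_token)
    print(channel, sd['fileformat'], sd['filename'])
```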

---
### Camera Data
```
sensor = 'CAM_FRONT_CENTER'
sample_data_token = my_sample['data'][sensor]
FC_data = nusc.get('sample_data', sample_data_token)
print(FC_data)
```
Output:
```
{'token': 'q9e0pgk3wiot983g4ha8178zrnr37m50',
'sample_token': 'udrq868482482o88p9r2n8b86li7cfxx',
'ego_pose_token': 'q9e0pgk3wiot983g4ha8178zrnr37m50',
'calibrated_sensor_token': 'r5491t78vlex3qii8gyh3vjp0avkrj47',
'timestamp': 1696454482897062,
'fileformat': 'jpg',
'is_key_frame': True,
'height': 464,
'width': 720,
'filename': 'sweeps/CAM_FRONT_CENTER/1696454482897062.jpg',
'prev': '',
'next': '33r4265w297khyvqe033sl2r6m5iylcr',
'sensor_modality': 'camera',
'channel': 'CAM_FRONT_CENTER'}
```
- `ego_pose_token`: token of the vehicle ego pose at the time of this sample.
- `calibrated_sensor_token`: token of the sensor calibration information (e.g. distortion coefficients, camera intrinsics, sensor pose relative to the vehicle, etc.).
- `is_key_frame`: disregard; all images are marked as key frames in our dataset.
- `height`: image height in pixels.
- `width`: image width in pixels.
- `filename`: image path relative to the dataset's root folder.
- `prev`: previous data token for this sensor.
- `next`: next data token for this sensor.

All image data are already undistorted, so you can load an image any way you'd like. Here's an example using cv2:
```
import cv2

data_path, boxes, camera_intrinsic = nusc.get_sample_data(sample_data_token)
img = cv2.imread(data_path)
cv2.imshow('fc_img', img)
cv2.waitKey()
```

`get_sample_data` returns the resolved file path, the annotation boxes (empty here), and the camera intrinsic matrix:
```
('{$dataset_root}/MARS_multitraversal/10/sweeps/CAM_FRONT_CENTER/1696454482897062.jpg',
[],
array([[661.094568 , 0. , 370.6625195],
[ 0. , 657.7004865, 209.509716 ],
[ 0. , 0. , 1. ]]))
```

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66651bd4e4be2069a695e5a1/piuFfzzsBrzW4LKgKHxAJ.png)
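
The returned `camera_intrinsic` matrix can be used to project points from the camera frame into the image. A minimal pinhole-projection sketch (the point below is hypothetical):
```
import numpy as np

# Project a 3D point in the camera frame (z pointing forward) onto the image plane.
point_cam = np.array([1.0, 0.5, 10.0])  # hypothetical point 10 m in front of the camera
u, v, w = camera_intrinsic @ point_cam  # K @ [x, y, z]
print(u / w, v / w)                     # pixel coordinates after the perspective divide
```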

---
### LiDAR Data

Import the data class `LidarPointCloud` from the NuScenes devkit for convenient LiDAR point cloud loading and manipulation.

The `.pcd.bin` LiDAR data in our dataset has 5 dimensions: [x, y, z, intensity, ring].

The 5-dimensional data array is in `pcd.points`. Below is an example of visualizing the point cloud with the Open3D interactive visualizer.

```
import time

import open3d as o3d
from nuscenes.utils.data_classes import LidarPointCloud

sensor = 'LIDAR_FRONT_CENTER'
sample_data_token = my_sample['data'][sensor]
lidar_data = nusc.get('sample_data', sample_data_token)

data_path, boxes, _ = nusc.get_sample_data(sample_data_token)

pcd = LidarPointCloud.from_file(data_path)
print("5-d lidar data:\n", pcd.points)
pts = pcd.points[:3].T  # N x 3 array of x, y, z coordinates

# Open3D visualizer
vis1 = o3d.visualization.Visualizer()
vis1.create_window(
    window_name='pcd viewer',
    width=256 * 4,
    height=256 * 4,
    left=480,
    top=270)
vis1.get_render_option().background_color = [0, 0, 0]
vis1.get_render_option().point_size = 1
vis1.get_render_option().show_coordinate_frame = True

o3d_pcd = o3d.geometry.PointCloud()
o3d_pcd.points = o3d.utility.Vector3dVector(pts)

vis1.add_geometry(o3d_pcd)
while True:
    vis1.update_geometry(o3d_pcd)
    vis1.poll_events()
    vis1.update_renderer()
    time.sleep(0.005)
```

Output:
```
5-d lidar data:
[[ 3.7755847e+00 5.0539265e+00 5.4277039e+00 ... 3.1050100e+00
3.4012783e+00 3.7089713e+00]
[-6.3800979e+00 -7.9569578e+00 -7.9752398e+00 ... -7.9960880e+00
-7.9981585e+00 -8.0107889e+00]
[-1.5409404e+00 -3.2752687e-01 5.7313687e-01 ... 5.5921113e-01
-7.5427920e-01 6.6252775e-02]
[ 9.0000000e+00 1.6000000e+01 1.4000000e+01 ... 1.1000000e+01
1.8000000e+01 1.6000000e+01]
[ 4.0000000e+00 5.3000000e+01 1.0200000e+02 ... 1.0500000e+02
2.6000000e+01 7.5000000e+01]]
```

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66651bd4e4be2069a695e5a1/gxyTJM7Y45AWE9k54Q9ur.png)
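
The intensity channel (row 3 of `pcd.points`) can also be mapped onto the points as colors. A minimal sketch, continuing from the snippet above:
```
import numpy as np

# Color each point by its normalized intensity (grayscale).
intensity = pcd.points[3]
gray = (intensity - intensity.min()) / (np.ptp(intensity) + 1e-6)
o3d_pcd.colors = o3d.utility.Vector3dVector(np.stack([gray] * 3, axis=1))
```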

---
### IMU Data
IMU data in our dataset are saved as JSON files.
```
import json

sensor = 'IMU_TOP'
sample_data_token = my_sample['data'][sensor]
imu_record = nusc.get('sample_data', sample_data_token)

data_path, _, _ = nusc.get_sample_data(sample_data_token)

imu_data = json.load(open(data_path))
print(imu_data)
```

Output:
```
{'utime': 1696454482879084,
'lat': 42.28098291158676,
'lon': -83.74725341796875,
'elev': 259.40500593185425,
'vel': [0.19750464521348476, -4.99952995654127e-27, -0.00017731071625348704],
'avel': [-0.0007668623868539726, -0.0006575787383553688, 0.0007131154834496556],
'acc': [-0.28270150907337666, -0.03748669268679805, 9.785771369934082]}
```
- `lat`: GPS latitude.
- `lon`: GPS longitude.
- `elev`: GPS elevation.
- `vel`: vehicle instantaneous velocity [x, y, z] in m/s.
- `avel`: vehicle instantaneous angular velocity [x, y, z] in rad/s.
- `acc`: vehicle instantaneous acceleration [x, y, z] in m/s^2.
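
For example, the scalar speed can be recovered from `vel` (a small sketch, assuming numpy):
```
import numpy as np

speed = np.linalg.norm(imu_data['vel'])  # magnitude of the velocity vector, m/s
print(f"speed: {speed:.2f} m/s ({speed * 3.6:.2f} km/h)")
```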

---
### Vehicle and Sensor Pose
Poses are represented as a rotation and a translation.
- rotation: quaternion [w, x, y, z]
- translation: [x, y, z] in meters

Sensor-to-vehicle poses may differ between vehicles, but for each vehicle, its sensor poses remain unchanged across all scenes & samples.

The vehicle ego pose can be queried from sensor data; it is the same for all sensors in the same sample.

```
# get the vehicle ego pose at the time of this FC_data
vehicle_pose_fc = nusc.get('ego_pose', FC_data['ego_pose_token'])
print("vehicle pose: \n", vehicle_pose_fc, "\n")

# get the vehicle ego pose at the time of this lidar_data; it should match the one
# queried from FC_data, as both come from the same sample.
vehicle_pose = nusc.get('ego_pose', lidar_data['ego_pose_token'])
print("vehicle pose: \n", vehicle_pose, "\n")

# get the camera pose relative to the vehicle at the time of this sample
fc_pose = nusc.get('calibrated_sensor', FC_data['calibrated_sensor_token'])
print("CAM_FRONT_CENTER pose: \n", fc_pose, "\n")

# get the lidar pose relative to the vehicle at the time of this sample
lidar_pose = nusc.get('calibrated_sensor', lidar_data['calibrated_sensor_token'])
print("LIDAR_FRONT_CENTER pose: \n", lidar_pose)
```

Output:
```
vehicle pose:
{'token': 'q9e0pgk3wiot983g4ha8178zrnr37m50',
'timestamp': 1696454482883182,
'rotation': [-0.7174290249840286, 0.0, -0.0, -0.6966316057361065],
'translation': [-146.83352790433003, -21.327001411798392, 0.0]}

vehicle pose:
{'token': '13y90okaf208cqqy1v54z87cpv88k2qy',
'timestamp': 1696454482883182,
'rotation': [-0.7174290249840286, 0.0, -0.0, -0.6966316057361065],
'translation': [-146.83352790433003, -21.327001411798392, 0.0]}

CAM_FRONT_CENTER pose:
{'token': 'r5491t78vlex3qii8gyh3vjp0avkrj47',
'sensor_token': '1gk062vf442xsn86xo152qw92596k8b9',
'translation': [2.24715, 0.0, 1.4725],
'rotation': [0.49834929780875276, -0.4844970241435727, 0.5050790448056688, -0.5116695901338464],
'camera_intrinsic': [[661.094568, 0.0, 370.6625195], [0.0, 657.7004865, 209.509716], [0.0, 0.0, 1.0]],
'distortion_coefficient': [0.122235, -1.055498, 2.795589, -2.639154]}

LIDAR_FRONT_CENTER pose:
{'token': '6f367iy1b5c97e8gu614n63jg1f5os19',
'sensor_token': 'myfmnd47g91ijn0a7481eymfk253iwy9',
'translation': [2.12778, 0.0, 1.57],
'rotation': [0.9997984797097376, 0.009068089160690487, 0.006271772522201215, -0.016776012592418482]}
```
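
These poses are commonly composed into 4x4 homogeneous transforms, e.g. to map a LiDAR point into the global frame. A minimal sketch (assuming `pyquaternion`, which the devkit itself depends on; the point below is hypothetical):
```
import numpy as np
from pyquaternion import Quaternion

def pose_to_matrix(rotation, translation):
    """Build a 4x4 transform from a [w, x, y, z] quaternion and an [x, y, z] translation."""
    T = np.eye(4)
    T[:3, :3] = Quaternion(rotation).rotation_matrix
    T[:3, 3] = translation
    return T

# sensor frame -> vehicle frame -> global frame
T_vehicle_from_lidar = pose_to_matrix(lidar_pose['rotation'], lidar_pose['translation'])
T_global_from_vehicle = pose_to_matrix(vehicle_pose['rotation'], vehicle_pose['translation'])
T_global_from_lidar = T_global_from_vehicle @ T_vehicle_from_lidar

point_lidar = np.array([1.0, 2.0, 0.5, 1.0])  # hypothetical point in homogeneous coordinates
print(T_global_from_lidar @ point_lidar)      # the same point in the global frame
```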

<br/>

## LiDAR-Image projection
- Use the NuScenes devkit's `render_pointcloud_in_image()` method.
- The first argument is a sample token.
- Use `camera_channel` to specify the camera you'd like to project the point cloud onto.
```
nusc.render_pointcloud_in_image(my_sample['token'],
                                pointsensor_channel='LIDAR_FRONT_CENTER',
                                camera_channel='CAM_FRONT_CENTER',
                                render_intensity=False,
                                show_lidarseg=False)
```

Output:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66651bd4e4be2069a695e5a1/KV715ekDEgLt3CysI4R9S.png)