Update README.md
README.md CHANGED
@@ -43,6 +43,7 @@ This tutorial explains how the NuScenes structure works in our dataset, includin
 - [Scene](#scene)
 - [Sample](#sample)
 - [Sample Data](#sample-data)
+- [Sensor Names](#sensor-names)
 - [Camera](#camera-data)
 - [LiDAR](#lidar-data)
 - [IMU](#imu-data)
@@ -78,6 +79,7 @@ nusc = NuScenes(version='v1.0', dataroot=f'/MARS_multiagent', verbose=True)
 
 <br/>
 
+
 ## Scene
 To see all scenes in one set (one location of the Multitraversal set, or the whole Multiagent set):
 ```
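The code block opened at the end of this hunk is cut off by the hunk boundary. As a rough sketch of what listing scenes looks like with the stock nuscenes-devkit (the `nusc` object comes from the `NuScenes(...)` call shown in the hunk header; `list_scenes()` and the `scene` table are standard devkit names, not necessarily the exact calls used in this README):

```
# Sketch: enumerate scenes in the loaded set with the stock nuscenes-devkit API.
from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version='v1.0', dataroot='/MARS_multiagent', verbose=True)

# Pretty-print every scene with its description and sample count.
nusc.list_scenes()

# Or iterate over the raw scene records directly.
for scene in nusc.scene:
    print(scene['name'], scene['nbr_samples'])
```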
@@ -152,6 +154,7 @@ Output:
 <br/>
 
 ## Sample Data
+### Sensor Names
 Our sensor names are different from NuScenes' sensor names. It is important that you use the correct name when querying sensor data. Our sensor names are:
 ```
 ['CAM_FRONT_CENTER',
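Since the channel names differ from stock NuScenes, a quick way to check which names are valid for a given sample is to print the keys of its `data` dictionary. A minimal sketch, assuming `my_sample` is obtained from a scene's `first_sample_token` as in the devkit tutorials (the README's own Sample section lies outside this hunk):

```
# Sketch: list the sensor channels attached to one sample, so queries use this
# dataset's channel names (e.g. 'CAM_FRONT_CENTER') rather than stock NuScenes
# names like 'CAM_FRONT'.
my_scene = nusc.scene[0]
my_sample = nusc.get('sample', my_scene['first_sample_token'])

print(list(my_sample['data'].keys()))  # valid keys for my_sample['data'][<channel>]
```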
@@ -166,6 +169,9 @@ Our sensor names are different from NuScenes' sensor names. It is important that
 
 ---
 ### Camera Data
+All image data are already undistorted.
+
+To load a piece of data, we start by querying its `sample_data` dictionary object from the metadata:
 ```
 sensor = 'CAM_FRONT_CENTER'
 sample_data_token = my_sample['data'][sensor]
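The rest of this code block lies outside the hunk. With the stock devkit API, the `sample_data` record whose `prev`/`next` fields are listed in the next hunk would be fetched roughly as below; this is a sketch, not necessarily the README's exact continuation:

```
# Sketch: fetch the sample_data record for the front-center camera.
sensor = 'CAM_FRONT_CENTER'
sample_data_token = my_sample['data'][sensor]

# Returns the metadata dict (filename, timestamp, calibrated_sensor_token,
# prev/next tokens, ...).
cam_data = nusc.get('sample_data', sample_data_token)
print(cam_data['filename'], cam_data['timestamp'])
```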
@@ -198,8 +204,12 @@ Output:
 - `prev`: previous data token for this sensor
 - `next`: next data token for this sensor
 
-
+After getting the `sample_data` dictionary, use the NuScenes devkit's `get_sample_data()` function to retrieve the data's absolute path.
+
+You may then load the image in any way you'd like. Here's an example using `cv2`:
 ```
+import cv2
+
 data_path, boxes, camera_intrinsic = nusc.get_sample_data(sample_data_token)
 img = cv2.imread(data_path)
 cv2.imshow('fc_img', img)
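Put together, the camera-loading snippet in this hunk runs end to end roughly as below; the trailing `waitKey`/`destroyAllWindows` calls are ordinary OpenCV boilerplate added here for completeness, not lines from the README:

```
import cv2

# get_sample_data() resolves the token to an absolute file path
# (plus annotation boxes and the camera intrinsics).
data_path, boxes, camera_intrinsic = nusc.get_sample_data(sample_data_token)

img = cv2.imread(data_path)   # BGR image as a numpy array
cv2.imshow('fc_img', img)
cv2.waitKey(0)                # keep the window open until a key is pressed
cv2.destroyAllWindows()
```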
@@ -228,6 +238,7 @@ The 5-dimensional data array is in `pcd.points`. Below is an example of visualiz
 
 
 ```
+import open3d as o3d
 from nuscenes.utils.data_classes import LidarPointCloud
 
 sensor = 'LIDAR_FRONT_CENTER'
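The LiDAR example is also truncated by the hunk boundary. Below is a sketch of how the newly added `open3d` import is typically used to visualize the scan, assuming `LidarPointCloud.from_file()` behaves as in the stock devkit, with x, y, z in the first three rows of `points`:

```
import open3d as o3d
from nuscenes.utils.data_classes import LidarPointCloud

sensor = 'LIDAR_FRONT_CENTER'
lidar_token = my_sample['data'][sensor]
data_path, _, _ = nusc.get_sample_data(lidar_token)

# Rows 0-2 of `points` are x, y, z (remaining rows carry intensity, etc.).
pc = LidarPointCloud.from_file(data_path)
xyz = pc.points[:3, :].T          # (N, 3) array for Open3D

# Wrap in an Open3D point cloud and open an interactive viewer.
o3d_pcd = o3d.geometry.PointCloud()
o3d_pcd.points = o3d.utility.Vector3dVector(xyz)
o3d.visualization.draw_geometries([o3d_pcd])
```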