# Physical AI Smart Spaces Dataset

## Overview

This comprehensive synthetic dataset, generated using the NVIDIA Omniverse Platform, supports advanced research in multi-target multi-camera (MTMC) tracking and 2D/3D object detection tasks. It includes synchronized videos from multiple cameras capturing diverse indoor environments, such as warehouses, hospitals, retail stores, and laboratories. Privacy is ensured, as the dataset contains no personal data.

The dataset has been officially adopted by the AI City Challenge:
- **MTMC_Tracking_2024**: Utilized at the 8th AI City Challenge Workshop at CVPR 2024.
- **MTMC_Tracking_2025**: Scheduled for the 9th AI City Challenge in 2025.
## Key Features of MTMC_Tracking_2025
- Inclusion of **3D Bounding Boxes**.
- Comprehensive **Calibration Data** (camera locations, orientations, intrinsic and extrinsic matrices).
- Multiple object classes beyond persons, including humanoids and autonomous mobile robots.
## Dataset Quantification
| Dataset | Annotation Type | Hours | Cameras | Object Classes & Counts | No. 3D Boxes | No. 2D Boxes | Total Size |
|-------------------------|-------------------------------------------------|-------------|----------------|-------------------------------------------------------------|--------------|--------------|------------|
| **MTMC_Tracking_2024** | 2D bounding boxes, multi-camera tracking IDs | 212 | 953 | Person: 2,481 | N/A | 135M | 198 GB |
| **MTMC_Tracking_2025**<br>(Train & Validation only) | 2D & 3D bounding boxes, multi-camera tracking IDs | 42 | 504 | Person: 292<br>Forklift: 13<br>NovaCarter: 28<br>Transporter: 23<br>FourierGR1T2: 6<br>AgilityDigit: 1<br>**Overall:** 363 | 8.9M | 73M | 74 GB |
## Dataset Structure
### Ground Truth Format (MOTChallenge) for `MTMC_Tracking_2024`
Annotations are provided in the following text format per line:
```
<camera_id> <obj_id> <frame_id> <xmin> <ymin> <width> <height> <xworld> <yworld>
```
- `<camera_id>`: Numeric identifier for the camera.
- `<obj_id>`: Consistent numeric identifier for each object across cameras.
- `<frame_id>`: Frame index starting from 0.
- `<xmin> <ymin> <width> <height>`: Axis-aligned bounding box coordinates in pixels (top-left origin).
- `<xworld> <yworld>`: Global coordinates (projected bottom points of objects) based on provided camera matrices.

The video file and calibration (camera matrix and homography) are provided for each camera view.
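As an illustration of the text format above, the following minimal Python sketch parses a ground-truth file into per-object tracks; the file name in the usage line is a placeholder, so point it at the actual annotation file in the release.

```python
from collections import defaultdict

def load_mtmc_2024_ground_truth(path):
    """Parse MOTChallenge-style lines of the form
    <camera_id> <obj_id> <frame_id> <xmin> <ymin> <width> <height> <xworld> <yworld>
    into per-object tracks keyed by (camera_id, obj_id)."""
    tracks = defaultdict(list)
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) != 9:
                continue  # skip blank or malformed lines
            camera_id, obj_id, frame_id = (int(v) for v in fields[:3])
            xmin, ymin, width, height = (float(v) for v in fields[3:7])
            xworld, yworld = (float(v) for v in fields[7:9])
            tracks[(camera_id, obj_id)].append({
                "frame": frame_id,
                "bbox_xywh": (xmin, ymin, width, height),  # pixels, top-left origin
                "world_xy": (xworld, yworld),              # projected bottom point
            })
    for records in tracks.values():
        records.sort(key=lambda r: r["frame"])
    return tracks

# tracks = load_mtmc_2024_ground_truth("ground_truth.txt")  # placeholder file name
```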
### Directory Structure for `MTMC_Tracking_2025`
- `videos/`: MP4 files, 1920×1080 resolution at 30 FPS.
- `ground_truth.json`: Detailed ground truth annotations (see below).
- `calibration.json`: Camera calibration and metadata.
- `map.png`: Visualization map in top-down view.
### Ground Truth Format (JSON) for `MTMC_Tracking_2025`
Annotations per frame:

```json
{
  "<frame_id>": [
    {
      "object_type": "<class_name>",
      "object_id": <int>,
      "3d_location": [x, y, z],
      "3d_bounding_box_scale": [w, l, h],
      "3d_bounding_box_rotation": [pitch, roll, yaw],
      "2d_bounding_box_visible": {
        "<camera_id>": [xmin, ymin, xmax, ymax]
      }
    }
  ]
}
```
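To make the schema concrete, the sketch below loads a `ground_truth.json` file and groups the per-frame annotations into per-object 3D trajectories; the helper name and the commented usage are illustrative, not part of the dataset tooling.

```python
import json
from collections import defaultdict

def load_3d_trajectories(gt_path):
    """Group the per-frame annotations of ground_truth.json into per-object
    3D trajectories keyed by (object_type, object_id)."""
    with open(gt_path) as f:
        frames = json.load(f)  # {"<frame_id>": [annotation, ...]}

    trajectories = defaultdict(list)
    for frame_id, annotations in frames.items():
        for ann in annotations:
            x, y, z = ann["3d_location"]
            trajectories[(ann["object_type"], ann["object_id"])].append(
                (int(frame_id), x, y, z)  # frame ids are string keys in JSON
            )

    for points in trajectories.values():
        points.sort()  # order each trajectory by frame index
    return trajectories

# Hypothetical usage: objects annotated in frame 0 and the cameras that see them.
# frames = json.load(open("ground_truth.json"))
# seen_by = {a["object_id"]: sorted(a["2d_bounding_box_visible"]) for a in frames["0"]}
```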
### Calibration Format (JSON) for `MTMC_Tracking_2025`
Contains detailed calibration metadata per sensor:

```json
{
  "calibrationType": "cartesian",
  "sensors": [
    {
      "type": "camera",
      "id": "<sensor_id>",
      "coordinates": {"x": float, "y": float},
      "scaleFactor": float,
      "translationToGlobalCoordinates": {"x": float, "y": float},
      "attributes": [
        {"name": "fps", "value": float},
        {"name": "direction", "value": float},
        {"name": "direction3d", "value": "float,float,float"},
        {"name": "frameWidth", "value": int},
        {"name": "frameHeight", "value": int}
      ],
      "intrinsicMatrix": [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]],
      "extrinsicMatrix": [[3×4 matrix]],
      "cameraMatrix": [[3×4 matrix]],
      "homography": [[3×3 matrix]]
    }
  ]
}
```
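As an illustration of how these calibration fields might be used, the sketch below projects a 3D location from `ground_truth.json` into one camera's pixel coordinates under the usual pinhole convention. It assumes `extrinsicMatrix` maps world coordinates to camera coordinates (with `cameraMatrix` presumably the product of the intrinsic and extrinsic matrices), and the sensor id in the usage line is a placeholder; verify these conventions against the released files.

```python
import json

import numpy as np

def project_to_camera(sensor, point_3d):
    """Project a 3D world point into pixel coordinates for one camera entry of
    calibration.json, assuming p ~ intrinsicMatrix @ extrinsicMatrix @ [X, Y, Z, 1]."""
    K = np.array(sensor["intrinsicMatrix"], dtype=float)   # 3x3 intrinsics
    Rt = np.array(sensor["extrinsicMatrix"], dtype=float)  # 3x4, assumed world -> camera
    X = np.append(np.asarray(point_3d, dtype=float), 1.0)  # homogeneous [X, Y, Z, 1]
    u, v, w = K @ (Rt @ X)
    return u / w, v / w                                    # pixel coordinates

with open("calibration.json") as f:                        # path relative to a scene folder
    sensors = {s["id"]: s for s in json.load(f)["sensors"] if s["type"] == "camera"}

# Hypothetical usage with a placeholder sensor id and an annotated 3d_location:
# u, v = project_to_camera(sensors["<sensor_id>"], [x, y, z])
```

The 3×3 `homography` analogously maps ground-plane points between image pixels and the top-down map, with `scaleFactor` and `translationToGlobalCoordinates` relating map coordinates to the global frame; treat those conventions as assumptions to confirm against the data as well.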
## Evaluation
- **2024 Edition**: Evaluation is based on HOTA scores on the [2024 AI City Challenge Server](https://eval.aicitychallenge.org/aicity2024). Submission is currently disabled, as the ground truth of the test set is provided with this release.
- **2025 Edition**: Evaluation system and test set forthcoming in the 2025 AI City Challenge.
## References
Please cite the following papers when using this dataset:
```bibtex
@InProceedings{Wang24AICity24,
author = {Shuo Wang and David C. Anastasiu and Zheng Tang and Ming-Ching Chang and Yue Yao and Liang Zheng and Mohammed Shaiqur Rahman and Meenakshi S. Arya and Anuj Sharma and Pranamesh Chakraborty and Sanjita Prajapati and Quan Kong and Norimasa Kobori and Munkhjargal Gochoo and Munkh-Erdene Otgonbold and Ganzorig Batnasan and Fady Alnajjar and Ping-Yang Chen and Jun-Wei Hsieh and Xunlei Wu and Sameer Satish Pusegaonkar and Yizhou Wang and Sujit Biswas and Rama Chellappa},
title = {The 8th {AI City Challenge}},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2024},
}
@misc{Wang24BEVSUSHI,
author = {Yizhou Wang and Tim Meinhardt and Orcun Cetintas and Cheng-Yen Yang and Sameer Satish Pusegaonkar and Benjamin Missaoui and Sujit Biswas and Zheng Tang and Laura Leal-Taix{\'e}},
title = {{BEV-SUSHI}: {M}ulti-target multi-camera {3D} detection and tracking in bird's-eye view},
note = {arXiv:2412.00692},
year = {2024}
}
```
## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this dataset meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
|