# Physical AI Smart Spaces Dataset

## Overview
A comprehensive, annotated dataset for multi-camera tracking and 2D/3D object detection, synthetically generated with NVIDIA Omniverse.

The dataset consists of over 250 hours of video from nearly 1,500 cameras covering indoor scenes in warehouses, hospitals, retail stores, and more. Videos are time-synchronized for tracking people across multiple cameras via feature representations; because the footage is synthetic, it contains no personal data.

## Dataset Description

### Dataset Owner(s)
NVIDIA

### Dataset Creation Date
Creation of this dataset began in December 2023. The first version was completed and released as part of the 8th AI City Challenge, held in conjunction with CVPR 2024.


### Dataset Characterization
- Data Collection Method: Synthetic
- Labeling Method: Automatic, with Isaac Sim

### Video Format
- Video Standard: MP4 (H.264)
- Video Resolution: 1080p
- Video Frame Rate: 30 FPS
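
As a quick illustration, the clips can be read frame by frame with OpenCV. This is a minimal sketch, not an official loader; the file name `videos/Camera_01.mp4` is a placeholder, and the counter mirrors the 0-based frame indices used in the annotations:

```python
import cv2  # assumes opencv-python is installed

cap = cv2.VideoCapture("videos/Camera_01.mp4")  # placeholder path
frame_id = 0  # annotation frame indices start from 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # `frame` is a 1920x1080 BGR array; look up annotations for frame_id here
    frame_id += 1
cap.release()
```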

### Ground Truth Format (MOTChallenge) for `MTMC_Tracking_2024`
Annotations are provided in the following text format per line:

```
<camera_id> <obj_id> <frame_id> <xmin> <ymin> <width> <height> <xworld> <yworld>
```

- `<camera_id>`: Numeric identifier for the camera.
- `<obj_id>`: Consistent numeric identifier for each object across cameras.
- `<frame_id>`: Frame index starting from 0.
- `<xmin> <ymin> <width> <height>`: Axis-aligned bounding box coordinates in pixels (top-left origin).
- `<xworld> <yworld>`: Global coordinates (projected bottom points of objects) based on provided camera matrices.

The video file and calibration (camera matrix and homography) are provided for each camera view.
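
For reference, a minimal Python sketch for parsing these lines (assuming whitespace-delimited fields and a placeholder file path; not an official loader):

```python
from dataclasses import dataclass

@dataclass
class GtRecord:
    camera_id: int
    obj_id: int
    frame_id: int
    xmin: float
    ymin: float
    width: float
    height: float
    xworld: float
    yworld: float

def load_mot_ground_truth(path):
    """Parse one whitespace-separated record per line into GtRecord objects."""
    records = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) != 9:
                continue  # skip malformed or empty lines
            cam, obj, frame = (int(v) for v in fields[:3])
            xmin, ymin, w, h, xw, yw = (float(v) for v in fields[3:])
            records.append(GtRecord(cam, obj, frame, xmin, ymin, w, h, xw, yw))
    return records
```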

### Directory Structure for `MTMC_Tracking_2025`
- `videos/`: Video files.
- `ground_truth.json`: Detailed ground truth annotations (see below).
- `calibration.json`: Camera calibration and metadata.
- `map.png`: Visualization map in top-down view.

### Ground Truth Format (JSON) for `MTMC_Tracking_2025`
Annotations per frame:

```json
{
  "<frame_id>": [
    {
      "object_type": "<class_name>",
      "object_id": <int>,
      "3d_location": [x, y, z],
      "3d_bounding_box_scale": [w, l, h],
      "3d_bounding_box_rotation": [pitch, roll, yaw],
      "2d_bounding_box_visible": {
        "<camera_id>": [xmin, ymin, xmax, ymax]
      }
    }
  ]
}
```
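
A minimal loading sketch in Python, using the field names from the schema above. Note that JSON object keys arrive as strings, so frame ids are sorted numerically:

```python
import json

with open("ground_truth.json") as f:
    gt = json.load(f)

# JSON keys are strings, so sort frame ids numerically before iterating.
for frame_id in sorted(gt, key=int):
    for obj in gt[frame_id]:
        x, y, z = obj["3d_location"]
        w, l, h = obj["3d_bounding_box_scale"]
        # 2d_bounding_box_visible maps camera ids to [xmin, ymin, xmax, ymax].
        for camera_id, (xmin, ymin, xmax, ymax) in obj["2d_bounding_box_visible"].items():
            print(frame_id, obj["object_id"], camera_id, xmin, ymin, xmax, ymax)
```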

### Calibration Format (JSON) for `MTMC_Tracking_2025`
Contains detailed calibration metadata per sensor:

```json
{
  "calibrationType": "cartesian",
  "sensors": [
    {
      "type": "camera",
      "id": "<sensor_id>",
      "coordinates": {"x": float, "y": float},
      "scaleFactor": float,
      "translationToGlobalCoordinates": {"x": float, "y": float},
      "attributes": [
        {"name": "fps", "value": float},
        {"name": "direction", "value": float},
        {"name": "direction3d", "value": "float,float,float"},
        {"name": "frameWidth", "value": int},
        {"name": "frameHeight", "value": int}
      ],
      "intrinsicMatrix": [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]],
      "extrinsicMatrix": [[3×4 matrix]],
      "cameraMatrix": [[3×4 matrix]],
      "homography": [[3×3 matrix]]
    }
  ]
}
```
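
As an illustration, the per-sensor matrices can be loaded and applied with NumPy. This is a minimal sketch; whether `homography` maps image pixels to ground-plane coordinates or the reverse should be verified against the data:

```python
import json
import numpy as np

with open("calibration.json") as f:
    calib = json.load(f)

sensor = calib["sensors"][0]
H = np.array(sensor["homography"])       # 3x3 ground-plane homography
K = np.array(sensor["intrinsicMatrix"])  # 3x3 camera intrinsics

def apply_homography(u, v):
    """Map a point (u, v) through H and dehomogenize the result."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]
```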

### Evaluation

- **2024 Edition**: Evaluation is based on HOTA scores at the [2024 AI City Challenge Server](https://eval.aicitychallenge.org/aicity2024). Submission is currently disabled, as the ground truths of the test set are provided with this release.
- **2025 Edition**: Evaluation system and test set forthcoming in the 2025 AI City Challenge.

## Dataset Quantification

| Dataset                 | Annotation Type                                       | Hours | Cameras | Object Classes & Counts                                       | No. 3D Boxes | No. 2D Boxes | Total Size |
|-------------------------|-------------------------------------------------|-------------|----------------|-------------------------------------------------------------|--------------|--------------|------------|
| **MTMC_Tracking_2024** | 2D bounding boxes, multi-camera tracking IDs | 212 | 953 | Person: 2,481 | N/A | 135M | 198 GB |
| **MTMC_Tracking_2025**<br>(Train & Validation only) | 2D & 3D bounding boxes, multi-camera tracking IDs | 42 | 504 | Person: 292<br>Forklift: 13<br>NovaCarter: 28<br>Transporter: 23<br>FourierGR1T2: 6<br>AgilityDigit: 1<br>**Overall:** 363 | 8.9M | 73M | 74 GB |

## References

Please cite the following papers when using this dataset:

```bibtex
@InProceedings{Wang24AICity24,
author = {Shuo Wang and David C. Anastasiu and Zheng Tang and Ming-Ching Chang and Yue Yao and Liang Zheng and Mohammed Shaiqur Rahman and Meenakshi S. Arya and Anuj Sharma and Pranamesh Chakraborty and Sanjita Prajapati and Quan Kong and Norimasa Kobori and Munkhjargal Gochoo and Munkh-Erdene Otgonbold and Ganzorig Batnasan and Fady Alnajjar and Ping-Yang Chen and Jun-Wei Hsieh and Xunlei Wu and Sameer Satish Pusegaonkar and Yizhou Wang and Sujit Biswas and Rama Chellappa},
title = {The 8th {AI City Challenge}},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
note = {arXiv:2404.09432},
month = {June},
year = {2024},
}

@misc{Wang24BEVSUSHI,
author = {Yizhou Wang and Tim Meinhardt and Orcun Cetintas and Cheng-Yen Yang and Sameer Satish Pusegaonkar and Benjamin Missaoui and Sujit Biswas and Zheng Tang and Laura Leal-Taix{\'e}},
title = {{BEV-SUSHI}: {M}ulti-target multi-camera {3D} detection and tracking in bird's-eye view},
note = {arXiv:2412.00692},
year = {2024}
}
```

## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this dataset meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).