---
license: apache-2.0
size_categories:
- 100B<n<1T
---
# **PDM-Lite Dataset for CARLA Leaderboard 2.0**
## Description
[PDM-Lite](https://github.com/OpenDriveLab/DriveLM/tree/DriveLM-CARLA/pdm_lite) is a state-of-the-art rule-based expert system for autonomous urban driving in [CARLA Leaderboard 2.0](https://leaderboard.carla.org/get_started/), and the first to successfully navigate all scenarios.

This dataset was used to create the QA dataset for [DriveLM-CARLA](https://github.com/OpenDriveLab/DriveLM/tree/DriveLM-CARLA), a benchmark for evaluating end-to-end autonomous driving algorithms with Graph Visual Question Answering (GVQA). DriveLM introduces GVQA as a novel approach that models perception, prediction, and planning through interconnected question-answer pairs, mimicking human reasoning processes.

Additionally, this dataset was used for training [Transfuser++](https://kashyap7x.github.io/assets/pdf/students/Zimmerlin2024.pdf) with imitation learning, which achieved 1st place (map track) and 2nd place (sensor track) in the [CARLA Autonomous Driving Challenge 2024](https://opendrivelab.com/challenge2024/#carla). It builds upon the [PDM-Lite](https://github.com/OpenDriveLab/DriveLM/tree/DriveLM-CARLA/pdm_lite) expert, incorporating enhancements from "[Tackling CARLA Leaderboard 2.0 with End-to-End Imitation Learning](https://kashyap7x.github.io/assets/pdf/students/Zimmerlin2024.pdf)".

For more information and a script for downloading and unpacking, visit our [GitHub](https://github.com/OpenDriveLab/DriveLM/tree/DriveLM-CARLA).
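The GitHub repository above hosts the official download and unpacking script. Purely as an illustrative alternative, the minimal sketch below mirrors the dataset files with the `huggingface_hub` Python client; the repository id and target directory are placeholders, not the official names.

```python
# Illustrative only: fetch the dataset files with the Hugging Face Hub client.
# The repo_id and local_dir below are placeholders; the official download and
# unpacking script lives in the DriveLM GitHub repository linked above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="<org>/<pdm-lite-carla-dataset>",  # placeholder, not the real id
    repo_type="dataset",
    local_dir="./pdm_lite_data",
)
```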
## Dataset Features
- **High-Quality Data:** 5,134 routes with 100% route completion and zero infractions across 8 towns, sampled at 2 Hz, totaling 214,631 frames
- **Diverse Scenarios:** Covers 38 complex scenarios, including urban traffic, participants violating traffic rules, and high-speed highway driving
- **Focused Evaluation:** Short routes averaging 160 m in length
## Data Modalities
- **BEV Semantics Map:** 512x512 pixels, centered on the ego vehicle, 2 pixels per meter resolution (see the geometry sketch after this list)
- **Image Data:** 1024x512 pixels, RGB images, semantic segmentation, and depth information
- **Lidar Data:** Detailed lidar point clouds with 600,000 points per second
- **Augmented Data:** Augmented versions of RGB, semantic, depth, and lidar data
- **Simulator Data:** Comprehensive information on nearby objects
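
As a rough illustration of the BEV map geometry listed above (512x512 pixels at 2 pixels per meter, i.e. roughly ±128 m around the ego vehicle), the sketch below converts an ego-relative metric offset to pixel indices. The axis convention used here (x forward, y right, row 0 at the top of the map) is an assumption for illustration only; consult the GitHub repository for the exact layout used in the data.

```python
BEV_SIZE = 512          # pixels per side (from this card)
PIXELS_PER_METER = 2.0  # resolution (from this card) -> map covers ~256 m x 256 m

def ego_offset_to_bev_pixel(x_forward_m: float, y_right_m: float) -> tuple[int, int]:
    """Map an ego-relative offset in meters to (row, col) pixel indices.

    Assumed convention (not specified on this card): the ego vehicle sits at
    the center of the map, x points forward (towards row 0) and y points right.
    """
    row = int(round(BEV_SIZE / 2 - x_forward_m * PIXELS_PER_METER))
    col = int(round(BEV_SIZE / 2 + y_right_m * PIXELS_PER_METER))
    return row, col

# Example: a point 20 m ahead and 5 m to the right of the ego vehicle.
print(ego_offset_to_bev_pixel(20.0, 5.0))  # -> (216, 266)
```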
## License and Citation
Apache 2.0 license unless specified otherwise.
```bibtex
@inproceedings{sima2024drivelm,
  title={DriveLM: Driving with Graph Visual Question Answering},
  author={Chonghao Sima and Katrin Renz and Kashyap Chitta and Li Chen and Hanxue Zhang and Chengen Xie and Jens Beißwenger and Ping Luo and Andreas Geiger and Hongyang Li},
  booktitle={European Conference on Computer Vision},
  year={2024},
}

@misc{Beißwenger2024PdmLite,
  title = {{PDM-Lite}: A Rule-Based Planner for CARLA Leaderboard 2.0},
  author = {Bei{\ss}wenger, Jens},
  howpublished = {\url{https://github.com/OpenDriveLab/DriveLM/blob/DriveLM-CARLA/docs/report.pdf}},
  year = {2024},
  school = {University of Tübingen},
}
```