|
--- |
|
license: bsd-3-clause |
|
language: |
|
- en |
|
--- |
|
# Scene Flow Models for Autonomous Driving Datasets
|
|
|
<p align="center"> |
|
<a href="https://github.com/KTH-RPL/OpenSceneFlow"> |
|
<picture> |
|
<img alt="opensceneflow" src="https://github.com/KTH-RPL/OpenSceneFlow/blob/main/assets/docs/logo.png?raw=true" width="600"> |
|
</picture><br> |
|
</a> |
|
</p> |
|
|
|
If you find [*OpenSceneFlow*](https://github.com/KTH-RPL/OpenSceneFlow) useful for your research, please cite [**our works**](#cite-us) and give the repository [a star](https://github.com/KTH-RPL/OpenSceneFlow) as encouragement.
|
|
|
OpenSceneFlow is a codebase for point cloud scene flow estimation. |
|
Please check the usage instructions at [KTH-RPL/OpenSceneFlow](https://github.com/KTH-RPL/OpenSceneFlow).
|
|
|
<!-- - [DeFlow](https://arxiv.org/abs/2401.16122): Supervised learning scene flow, model included is trained on Argoverse 2. |
|
- [SeFlow](https://arxiv.org/abs/2407.01702): **Self-Supervised** learning scene flow, model included is trained on Argoverse 2. Paper also reported Waymo result, the weight cannot be shared according to [Waymo Term](https://waymo.com/open/terms/). More detail discussion [issue 8](https://github.com/KTH-RPL/SeFlow/issues/8#issuecomment-2464224813). |
|
- [SSF](https://arxiv.org/abs/2501.17821): Supervised learning long-range scene flow, model included is trained on Argoverse 2. |
|
- [Flow4D](https://ieeexplore.ieee.org/document/10887254): Supervised learning 4D network scene flow, model included is trained on Argoverse 2. --> |
|
|
|
The included files and all test result reports can be found on the [v2 leaderboard](https://github.com/KTH-RPL/DeFlow/discussions/6) and the [v1 leaderboard](https://github.com/KTH-RPL/DeFlow/discussions/2).
|
* `[ModelName_best].ckpt`: the model evaluated on the public leaderboard page, either provided by the original authors or retrained by us with the best parameters.
|
* [demo_data.zip](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip): 613 MB, a mini-dataset for users to quickly run the train/val code. Check usage in [this section](https://github.com/KTH-RPL/SeFlow?tab=readme-ov-file#1-run--train).
|
* [waymo_map.tar.gz](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/waymo_map.tar.gz): needed to process Waymo data, with ground segmentation included, into the unified `.h5` file. Check usage in [this README](https://github.com/KTH-RPL/SeFlow/blob/main/dataprocess/README.md#waymo-dataset).
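
To fetch these files programmatically, a minimal sketch with the `huggingface_hub` Python package is shown below; the checkpoint name `seflow_best.ckpt` is only an illustrative placeholder following the `[ModelName_best].ckpt` pattern above.

```python
# Minimal sketch: download files from this Hugging Face repository using
# the `huggingface_hub` package (pip install huggingface_hub).
# NOTE: "seflow_best.ckpt" is a placeholder that follows the
# [ModelName_best].ckpt naming pattern above; replace it with the
# checkpoint you actually need.
from huggingface_hub import hf_hub_download

demo_path = hf_hub_download(repo_id="kin-zhang/OpenSceneFlow", filename="demo_data.zip")
ckpt_path = hf_hub_download(repo_id="kin-zhang/OpenSceneFlow", filename="seflow_best.ckpt")

print("demo data :", demo_path)
print("checkpoint:", ckpt_path)
```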
|
|
|
|
|
<details> <summary><b>One repository, All methods!</b></summary>
|
<!-- <br> --> |
|
You can try the following methods in our codebase without any extra effort to build your own benchmark.
|
|
|
- [x] [SSF](https://arxiv.org/abs/2501.17821) (Ours): ICRA 2025
|
- [x] [Flow4D](https://ieeexplore.ieee.org/document/10887254): RA-L 2025 |
|
- [x] [SeFlow](https://arxiv.org/abs/2407.01702) (Ours): ECCV 2024
|
- [x] [DeFlow](https://arxiv.org/abs/2401.16122) (Ours): ICRA 2024
|
- [x] [FastFlow3d](https://arxiv.org/abs/2103.01306): RA-L 2021 |
|
- [x] [ZeroFlow](https://arxiv.org/abs/2305.10424): ICLR 2024, their pre-trained weights can easily be converted into our format through [the script](https://github.com/KTH-RPL/SeFlow/tools/zerof2ours.py); a rough conversion sketch is shown right after this list.
|
- [ ] [NSFP](https://arxiv.org/abs/2111.01253): NeurIPS 2021, 3x faster than the original version thanks to [our CUDA speed-up](https://github.com/KTH-RPL/SeFlow/assets/cuda/README.md), with the same (slightly better) performance. Coding is done; it will be public after review.
|
- [ ] [FastNSF](https://arxiv.org/abs/2304.09121): ICCV 2023. Coding is done; it will be public after review.
|
- [ ] ... more on the way |
|
|
|
</details> |
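
The ZeroFlow item above mentions converting external pre-trained weights into this codebase's checkpoint format via `zerof2ours.py`. Below is a purely illustrative sketch of what such a conversion typically involves (loading a source checkpoint and renaming its state-dict keys); the filenames and the rename rule are assumptions, and the real mapping lives in the linked script.

```python
# Illustrative checkpoint-conversion sketch; the actual ZeroFlow key mapping
# is defined in tools/zerof2ours.py of the OpenSceneFlow repository.
# "zeroflow_src.ckpt" and the rename rule below are placeholder assumptions.
import torch

src = torch.load("zeroflow_src.ckpt", map_location="cpu", weights_only=False)
state_dict = src.get("state_dict", src)  # Lightning-style ckpts nest weights under "state_dict"

converted = {}
for key, tensor in state_dict.items():
    # Placeholder rename rule; substitute the real source->target key mapping.
    converted[key.replace("model.", "")] = tensor

torch.save({"state_dict": converted}, "zeroflow_best.ckpt")
print(f"Saved {len(converted)} tensors to zeroflow_best.ckpt")
```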
|
|
|
## Cite Us |
|
|
|
*OpenSceneFlow* is designed by [Qingwen Zhang](https://kin-zhang.github.io/), building on the DeFlow and SeFlow projects. If you find it useful, please cite our works:
|
|
|
```bibtex |
|
@inproceedings{zhang2024seflow, |
|
author={Zhang, Qingwen and Yang, Yi and Li, Peizheng and Andersson, Olov and Jensfelt, Patric}, |
|
title={{SeFlow}: A Self-Supervised Scene Flow Method in Autonomous Driving}, |
|
booktitle={European Conference on Computer Vision (ECCV)}, |
|
year={2024}, |
|
  pages={353-369},
|
organization={Springer}, |
|
doi={10.1007/978-3-031-73232-4_20}, |
|
} |
|
@inproceedings{zhang2024deflow, |
|
author={Zhang, Qingwen and Yang, Yi and Fang, Heng and Geng, Ruoyu and Jensfelt, Patric}, |
|
booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)}, |
|
title={{DeFlow}: Decoder of Scene Flow Network in Autonomous Driving}, |
|
year={2024}, |
|
pages={2105-2111}, |
|
doi={10.1109/ICRA57147.2024.10610278} |
|
} |
|
@article{zhang2025himu, |
|
title={HiMo: High-Speed Objects Motion Compensation in Point Cloud}, |
|
  author={Zhang, Qingwen and Khoche, Ajinkya and Yang, Yi and Ling, Li and Mansouri, Sina Sharif and Andersson, Olov and Jensfelt, Patric},
|
year={2025}, |
|
journal={arXiv preprint arXiv:2503.00803}, |
|
} |
|
``` |
|
|
|
And the excellent works from our collaborators are as follows:
|
|
|
```bibtex |
|
@article{kim2025flow4d, |
|
author={Kim, Jaeyeul and Woo, Jungwan and Shin, Ukcheol and Oh, Jean and Im, Sunghoon}, |
|
journal={IEEE Robotics and Automation Letters}, |
|
title={Flow4D: Leveraging 4D Voxel Network for LiDAR Scene Flow Estimation}, |
|
year={2025}, |
|
volume={10}, |
|
number={4}, |
|
pages={3462-3469}, |
|
doi={10.1109/LRA.2025.3542327} |
|
} |
|
@article{khoche2025ssf, |
|
title={SSF: Sparse Long-Range Scene Flow for Autonomous Driving}, |
|
author={Khoche, Ajinkya and Zhang, Qingwen and Sanchez, Laura Pereira and Asefaw, Aron and Mansouri, Sina Sharif and Jensfelt, Patric}, |
|
journal={arXiv preprint arXiv:2501.17821}, |
|
year={2025} |
|
} |
|
``` |
|
|
|
Feel free to contribute your method and add your BibTeX entry here via a pull request!