Model Card for VertebralBodiesCT-ResEncM
The VertebralBodiesCT-ResEncM model is a deep learning segmentation model built with the nnU-Net framework. It is designed to identify thoracic and lumbar vertebral bodies in CT scans, explicitly excluding the vertebral arches and spinous processes.
This is the smaller, faster model, trained on a single fold with the ResEnc M presets. A larger, more performant version trained on 5 folds with the ResEnc L presets is available here.
Model Details
This model was developed for segmenting thoracic and lumbar vertebral bodies in CT scans, with a focus on anatomical landmark identification for applications such as body composition analysis and sarcopenia assessment. It was trained on a dataset derived from the TotalSegmentator and VerSe datasets, which were refined to focus solely on the vertebral bodies. The model is available under the CC BY-SA 4.0 license and is accompanied by the training labels and a pipeline for body composition analysis.
Uses
The VertebralBodiesCT-ResEncM model is intended for segmenting thoracic and lumbar vertebral bodies in CT scans. For instance, after integration into a pipeline, it can be used to localize measurements relative to anatomical landmarks. Because the model excludes vertebral arches and spinous processes, it is unsuitable for whole-spine analysis. It is not intended for clinical application.
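As an illustration of landmark-based localization (e.g., selecting the axial slice of the third lumbar vertebra for body composition measurements), the sketch below finds the slice at the centroid of the L3 vertebral body in a predicted label map. It assumes label 15 corresponds to L3 (L1–L5 occupy labels 13–17 in this model's scheme) and a z-last axis order; the function name is illustrative, not part of the released pipeline.

```python
import numpy as np

L3_LABEL = 15  # L1-L5 occupy labels 13-17 in this model's label scheme

def l3_slice_index(label_map: np.ndarray, axis: int = 2) -> int:
    """Return the index along `axis` of the centroid of the L3 mask.

    `label_map` is an integer segmentation volume as produced by the model.
    """
    mask = label_map == L3_LABEL
    if not mask.any():
        raise ValueError("L3 (label 15) not found in segmentation")
    coords = np.argwhere(mask)  # voxel coordinates of all L3 voxels
    return int(round(coords[:, axis].mean()))
```

The returned index can then be used to extract the corresponding axial CT slice for downstream measurements such as muscle area quantification.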
Bias, Risks, and Limitations
- Data Source Bias: The model, despite being trained on a heterogeneous dataset, may not fully represent diverse populations, imaging techniques, or scanner types.
- Reduced Performance in Pathologies: The model may perform poorly on uncommon or complex spinal pathologies that are not well represented in the training dataset.
- Persistent Labeling Errors: Some labeling errors may persist in the training dataset, which could cause segmentation errors, especially in cases with transitional vertebrae.
- Exclusion of Cervical Spine: The model is designed only for the thoracic and lumbar vertebrae and the sacrum; cervical vertebrae are not segmented.
- Exclusion of Vertebral Arches and Spinous Processes: The labels focus solely on vertebral bodies, excluding vertebral arches and spinous processes.
- No Clinical Usage: This model is intended solely for research purposes.
How to Get Started with the Model
The model is a standard, unmodified nnU-Net model. Installation instructions for nnU-Net can be found here, with more information on inference here. We provide an integration into a body composition analysis pipeline.
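A minimal inference invocation could look as follows (paths and the dataset ID are placeholders, and the plan name follows the standard ResEnc M preset naming; consult the nnU-Net documentation linked above for details):

```shell
# Install nnU-Net v2, the framework this model is built on
pip install nnunetv2

# Run inference with the single trained fold (fold 0).
# -d is the dataset ID the model weights were installed under (placeholder),
# -p selects the ResEnc M plans, -c the 3d_fullres configuration.
nnUNetv2_predict -i /path/to/input_cts -o /path/to/output_segmentations \
    -d DATASET_ID -p nnUNetResEncUNetMPlans -c 3d_fullres -f 0
```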
Training Details
The training dataset consists of 1216 labels corresponding to CT scans derived from the TotalSegmentator and VerSe datasets. Detailed information on the labeling process can be found here. The model segments thoracic vertebrae T1 to T12 (labels 1-12), lumbar vertebrae L1 to L5 (labels 13-17), and the sacrum (label 19), with optional vertebrae such as T13 (label 21) or L6 (label 18). Training was conducted using a single fold over 1000 epochs. The dataset_fingerprint.json, the model plan (with model configuration and hyperparameters), and the training logs are available with the model.
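The label scheme above can be captured in a small lookup table; the following Python sketch is illustrative and not shipped with the model:

```python
# Mapping from segmentation label IDs to vertebra names,
# following the label scheme described above.
VERTEBRA_LABELS = {
    **{i: f"T{i}" for i in range(1, 13)},      # thoracic T1-T12 -> labels 1-12
    **{12 + i: f"L{i}" for i in range(1, 6)},  # lumbar L1-L5 -> labels 13-17
    18: "L6",      # optional sixth lumbar vertebra
    19: "sacrum",
    21: "T13",     # optional thirteenth thoracic vertebra
}

def vertebra_name(label_id: int) -> str:
    """Return the anatomical name for a segmentation label ID."""
    return VERTEBRA_LABELS[label_id]
```

Note that label 20 is unused, so iterating over present labels rather than a contiguous range is safer when post-processing model outputs.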
Evaluation
The model's performance metrics were assessed on a separate test dataset, and its clinical value in identifying the third lumbar vertebra (L3) was explored in an independent oncological dataset. Model metrics are available here, and detailed results are described in our paper.
Technical Requirements
According to the residual encoder UNet presets, the nnU-Net ResEnc M requires a GPU with 9-11 GB of VRAM.
Citation
We deeply appreciate the foundational work of the TotalSegmentator and VerSe authors. The generation of the labels would not have been possible without their open-source contributions. If you use this model, please cite the following publications:
Wasserthal, J., Breit, H.-C., Meyer, M.T., Pradella, M., Hinck, D., Sauter, A.W., Heye, T., Boll, D., Cyriac, J., Yang, S., Bach, M., Segeroth, M. (2023). TotalSegmentator: Robust Segmentation of 104 Anatomic Structures in CT Images. Radiology: Artificial Intelligence. https://doi.org/10.1148/ryai.230024
Sekuboyina, A., Husseini, M.E., Bayat, A., Löffler, M., Liebl, H., Li, H., Tetteh, G., Kukačka, J., Payer, C., Štern, D., Urschler, M., Chen, M., Cheng, D., Lessmann, N., Hu, Y., Wang, T., Yang, D., Xu, D., Ambellan, F., Amiranashvili, T., Ehlke, M., Lamecker, H., Lehnert, S., Lirio, M., Pérez de Olaguer, N., Ramm, H., Sahu, M., Tack, A., Zachow, S., Jiang, T., Ma, X., Angerman, C., Wang, X., Brown, K., Kirszenberg, A., Puybareau, É., Chen, D., Bai, Y., Rapazzo, B.H., Yeah, T., Zhang, A., Xu, S., Hou, F., He, Z., Zeng, C., Xiangshang, Z., Liming, X., Netherton, T.J., Mumme, R.P., Court, L.E., Huang, Z., He, C., Wang, L.-W., Ling, S.H., Huỳnh, L.D., Boutry, N., Jakubicek, R., Chmelik, J., Mulay, S., Sivaprakasam, M., Paetzold, J.C., Shit, S., Ezhov, I., Wiestler, B., Glocker, B., Valentinitsch, A., Rempfler, M., Menze, B.H., Kirschke, J.S. (2021). VerSe: A Vertebrae labelling and segmentation benchmark for multi-detector CT images. Medical Image Analysis. https://doi.org/10.1016/j.media.2021.102166
Haubold, J., Baldini, G., Parmar, V., Schaarschmidt, B.M., Koitka, S., Kroll, L., van Landeghem, N., Umutlu, L., Forsting, M., Nensa, F., Hosch, R. (2023). BOA: A CT-Based Body and Organ Analysis for Radiologists at the Point of Care. Investigative Radiology. https://doi.org/10.1097/RLI.0000000000001040
Since the model is based on the nnU-Net framework and the residual encoder UNet presets, please cite the following papers:
Isensee, F., Jaeger, P.F., Kohl, S. A., Petersen, J., Maier-Hein, K.H. (2021). nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods, 18(2), 203-211. https://doi.org/10.1038/s41592-020-01008-z
Isensee, F., Wald, T., Ulrich, C., Baumgartner, M. , Roy, S., Maier-Hein, K., Jaeger, P. (2024). nnU-Net Revisited: A Call for Rigorous Validation in 3D Medical Image Segmentation. arXiv preprint arXiv:2404.09556.
Please cite our dataset:
Hofmann F.O. et al. Thoracic & lumbar vertebral body labels corresponding to 1460 public CT scans. https://huggingface.co/datasets/fhofmann/VertebralBodiesCT-Labels/
Contact
Any feedback, questions or recommendations? Feel free to contact us! 🤗