---
library_name: transformers
license: mit
---

# Model card for RAD-DINO

## Model description

RAD-DINO is a vision transformer model trained to encode chest X-rays using the self-supervised learning method DINOv2.

RAD-DINO is described in detail in [RAD-DINO: Exploring Scalable Medical Image Encoders Beyond Text Supervision](https://arxiv.org/abs/2401.10815) (Pérez-García, Sharma, Bond-Taylor et al., 2024).

- **Developed by:** Microsoft Health Futures
- **Model type:** Vision transformer
- **License:** MIT
- **Finetuned from model:** [`dinov2-base`](https://huggingface.co/facebook/dinov2-base)

## Uses

RAD-DINO is shared for research purposes only. It is not intended for clinical use.

The model is a vision backbone that can be plugged into other models for downstream tasks. Some potential uses are:

- Image classification, with a classifier trained on top of the `CLS` token (see the linear-probe sketch below)
- Image segmentation, with a decoder trained using the patch tokens
- Clustering, using the image embeddings directly
- Image retrieval, using nearest neighbors of the `CLS` token
- Report generation, with a language model to decode text

Fine-tuning RAD-DINO is typically not necessary to obtain good performance in downstream tasks.
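As a rough illustration of the first use case (a sketch, not part of the released code; the `LinearProbe` class and its training loop are hypothetical), a classifier can be trained on top of the frozen backbone:

```python
import torch
from torch import nn

class LinearProbe(nn.Module):
    """Hypothetical linear probe on top of the frozen RAD-DINO backbone."""

    def __init__(self, backbone: nn.Module, num_classes: int):
        super().__init__()
        self.backbone = backbone
        for param in self.backbone.parameters():
            param.requires_grad = False  # keep the encoder frozen
        self.classifier = nn.Linear(backbone.config.hidden_size, num_classes)

    def forward(self, pixel_values: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            outputs = self.backbone(pixel_values=pixel_values)
        cls_embedding = outputs.pooler_output  # (batch_size, hidden_size)
        return self.classifier(cls_embedding)
```

Only the linear layer is optimised, consistent with fine-tuning typically being unnecessary.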

## Bias, risks, and limitations

RAD-DINO was trained with data from three countries; it may therefore be biased towards the populations represented in the training data. Underlying biases of the training datasets may not be well characterised.

## Getting started

```python
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoModel
>>> from transformers import AutoImageProcessor
>>>
>>> # Define a small function to get a sample image
>>> def download_sample_image() -> Image.Image:
...     """Download chest X-ray with CC license."""
...     import requests
...     from PIL import Image
...     base_url = "https://upload.wikimedia.org/wikipedia/commons"
...     image_url = f"{base_url}/2/20/Chest_X-ray_in_influenza_and_Haemophilus_influenzae.jpg"
...     headers = {"User-Agent": "fperezgarcia@microsoft.com"}
...     response = requests.get(image_url, headers=headers, stream=True)
...     return Image.open(response.raw)
...
>>> # Download the model
>>> repo = "microsoft/rad-dino"
>>> model = AutoModel.from_pretrained(repo)
>>>
>>> # The processor takes a PIL image, performs resizing, center-cropping, and
>>> # intensity normalization using stats from MIMIC-CXR, and returns a
>>> # dictionary with a PyTorch tensor ready for the encoder
>>> processor = AutoImageProcessor.from_pretrained(repo)
>>>
>>> # Download and preprocess a chest X-ray
>>> image = download_sample_image()
>>> inputs = processor(images=image, return_tensors="pt")
>>>
>>> # Encode the image!
>>> with torch.inference_mode():
...     outputs = model(**inputs)
...
>>> # Look at the CLS embeddings
>>> cls_embeddings = outputs.pooler_output
>>> cls_embeddings.shape  # (batch_size, num_channels)
torch.Size([1, 768])
>>>
>>> # Look at the patch embeddings (needs `pip install einops`)
>>> def reshape_patch_embeddings(flat_tokens: torch.Tensor) -> torch.Tensor:
...     """Reshape flat list of patch tokens into a nice grid."""
...     from einops import rearrange
...     image_size = processor.crop_size["height"]
...     patch_size = model.config.patch_size
...     embeddings_size = image_size // patch_size
...     patches_grid = rearrange(flat_tokens, "b (h w) c -> b c h w", h=embeddings_size)
...     return patches_grid
...
>>> flat_patch_embeddings = outputs.last_hidden_state[:, 1:]  # first token is CLS
>>> reshaped_patch_embeddings = reshape_patch_embeddings(flat_patch_embeddings)
>>> reshaped_patch_embeddings.shape  # (batch_size, num_channels, height, width)
torch.Size([1, 768, 37, 37])
```
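The `CLS` embeddings can be compared directly, for example for image retrieval. A minimal sketch (not from the original card; `gallery` is assumed to be a stack of precomputed embeddings):

```python
import torch
import torch.nn.functional as F

def retrieve_nearest(query: torch.Tensor, gallery: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Return the indices of the k gallery images most similar to the query.

    query: (1, 768) CLS embedding, e.g. `outputs.pooler_output` from above.
    gallery: (num_images, 768) tensor of precomputed CLS embeddings.
    """
    similarities = F.cosine_similarity(query, gallery)  # (num_images,)
    return similarities.topk(k).indices
```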

## Training details

### Training data

We used images from five public, deidentified chest X-ray datasets to train this checkpoint of RAD-DINO. Images in the validation and test sets used to train MAIRA were excluded from the training set of RAD-DINO. The list of image files used for training is available at `./training_images.csv`.

| Dataset   | Num. images |
|-----------|------------:|
| MIMIC-CXR |     368 960 |
| CheXpert  |     223 648 |
| NIH-CXR   |     112 120 |
| PadChest  |     136 787 |
| BRAX      |      41 260 |
| **TOTAL** |     882 775 |

Note that this checkpoint differs from the one in the paper, for which some private data was also used.

### Training procedure

We refer to the manuscript for a detailed description of the training procedure.

#### Preprocessing

All DICOM files were resized using B-spline interpolation so that their shorter side measured 518 pixels, min-max scaled to [0, 255], and stored as PNG files.
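A rough sketch of this step (an illustration only, using SimpleITK as mentioned under Software below; the helper name and paths are hypothetical, and the exact pipeline may differ):

```python
import numpy as np
import SimpleITK as sitk
from PIL import Image

def dicom_to_png(dicom_path: str, png_path: str, target_size: int = 518) -> None:
    """Resize a 2D DICOM so its shorter side is `target_size`, min-max scale, save as PNG."""
    image = sitk.ReadImage(dicom_path)
    width, height = image.GetSize()
    scale = target_size / min(width, height)
    resampler = sitk.ResampleImageFilter()
    resampler.SetInterpolator(sitk.sitkBSpline)  # B-spline interpolation
    resampler.SetSize([round(width * scale), round(height * scale)])
    resampler.SetOutputSpacing([spacing / scale for spacing in image.GetSpacing()])
    resampler.SetOutputOrigin(image.GetOrigin())
    resampler.SetOutputDirection(image.GetDirection())
    resized = resampler.Execute(image)
    array = sitk.GetArrayFromImage(resized).astype(np.float32)
    array = (array - array.min()) / (array.max() - array.min()) * 255  # min-max scale to [0, 255]
    Image.fromarray(array.astype(np.uint8)).save(png_path)
```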

#### Training hyperparameters

- **Training regime:** fp16 using PyTorch-FSDP mixed-precision.
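For reference, a minimal sketch of how such a regime can be configured with PyTorch FSDP (an illustration, not the actual training configuration):

```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision

# fp16 mixed-precision policy for parameters, gradient reduction, and buffers
fp16_policy = MixedPrecision(
    param_dtype=torch.float16,
    reduce_dtype=torch.float16,
    buffer_dtype=torch.float16,
)

# Assumes `model` is the encoder and torch.distributed has been initialised
sharded_model = FSDP(model, mixed_precision=fp16_policy)
```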

## Evaluation

Our evaluation is best described in the manuscript.

## Environmental impact

- **Hardware Type:** NVIDIA A100 GPUs
- **Hours used:** 47 hours/GPU × 4 nodes × 4 GPUs/node = 752 GPU-hours
- **Cloud Provider:** Azure
- **Compute Region:** West US 2
- **Carbon Emitted:** 65.2 kg CO₂ eq.

## Compute infrastructure

RAD-DINO was trained on Azure Machine Learning.

### Hardware

We used four `Standard_NC96ads_A100_v4` nodes with four NVIDIA A100 (80 GB) GPUs each.

### Software

We leveraged the code in [DINOv2](https://github.com/facebookresearch/dinov2) for training, and used SimpleITK and Pydicom to process the DICOM files.

## Citation

**BibTeX:**

```bibtex
@article{PerezGarcia2024RADDINOES,
  title={RAD-DINO: Exploring Scalable Medical Image Encoders Beyond Text Supervision},
  author={Fernando Pérez-García and Harshita Sharma and Sam Bond-Taylor and Kenza Bouzid and Valentina Salvatelli and Maximilian Ilse and Shruthi Bannur and Daniel C. Castro and Anton Schwaighofer and Matthew P. Lungren and Maria Teodora Wetscherek and Noel Codella and Stephanie L. Hyland and Javier Alvarez-Valle and Ozan Oktay},
  journal={ArXiv},
  year={2024},
  volume={abs/2401.10815},
  url={https://api.semanticscholar.org/CorpusID:267060839}
}
```

**APA:**

Pérez-García, F., Sharma, H., Bond-Taylor, S., Bouzid, K., Salvatelli, V., Ilse, M., Bannur, S., Castro, D.C., Schwaighofer, A., Lungren, M.P., Wetscherek, M.T., Codella, N., Hyland, S.L., Alvarez-Valle, J., & Oktay, O. (2024). RAD-DINO: Exploring Scalable Medical Image Encoders Beyond Text Supervision. ArXiv, abs/2401.10815.

## Model card contact

Fernando Pérez-García (fperezgarcia@microsoft.com).