# Phikon-based ABMIL models for metastasis detection
These are weakly supervised, attention-based multiple instance learning (ABMIL) models for binary metastasis detection (normal versus metastasis). The models were trained on the CAMELYON16 dataset using Phikon patch embeddings.
## Data
- Training set consisted of 243 whole slide images (WSIs).
  - 143 negative
  - 100 positive
    - 52 macrometastases
    - 48 micrometastases
- Validation set consisted of 27 WSIs.
  - 16 negative
  - 11 positive
    - 6 macrometastases
    - 5 micrometastases
- Test set consisted of 129 WSIs.
  - 80 negative
  - 49 positive
    - 22 macrometastases
    - 27 micrometastases
## Evaluation
Classification results on the test set are shown below. Each row corresponds to a model trained with a different random seed.
| Seed | Sensitivity | Specificity | Balanced accuracy | Precision | F1 |
|---|---|---|---|---|---|
| 0 | 0.939 | 0.963 | 0.951 | 0.939 | 0.939 |
| 1 | 0.571 | 0.950 | 0.761 | 0.875 | 0.691 |
| 2 | 0.857 | 1.000 | 0.929 | 1.000 | 0.923 |
| 3 | 0.898 | 0.988 | 0.943 | 0.978 | 0.936 |
| 4 | 0.959 | 0.950 | 0.955 | 0.922 | 0.940 |
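The derived columns follow directly from the base rates: balanced accuracy is the mean of sensitivity and specificity, and F1 is the harmonic mean of precision and sensitivity. A quick sanity check against the seed-0 row (not part of the original card):

```python
# Recompute the derived metrics for seed 0 from the table above.
sensitivity = 0.939  # recall on metastasis slides
specificity = 0.963  # recall on normal slides
precision = 0.939

# Balanced accuracy: mean of sensitivity and specificity.
ba = (sensitivity + specificity) / 2

# F1: harmonic mean of precision and sensitivity (recall).
f1 = 2 * precision * sensitivity / (precision + sensitivity)

print(f"BA={ba:.3f}, F1={f1:.3f}")  # BA=0.951, F1=0.939
```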
## How to reuse the model
The model expects a bag of 128 x 128 micrometer patch embeddings extracted with the Phikon model (one 768-dimensional vector per patch).
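The patch size is specified in microns, so the pixel size depends on the scan resolution. A minimal sketch of the conversion, assuming a spacing of 0.25 microns per pixel (typical of 40x scans; read the actual value from your slide's metadata):

```python
# Convert the expected 128 x 128 micrometer patch size to pixels.
patch_size_um = 128
mpp = 0.25  # microns per pixel -- an assumed value, not from the model card

patch_size_px = int(round(patch_size_um / mpp))
print(patch_size_px)  # 512
```

At 0.25 um/px the model therefore consumes embeddings of 512 x 512 pixel patches; at 0.5 um/px (20x) it would be 256 x 256 pixel patches.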
```python
import torch

from abmil import AttentionMILModel

# Architecture must match training: 768-dim Phikon features in,
# 512-dim hidden layer (L), 384-dim attention layer (D), 2 classes.
model = AttentionMILModel(in_features=768, L=512, D=384, num_classes=2, gated_attention=True)
model.eval()

# Load the weights of the seed-4 model.
state_dict = torch.load("seed4/model_best.pt", map_location="cpu", weights_only=True)
model.load_state_dict(state_dict)

# Load a bag of features (here, a dummy bag of 1000 patch embeddings).
bag = torch.ones(1000, 768)
with torch.inference_mode():
    logits, attention = model(bag)
```
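The `gated_attention=True` option refers to gated attention pooling: each instance gets a scalar score from a tanh branch multiplied by a sigmoid gate, the scores are softmaxed over the bag, and the slide-level representation is the attention-weighted sum of instance embeddings. A from-scratch numpy sketch of that pooling step (random toy parameters, shapes chosen to match `L=512, D=384`; the real model learns these weights):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shapes mirror the model above: the 768-dim Phikon features are first
# projected to L=512 inside the model; here we start from the projected
# instance embeddings.
N, L, D = 1000, 512, 384
H = rng.standard_normal((N, L))        # bag of N instance embeddings

# Toy parameters (random here; learned in the real model).
V = 0.01 * rng.standard_normal((L, D))
U = 0.01 * rng.standard_normal((L, D))
w = 0.01 * rng.standard_normal((D, 1))

# Gated attention score per instance: w^T (tanh(V^T h) * sigmoid(U^T h)).
gate = 1.0 / (1.0 + np.exp(-(H @ U)))  # sigmoid gate, shape (N, D)
scores = (np.tanh(H @ V) * gate) @ w   # shape (N, 1)

# Softmax over the instances of the bag: attention weights sum to 1.
a = np.exp(scores - scores.max())
a = a / a.sum()

# Slide-level representation: attention-weighted sum of the embeddings.
z = (a * H).sum(axis=0)                # shape (L,)
print(a.shape, z.shape)                # (1000, 1) (512,)
```

The attention weights `a` are what the model returns alongside the logits; mapping them back to patch coordinates gives a heatmap of which regions drove the slide-level prediction.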