---
sdk_version: 4.42.0
app_file: app.py
pinned: false
license: apache-2.0
---

<img src="appendix/icon.jpeg" width="150" alt="alt text">

# Multiple Sclerosis Prediction from MRI Scans

## Project Overview

This repository contains the implementation of a Vision Transformer (ViT) model fine-tuned to predict Multiple Sclerosis (MS) from Magnetic Resonance Imaging (MRI) scans.

What allows the model to distinguish an MRI scan of an MS patient from that of a healthy subject are the clear lesions visible in different areas of the brain. By fine-tuning a Vision Transformer (ViT) on MRI scans for MS detection, we showcase the model's effectiveness in medical imaging applications.

## What is Multiple Sclerosis?

Multiple Sclerosis (MS) is a chronic demyelinating disease characterized by the presence of plaques in the white matter of the central nervous system. These plaques can disrupt the flow of information within the brain, and between the brain and the rest of the body. MS can be diagnosed using MRI, which helps identify the characteristic lesions associated with the disease. Usual onset is around age 20–50.

It can cause a wide range of symptoms including fatigue, vision problems, mobility issues, cognitive changes, numbness, pain, speech difficulties, mood changes, heat sensitivity, and muscle spasticity, which vary in severity and occurrence among individuals.

<img src="appendix/applsci-12-04920-g001.png" width="450" alt="alt text">

## Dataset

The dataset and initial modeling attempts are derived from the work by Macin et al. (2022). They proposed a computationally efficient model using Exemplar Multiple Parameters Local Phase Quantization (ExMPLPQ) combined with a k-nearest neighbor (kNN) classifier. Their model achieved high accuracy in detecting MS from MRI images.

**Original Article:** [An Accurate Multiple Sclerosis Detection Model Based on Exemplar Multiple Parameters Local Phase Quantization: ExMPLPQ](https://www.mdpi.com/2076-3417/12/10/4920)

The dataset consists of axial and sagittal MRI images divided into four classes:

1. **Control-Axial:** Axial MRI images from healthy individuals.
2. **Control-Sagittal:** Sagittal MRI images from healthy individuals.
3. **MS-Axial:** Axial MRI images from individuals diagnosed with MS.
4. **MS-Sagittal:** Sagittal MRI images from individuals diagnosed with MS.

### Criteria for diagnostic labeling of MS

*Interpretation of MRI images requires experts to manually scrutinize multiple contiguous image sections for the presence of white matter lesions, with care being taken to distinguish MS plaques from lesions associated with diseases that present with similar symptoms, e.g., ischemic gliosis and central nervous system vasculitis. MS plaques are hyperintense in the T2 sequence, oval-round shaped, at least 3 mm in size, with asymmetric distribution. The typical locations of MS plaques are as follows: periventricular: adjacent to the lateral ventricles; juxtacortical and cortical: localized to U fibers; infratentorial: located unilaterally or bilaterally paramedian adjacent to the brain stem, cerebellum, and 4th ventricle; spinal cord: cervical and thoracic localized, shorter than two vertebral segments, axially wedge-shaped, sagittally cigarette-shaped, localized in the peripherally located posterior and lateral columns.*

### Sample Distribution

We have two classes: MS and Non-MS. After balancing the classes by downsampling, each class contains 1293 samples (MRI images), ensuring a balanced dataset for training and evaluation.
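
The snippet below is a minimal sketch of how such a balance can be produced by merging the four folders into binary labels and randomly downsampling; the `data/` layout, folder names, and file extension are assumptions for illustration, not the repository's actual structure.

```python
import random
from pathlib import Path

random.seed(42)

# Hypothetical layout: data/<class_name>/*.png, with the four original
# classes merged into binary labels (MS vs. Non-MS).
CLASS_DIRS = {
    "NON-MS": ["data/Control-Axial", "data/Control-Sagittal"],
    "MS": ["data/MS-Axial", "data/MS-Sagittal"],
}

# Collect image paths per binary class.
samples = {
    label: [p for d in dirs for p in Path(d).glob("*.png")]
    for label, dirs in CLASS_DIRS.items()
}

# Downsample every class to the size of the smallest one (1293 in our case).
n_min = min(len(paths) for paths in samples.values())
balanced = {label: random.sample(paths, n_min) for label, paths in samples.items()}

for label, paths in balanced.items():
    print(label, len(paths))
```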

## Model Architecture

We fine-tuned the Vision Transformer (ViT) model, specifically `google/vit-base-patch16-384`, on our MRI dataset.

### Vision Transformer (ViT)

- **Patch Embeddings:** The input image is divided into fixed-size patches (e.g., 16x16), each of which is flattened and embedded into a vector.
- **Positional Encoding:** Added to the patch embeddings to maintain spatial information.
- **Transformer Encoder:** Processes the encoded patches through multi-head self-attention layers and feed-forward neural networks.
- **Classification Head:** The output is pooled and passed through a fully connected layer to generate final class probabilities.
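
To make the patch arithmetic concrete: at a 384x384 input with 16x16 patches, the backbone sees 24x24 = 576 patches plus one [CLS] token, each embedded in 768 dimensions. The sketch below illustrates these shapes with the Hugging Face `transformers` API; the image path is a placeholder, not a file shipped in this repository.

```python
import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTModel

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-384")
backbone = ViTModel.from_pretrained("google/vit-base-patch16-384")

# Placeholder path; any MRI slice exported as an RGB image works here.
image = Image.open("appendix/example_mri.png").convert("RGB")

inputs = processor(images=image, return_tensors="pt")  # resized/normalized to 384x384
with torch.no_grad():
    outputs = backbone(**inputs)

# (384 / 16) ** 2 = 576 patches, plus the [CLS] token, each 768-dimensional.
print(inputs["pixel_values"].shape)     # torch.Size([1, 3, 384, 384])
print(outputs.last_hidden_state.shape)  # torch.Size([1, 577, 768])
```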

### Detailed Architecture

- **Base Vision Transformer:**
  - Pre-trained on ImageNet-21k.
  - Added Adaptive Average Pooling to minimize overfitting.
- **Fully Connected Layers:**
  - Fully connected layer: 128 neurons, ReLU activation.
  - Linear layer with a single output (logit thresholded to 1 or 0) for binary classification.
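
The following is a minimal PyTorch sketch reconstructed from the description above, not the repository's actual module; the class name, the exact placement of pooling and dropout, and the forward signature are assumptions.

```python
import torch
import torch.nn as nn
from transformers import ViTModel

class ViTForMSClassification(nn.Module):
    """Hypothetical reconstruction of the head described above."""

    def __init__(self, backbone_name="google/vit-base-patch16-384", dropout=0.5):
        super().__init__()
        self.backbone = ViTModel.from_pretrained(backbone_name)
        hidden = self.backbone.config.hidden_size      # 768 for ViT-Base
        # Adaptive average pooling over the token dimension.
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.head = nn.Sequential(
            nn.Linear(hidden, 128),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(128, 1),                         # single logit: MS vs. Non-MS
        )

    def forward(self, pixel_values):
        tokens = self.backbone(pixel_values=pixel_values).last_hidden_state
        # (batch, tokens, hidden) -> (batch, hidden, tokens) -> pooled (batch, hidden)
        pooled = self.pool(tokens.transpose(1, 2)).squeeze(-1)
        return self.head(pooled)                       # raw logits

model = ViTForMSClassification()
logits = model(torch.randn(2, 3, 384, 384))            # -> shape (2, 1)
```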

The following table lists the hyperparameters used in our model:

| Hyperparameter | Value   |
|----------------|---------|
| Input Size     | 384x384 |
| Dropout Rate   | 0.5     |
| Weight Decay   | 0.01    |
| Learning Rate  | 1e-5    |
| Optimizer      | AdamW   |
| Batch Size     | 8       |
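
Under these hyperparameters, a minimal training setup might look like the sketch below, reusing the `ViTForMSClassification` sketch from above. The tensors in `TensorDataset` are placeholders for the preprocessed MRI data, and the loss choice (binary cross-entropy on the single logit) is an assumption consistent with the head described earlier.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data: in practice these are pixel_values from the ViT image
# processor and the binary MS / Non-MS labels.
train_dataset = TensorDataset(torch.randn(32, 3, 384, 384), torch.randint(0, 2, (32,)))
train_loader = DataLoader(train_dataset, batch_size=8, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=0.01)
criterion = torch.nn.BCEWithLogitsLoss()  # binary cross-entropy on the single logit

model.train()
for epoch in range(3):                    # epochs per fold varied between 3 and 6
    for pixel_values, labels in train_loader:
        optimizer.zero_grad()
        logits = model(pixel_values).squeeze(-1)
        loss = criterion(logits, labels.float())
        loss.backward()
        optimizer.step()
```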

### Cross-validation

To ensure robust evaluation and to prevent overfitting, we employed 6-fold cross-validation. This approach divides the dataset into six subsets, training the model on five subsets while using the sixth for validation (around 431 samples per validation fold).
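
A sketch of how such folds can be generated with scikit-learn is shown below; the label array and the per-fold training call are placeholders, and the actual splitting code in this repository may differ.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Placeholder labels: 1293 Non-MS and 1293 MS samples, as in the balanced dataset.
labels = np.array([0] * 1293 + [1] * 1293)
indices = np.arange(len(labels))

skf = StratifiedKFold(n_splits=6, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(skf.split(indices, labels), start=1):
    # Each validation fold holds ~431 samples with the class ratio preserved.
    print(f"fold {fold}: train={len(train_idx)}, val={len(val_idx)}")
    # train_one_fold(train_idx, val_idx)  # placeholder for the actual training call
```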

## Results

Below are the results for the last epoch of each fold:

### Fold-wise Results

| Fold | Epochs | Train Loss | Val Loss | F1   | Accuracy | Precision | Recall |
|------|--------|------------|----------|------|----------|-----------|--------|
| 1    | 3      | 0.5596     | 0.2153   | 0.92 | 0.92     | 0.92      | 0.92   |
| 2    | 6      | 0.5402     | 0.1795   | 0.94 | 0.94     | 0.94      | 0.94   |
| 3    | 4      | 0.5443     | 0.2065   | 0.92 | 0.92     | 0.93      | 0.92   |
| 4    | 4      | 0.5150     | 0.2093   | 0.94 | 0.94     | 0.94      | 0.94   |
| 5    | 5      | 0.5281     | 0.1826   | 0.95 | 0.95     | 0.95      | 0.95   |
| 6    | 5      | 0.5337     | 0.2232   | 0.94 | 0.94     | 0.95      | 0.94   |

### Average Metrics across all Folds

| Metric    | Value  |
|-----------|--------|
| Accuracy  | 0.9346 |
| Precision | 0.9378 |
| Recall    | 0.9346 |
| F1 Score  | 0.9345 |

### Class-wise Average Metrics

| Class  | Precision | Recall | F1-Score |
|--------|-----------|--------|----------|
| NON-MS | 0.9229    | 0.9513 | 0.9356   |
| MS     | 0.9527    | 0.9180 | 0.9335   |

<p align="left">
  <img src="appendix/confusion_matrix.png" width="380" alt="alt text">
</p>

## Conclusion

In conclusion, our project aimed to build upon the existing research by applying modern computer vision techniques, primarily the Vision Transformer model, to classify MS from MRI images. Through cross-validation and fine-tuning, we achieved robust performance metrics, demonstrating the efficacy of ViT models in medical image classification tasks, with an average recall of 93%.

## Potential Problems and Future Directions

### Small Dataset

- The current dataset, while balanced and sufficient for initial trials, is relatively small. Small datasets may not capture the full variability required for robust model training and generalization.

### Higher Resolution Images

- Higher resolution images could provide more detailed information, potentially improving the model's ability to detect subtle features associated with MS.

### Model Complexity and Interpretability

- Vision Transformers are complex models, which makes interpreting the learned features challenging. Understanding why the model makes specific predictions is crucial in medical applications.
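
One common starting point for interpretability, not part of this project but applicable to any ViT backbone, is to inspect the self-attention weights. The sketch below extracts the last encoder layer's attention from the [CLS] token to each of the 576 image patches; the input tensor is a placeholder for a preprocessed MRI slice.

```python
import torch
from transformers import ViTModel

backbone = ViTModel.from_pretrained("google/vit-base-patch16-384")

pixel_values = torch.randn(1, 3, 384, 384)   # placeholder for a preprocessed MRI slice
with torch.no_grad():
    outputs = backbone(pixel_values=pixel_values, output_attentions=True)

# Last encoder layer, averaged over attention heads: (1, 577, 577).
attn = outputs.attentions[-1].mean(dim=1)
# Attention from the [CLS] token to the 576 patches, reshaped to the 24x24 patch grid.
cls_to_patches = attn[0, 0, 1:].reshape(24, 24)
print(cls_to_patches.shape)                  # torch.Size([24, 24])
```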

## Acknowledgments

This project was made possible by the pioneering efforts of [Macin et al., 2022](https://www.mdpi.com/2076-3417/12/10/4920), whose data and initial study laid the groundwork for our research.

**References:**

Macin, G., Tasci, B., Tasci, I., Faust, O., Barua, P.D., Dogan, S., Tuncer, T., Tan, R.-S., Acharya, U.R. An Accurate Multiple Sclerosis Detection Model Based on Exemplar Multiple Parameters Local Phase Quantization: ExMPLPQ. Appl. Sci. 2022, 12, 4920.

---

**Disclaimer:**
This project is intended for research and educational purposes only and should not be used as a diagnostic tool without formal clinical validation.