---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: efficientformer-l3-300-Brain_Tumors_Image_Classification
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7817258883248731
language:
- en
pipeline_tag: image-classification
---
# efficientformer-l3-300-Brain_Tumors_Image_Classification
This model is a fine-tuned version of [snap-research/efficientformer-l3-300](https://huggingface.co/snap-research/efficientformer-l3-300).
It achieves the following results on the evaluation set; a sketch of how these averaged metrics can be computed follows the list:
- Loss: 2.2761
- Accuracy: 0.7817
- F1
  - Weighted: 0.7381
  - Micro: 0.7817
  - Macro: 0.7465
- Recall
  - Weighted: 0.7817
  - Micro: 0.7817
  - Macro: 0.7771
- Precision
  - Weighted: 0.8442
  - Micro: 0.7817
  - Macro: 0.8613
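
The weighted, micro, and macro figures above follow the standard scikit-learn averaging modes. Below is a minimal sketch of a `compute_metrics` function that would produce this set of metrics during evaluation with the Hugging Face `Trainer`; the function is illustrative, not taken from the original training code.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score


def compute_metrics(eval_pred):
    # The Trainer passes a (logits, labels) tuple for each evaluation pass.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)

    metrics = {"accuracy": accuracy_score(labels, preds)}
    for avg in ("weighted", "micro", "macro"):
        metrics[f"{avg}_f1"] = f1_score(labels, preds, average=avg)
        metrics[f"{avg}_recall"] = recall_score(labels, preds, average=avg)
        metrics[f"{avg}_precision"] = precision_score(labels, preds, average=avg)
    return metrics
```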
## Model Description

Click here for the code that I used to create this model. This project is part of a comparison of seventeen (17) transformers. Click here to see the README markdown file for the full project.

## Intended Uses & Limitations
This model is intended to demonstrate my ability to solve a complex problem using technology. A minimal inference sketch is shown below.
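
The sketch shows how a model like this one could be queried with the `transformers` image-classification pipeline; the checkpoint path and image filename are placeholders, not a confirmed Hub repository ID or a file from the dataset.

```python
from transformers import pipeline

# Placeholder path: point this at the fine-tuned checkpoint directory or its Hub repo ID.
classifier = pipeline(
    "image-classification",
    model="path/to/efficientformer-l3-300-Brain_Tumors_Image_Classification",
)

# The pipeline accepts a file path, URL, or PIL.Image; the filename here is hypothetical.
for prediction in classifier("example_mri_scan.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.4f}")
```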
## Training & Evaluation Data

Brain Tumor Image Classification Dataset

### Sample Images
Class Distribution of Training Dataset

Class Distribution of Evaluation Dataset
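
For reference, here is a minimal sketch of loading the images with the `datasets` imagefolder loader (as declared in the metadata above) and printing the per-class counts behind these distributions; the `data_dir` path is a placeholder, not the actual dataset location.

```python
from collections import Counter

from datasets import load_dataset

# Placeholder directory: the imagefolder loader expects one sub-folder per tumor class.
dataset = load_dataset("imagefolder", data_dir="path/to/brain_tumor_images")

label_names = dataset["train"].features["label"].names
counts = Counter(dataset["train"]["label"])
for label_id in sorted(counts):
    print(f"{label_names[label_id]}: {counts[label_id]} images")
```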
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a training-setup sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
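
These hyperparameters map onto a standard Hugging Face `TrainingArguments`/`Trainer` setup. The sketch below is one way the run could be reproduced under that assumption; the data directory, train/eval split, output directory, and per-epoch evaluation strategy are illustrative choices, not details taken from the original training script.

```python
from datasets import load_dataset
from transformers import (
    AutoImageProcessor,
    AutoModelForImageClassification,
    Trainer,
    TrainingArguments,
)

# Placeholder data location (see the loading sketch above); the 80/20 split is assumed.
dataset = load_dataset("imagefolder", data_dir="path/to/brain_tumor_images")
splits = dataset["train"].train_test_split(test_size=0.2, seed=42)
label_names = splits["train"].features["label"].names

checkpoint = "snap-research/efficientformer-l3-300"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(
    checkpoint,
    num_labels=len(label_names),
    ignore_mismatched_sizes=True,  # re-initialise the classification head for the tumor classes
)


def preprocess(batch):
    # Convert PIL images into the pixel_values tensor the model expects.
    inputs = processor([img.convert("RGB") for img in batch["image"]], return_tensors="pt")
    inputs["labels"] = batch["label"]
    return inputs


splits = splits.with_transform(preprocess)

# Adam with betas=(0.9, 0.999) and epsilon=1e-08 and the linear scheduler are Trainer defaults,
# so only the values listed above need to be set explicitly.
training_args = TrainingArguments(
    output_dir="efficientformer-l3-300-Brain_Tumors_Image_Classification",
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    seed=42,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",    # assumption: the results table reports metrics once per epoch
    remove_unused_columns=False,    # keep the raw "image" column for the on-the-fly transform
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    # compute_metrics=compute_metrics,  # optionally plug in the metric sketch shown earlier
)
trainer.train()
```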
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted F1 | Micro F1 | Macro F1 | Weighted Recall | Micro Recall | Macro Recall | Weighted Precision | Micro Precision | Macro Precision |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.2856 | 1.0 | 180 | 1.4677 | 0.7284 | 0.6798 | 0.7284 | 0.6829 | 0.7284 | 0.7284 | 0.7133 | 0.8156 | 0.7284 | 0.8350 |
| 1.2856 | 2.0 | 360 | 2.1421 | 0.7563 | 0.7146 | 0.7563 | 0.7211 | 0.7563 | 0.7563 | 0.7471 | 0.8381 | 0.7563 | 0.8551 |
| 0.1405 | 3.0 | 540 | 2.2761 | 0.7817 | 0.7381 | 0.7817 | 0.7465 | 0.7817 | 0.7817 | 0.7771 | 0.8442 | 0.7817 | 0.8613 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3