---
license: apache-2.0
base_model: facebook/convnextv2-large-1k-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: convnextv2-large-1k-224-finetuned-cassava-leaf-disease
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8691588785046729
---
# convnextv2-large-1k-224-finetuned-cassava-leaf-disease
This model is a fine-tuned version of [facebook/convnextv2-large-1k-224](https://huggingface.co/facebook/convnextv2-large-1k-224) on a cassava leaf disease image dataset loaded with the generic `imagefolder` loader.
It achieves the following results on the evaluation set:
- Loss: 0.4210
- Accuracy: 0.8692
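A minimal inference sketch using the `transformers` pipeline API. Note that only the model name, not the Hub namespace, appears in this card, so the repo id below is a placeholder:

```python
from transformers import pipeline

# Replace <username> with the Hub namespace that hosts this checkpoint.
classifier = pipeline(
    "image-classification",
    model="<username>/convnextv2-large-1k-224-finetuned-cassava-leaf-disease",
)

# Accepts a local path or URL to a leaf photo.
predictions = classifier("cassava_leaf.jpg")
print(predictions)  # list of {"label": ..., "score": ...} pairs
```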
## Model description
The base checkpoint is the large variant of ConvNeXt V2, a fully convolutional architecture with Global Response Normalization, pretrained with the fully convolutional masked autoencoder (FCMAE) framework and fine-tuned on ImageNet-1K at 224x224 resolution. This version replaces the ImageNet classification head with one fine-tuned to predict cassava leaf disease categories; the exact label set is not recorded in this card.
## Intended uses & limitations
The model is intended for classifying photographs of cassava leaves into disease categories. It has not been evaluated on other crops, and performance on images that differ from the training distribution (lighting, camera, growth stage) is unknown; the reported accuracy of 0.8692 applies only to the evaluation split described below.
## Training and evaluation data
The model was trained on a local directory of cassava leaf disease images loaded with the generic `imagefolder` loading script; the source of the images and the exact train/evaluation split are not recorded in this card.
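Since only the `imagefolder` loader is recorded, the sketch below shows how such a dataset is typically loaded; the `data_dir` path and directory layout are assumptions, not details from this card:

```python
from datasets import load_dataset

# Hypothetical layout: one subdirectory per disease class under data_dir.
dataset = load_dataset("imagefolder", data_dir="path/to/cassava-leaf-disease")

# Class names are inferred from the subdirectory names.
print(dataset["train"].features["label"].names)
```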
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 240
- eval_batch_size: 240
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 960
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
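As a rough reconstruction, the hyperparameters above map onto `transformers.TrainingArguments` as sketched below; `output_dir` and anything not listed in the card are illustrative assumptions:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="convnextv2-large-1k-224-finetuned-cassava-leaf-disease",
    learning_rate=5e-5,
    per_device_train_batch_size=240,
    per_device_eval_batch_size=240,
    gradient_accumulation_steps=4,  # effective train batch size: 240 * 4 = 960
    num_train_epochs=20,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer default.
)
```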
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 8.2962 | 0.49 | 10 | 5.4110 | 0.0033 |
| 3.1666 | 0.99 | 20 | 2.0615 | 0.5883 |
| 1.4693 | 1.48 | 30 | 1.0935 | 0.6084 |
| 0.8718 | 1.98 | 40 | 0.7291 | 0.7463 |
| 0.6252 | 2.47 | 50 | 0.5894 | 0.7916 |
| 0.5198 | 2.96 | 60 | 0.5204 | 0.8299 |
| 0.4517 | 3.46 | 70 | 0.4658 | 0.8393 |
| 0.4266 | 3.95 | 80 | 0.4664 | 0.8407 |
| 0.4049 | 4.44 | 90 | 0.4337 | 0.8579 |
| 0.3817 | 4.94 | 100 | 0.4247 | 0.8523 |
| 0.3696 | 5.43 | 110 | 0.4146 | 0.8621 |
| 0.3577 | 5.93 | 120 | 0.4058 | 0.8607 |
| 0.3577 | 6.42 | 130 | 0.4047 | 0.8636 |
| 0.3354 | 6.91 | 140 | 0.3985 | 0.8617 |
| 0.3356 | 7.41 | 150 | 0.4025 | 0.8645 |
| 0.3286 | 7.9 | 160 | 0.4054 | 0.8673 |
| 0.3225 | 8.4 | 170 | 0.4062 | 0.8631 |
| 0.317 | 8.89 | 180 | 0.4007 | 0.8692 |
| 0.3101 | 9.38 | 190 | 0.3931 | 0.8701 |
| 0.293 | 9.88 | 200 | 0.3928 | 0.8682 |
| 0.2992 | 10.37 | 210 | 0.3942 | 0.8668 |
| 0.2968 | 10.86 | 220 | 0.3892 | 0.8692 |
| 0.2794 | 11.36 | 230 | 0.3988 | 0.8701 |
| 0.2707 | 11.85 | 240 | 0.3865 | 0.8762 |
| 0.2883 | 12.35 | 250 | 0.4040 | 0.8640 |
| 0.2784 | 12.84 | 260 | 0.3930 | 0.8692 |
| 0.2667 | 13.33 | 270 | 0.3985 | 0.8701 |
| 0.2642 | 13.83 | 280 | 0.4160 | 0.8668 |
| 0.2612 | 14.32 | 290 | 0.4086 | 0.8687 |
| 0.2586 | 14.81 | 300 | 0.3990 | 0.8668 |
| 0.2483 | 15.31 | 310 | 0.4111 | 0.8720 |
| 0.254 | 15.8 | 320 | 0.4082 | 0.8748 |
| 0.2283 | 16.3 | 330 | 0.4165 | 0.8668 |
| 0.246 | 16.79 | 340 | 0.4264 | 0.8692 |
| 0.2365 | 17.28 | 350 | 0.4185 | 0.8692 |
| 0.2388 | 17.78 | 360 | 0.4152 | 0.8650 |
| 0.2401 | 18.27 | 370 | 0.4169 | 0.8659 |
| 0.2334 | 18.77 | 380 | 0.4187 | 0.8696 |
| 0.2245 | 19.26 | 390 | 0.4192 | 0.8692 |
| 0.2291 | 19.75 | 400 | 0.4210 | 0.8692 |
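The accuracy column above is consistent with a standard `compute_metrics` hook; a minimal sketch, assuming the `evaluate` library (the exact code is not recorded in this card):

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair produced by the Trainer.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```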
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.1