Nguyen Tien committed on
Commit
827f042
1 Parent(s): 021a6b1

vuihocrnd/teacher-status-van-tiny-256

Files changed (4):
1. README.md (+13 -40)
2. all_results.json (+5 -14)
3. eval_results.json (+5 -9)
4. model.safetensors (+1 -1)
README.md CHANGED
@@ -5,32 +5,9 @@ tags:
 - generated_from_trainer
 datasets:
 - imagefolder
-metrics:
-- accuracy
-- recall
-- precision
 model-index:
 - name: teacher-status-van-tiny-256
-  results:
-  - task:
-      name: Image Classification
-      type: image-classification
-    dataset:
-      name: imagefolder
-      type: imagefolder
-      config: default
-      split: train
-      args: default
-    metrics:
-    - name: Accuracy
-      type: accuracy
-      value: 0.949438202247191
-    - name: Recall
-      type: recall
-      value: 0.9514563106796117
-    - name: Precision
-      type: precision
-      value: 0.9607843137254902
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -40,11 +17,16 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [Visual-Attention-Network/van-tiny](https://huggingface.co/Visual-Attention-Network/van-tiny) on the imagefolder dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.1802
-- Accuracy: 0.9494
-- F1 Score: 0.9561
-- Recall: 0.9515
-- Precision: 0.9608
+- eval_loss: 0.2176
+- eval_accuracy: 0.9213
+- eval_f1_score: 0.9307
+- eval_recall: 0.9307
+- eval_precision: 0.9307
+- eval_runtime: 1.2169
+- eval_samples_per_second: 146.275
+- eval_steps_per_second: 4.931
+- epoch: 11.28
+- step: 141
 
 ## Model description
 
@@ -63,7 +45,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
+- learning_rate: 0.0001
 - train_batch_size: 32
 - eval_batch_size: 32
 - seed: 42
@@ -72,16 +54,7 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 3
-
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score | Recall | Precision |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:------:|:---------:|
-| 0.1511        | 0.96  | 12   | 0.1802          | 0.9494   | 0.9561   | 0.9515 | 0.9608    |
-| 0.2643        | 2.0   | 25   | 0.1674          | 0.9494   | 0.9557   | 0.9417 | 0.97      |
-| 0.3159        | 2.88  | 36   | 0.1692          | 0.9438   | 0.9510   | 0.9417 | 0.9604    |
-
+- num_epochs: 30
 
 ### Framework versions
 
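The hyperparameter change in this commit raises learning_rate from 5e-05 to 0.0001 and num_epochs from 3 to 30, while keeping lr_scheduler_type: linear with lr_scheduler_warmup_ratio: 0.1. A minimal sketch of what that schedule looks like, assuming the usual linear-warmup-then-linear-decay shape and a step count inferred from the logged epoch 11.28 at step 141 (141 / 11.28 = 12.5 steps per epoch, so roughly 375 steps over 30 epochs); this mirrors the behavior of a linear schedule with warmup, not code from this repo:

```python
# Sketch (not this repo's code) of the schedule implied by
# lr_scheduler_type: linear and lr_scheduler_warmup_ratio: 0.1.
# Step counts are inferred from the log (epoch 11.28 at step 141
# => 12.5 steps/epoch => ~375 steps for the configured 30 epochs).

def linear_warmup_lr(step: int, total_steps: int,
                     base_lr: float, warmup_ratio: float) -> float:
    """Linear warmup from 0 to base_lr, then linear decay to 0."""
    warmup_steps = int(warmup_ratio * total_steps)
    if step < warmup_steps:
        # ramp up from 0 to base_lr over the warmup window
        return base_lr * step / max(1, warmup_steps)
    # decay from base_lr at end of warmup down to 0 at total_steps
    return base_lr * max(0, total_steps - step) / max(1, total_steps - warmup_steps)

total_steps = round(30 * 141 / 11.28)   # ~375 optimizer steps
peak_step = int(0.1 * total_steps)      # warmup ends around step 37
```

With base_lr = 0.0001 this peaks at roughly step 37 and reaches zero at step 375; the previous 3-epoch run (~36 steps total) would have warmed up for only ~3 steps under the same ratio.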
all_results.json CHANGED
@@ -1,16 +1,7 @@
 {
-    "epoch": 2.88,
-    "eval_accuracy": 0.949438202247191,
-    "eval_f1_score": 0.9560975609756097,
-    "eval_loss": 0.18015821278095245,
-    "eval_precision": 0.9607843137254902,
-    "eval_recall": 0.9514563106796117,
-    "eval_runtime": 1.0836,
-    "eval_samples_per_second": 164.273,
-    "eval_steps_per_second": 5.537,
-    "total_flos": 2.086337325249331e+16,
-    "train_loss": 0.2572544482019212,
-    "train_runtime": 70.0705,
-    "train_samples_per_second": 68.203,
-    "train_steps_per_second": 0.514
+    "eval_accuracy": 0.9213483146067416,
+    "eval_f1_score": 0.9306930693069307,
+    "eval_loss": 0.21757331490516663,
+    "eval_precision": 0.9306930693069307,
+    "eval_recall": 0.9306930693069307
 }
eval_results.json CHANGED
@@ -1,11 +1,7 @@
 {
-    "epoch": 2.88,
-    "eval_accuracy": 0.949438202247191,
-    "eval_f1_score": 0.9560975609756097,
-    "eval_loss": 0.18015821278095245,
-    "eval_precision": 0.9607843137254902,
-    "eval_recall": 0.9514563106796117,
-    "eval_runtime": 1.0836,
-    "eval_samples_per_second": 164.273,
-    "eval_steps_per_second": 5.537
+    "eval_accuracy": 0.9213483146067416,
+    "eval_f1_score": 0.9306930693069307,
+    "eval_loss": 0.21757331490516663,
+    "eval_precision": 0.9306930693069307,
+    "eval_recall": 0.9306930693069307
 }
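A quick consistency check on the metric values above: F1 is the harmonic mean of precision and recall, so the old precision/recall pair reproduces the old eval_f1_score exactly, and in the new results precision equals recall, which forces F1 to equal them both (all three are 0.9306930693069307). The helper below is illustrative, not code from this repo:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 as the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Old eval_results.json: P and R differ, F1 sits between them.
old_f1 = f1_score(0.9607843137254902, 0.9514563106796117)  # 0.956097560975...
# New eval_results.json: P == R, so F1 == P == R.
new_f1 = f1_score(0.9306930693069307, 0.9306930693069307)
```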
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3aa9470b6fd902c8a038abe8e81b6b60a89f616548faa62b46ba612e4e585697
+oid sha256:0e081382c8d97fc5c53c6eeaa505cf5d9eb3290f8407ef789828966f03f76401
 size 15480696