kendrickfff committed
Commit 9dab157
1 parent: 3249c5f

End of training

Files changed (1)
  1. README.md +13 -13
README.md CHANGED
@@ -22,18 +22,18 @@ model-index:
   metrics:
   - name: Accuracy
     type: accuracy
-    value: 0.02654867256637168
+    value: 0.09734513274336283
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# audio_classification (default from Skill Academy, I just learn and run the program provided)
+# audio_classification
 
 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 2.6736
-- Accuracy: 0.0265
+- Loss: 2.6440
+- Accuracy: 0.0973
 
 ## Model description
 
@@ -67,19 +67,19 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch  | Step | Validation Loss | Accuracy |
 |:-------------:|:------:|:----:|:---------------:|:--------:|
-| No log        | 0.8    | 3    | 2.6313          | 0.1062   |
-| No log        | 1.8667 | 7    | 2.6508          | 0.0708   |
-| 2.6379        | 2.9333 | 11   | 2.6587          | 0.0531   |
-| 2.6379        | 4.0    | 15   | 2.6631          | 0.0442   |
-| 2.6379        | 4.8    | 18   | 2.6712          | 0.0354   |
-| 2.6277        | 5.8667 | 22   | 2.6724          | 0.0354   |
-| 2.6277        | 6.9333 | 26   | 2.6745          | 0.0177   |
-| 2.6257        | 8.0    | 30   | 2.6736          | 0.0265   |
+| No log        | 0.8    | 3    | 2.6403          | 0.0708   |
+| No log        | 1.8667 | 7    | 2.6379          | 0.0796   |
+| 2.6342        | 2.9333 | 11   | 2.6463          | 0.0619   |
+| 2.6342        | 4.0    | 15   | 2.6517          | 0.0354   |
+| 2.6342        | 4.8    | 18   | 2.6522          | 0.0177   |
+| 2.6238        | 5.8667 | 22   | 2.6494          | 0.0619   |
+| 2.6238        | 6.9333 | 26   | 2.6460          | 0.0796   |
+| 2.622         | 8.0    | 30   | 2.6440          | 0.0973   |
 
 
 ### Framework versions
 
 - Transformers 4.42.4
-- Pytorch 2.3.1+cu121
+- Pytorch 2.4.0+cu121
 - Datasets 2.21.0
 - Tokenizers 0.19.1
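
For readers who want to try the updated checkpoint described in the card above, a minimal inference sketch follows. It uses the standard `transformers` audio-classification pipeline; the Hub ID `kendrickfff/audio_classification` is an assumption inferred from the committer name and card title, and `example_call.wav` is a placeholder path. Keep in mind that with an evaluation accuracy of roughly 0.10 over the 14 MInDS-14 intent classes, predictions from this checkpoint are close to chance level.

```python
# Minimal inference sketch for the checkpoint in this card (assumptions noted below).
# Requires ffmpeg on the system for decoding audio from a file path.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="kendrickfff/audio_classification",  # assumed Hub ID, not confirmed by the card
)

# Classify a local audio clip (placeholder path); the pipeline resamples the
# audio to the feature extractor's expected sampling rate before inference.
predictions = classifier("example_call.wav", top_k=5)
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```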