abecode committed on
Commit b7a0df8 · verified · 1 Parent(s): 31fcea4

Training completed!
README.md CHANGED
@@ -4,9 +4,6 @@ license: apache-2.0
 base_model: distilbert-base-uncased
 tags:
 - generated_from_trainer
-metrics:
-- accuracy
-- f1
 model-index:
 - name: distilbert-base-uncased-finetuned-emotion
   results: []
@@ -18,10 +15,6 @@ should probably proofread and complete it, then remove this comment. -->
 # distilbert-base-uncased-finetuned-emotion
 
 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.2096
-- Accuracy: 0.925
-- F1: 0.9249
 
 ## Model description
 
@@ -44,21 +37,13 @@ The following hyperparameters were used during training:
 - train_batch_size: 64
 - eval_batch_size: 64
 - seed: 42
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - num_epochs: 2
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
-| 0.8021        | 1.0   | 250  | 0.2950          | 0.91     | 0.9096 |
-| 0.2381        | 2.0   | 500  | 0.2096          | 0.925    | 0.9249 |
-
-
 ### Framework versions
 
-- Transformers 4.44.2
-- Pytorch 2.4.1+cu121
-- Datasets 3.0.1
-- Tokenizers 0.19.1
+- Transformers 4.48.3
+- Pytorch 2.5.1+cu124
+- Datasets 3.3.2
+- Tokenizers 0.21.0
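The hyperparameters listed in the card map directly onto `transformers.TrainingArguments` keyword names. A minimal sketch of that mapping (an illustration, not the author's actual training script; the learning rate and `output_dir` are not shown in this diff, so they are left out):

```python
# Sketch only: the card's hyperparameters expressed with the keyword
# names TrainingArguments uses. Values are taken from the diff above;
# anything not shown there (learning rate, output_dir) is omitted.
hyperparameters = {
    "per_device_train_batch_size": 64,  # train_batch_size: 64
    "per_device_eval_batch_size": 64,   # eval_batch_size: 64
    "seed": 42,
    "optim": "adamw_torch",             # OptimizerNames.ADAMW_TORCH
    "lr_scheduler_type": "linear",
    "num_train_epochs": 2,
}

# With transformers installed, these would typically be passed as
# TrainingArguments(output_dir=..., **hyperparameters) and handed to a
# Trainer; betas=(0.9, 0.999) and epsilon=1e-08 are AdamW's defaults.
```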
runs/Feb20_21-27-11_bc554d6b7727/events.out.tfevents.1740086833.bc554d6b7727.2871.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:768e8c50696a338214b039510da7c4151cde4ab2188e75fb7218e3840806a021
-size 6069
+oid sha256:60eed03d8ae6781e099328f75aa867c9d28008c7872bf1d9c4b7727b6dc59af3
+size 6792
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:60a4cb22c91480958874be8a3f2c2bdf232f046a730177d99e32a7438c66d5d8
+oid sha256:3c2d799c9aaff1ceb6998af2e25980abfd7ba61c1bd5ac9dc13d51722b280c9d
 size 5368
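Both changed binaries are stored as Git LFS pointer files: three `key value` lines giving the spec version, the sha256 oid of the real content, and its size in bytes. A small illustrative parser (an assumption for demonstration, not part of git-lfs itself):

```python
# Illustrative parser for a Git LFS pointer file like the ones in this
# commit: each line is "key value", split on the first space.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new training_args.bin pointer from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:3c2d799c9aaff1ceb6998af2e25980abfd7ba61c1bd5ac9dc13d51722b280c9d
size 5368"""

info = parse_lfs_pointer(pointer)
# info["oid"] carries the sha256 of the actual blob; note the size is
# unchanged (5368) while the oid differs, i.e. same-length new content.
```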