Ukhushn committed
Commit 67d9ce2 · 1 Parent(s): bf351bd

Training in progress epoch 0

Files changed (3)
  1. README.md +6 -6
  2. config.json +1 -1
  3. tf_model.h5 +1 -1
README.md CHANGED
@@ -14,8 +14,8 @@ probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Train Loss: 2.8734
- - Validation Loss: 2.3864
+ - Train Loss: 2.6530
+ - Validation Loss: 2.2055
  - Epoch: 0
 
  ## Model description
@@ -35,19 +35,19 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 874, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
+ - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1437, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
  - training_precision: mixed_float16
 
  ### Training results
 
  | Train Loss | Validation Loss | Epoch |
  |:----------:|:---------------:|:-----:|
- | 2.8734 | 2.3864 | 0 |
+ | 2.6530 | 2.2055 | 0 |
 
 
  ### Framework versions
 
- - Transformers 4.18.0
+ - Transformers 4.19.0
  - TensorFlow 2.8.0
- - Datasets 2.1.0
+ - Datasets 2.2.1
  - Tokenizers 0.12.1
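The optimizer dictionary in the hyperparameters hunk is the Keras serialization of an AdamWeightDecay optimizer driven by a WarmUp-wrapped PolynomialDecay schedule. One plausible way to reproduce an equivalent setup, assuming the (unseen) training script used the TF optimizer utilities shipped with transformers; the total step count below is an assumption, chosen so the derived decay steps match the logged value:

```python
import tensorflow as tf
from transformers import create_optimizer

# Matches training_precision: mixed_float16 from the model card.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# Assumed step counts: create_optimizer gives the PolynomialDecay a decay_steps of
# num_train_steps - num_warmup_steps, so 2437 total steps with 1000 warmup steps
# reproduces the decay_steps=1437 recorded in this commit's optimizer config.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,            # initial_learning_rate in the serialized schedule
    num_train_steps=2437,    # assumption, not logged directly
    num_warmup_steps=1000,   # warmup_steps
    weight_decay_rate=0.01,  # weight_decay_rate
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    power=1.0,
)
# optimizer is an AdamWeightDecay instance; compiling a Keras model with it and
# training for one epoch would serialize to a dictionary like the one above.
```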
config.json CHANGED
@@ -18,6 +18,6 @@
    "seq_classif_dropout": 0.2,
    "sinusoidal_pos_embds": false,
    "tie_weights_": true,
-   "transformers_version": "4.18.0",
+   "transformers_version": "4.19.0",
    "vocab_size": 30522
  }
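The only change to config.json is the transformers_version stamp, which the library writes into the file each time the configuration is saved; the neighbouring fields are the stock DistilBERT defaults. A quick sanity check, as a sketch rather than code from this repository:

```python
from transformers import DistilBertConfig

# Stock DistilBERT configuration; the fields shown in the hunk keep these defaults.
cfg = DistilBertConfig()
print(cfg.seq_classif_dropout)   # 0.2
print(cfg.sinusoidal_pos_embds)  # False
print(cfg.vocab_size)            # 30522
# transformers_version is not set on a freshly built config; it is stamped into
# config.json when the config (or the model) is written out with save_pretrained().
```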
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ffe7408aacd66b932bac02550f05321f5425667019bceaa6f4deac13fee357a4
+ oid sha256:65d0835a51730ac4a286795c49e4559a479719649d79c50f2c743122a9aa1113
  size 363423680
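tf_model.h5 is tracked with Git LFS, so the diff only touches the pointer file: the sha256 oid changes while the size stays at 363423680 bytes, consistent with same-shaped weights being overwritten after another epoch of training. A small sketch (the local path is hypothetical) for checking a downloaded copy against the pointer:

```python
import hashlib
import os

def lfs_oid(path, chunk_size=1 << 20):
    """sha256 of the file contents, which is what Git LFS stores as the pointer's oid."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

path = "tf_model.h5"  # hypothetical local copy of the resolved weights, not the pointer file
print(lfs_oid(path))          # expected: 65d0835a5173... after this commit
print(os.path.getsize(path))  # expected: 363423680
```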