rockstar4119 committed
Commit 9fe12ba · 1 Parent(s): 1fbbbe0

Training in progress epoch 0

Files changed (3):
  1. README.md +16 -25
  2. config.json +0 -2
  3. tf_model.h5 +3 -0
README.md CHANGED
```diff
@@ -3,23 +3,23 @@ library_name: transformers
 license: apache-2.0
 base_model: distilbert-base-uncased
 tags:
-- generated_from_trainer
-metrics:
-- accuracy
+- generated_from_keras_callback
 model-index:
-- name: fine_tuned_model
+- name: rockstar4119/fine_tuned_model
   results: []
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
+<!-- This model card has been generated automatically according to the information Keras had access to. You should
+probably proofread and complete it, then remove this comment. -->
 
-# fine_tuned_model
+# rockstar4119/fine_tuned_model
 
-This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
+This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0005
-- Accuracy: 1.0
+- Train Loss: 0.2876
+- Validation Loss: 0.0311
+- Train Accuracy: 0.9967
+- Epoch: 0
 
 ## Model description
 
@@ -38,28 +38,19 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 2e-05
-- train_batch_size: 16
-- eval_batch_size: 16
-- seed: 42
-- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
-- lr_scheduler_type: linear
-- num_epochs: 5
+- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 750, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
+- training_precision: float32
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Accuracy |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|
-| No log | 1.0 | 151 | 0.0094 | 0.9967 |
-| No log | 2.0 | 302 | 0.0009 | 1.0 |
-| No log | 3.0 | 453 | 0.0006 | 1.0 |
-| 0.003 | 4.0 | 604 | 0.0005 | 1.0 |
-| 0.003 | 5.0 | 755 | 0.0005 | 1.0 |
+| Train Loss | Validation Loss | Train Accuracy | Epoch |
+|:----------:|:---------------:|:--------------:|:-----:|
+| 0.2876 | 0.0311 | 0.9967 | 0 |
 
 
 ### Framework versions
 
 - Transformers 4.47.1
-- Pytorch 2.5.1+cu121
+- TensorFlow 2.17.1
 - Datasets 3.2.0
 - Tokenizers 0.21.0
```
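
The updated hyperparameters dump the Keras optimizer config verbatim, which is hard to read. As a hedged illustration only (reconstructed from that serialized config, not code shipped in this repo), the same optimizer can be rebuilt in plain Keras: Adam with a PolynomialDecay schedule that decays the learning rate linearly from 2e-05 to 0 over 750 steps.

```python
import tensorflow as tf

# Linear decay (power=1.0) from 2e-05 down to 0.0 over 750 steps,
# matching the PolynomialDecay config serialized in the model card.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=750,
    end_learning_rate=0.0,
    power=1.0,
)

# Adam with the beta/epsilon values listed in the card; weight decay
# and gradient clipping stay at their defaults (None in the config).
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```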
config.json CHANGED
```diff
@@ -26,12 +26,10 @@
   "n_heads": 12,
   "n_layers": 6,
   "pad_token_id": 0,
-  "problem_type": "single_label_classification",
   "qa_dropout": 0.1,
   "seq_classif_dropout": 0.2,
   "sinusoidal_pos_embds": false,
   "tie_weights_": true,
-  "torch_dtype": "float32",
   "transformers_version": "4.47.1",
   "vocab_size": 30522
 }
```
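
The removed `problem_type` and `torch_dtype` entries are keys written by the PyTorch save path; the re-saved config reflects the Keras/TensorFlow checkpoint added in this commit. As a minimal sketch (assuming the repo id from the model-index name above; the label set is not documented in the card), the TF weights could be loaded for inference with the standard `transformers` classes:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Repo id taken from the model-index name in the README; adjust if it differs.
repo_id = "rockstar4119/fine_tuned_model"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFAutoModelForSequenceClassification.from_pretrained(repo_id)

# Hypothetical input text; the card does not say what the labels mean.
inputs = tokenizer("Example sentence to classify.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())
```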
tf_model.h5 ADDED
```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f8510a8954ec2032e5fe04925a60c2023d47292bbe473cce9feb81125db05f1c
+size 267957952
```
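
What is committed here is only the Git LFS pointer (spec version, SHA-256 of the content, and byte size of roughly 268 MB), not the weights file itself. A minimal sketch for fetching the real `tf_model.h5`, assuming the repo id from the model card:

```python
from huggingface_hub import hf_hub_download

# Resolves the LFS pointer above to the actual ~268 MB weights file
# and returns the local cache path it was downloaded to.
weights_path = hf_hub_download(
    repo_id="rockstar4119/fine_tuned_model",  # assumed from the model card
    filename="tf_model.h5",
)
print(weights_path)
```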