frogwang2000 committed
Commit fc66c3c · 1 Parent(s): c391c7f

Training in progress epoch 0

README.md CHANGED
@@ -1,20 +1,22 @@
  ---
  license: apache-2.0
  tags:
- - generated_from_trainer
+ - generated_from_keras_callback
  model-index:
- - name: my_awesome_eli5_clm-model
+ - name: frogwang2000/my_awesome_eli5_clm-model
    results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
+ <!-- This model card has been generated automatically according to the information Keras had access to. You should
+ probably proofread and complete it, then remove this comment. -->

- # my_awesome_eli5_clm-model
+ # frogwang2000/my_awesome_eli5_clm-model

- This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
+ This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 3.7368
+ - Train Loss: 3.9097
+ - Validation Loss: 3.7588
+ - Epoch: 0

  ## Model description

@@ -33,26 +35,19 @@ More information needed
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 2e-05
- - train_batch_size: 8
- - eval_batch_size: 8
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - num_epochs: 3.0
+ - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
+ - training_precision: float32

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 3.8739 | 1.0 | 1130 | 3.7567 |
- | 3.7727 | 2.0 | 2260 | 3.7388 |
- | 3.7302 | 3.0 | 3390 | 3.7368 |
+ | Train Loss | Validation Loss | Epoch |
+ |:----------:|:---------------:|:-----:|
+ | 3.9097 | 3.7588 | 0 |


  ### Framework versions

  - Transformers 4.28.1
- - Pytorch 2.0.0+cpu
+ - TensorFlow 2.12.0
  - Datasets 2.11.0
  - Tokenizers 0.13.3
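The regenerated card carries the `generated_from_keras_callback` tag and a Keras-style optimizer dict (AdamWeightDecay, learning_rate 2e-05, weight_decay_rate 0.01, float32), which points to a TensorFlow/Keras fine-tuning loop that pushes checkpoints to the Hub after each epoch. Below is a minimal sketch of what that setup could look like; `tf_train_set`, `tf_eval_set`, and the epoch count are assumptions, since the card lists the dataset as unknown and has only reached epoch 0.

```python
# Minimal sketch of the Keras fine-tuning loop implied by the card's tags and
# hyperparameters. Assumptions are marked below; this is not taken from the commit.
from transformers import AdamWeightDecay, AutoTokenizer, TFAutoModelForCausalLM
from transformers.keras_callbacks import PushToHubCallback

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = TFAutoModelForCausalLM.from_pretrained("distilgpt2")

# Matches the optimizer dict recorded in the new card (epsilon left at the Keras default).
optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
model.compile(optimizer=optimizer)  # no loss argument: the model's built-in LM loss is used

# PushToHubCallback uploads the checkpoint after each epoch and regenerates the
# model card, which is what the "Training in progress epoch 0" commit corresponds to.
callback = PushToHubCallback(
    output_dir="my_awesome_eli5_clm-model",
    tokenizer=tokenizer,
)

# tf_train_set / tf_eval_set are placeholder tokenized tf.data.Dataset pipelines;
# epochs=3 is an assumption, the card only reports epoch 0 so far.
model.fit(tf_train_set, validation_data=tf_eval_set, epochs=3, callbacks=[callback])
```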
config.json CHANGED
@@ -39,7 +39,6 @@
        "max_length": 50
      }
    },
-   "torch_dtype": "float32",
    "transformers_version": "4.28.1",
    "use_cache": true,
    "vocab_size": 50257
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
runs/May29_18-02-00_centos8.hardware/events.out.tfevents.1685354525.centos8.hardware.2496361.5 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5241c2be1dcd0a38c05345444804fad34a151b1ae336fc442144db68953873b2
- size 5872
+ oid sha256:bab8eec76c55eeba6543d9f4da75489fcc41dd6ca6c42fa7fd03f1d8986ba7e7
+ size 6497
runs/May29_18-02-00_centos8.hardware/events.out.tfevents.1685357525.centos8.hardware.2496361.7 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6b2c8344489cbe09ba9d787159f6a04cb05262096910d5751dfa0b3d8c2c1c0b
+ size 359
special_tokens_map.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "pad_token": "<|endoftext|>",
+   "unk_token": "<|endoftext|>"
+ }
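GPT-2 ships without a pad token, so a map like this one, where every entry points at `<|endoftext|>`, typically comes from reusing the end-of-text token for padding before the tokenizer is saved. A hedged sketch of that step (the exact preprocessing used for this repo is not shown in the commit):

```python
# GPT-2 has no pad token, so the end-of-text token is commonly reused for padding;
# saving the tokenizer afterwards produces a special_tokens_map.json like the one above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # "<|endoftext|>"
tokenizer.save_pretrained("my_awesome_eli5_clm-model")  # writes special_tokens_map.json
```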
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e93336ac93f1e55867bf864c29888dc50c0ec6cc1cf8b61bd37501cc97b553ab
+ size 327745496
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "add_prefix_space": false,
+   "bos_token": "<|endoftext|>",
+   "clean_up_tokenization_spaces": true,
+   "eos_token": "<|endoftext|>",
+   "model_max_length": 1024,
+   "tokenizer_class": "GPT2Tokenizer",
+   "unk_token": "<|endoftext|>"
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff
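With tf_model.h5 and the tokenizer files (vocab.json, merges.txt, tokenizer.json, tokenizer_config.json, special_tokens_map.json) now in the repo, the epoch-0 checkpoint is loadable directly as a TensorFlow model. A usage sketch, assuming the repo id matches the model-index name in the card; the prompt is only an illustrative example:

```python
# Load the TF checkpoint pushed by this commit and generate a continuation.
from transformers import AutoTokenizer, TFAutoModelForCausalLM

repo_id = "frogwang2000/my_awesome_eli5_clm-model"  # from the card's model-index name
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFAutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Somatic hypermutation allows the immune system to", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```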