sohui committed on
Commit 845c7db
1 Parent(s): b34c26f

End of training

Files changed (1)
  1. README.md +7 -9
README.md CHANGED
@@ -6,8 +6,6 @@ tags:
  model-index:
  - name: nlpmodel
    results: []
- datasets:
- - anon8231489123/ShareGPT_Vicuna_unfiltered
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -15,7 +13,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  # nlpmodel
 
- This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the [ShareGPT_Vicuna_unfiltered](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) dataset.
+ This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
 
  ## Model description
 
@@ -34,15 +32,15 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 5e-05
- - train_batch_size: 8
+ - learning_rate: 0.0005
+ - train_batch_size: 4
  - eval_batch_size: 8
  - seed: 42
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 64
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 16
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: constant
- - num_epochs: 5
+ - num_epochs: 0.5
 
  ### Training results
 
@@ -53,4 +51,4 @@ The following hyperparameters were used during training:
  - Transformers 4.35.2
  - Pytorch 2.1.0+cu118
  - Datasets 2.15.0
- - Tokenizers 0.15.0
+ - Tokenizers 0.15.0
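For reference, the sketch below shows how the updated hyperparameters in this commit could be expressed with `transformers.TrainingArguments`. The output directory is a placeholder and the mapping is an assumption based on the card's values, not part of this commit.

```python
from transformers import TrainingArguments

# Minimal sketch mapping the card's updated hyperparameters onto
# TrainingArguments. output_dir is hypothetical; Adam's betas/epsilon
# are the values listed in the card (the library defaults).
training_args = TrainingArguments(
    output_dir="nlpmodel",            # hypothetical output path
    learning_rate=5e-4,               # learning_rate: 0.0005
    per_device_train_batch_size=4,    # train_batch_size: 4
    per_device_eval_batch_size=8,     # eval_batch_size: 8
    gradient_accumulation_steps=4,    # gradient_accumulation_steps: 4
    num_train_epochs=0.5,             # num_epochs: 0.5
    lr_scheduler_type="constant",     # lr_scheduler_type: constant
    seed=42,                          # seed: 42
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```

With a per-device train batch size of 4 and 4 gradient-accumulation steps, the effective batch size is 4 × 4 = 16, matching the card's total_train_batch_size.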