bongseok committed
Commit 75171c1 · 1 Parent(s): 3eaa52a

update model card README.md

Files changed (1):
  1. README.md +68 -0
README.md ADDED
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: kobart_16_5.6e-5_datav2_min30_lp5.0_temperature1.0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# kobart_16_5.6e-5_datav2_min30_lp5.0_temperature1.0

This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7174
- Rouge1: 35.7621
- Rouge2: 12.8914
- Rougel: 23.6695
- Bleu1: 29.9954
- Bleu2: 17.513
- Bleu3: 10.317
- Bleu4: 5.8532
- Gen Len: 49.3147

## Model description

More information needed

## Intended uses & limitations

More information needed

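In the absence of documented usage details, the sketch below shows one way to run this checkpoint for Korean abstractive summarization with Hugging Face Transformers. The checkpoint path and the generation settings (`min_length=30`, `length_penalty=5.0`, mirroring the `min30_lp5.0` suffix in the model name) are assumptions, not values confirmed by this card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumption: the fine-tuned checkpoint is available locally (or on the Hub)
# under this name; adjust the path to the actual location.
checkpoint = "kobart_16_5.6e-5_datav2_min30_lp5.0_temperature1.0"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Korean document to summarize (placeholder text).
document = "요약할 한국어 문서를 여기에 입력합니다."
inputs = tokenizer(document, return_tensors="pt", max_length=1024, truncation=True)

# min_length / length_penalty mirror the "min30_lp5.0" suffix in the model name;
# num_beams and max_length are illustrative defaults.
summary_ids = model.generate(
    **inputs,
    num_beams=4,
    min_length=30,
    max_length=128,
    length_penalty=5.0,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```
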
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list for how they map onto `Seq2SeqTrainingArguments`):
- learning_rate: 5.6e-05
- train_batch_size: 16
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0

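A minimal sketch of how the hyperparameters above might be expressed as `Seq2SeqTrainingArguments`. The `output_dir` and the `predict_with_generate` flag are assumptions not stated in this card; the remaining values follow the list.

```python
from transformers import Seq2SeqTrainingArguments

# Rough reconstruction of the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="kobart_16_5.6e-5_datav2_min30_lp5.0_temperature1.0",  # assumption
    learning_rate=5.6e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5.0,
    predict_with_generate=True,  # assumption: generation-based eval for ROUGE/BLEU
)
```
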
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Bleu1 | Bleu2 | Bleu3 | Bleu4 | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:-------:|:------:|:------:|:-------:|
| 1.9617 | 1.89 | 5000 | 2.6146 | 35.2828 | 12.4993 | 22.9894 | 29.2237 | 16.8919 | 9.7826 | 5.4461 | 48.0676 |
| 1.5272 | 3.78 | 10000 | 2.7174 | 35.7621 | 12.8914 | 23.6695 | 29.9954 | 17.513 | 10.317 | 5.8532 | 49.3147 |

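The ROUGE/BLEU numbers above come from evaluation during training. Below is a hedged sketch of how comparable ROUGE scores could be recomputed from decoded summaries using the `evaluate` library; that library is not listed under framework versions, and its default tokenizer is English-oriented, so scores on Korean text may differ from those reported here.

```python
import evaluate

# Assumption: predictions/references are decoded summary strings from the
# evaluation split; the default ROUGE tokenizer is English-oriented, so a
# Korean-aware tokenizer may be needed to reproduce the numbers in the table.
rouge = evaluate.load("rouge")

predictions = ["모델이 생성한 요약문"]
references = ["사람이 작성한 정답 요약문"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)
```
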
### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2