---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- summarize_from_feedback
metrics:
- rouge
model-index:
- name: flan-t5-large-finetuned-openai-summarize_from_feedback
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: summarize_from_feedback
      type: summarize_from_feedback
      config: comparisons
      split: train
      args: comparisons
    metrics:
    - name: Rouge1
      type: rouge
      value: 30.2401
    - name: Rouge2
      type: rouge
      value: 11.4916
    - name: RougeL
      type: rouge
      value: 24.6485
    - name: RougeLSum
      type: rouge
      value: 26.1801
pipeline_tag: summarization
---

# flan-t5-large-finetuned-openai-summarize_from_feedback

This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the summarize_from_feedback dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3118
- Rouge1: 30.2401
- Rouge2: 11.4916
- RougeL: 24.6485
- RougeLsum: 26.1801
- Gen Len: 18.8428

## Model description

This model is [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) fine-tuned for abstractive summarization on the summarize_from_feedback dataset that OpenAI released alongside [Learning to Summarize from Human Feedback](https://arxiv.org/abs/2009.01325).

## Intended uses & limitations

The model is intended for short, TL;DR-style abstractive summarization of English posts and articles; the average generated length on the evaluation set is about 19 tokens, so expect terse summaries. Like any generative summarizer, it can omit or distort facts from the source text, and outputs should be reviewed before use.

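A minimal inference sketch with the `transformers` pipeline; the checkpoint id below is inferred from the model name and the uploader's namespace, and the generation settings are illustrative rather than the author's:

```python
from transformers import pipeline

# Assumed hub id, inferred from the model name and the uploader's namespace.
summarizer = pipeline(
    "summarization",
    model="mrm8488/flan-t5-large-finetuned-openai-summarize_from_feedback",
)

post = "Your long post or article goes here..."
# Gen Len above averages ~19 tokens, so short, TL;DR-style summaries are expected.
summary = summarizer(post, max_length=64, min_length=5, do_sample=False)
print(summary[0]["summary_text"])
```
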
## Training and evaluation data

The model was trained on the `comparisons` config (train split) of the summarize_from_feedback dataset: Reddit posts paired with candidate summaries and a human judgment of which summary is better.

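A sketch of loading the data, assuming the dataset now lives under the `openai` namespace on the Hub and that source/target pairs are built from each post and its human-preferred summary (the card does not state how pairs were actually constructed):

```python
from datasets import load_dataset

# Assumed hub id; older scripts may use the un-namespaced "summarize_from_feedback".
ds = load_dataset("openai/summarize_from_feedback", "comparisons")

example = ds["train"][0]
post = example["info"]["post"]                       # source text to summarize
preferred = example["summaries"][example["choice"]]  # human-preferred candidate
target = preferred["text"]                           # a plausible seq2seq target
```
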
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6

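A hedged reconstruction of these settings as `Seq2SeqTrainingArguments`; anything not listed above (output path, evaluation strategy, warmup, weight decay, mixed precision) is an assumption or left at library defaults:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-large-finetuned-openai-summarize_from_feedback",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=6,
    evaluation_strategy="epoch",  # assumption: the results table has one row per epoch
    predict_with_generate=True,   # assumption: needed for ROUGE/Gen Len during eval
)
# The listed Adam betas and epsilon match the library's optimizer defaults,
# so they are not set explicitly here.
```
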
### Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.7678        | 1.0   | 5804  | 1.8833          | 29.3494 | 10.9406 | 23.9907 | 25.461    | 18.9265 |
| 1.5839        | 2.0   | 11608 | 1.8992          | 29.6239 | 11.1795 | 24.2927 | 25.7183   | 18.9358 |
| 1.4812        | 3.0   | 17412 | 1.8929          | 29.8899 | 11.2855 | 24.4193 | 25.9219   | 18.9189 |
| 1.4198        | 4.0   | 23216 | 1.8939          | 29.8897 | 11.2606 | 24.3262 | 25.8642   | 18.9309 |
| 1.3612        | 5.0   | 29020 | 1.9105          | 29.8469 | 11.2112 | 24.2483 | 25.7884   | 18.9396 |
| 1.3279        | 6.0   | 34824 | 1.9170          | 30.038  | 11.3426 | 24.4385 | 25.9675   | 18.9328 |

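The ROUGE columns are presumably the `evaluate` library's aggregated F-measures scaled to 0-100; a sketch of computing them for a batch of generated summaries (the prediction and reference strings are placeholders):

```python
import evaluate

rouge = evaluate.load("rouge")

predictions = ["the model's generated summary"]       # placeholder outputs
references = ["the human-written reference summary"]  # placeholder targets

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
# evaluate returns fractions in [0, 1]; the table reports them multiplied by 100.
print({name: round(value * 100, 4) for name, value in scores.items()})
```
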
### Framework versions

- Transformers 4.25.1
- PyTorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2