lorenzoscottb committed c3e49a5 (1 parent: 7dc4c98)

update model card README.md

Files changed (1): README.md (+72 −0, new file)
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-DreamBank-Generation-Act-Char
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# t5-base-DreamBank-Generation-Act-Char

This model is a fine-tuned version of [DReAMy-lib/t5-base-DreamBank-Generation-NER-Char](https://huggingface.co/DReAMy-lib/t5-base-DreamBank-Generation-NER-Char); the fine-tuning dataset is not recorded in this auto-generated card.
It achieves the following results on the evaluation set, which correspond to the epoch-5 checkpoint in the training results table below (a short sketch of how such ROUGE scores are computed follows the list):
- Loss: 0.2627
- ROUGE-1: 0.4751
- ROUGE-2: 0.3939
- ROUGE-L: 0.4564
- ROUGE-Lsum: 0.4549

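
The card does not show how these metrics are produced; below is a minimal, hedged sketch of computing the same family of ROUGE scores with the `evaluate` library. The example strings are placeholders, not the actual DreamBank evaluation data.

```python
# Minimal sketch: ROUGE-1/2/L/Lsum of the kind reported above, computed with
# the Hugging Face `evaluate` library. The strings below are placeholders,
# not the actual DreamBank evaluation data.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["the dreamer talks with a known male character"]
references = ["the dreamer speaks with a known male character"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # dict with keys "rouge1", "rouge2", "rougeL", "rougeLsum"
```
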
## Model description

More information needed

## Intended uses & limitations

More information needed

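
No usage example is included in the auto-generated card; a minimal inference sketch with the Transformers `text2text-generation` pipeline could look like the following. The repository ID is an assumption based on the model name and should be replaced with this checkpoint's actual Hub ID.

```python
# Minimal inference sketch with the Transformers text2text-generation pipeline.
# NOTE: the model ID below is an assumption based on this card's model name;
# replace it with the actual Hub ID of this checkpoint.
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="DReAMy-lib/t5-base-DreamBank-Generation-Act-Char",  # assumed ID
)

dream_report = "I was walking with my brother when a dog started chasing us."
print(generator(dream_report, max_new_tokens=128)[0]["generated_text"])
```
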
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

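
As referenced above, the listed values can be approximated as `Seq2SeqTrainingArguments`. This is a sketch only: everything not listed in the card (output directory, evaluation strategy, `predict_with_generate`) is an assumption.

```python
# Sketch of Seq2SeqTrainingArguments matching the hyperparameters listed above.
# Only values reported in this card are grounded; the rest are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-DreamBank-Generation-Act-Char",  # assumed
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="epoch",  # assumed from the per-epoch results table
    predict_with_generate=True,   # assumed, needed for ROUGE on generated text
)
```
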
### Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:----------:|
| No log        | 1.0   | 49   | 0.4061          | 0.3684  | 0.2537  | 0.3495  | 0.3484     |
| No log        | 2.0   | 98   | 0.3563          | 0.4151  | 0.3185  | 0.4043  | 0.4030     |
| No log        | 3.0   | 147  | 0.3005          | 0.4456  | 0.3588  | 0.4294  | 0.4281     |
| No log        | 4.0   | 196  | 0.2693          | 0.4743  | 0.3903  | 0.4586  | 0.4574     |
| No log        | 5.0   | 245  | 0.2627          | 0.4751  | 0.3939  | 0.4564  | 0.4549     |
| No log        | 6.0   | 294  | 0.2739          | 0.4744  | 0.3920  | 0.4612  | 0.4596     |
| No log        | 7.0   | 343  | 0.2733          | 0.4702  | 0.3940  | 0.4557  | 0.4549     |
| No log        | 8.0   | 392  | 0.2861          | 0.4739  | 0.3950  | 0.4614  | 0.4608     |
| No log        | 9.0   | 441  | 0.3115          | 0.4645  | 0.3868  | 0.4524  | 0.4517     |
| No log        | 10.0  | 490  | 0.3212          | 0.4655  | 0.3886  | 0.4524  | 0.4518     |

("No log" in the training-loss column indicates that no training loss was recorded before those evaluation steps.)

### Framework versions

- Transformers 4.25.1
- PyTorch 1.12.1
- Datasets 2.5.1
- Tokenizers 0.12.1