yimiwang committed · verified
Commit b7b80c7 · 1 Parent(s): d855870

flan-t5-peft-mixsub
README.md ADDED
@@ -0,0 +1,66 @@
+ ---
+ license: apache-2.0
+ base_model: google/flan-t5-large
+ tags:
+ - generated_from_trainer
+ metrics:
+ - rouge
+ model-index:
+ - name: flan-t5-large-peft-mixSub
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # flan-t5-large-peft-mixSub
+
+ This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.9408
+ - Rouge1: 39.2511
+ - Rouge2: 14.4613
+ - Rougel: 28.6907
+ - Rougelsum: 35.9795
+ - Gen Len: 97.8269
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (see the sketch after this list):
+ - learning_rate: 5e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 3
+
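The card lists only the raw hyperparameter values. A minimal sketch of how they could be expressed with `Seq2SeqTrainingArguments` from `transformers` follows; the `output_dir` name and the `predict_with_generate` flag are assumptions for illustration, not values read from this repository.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of training arguments matching the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-large-peft-mixSub",  # assumed output directory name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    adam_beta1=0.9,            # Adam betas and epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    predict_with_generate=True,  # assumed, since ROUGE is computed on generated text
)
```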
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
+ |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
+ | 2.1134 | 1.0 | 1630 | 1.9559 | 39.0131 | 14.3369 | 28.4611 | 35.7936 | 96.2081 |
+ | 2.0783 | 2.0 | 3260 | 1.9438 | 39.0184 | 14.3571 | 28.5375 | 35.7466 | 99.4684 |
+ | 2.0722 | 3.0 | 4890 | 1.9408 | 39.2511 | 14.4613 | 28.6907 | 35.9795 | 97.8269 |
+
+
+ ### Framework versions
+
+ - Transformers 4.38.2
+ - Pytorch 2.2.1+cu121
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2
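This commit ships only a LoRA adapter (`adapter_model.bin`, added below) rather than full model weights, so inference requires loading the adapter on top of the base model. A minimal sketch, assuming the adapter is hosted under the repo id `yimiwang/flan-t5-peft-mixsub` (taken from the commit title) and that the model is used for summarization; the prompt text is purely illustrative.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base_id = "google/flan-t5-large"
adapter_id = "yimiwang/flan-t5-peft-mixsub"  # assumed repo id, from the commit title

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # applies the LoRA adapter
model.eval()

# Illustrative prompt; the task prefix used during training is not documented in the card.
inputs = tokenizer("summarize: <article text>", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```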
adapter_config.json ADDED
@@ -0,0 +1,17 @@
+ {
+   "base_model_name_or_path": "google/flan-t5-large",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "lora_alpha": 32,
+   "lora_dropout": 0.05,
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 16,
+   "target_modules": [
+     "q",
+     "v"
+   ],
+   "task_type": "SEQ_2_SEQ_LM"
+ }
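For reference, the adapter config above corresponds to roughly the following `peft` setup. This is a minimal sketch assuming the standard `LoraConfig`/`get_peft_model` API; the `print_trainable_parameters` call is illustrative and not taken from this repository.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM

# LoRA settings mirroring adapter_config.json above: r=16, alpha=32, dropout=0.05,
# applied to the q and v projections of the T5 attention blocks.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q", "v"],
    task_type=TaskType.SEQ_2_SEQ_LM,
)

base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # shows how few parameters LoRA actually trains
```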
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:103e6a73b26bc732dacc5417939f6364f1e2ea299b8dff1821d9322572215dcf
+ size 18980874
logs/events.out.tfevents.1711684560.ip-10-25-205-144.180196.3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:927bf293a01d419fd933d3f296d92f1a86902ac74536de1c48c09f09d42ffdc2
+ size 8707
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a228df09c5b850027a26ba9be5f6cd962070f70cc01e989945bc894dba065468
+ size 5112