imdatta0 committed
Commit fcfd1e9
1 parent: efdb152

qwen-OpenAssistant/oasst_top1_2023-08-25
README.md ADDED
@@ -0,0 +1,106 @@
+ ---
+ base_model: Qwen/Qwen-14B
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: OpenAssistant_oasst_top1_2023-08-25
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # OpenAssistant_oasst_top1_2023-08-25
+
+ This model is a fine-tuned version of [Qwen/Qwen-14B](https://huggingface.co/Qwen/Qwen-14B) on the OpenAssistant/oasst_top1_2023-08-25 dataset (per the commit title).
+ It achieves the following results on the evaluation set:
+ - Loss: 1.6972
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 2
+ - eval_batch_size: 2
+ - seed: 42
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 0.01
+ - num_epochs: 1
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 1.977         | 0.02  | 16   | 1.9487          |
+ | 1.7729        | 0.04  | 32   | 1.9455          |
+ | 1.8185        | 0.06  | 48   | 1.9395          |
+ | 1.8375        | 0.08  | 64   | 1.9311          |
+ | 1.8803        | 0.1   | 80   | 1.9205          |
+ | 1.754         | 0.12  | 96   | 1.9093          |
+ | 1.691         | 0.14  | 112  | 1.8976          |
+ | 1.7817        | 0.17  | 128  | 1.8860          |
+ | 1.7482        | 0.19  | 144  | 1.8742          |
+ | 1.8528        | 0.21  | 160  | 1.8616          |
+ | 1.7618        | 0.23  | 176  | 1.8486          |
+ | 1.7428        | 0.25  | 192  | 1.8356          |
+ | 1.6991        | 0.27  | 208  | 1.8208          |
+ | 1.7041        | 0.29  | 224  | 1.8058          |
+ | 1.7153        | 0.31  | 240  | 1.7919          |
+ | 1.7312        | 0.33  | 256  | 1.7777          |
+ | 1.6665        | 0.35  | 272  | 1.7658          |
+ | 1.6596        | 0.37  | 288  | 1.7567          |
+ | 1.7081        | 0.39  | 304  | 1.7492          |
+ | 1.6424        | 0.41  | 320  | 1.7407          |
+ | 1.6447        | 0.43  | 336  | 1.7341          |
+ | 1.7134        | 0.45  | 352  | 1.7285          |
+ | 1.6241        | 0.47  | 368  | 1.7230          |
+ | 1.706         | 0.5   | 384  | 1.7193          |
+ | 1.7142        | 0.52  | 400  | 1.7156          |
+ | 1.6345        | 0.54  | 416  | 1.7122          |
+ | 1.6012        | 0.56  | 432  | 1.7097          |
+ | 1.6742        | 0.58  | 448  | 1.7080          |
+ | 1.6555        | 0.6   | 464  | 1.7073          |
+ | 1.6765        | 0.62  | 480  | 1.7047          |
+ | 1.5234        | 0.64  | 496  | 1.7034          |
+ | 1.5538        | 0.66  | 512  | 1.7025          |
+ | 1.669         | 0.68  | 528  | 1.7015          |
+ | 1.5509        | 0.7   | 544  | 1.7007          |
+ | 1.5485        | 0.72  | 560  | 1.7002          |
+ | 1.6374        | 0.74  | 576  | 1.6993          |
+ | 1.6434        | 0.76  | 592  | 1.6986          |
+ | 1.6832        | 0.78  | 608  | 1.6983          |
+ | 1.6734        | 0.8   | 624  | 1.6979          |
+ | 1.6463        | 0.83  | 640  | 1.6979          |
+ | 1.5761        | 0.85  | 656  | 1.6976          |
+ | 1.5689        | 0.87  | 672  | 1.6976          |
+ | 1.6393        | 0.89  | 688  | 1.6975          |
+ | 1.6735        | 0.91  | 704  | 1.6974          |
+ | 1.5709        | 0.93  | 720  | 1.6974          |
+ | 1.7068        | 0.95  | 736  | 1.6971          |
+ | 1.5955        | 0.97  | 752  | 1.6973          |
+ | 1.7114        | 0.99  | 768  | 1.6972          |
+
+
+ ### Framework versions
+
+ - Transformers 4.35.0.dev0
+ - Pytorch 2.1.0+cu121
+ - Datasets 2.5.2
+ - Tokenizers 0.14.0
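
The hyperparameters listed in the README above map almost one-to-one onto `transformers`' `TrainingArguments`. Below is a minimal sketch of an equivalent configuration, not the actual training script (which is not part of this commit); the `output_dir`, the eval cadence, and the reading of the fractional `lr_scheduler_warmup_steps: 0.01` as a warmup ratio are assumptions:

```python
# Sketch reconstructing the README's hyperparameters as TrainingArguments.
# The actual training script is not included in this commit.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="qwen14b-oasst-top1",  # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,    # 2 x 8 = total_train_batch_size 16
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,                # "warmup_steps: 0.01" is fractional,
                                      # so a warmup ratio is assumed here
    seed=42,
    evaluation_strategy="steps",      # the results table evaluates every 16 steps
    eval_steps=16,
)
```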
adapter_config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "auto_mapping": null,
+   "base_model_name_or_path": "Qwen/Qwen-14B",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layers_pattern": null,
+   "layers_to_transform": [
+     30,
+     31,
+     32,
+     33,
+     34,
+     35,
+     36,
+     37,
+     38,
+     39
+   ],
+   "lora_alpha": 16,
+   "lora_dropout": 0.1,
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 8,
+   "revision": null,
+   "target_modules": [
+     "c_attn",
+     "c_proj"
+   ],
+   "task_type": "CAUSAL_LM"
+ }
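
Per this config, the adapter is a LoRA (r=8, lora_alpha=16, dropout 0.1) applied only to the `c_attn` and `c_proj` projections of blocks 30-39, i.e. the last ten layers of Qwen-14B. A minimal loading sketch with `peft`; the repository id is a placeholder, since the commit does not state it:

```python
# Sketch: attach this LoRA adapter to the Qwen-14B base model with peft.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-14B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # Qwen-14B ships custom modeling code
)
# "<this-repo-id>" is a placeholder for this repository's Hub id.
model = PeftModel.from_pretrained(base, "<this-repo-id>")
model.eval()  # loads adapter_config.json + adapter_model.bin
```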
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fe57fb43ef81d710ff587e9fde0aa1806b15855f5a72bf2f91781f6e371feca2
+ size 15873058
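
The weights themselves live in Git LFS, so the diff above shows only the pointer file, which records the payload's SHA-256 and size (~16 MB, as expected for a low-rank adapter rather than full 14B weights). A standard-library sketch for checking a downloaded copy against the pointer's oid; the local path is an assumption:

```python
# Sketch: verify a downloaded adapter_model.bin against the LFS pointer's oid.
import hashlib

EXPECTED_SHA256 = "fe57fb43ef81d710ff587e9fde0aa1806b15855f5a72bf2f91781f6e371feca2"

with open("adapter_model.bin", "rb") as f:  # assumed local download path
    digest = hashlib.sha256(f.read()).hexdigest()

print("ok" if digest == EXPECTED_SHA256 else f"hash mismatch: {digest}")
```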
qwen.tiktoken ADDED
The diff for this file is too large to render. See raw diff
 
special_tokens_map.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "eos_token": "<|endoftext|>",
+   "pad_token": "<|endoftext|>"
+ }
tokenizer_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "added_tokens_decoder": {},
+   "additional_special_tokens": [],
+   "auto_map": {
+     "AutoTokenizer": [
+       "Qwen/Qwen-14B--tokenization_qwen.QWenTokenizer",
+       null
+     ]
+   },
+   "clean_up_tokenization_spaces": true,
+   "model_max_length": 8192,
+   "tokenizer_class": "QWenTokenizer",
+   "tokenizer_file": null
+ }
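
The `auto_map` entry routes `AutoTokenizer` to the custom `tokenization_qwen.QWenTokenizer` hosted in the base `Qwen/Qwen-14B` repo, so loading requires `trust_remote_code=True`; per `special_tokens_map.json` above, `<|endoftext|>` serves as both the EOS and pad token. A minimal sketch, again with a placeholder repo id:

```python
# Sketch: load the tokenizer; auto_map resolves QWenTokenizer from Qwen/Qwen-14B.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("<this-repo-id>", trust_remote_code=True)
print(tok.eos_token, tok.pad_token, tok.model_max_length)  # <|endoftext|> <|endoftext|> 8192
ids = tok("Hello", return_tensors="pt")
```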
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ed85ceb0a1bd3f5902286ef3d1b9d7aca7d02fb43e23a03da780749c7917d535
+ size 4600
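
`training_args.bin` is the `TrainingArguments` object that `Trainer` serializes with `torch.save` alongside its outputs; with `transformers` installed it can be unpickled and inspected, as in this sketch:

```python
# Sketch: inspect the pickled TrainingArguments (transformers must be installed
# so the pickle can resolve; this file holds Python objects, not tensors).
import torch

args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.lr_scheduler_type, args.num_train_epochs)
```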