JuanjoLopez19 committed (verified)
Commit c89cd67 · Parent(s): 505cd38

Training with the 90/10 Spanish dataset, 50 epochs, batch size 3, reduce_lr_on_plateau
README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.1233
+- Loss: 3.7372
 
 ## Model description
 
@@ -36,29 +36,74 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0001
-- train_batch_size: 2
-- eval_batch_size: 2
+- train_batch_size: 3
+- eval_batch_size: 3
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: constant
-- num_epochs: 5
+- lr_scheduler_type: reduce_lr_on_plateau
+- num_epochs: 50
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:------:|:----:|:---------------:|
-| 1.4939 | 0.9995 | 1847 | 0.9488 |
-| 0.7585 | 1.9989 | 3694 | 0.8857 |
-| 0.4374 | 2.9984 | 5541 | 0.9555 |
-| 0.2946 | 3.9978 | 7388 | 1.0790 |
-| 0.2287 | 4.9973 | 9235 | 1.1233 |
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-------:|:-----:|:---------------:|
+| 1.4991 | 0.9992 | 1231 | 1.7493 |
+| 1.5973 | 1.9984 | 2462 | 1.5530 |
+| 0.7761 | 2.9976 | 3693 | 1.6627 |
+| 0.4026 | 3.9968 | 4924 | 1.9381 |
+| 0.3181 | 4.9959 | 6155 | 2.1410 |
+| 0.2572 | 5.9951 | 7386 | 2.3047 |
+| 0.2783 | 6.9943 | 8617 | 2.4170 |
+| 0.1911 | 7.9935 | 9848 | 2.5913 |
+| 0.2101 | 8.9927 | 11079 | 2.5669 |
+| 0.1934 | 9.9919 | 12310 | 2.5707 |
+| 0.1641 | 10.9911 | 13541 | 2.5205 |
+| 0.1534 | 11.9903 | 14772 | 2.6706 |
+| 0.1887 | 12.9894 | 16003 | 2.7875 |
+| 0.1146 | 13.9886 | 17234 | 2.9092 |
+| 0.0891 | 14.9878 | 18465 | 3.2176 |
+| 0.0845 | 15.9870 | 19696 | 3.3288 |
+| 0.0901 | 16.9862 | 20927 | 3.4202 |
+| 0.0805 | 17.9854 | 22158 | 3.4854 |
+| 0.0768 | 18.9846 | 23389 | 3.4997 |
+| 0.0788 | 19.9838 | 24620 | 3.5510 |
+| 0.0829 | 20.9830 | 25851 | 3.5782 |
+| 0.0729 | 21.9821 | 27082 | 3.5944 |
+| 0.0747 | 22.9813 | 28313 | 3.6143 |
+| 0.0767 | 23.9805 | 29544 | 3.6171 |
+| 0.0655 | 24.9797 | 30775 | 3.6633 |
+| 0.0695 | 25.9789 | 32006 | 3.6780 |
+| 0.0632 | 26.9781 | 33237 | 3.6896 |
+| 0.0628 | 27.9773 | 34468 | 3.6971 |
+| 0.0626 | 28.9765 | 35699 | 3.7027 |
+| 0.0601 | 29.9756 | 36930 | 3.7070 |
+| 0.0576 | 30.9748 | 38161 | 3.7114 |
+| 0.1134 | 31.9740 | 39392 | 3.7157 |
+| 0.1046 | 32.9732 | 40623 | 3.7186 |
+| 0.1019 | 33.9724 | 41854 | 3.7199 |
+| 0.0935 | 34.9716 | 43085 | 3.7234 |
+| 0.0911 | 35.9708 | 44316 | 3.7252 |
+| 0.0899 | 36.9700 | 45547 | 3.7271 |
+| 0.0919 | 37.9692 | 46778 | 3.7285 |
+| 0.0823 | 38.9683 | 48009 | 3.7299 |
+| 0.0871 | 39.9675 | 49240 | 3.7312 |
+| 0.0824 | 40.9667 | 50471 | 3.7322 |
+| 0.0812 | 41.9659 | 51702 | 3.7332 |
+| 0.0813 | 42.9651 | 52933 | 3.7342 |
+| 0.0802 | 43.9643 | 54164 | 3.7350 |
+| 0.0809 | 44.9635 | 55395 | 3.7359 |
+| 0.0782 | 45.9627 | 56626 | 3.7368 |
+| 0.0765 | 46.9619 | 57857 | 3.7369 |
+| 0.0787 | 47.9610 | 59088 | 3.7370 |
+| 0.076 | 48.9602 | 60319 | 3.7370 |
+| 0.0756 | 49.9594 | 61550 | 3.7372 |
 
 
 ### Framework versions
 
 - PEFT 0.10.0
-- Transformers 4.40.1
+- Transformers 4.40.2
 - Pytorch 2.3.0+cu121
-- Datasets 2.19.0
+- Datasets 2.19.1
 - Tokenizers 0.19.1
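For reference, the hyperparameters listed in the updated card map onto the Hugging Face `Trainer` API roughly as follows. This is a minimal sketch, not the author's actual training script: `output_dir` is a placeholder, and the Adam betas/epsilon shown above are the `TrainingArguments` defaults.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters from the updated model card.
# output_dir is a placeholder, not taken from this repo.
training_args = TrainingArguments(
    output_dir="mistral-7b-instruct-es-lora",  # hypothetical path
    learning_rate=1e-4,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=3,
    seed=42,
    num_train_epochs=50,
    lr_scheduler_type="reduce_lr_on_plateau",  # lowers the LR when eval loss stalls
    fp16=True,                                 # "Native AMP" mixed precision
    evaluation_strategy="epoch",               # the table logs one eval per epoch
)
```

Note that `reduce_lr_on_plateau` only has something to react to if evaluation actually runs, hence the per-epoch evaluation strategy, which also matches the one-row-per-epoch results table.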
adapter_config.json CHANGED
@@ -11,7 +11,7 @@
 "layers_to_transform": null,
 "loftq_config": {},
 "lora_alpha": 16,
-"lora_dropout": 0.1,
+"lora_dropout": 0.0001,
 "megatron_config": null,
 "megatron_core": "megatron.core",
 "modules_to_save": [
@@ -23,12 +23,12 @@
 "rank_pattern": {},
 "revision": null,
 "target_modules": [
-"gate_proj",
-"q_proj",
 "v_proj",
-"up_proj",
 "o_proj",
+"q_proj",
+"up_proj",
 "down_proj",
+"gate_proj",
 "k_proj"
 ],
 "task_type": "CAUSAL_LM",
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9f565c29b31be498510feaf338d5574271b7b15fbdb7604970428180dcf4dda2
+oid sha256:014322293212ffb6e0393d3a94fec34f4e01dc838fdb5213de770b63ffb95aa3
 size 1719726424
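The adapter weights are stored via Git LFS, so only the content hash changes here; the file size is identical because the adapter architecture is the same shape. A minimal sketch of loading the updated adapter with PEFT, with the repo id left as a placeholder:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo_id = "JuanjoLopez19/<this-repo>"  # placeholder for the actual repo id

# Loads the base Mistral model and applies the LoRA adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
```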
special_tokens_map.json CHANGED
@@ -13,7 +13,7 @@
 "rstrip": false,
 "single_word": false
 },
-"pad_token": "[PAD]",
+"pad_token": "</s>",
 "unk_token": {
 "content": "<unk>",
 "lstrip": false,
tokenizer.json CHANGED
The diff for this file is too large to render. See raw diff
 
tokenizer.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
-size 493443
+oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
+size 499723
tokenizer_config.json CHANGED
@@ -27,16 +27,15 @@
 "special": true
 }
 },
-"additional_special_tokens": [],
 "bos_token": "<s>",
-"chat_template": "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token}}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}",
+"chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% set system_message = false %}{% endif %}{% for message in loop_messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if loop.index0 == 0 and system_message != false %}{% set content = '<<SYS>>\\n' + system_message + '\\n<</SYS>>\\n\\n' + message['content'] %}{% else %}{% set content = message['content'] %}{% endif %}{% if message['role'] == 'user' %}{{ bos_token + '[INST] ' + content.strip() + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ ' ' + content.strip() + ' ' + eos_token }}{% endif %}{% endfor %}",
 "clean_up_tokenization_spaces": false,
 "eos_token": "</s>",
-"legacy": true,
+"legacy": false,
 "model_max_length": 1000000000000000019884624838656,
-"pad_token": "[PAD]",
+"pad_token": "</s>",
+"padding_side": "right",
 "sp_model_kwargs": {},
-"spaces_between_special_tokens": false,
 "tokenizer_class": "LlamaTokenizer",
 "unk_token": "<unk>",
 "use_default_system_prompt": false
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:19bb28e5672c55e66a32db22fd44684953525f72d6f8c939f855b9561732a50a
+oid sha256:b2c04dc2d8bfdfe87a7477d14f6eb9822955430937ff5730937bdba6f5c1fc36
 size 5048