Commit 94a97ef by lvcalucioli (1 parent: 6468723)

llamantino7b_question_answering_finetuining
README.md CHANGED
@@ -18,7 +18,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [swap-uniba/LLaMAntino-2-7b-hf-ITA](https://huggingface.co/swap-uniba/LLaMAntino-2-7b-hf-ITA) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.4122
+- Loss: 1.9046
 
 ## Model description
 
@@ -44,15 +44,22 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: constant
 - lr_scheduler_warmup_ratio: 0.03
-- num_epochs: 3
+- num_epochs: 10
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 1.6512        | 1.0   | 90   | 1.4570          |
-| 1.1883        | 2.0   | 180  | 1.3863          |
-| 0.8381        | 3.0   | 270  | 1.4122          |
+| 1.6731        | 1.0   | 90   | 1.5039          |
+| 1.2298        | 2.0   | 180  | 1.4252          |
+| 0.8714        | 3.0   | 270  | 1.4593          |
+| 0.5776        | 4.0   | 360  | 1.4850          |
+| 0.3747        | 5.0   | 450  | 1.5199          |
+| 0.2584        | 6.0   | 540  | 1.6924          |
+| 0.1781        | 7.0   | 630  | 1.8130          |
+| 0.1373        | 8.0   | 720  | 1.6641          |
+| 0.0993        | 9.0   | 810  | 1.7835          |
+| 0.0728        | 10.0  | 900  | 1.9046          |
 
 
 ### Framework versions
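The README diff raises num_epochs from 3 to 10. Note that in the new results table the validation loss bottoms out at epoch 2 (1.4252) and then climbs to 1.9046 while the training loss keeps falling, a typical overfitting pattern. As a hedged sketch, the listed hyperparameters would map onto a Hugging Face `TrainingArguments` roughly as follows; the `output_dir`, evaluation strategy, and anything not listed in the README are assumptions, not values taken from this commit:

```python
from transformers import TrainingArguments

# Sketch only: fields mirror the README hyperparameters; others are assumed.
training_args = TrainingArguments(
    output_dir="llamantino7b_question_answering_finetuning",  # assumed path
    num_train_epochs=10,            # raised from 3 in this commit
    lr_scheduler_type="constant",   # per README
    warmup_ratio=0.03,              # lr_scheduler_warmup_ratio
    adam_beta1=0.9,                 # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    evaluation_strategy="epoch",    # assumption: one eval per epoch, matching the table
    logging_strategy="epoch",
)
```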
adapter_config.json CHANGED
@@ -19,8 +19,8 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "v_proj",
-    "q_proj"
+    "q_proj",
+    "v_proj"
   ],
   "task_type": "CAUSAL_LM",
   "use_rslora": false
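This diff only reorders the `target_modules` list; the same set of attention projections (`q_proj`, `v_proj`) is adapted either way. As a hedged sketch, the PEFT `LoraConfig` that serializes to this fragment would look roughly like the following; `r`, `lora_alpha`, and `lora_dropout` do not appear in the diff and are placeholders:

```python
from peft import LoraConfig

# Sketch only: target_modules, task_type, use_rslora come from the diff;
# r / lora_alpha / lora_dropout are placeholder values, not from this commit.
lora_config = LoraConfig(
    target_modules=["q_proj", "v_proj"],  # reordered in this commit; set is unchanged
    task_type="CAUSAL_LM",
    use_rslora=False,
    r=8,                # placeholder rank
    lora_alpha=16,      # placeholder
    lora_dropout=0.05,  # placeholder
)
```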
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:993954f9ce4becbddd037c39dc491eec4d1a6ee4902b806353dbb77551c460ca
+oid sha256:9c2bfc2a55106a9664ae2b78b967063ba76640ded7bd529fbf1d1cfc15941e73
 size 134235048
tokenizer.json CHANGED
@@ -1,6 +1,11 @@
 {
   "version": "1.0",
-  "truncation": null,
+  "truncation": {
+    "direction": "Right",
+    "max_length": 1024,
+    "strategy": "LongestFirst",
+    "stride": 0
+  },
   "padding": null,
   "added_tokens": [
     {
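The `truncation` block added here is what the `tokenizers` library serializes after truncation is enabled on a fast tokenizer. A hedged sketch of the call that would produce exactly this fragment (the file path is an assumption):

```python
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_file("tokenizer.json")  # assumed local path
tokenizer.enable_truncation(
    max_length=1024,           # serialized as "max_length": 1024
    direction="right",         # serialized as "direction": "Right"
    strategy="longest_first",  # serialized as "strategy": "LongestFirst"
    stride=0,
)
tokenizer.save("tokenizer.json")
```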
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:79dcf18fbeb323b5d98ac12737e836d00054fe2efd3d36492ac88a8e9c0bef19
+oid sha256:fb77ff0653c5f5c2324aba636415478eb1b421b961ae164bd3772bdb21f0dc7c
 size 4411