Commit cafac3d committed by lvcalucioli
1 Parent(s): 4a0b940

llamantino7b_question_answering_finetuining

Files changed (4)
  1. README.md +12 -12
  2. adapter_model.safetensors +1 -1
  3. tokenizer.json +6 -1
  4. training_args.bin +1 -1
README.md CHANGED
@@ -18,7 +18,7 @@ should probably proofread and complete it, then remove this comment. -->
  
  This model is a fine-tuned version of [swap-uniba/LLaMAntino-2-7b-hf-ITA](https://huggingface.co/swap-uniba/LLaMAntino-2-7b-hf-ITA) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.9046
+ - Loss: 0.2277
  
  ## Model description
  
@@ -42,7 +42,7 @@ The following hyperparameters were used during training:
  - eval_batch_size: 4
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: constant
+ - lr_scheduler_type: linear
  - lr_scheduler_warmup_ratio: 0.03
  - num_epochs: 10
  
@@ -50,16 +50,16 @@ The following hyperparameters were used during training:
  
  | Training Loss | Epoch | Step | Validation Loss |
  |:-------------:|:-----:|:----:|:---------------:|
- | 1.6731 | 1.0 | 90 | 1.5039 |
- | 1.2298 | 2.0 | 180 | 1.4252 |
- | 0.8714 | 3.0 | 270 | 1.4593 |
- | 0.5776 | 4.0 | 360 | 1.4850 |
- | 0.3747 | 5.0 | 450 | 1.5199 |
- | 0.2584 | 6.0 | 540 | 1.6924 |
- | 0.1781 | 7.0 | 630 | 1.8130 |
- | 0.1373 | 8.0 | 720 | 1.6641 |
- | 0.0993 | 9.0 | 810 | 1.7835 |
- | 0.0728 | 10.0 | 900 | 1.9046 |
+ | 1.6498 | 1.0 | 90 | 0.0351 |
+ | 1.1895 | 2.0 | 180 | 0.0237 |
+ | 0.8519 | 3.0 | 270 | 0.0219 |
+ | 0.5762 | 4.0 | 360 | 0.0155 |
+ | 0.3852 | 5.0 | 450 | 0.0987 |
+ | 0.2445 | 6.0 | 540 | 0.1373 |
+ | 0.1554 | 7.0 | 630 | 0.2655 |
+ | 0.0989 | 8.0 | 720 | 0.0678 |
+ | 0.0607 | 9.0 | 810 | 0.1935 |
+ | 0.0397 | 10.0 | 900 | 0.2277 |
  
  
  ### Framework versions
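Note: the hyperparameter list in the updated card maps directly onto `transformers.TrainingArguments`. Below is a minimal sketch of that configuration, assuming a standard `Trainer` setup; `output_dir` is an illustrative placeholder, and only the options visible in this hunk are taken from the diff.

```python
from transformers import TrainingArguments

# Sketch of the configuration implied by the updated README hunk above.
# output_dir is a placeholder; every other value appears in the hunk.
training_args = TrainingArguments(
    output_dir="llamantino7b_question_answering_finetuining",  # placeholder
    per_device_eval_batch_size=4,   # eval_batch_size: 4
    seed=42,                        # seed: 42
    adam_beta1=0.9,                 # optimizer: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,              # and epsilon=1e-08
    lr_scheduler_type="linear",     # changed from "constant" in this commit
    warmup_ratio=0.03,              # lr_scheduler_warmup_ratio: 0.03
    num_train_epochs=10,            # num_epochs: 10
    evaluation_strategy="epoch",    # matches the per-epoch eval table
)
```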
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9c2bfc2a55106a9664ae2b78b967063ba76640ded7bd529fbf1d1cfc15941e73
+ oid sha256:5cca94709411e6f43a71c53735a3796e0415d3643f0c62de07cddeba3c06cbea
  size 134235048
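The 134 MB `adapter_model.safetensors` holds PEFT adapter weights rather than full model weights, which is why only the LFS object id changes while the size stays constant. A minimal loading sketch under that assumption follows; the adapter repo id is inferred from the commit author and message, not stated in the diff.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "swap-uniba/LLaMAntino-2-7b-hf-ITA"
# Assumed repo id, inferred from the commit author and message.
adapter_id = "lvcalucioli/llamantino7b_question_answering_finetuining"

tokenizer = AutoTokenizer.from_pretrained(adapter_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Apply the adapter weights committed in adapter_model.safetensors
# on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
```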
tokenizer.json CHANGED
@@ -1,6 +1,11 @@
  {
    "version": "1.0",
-   "truncation": null,
+   "truncation": {
+     "direction": "Right",
+     "max_length": 1024,
+     "strategy": "LongestFirst",
+     "stride": 0
+   },
    "padding": null,
    "added_tokens": [
      {
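This tokenizer.json change turns truncation on: right-side truncation at 1024 tokens, longest-first strategy, no stride. A sketch of how such a block is typically produced with the `tokenizers` backend; the file path is illustrative.

```python
from tokenizers import Tokenizer

tok = Tokenizer.from_file("tokenizer.json")

# Writes the "truncation" block added in this commit:
# direction=Right, max_length=1024, strategy=LongestFirst, stride=0.
tok.enable_truncation(max_length=1024, stride=0,
                      strategy="longest_first", direction="right")

tok.save("tokenizer.json")
```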
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:fb77ff0653c5f5c2324aba636415478eb1b421b961ae164bd3772bdb21f0dc7c
+ oid sha256:de3f6ccc8a4fe4d9d7af9f98b6ced1b4b4aa7db3a8e15dde4f62a932939aaecf
  size 4411
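`training_args.bin` is the pickled `TrainingArguments` object that the `Trainer` saves alongside the adapter, so its LFS pointer changes whenever the training configuration does. A small sketch for inspecting it, assuming the file has been downloaded locally:

```python
import torch

# training_args.bin is a pickled TrainingArguments object, so it needs
# weights_only=False to deserialize on recent torch versions.
args = torch.load("training_args.bin", weights_only=False)
print(args.lr_scheduler_type, args.warmup_ratio, args.num_train_epochs)
```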