sezing committed on
Commit 64c2186 · verified · 1 Parent(s): 61ba339

mistralai/mistral-instruct-generation

README.md CHANGED
@@ -20,7 +20,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.4969
+- Loss: 1.4958
 
 ## Model description
 
@@ -53,17 +53,17 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch  | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 1.6666        | 0.0260 | 20   | 1.5570          |
-| 1.5509        | 0.0521 | 40   | 1.5239          |
-| 1.5707        | 0.0781 | 60   | 1.5093          |
-| 1.5722        | 0.1042 | 80   | 1.5024          |
-| 1.5602        | 0.1302 | 100  | 1.4969          |
+| 1.6146        | 0.0260 | 20   | 1.5552          |
+| 1.5344        | 0.0521 | 40   | 1.5207          |
+| 1.5492        | 0.0781 | 60   | 1.5117          |
+| 1.5122        | 0.1042 | 80   | 1.5025          |
+| 1.5222        | 0.1302 | 100  | 1.4958          |
 
 
 ### Framework versions
 
-- PEFT 0.10.0
-- Transformers 4.40.2
+- PEFT 0.11.1
+- Transformers 4.41.1
 - Pytorch 2.3.0+cu121
 - Datasets 2.19.1
 - Tokenizers 0.19.1
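For reference, the README change boils down to a slightly lower final validation loss. A quick sanity check of the margin, using only the step-100 values reported in the diff above:

```python
# Final validation losses at step 100, before and after this commit,
# as reported in the two revisions of the README's training table.
old_loss = 1.4969
new_loss = 1.4958

# The retrained run improves the final eval loss by a small margin.
delta = round(old_loss - new_loss, 4)
print(delta)
```

The per-step training losses also drop across the board (e.g. 1.6666 → 1.6146 at step 20), consistent with the rerun rather than a metric-reporting fix.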
adapter_config.json CHANGED
@@ -20,8 +20,8 @@
     "rank_pattern": {},
     "revision": null,
     "target_modules": [
-        "q_proj",
-        "v_proj"
+        "v_proj",
+        "q_proj"
     ],
     "task_type": "CAUSAL_LM",
     "use_dora": false,
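Note that the `adapter_config.json` change only reorders `target_modules`. PEFT treats this list as an unordered collection of module-name patterns, so the reordering should be behaviorally a no-op; the hash change in `adapter_model.safetensors` below comes from retraining, not from this config edit. A minimal check that the two revisions target the same modules:

```python
import json

# target_modules arrays from the two revisions of adapter_config.json above.
before = json.loads('["q_proj", "v_proj"]')
after = json.loads('["v_proj", "q_proj"]')

# Order differs, but the set of targeted attention projections is unchanged.
print(sorted(before) == sorted(after))
```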
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5da2600a0dcc2d5a7b2838b49cb83a0c02dadda5a3e7e652007661ae36b8ed1c
+oid sha256:5c135165f24e4872595fd851b199995519eb32fd2ff9a80d599dadfb0d618763
 size 27280152
runs/May23_10-21-36_adfafe14d5e3/events.out.tfevents.1716459740.adfafe14d5e3.927.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:01eb2c6517cdc18143d7ea2a33090651776339b4297f91cdce8a4c8211738c30
+size 9088
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:413bc6e1d67a088848afdf175ac749c9f4d0ff09768e00b393021aabec09c5d4
-size 5048
+oid sha256:3b5c2f842e1e4cfebb26b3bf2db1186a93b0408b6f2d74b015075f3642dda696
+size 5112