hawks23 committed
Commit 72de2d9 · verified · 1 Parent(s): 3b17913

hawks23/amadeus_ai_v1

README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [TheBloke/Wizard-Vicuna-7B-Uncensored-GPTQ](https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-GPTQ) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 4.5349
+- Loss: 4.3259
 
 ## Model description
 
@@ -44,33 +44,23 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 2
-- num_epochs: 20
+- num_epochs: 10
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 3.4924        | 1.0   | 75   | 3.2433          |
-| 3.1246        | 2.0   | 150  | 3.2148          |
-| 2.8655        | 3.0   | 225  | 3.2529          |
-| 2.6201        | 4.0   | 300  | 3.3286          |
-| 2.4044        | 5.0   | 375  | 3.4331          |
-| 2.2106        | 6.0   | 450  | 3.5828          |
-| 2.0348        | 7.0   | 525  | 3.7128          |
-| 1.9341        | 8.0   | 600  | 3.7719          |
-| 1.7997        | 9.0   | 675  | 3.8238          |
-| 1.7071        | 10.0  | 750  | 3.9716          |
-| 1.6537        | 11.0  | 825  | 4.0883          |
-| 1.5708        | 12.0  | 900  | 4.1066          |
-| 1.5302        | 13.0  | 975  | 4.1778          |
-| 1.4895        | 14.0  | 1050 | 4.2198          |
-| 1.4586        | 15.0  | 1125 | 4.2890          |
-| 1.4211        | 16.0  | 1200 | 4.3352          |
-| 1.3721        | 17.0  | 1275 | 4.4004          |
-| 1.3637        | 18.0  | 1350 | 4.4707          |
-| 1.3597        | 19.0  | 1425 | 4.5228          |
-| 1.3362        | 20.0  | 1500 | 4.5349          |
+| 3.2904        | 1.0   | 75   | 3.2025          |
+| 3.0004        | 2.0   | 150  | 3.1939          |
+| 2.6935        | 3.0   | 225  | 3.2721          |
+| 2.4059        | 4.0   | 300  | 3.3634          |
+| 2.1472        | 5.0   | 375  | 3.5849          |
+| 1.9307        | 6.0   | 450  | 3.7793          |
+| 1.7398        | 7.0   | 525  | 3.8648          |
+| 1.625         | 8.0   | 600  | 4.0542          |
+| 1.5137        | 9.0   | 675  | 4.2189          |
+| 1.4398        | 10.0  | 750  | 4.3259          |
 
 
 ### Framework versions
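The hunk above halves the run length (num_epochs 20 → 10) and updates the reported eval loss accordingly (4.5349 → 4.3259). As a minimal sketch, the hyperparameters visible in this diff can be collected into a plain config dict; the learning rate and batch size are not shown in the hunk, so they are deliberately left out rather than guessed:

```python
# Sketch of the training configuration after this commit, using only
# values visible in the diff above. Key names follow the common
# Hugging Face TrainingArguments naming, which is an assumption here.
training_config = {
    "adam_beta1": 0.9,            # optimizer: Adam with betas=(0.9, 0.999)
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-08,        # epsilon=1e-08
    "lr_scheduler_type": "linear",
    "warmup_steps": 2,            # lr_scheduler_warmup_steps: 2
    "num_train_epochs": 10,       # changed from 20 in this commit
    "fp16": True,                 # mixed_precision_training: Native AMP
}

# The results table shows 75 optimizer steps per epoch, so the total
# step count halves along with the epoch count.
total_steps = 75 * training_config["num_train_epochs"]
print(total_steps)  # 750, matching the last row of the new table
```

The new table also shows the same pattern as the old one: training loss falls monotonically while validation loss rises after epoch 2, so the shorter run mainly trims the most overfit epochs.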
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a715a8f0acb0d41b01eefa62a18ea8bcc5ac28348970bf167a016af6e80c099d
+oid sha256:941cefb2ea268a8814473dba827dd126f5d1b9608de30ba3727747df79b4d350
 size 8397056
runs/Apr15_12-52-00_569488d6630f/events.out.tfevents.1713185520.569488d6630f.529.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dd639c1ba3b5134fa56e1ff26ca64ff4c7365b1de4ce7cdc93e91065b2906bcb
+size 5261
runs/Apr15_12-52-00_569488d6630f/events.out.tfevents.1713185942.569488d6630f.529.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2a88df635a5e0234dfb6ecc1a4dd8d2fdc6cb2a43a060210bb4231a2bd443008
+size 10426
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:59766e3f592bcd473a30623095e2120af5e8e92a20428519a75dd7d832dd07d9
+oid sha256:40a93e9c2a6027890c9a819dafc4aa95cdcd2dbcf48e1ce376de603bc81ba2e4
 size 4856
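All of the binary files in this commit are stored as Git LFS pointer files, which is why each hunk shows only a `version` / `oid` / `size` triple rather than the file contents. A minimal sketch of parsing one of these three-line pointers into its fields, assuming the key-space-value format shown in the hunks above:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file (version/oid/size lines) into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents copied from the adapter_model.safetensors hunk above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:941cefb2ea268a8814473dba827dd126f5d1b9608de30ba3727747df79b4d350
size 8397056
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])        # the full sha256 object id of the real blob
print(int(info["size"]))  # 8397056 (byte size of the real blob)
```

Note that the adapter and `training_args.bin` sizes are unchanged across the commit; only their `oid` hashes differ, i.e. same shape of artifact, new contents.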