imdatta0 committed on
Commit
2280f74
1 Parent(s): d16db60

End of training

Files changed (2)
  1. README.md +51 -51
  2. adapter_model.safetensors +1 -1
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-base_model: unsloth/llama-3-8b-bnb-4bit
+base_model: unsloth/llama-3-8b
 library_name: peft
 license: llama3
 tags:
@@ -15,9 +15,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # Meta-Llama-3-8B_pct_reverse
 
-This model is a fine-tuned version of [unsloth/llama-3-8b-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-bnb-4bit) on an unknown dataset.
+This model is a fine-tuned version of [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 2.2068
+- Loss: 2.1917
 
 ## Model description
 
@@ -51,54 +51,54 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 2.2792 | 0.0206 | 8 | 2.3158 |
-| 2.3218 | 0.0412 | 16 | 2.3073 |
-| 2.2517 | 0.0618 | 24 | 2.2942 |
-| 2.3279 | 0.0824 | 32 | 2.2961 |
-| 2.3201 | 0.1030 | 40 | 2.2893 |
-| 2.2819 | 0.1236 | 48 | 2.2830 |
-| 2.2938 | 0.1442 | 56 | 2.2897 |
-| 2.3265 | 0.1648 | 64 | 2.3251 |
-| 2.2908 | 0.1854 | 72 | 2.3167 |
-| 2.3106 | 0.2060 | 80 | 2.3209 |
-| 2.3315 | 0.2266 | 88 | 2.2990 |
-| 2.3742 | 0.2472 | 96 | 2.2919 |
-| 2.3488 | 0.2678 | 104 | 2.2880 |
-| 2.314 | 0.2884 | 112 | 2.3035 |
-| 2.3673 | 0.3090 | 120 | 2.2985 |
-| 2.3705 | 0.3296 | 128 | 2.3009 |
-| 2.3157 | 0.3502 | 136 | 2.2976 |
-| 2.3184 | 0.3708 | 144 | 2.3016 |
-| 2.337 | 0.3914 | 152 | 2.3053 |
-| 2.3198 | 0.4120 | 160 | 2.2931 |
-| 2.3736 | 0.4326 | 168 | 2.3003 |
-| 2.3143 | 0.4532 | 176 | 2.2941 |
-| 2.3203 | 0.4738 | 184 | 2.2856 |
-| 2.3206 | 0.4944 | 192 | 2.2808 |
-| 2.3114 | 0.5150 | 200 | 2.2801 |
-| 2.3088 | 0.5356 | 208 | 2.2758 |
-| 2.3221 | 0.5562 | 216 | 2.2622 |
-| 2.3111 | 0.5768 | 224 | 2.2712 |
-| 2.2935 | 0.5974 | 232 | 2.2625 |
-| 2.2591 | 0.6180 | 240 | 2.2572 |
-| 2.292 | 0.6386 | 248 | 2.2506 |
-| 2.2829 | 0.6592 | 256 | 2.2498 |
-| 2.2384 | 0.6798 | 264 | 2.2441 |
-| 2.2969 | 0.7004 | 272 | 2.2413 |
-| 2.2779 | 0.7210 | 280 | 2.2349 |
-| 2.2786 | 0.7416 | 288 | 2.2305 |
-| 2.2706 | 0.7621 | 296 | 2.2290 |
-| 2.3034 | 0.7827 | 304 | 2.2217 |
-| 2.2433 | 0.8033 | 312 | 2.2191 |
-| 2.2205 | 0.8239 | 320 | 2.2187 |
-| 2.2574 | 0.8445 | 328 | 2.2120 |
-| 2.263 | 0.8651 | 336 | 2.2134 |
-| 2.2747 | 0.8857 | 344 | 2.2094 |
-| 2.241 | 0.9063 | 352 | 2.2079 |
-| 2.2071 | 0.9269 | 360 | 2.2091 |
-| 2.2329 | 0.9475 | 368 | 2.2075 |
-| 2.2705 | 0.9681 | 376 | 2.2069 |
-| 2.265 | 0.9887 | 384 | 2.2068 |
+| 2.2547 | 0.0206 | 8 | 2.2652 |
+| 2.2857 | 0.0412 | 16 | 2.2722 |
+| 2.217 | 0.0618 | 24 | 2.2663 |
+| 2.2942 | 0.0824 | 32 | 2.2549 |
+| 2.281 | 0.1030 | 40 | 2.2508 |
+| 2.2541 | 0.1236 | 48 | 2.2708 |
+| 2.2672 | 0.1442 | 56 | 2.2648 |
+| 2.2887 | 0.1648 | 64 | 2.2698 |
+| 2.2464 | 0.1854 | 72 | 2.2654 |
+| 2.2805 | 0.2060 | 80 | 2.2734 |
+| 2.3111 | 0.2266 | 88 | 2.2742 |
+| 2.361 | 0.2472 | 96 | 2.2808 |
+| 2.3418 | 0.2678 | 104 | 2.2802 |
+| 2.3064 | 0.2884 | 112 | 2.2952 |
+| 2.3509 | 0.3090 | 120 | 2.2841 |
+| 2.3507 | 0.3296 | 128 | 2.2786 |
+| 2.3 | 0.3502 | 136 | 2.2801 |
+| 2.2953 | 0.3708 | 144 | 2.2772 |
+| 2.3224 | 0.3914 | 152 | 2.2823 |
+| 2.3055 | 0.4120 | 160 | 2.2739 |
+| 2.3519 | 0.4326 | 168 | 2.2795 |
+| 2.2988 | 0.4532 | 176 | 2.2694 |
+| 2.3046 | 0.4738 | 184 | 2.2648 |
+| 2.296 | 0.4944 | 192 | 2.2661 |
+| 2.2908 | 0.5150 | 200 | 2.2650 |
+| 2.2923 | 0.5356 | 208 | 2.2633 |
+| 2.3062 | 0.5562 | 216 | 2.2469 |
+| 2.289 | 0.5768 | 224 | 2.2516 |
+| 2.2736 | 0.5974 | 232 | 2.2452 |
+| 2.2414 | 0.6180 | 240 | 2.2406 |
+| 2.2667 | 0.6386 | 248 | 2.2355 |
+| 2.2595 | 0.6592 | 256 | 2.2354 |
+| 2.2175 | 0.6798 | 264 | 2.2276 |
+| 2.277 | 0.7004 | 272 | 2.2221 |
+| 2.2576 | 0.7210 | 280 | 2.2161 |
+| 2.2604 | 0.7416 | 288 | 2.2123 |
+| 2.2526 | 0.7621 | 296 | 2.2118 |
+| 2.2838 | 0.7827 | 304 | 2.2033 |
+| 2.2214 | 0.8033 | 312 | 2.2009 |
+| 2.2034 | 0.8239 | 320 | 2.2015 |
+| 2.235 | 0.8445 | 328 | 2.1954 |
+| 2.2444 | 0.8651 | 336 | 2.1971 |
+| 2.2593 | 0.8857 | 344 | 2.1939 |
+| 2.2222 | 0.9063 | 352 | 2.1929 |
+| 2.1894 | 0.9269 | 360 | 2.1944 |
+| 2.2138 | 0.9475 | 368 | 2.1927 |
+| 2.2543 | 0.9681 | 376 | 2.1918 |
+| 2.2462 | 0.9887 | 384 | 2.1917 |
 
 
 ### Framework versions
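The reported evaluation loss (2.1917) is simply the validation loss at the last logged step. A quick sanity check over the final rows of the updated eval table (values copied from the README diff above; the variable names are our own illustration):

```python
# Last few (step, validation_loss) rows from the updated eval table.
rows = [
    (360, 2.1944),
    (368, 2.1927),
    (376, 2.1918),
    (384, 2.1917),
]

final_step, final_loss = rows[-1]
best_loss = min(loss for _, loss in rows)
print(final_step, final_loss, best_loss)  # 384 2.1917 2.1917
```

Here the final step is also the minimum over these rows, matching the headline "Loss: 2.1917" in the card.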
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9d4a5f2216c77ccc6f045daa5866cee2fdbfbd632c6a228c6e8d48dfebe7ce32
+oid sha256:e2394bf1e035c8605a735a737ed4fd9497d2b9aa6fc59f306d911639ce5f3dc0
 size 83945296
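The adapter weights themselves are stored via Git LFS, so the diff above only changes the pointer file's `oid` (the new content hash) while `size` stays the same. A minimal sketch of parsing such a pointer, assuming the three-field layout shown (`version`, `oid sha256:<hex>`, `size`); the helper name is our own:

```python
# Minimal parser for a git-lfs pointer file, like the one stored
# for adapter_model.safetensors above. Assumes the simple
# "key value" line layout shown in the diff.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "oid_algorithm": algo,
        "oid": digest,
        "size": int(fields["size"]),
    }

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:e2394bf1e035c8605a735a737ed4fd9497d2b9aa6fc59f306d911639ce5f3dc0
size 83945296
"""

info = parse_lfs_pointer(pointer)
print(info["oid_algorithm"], info["size"])  # sha256 83945296
```

Because both pointers report `size 83945296`, the commit replaced the adapter's contents without changing its byte size, which is expected when only the trained weight values change.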