General Use Sampling:
Mistral-Nemo-12B is very sensitive to the temperature sampler; start with values near 0.3 or you will get strange results. MistralAI mentions this in the Transformers section of their model card.
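For reference, here is a minimal sketch of low-temperature sampling with the transformers text-generation pipeline; the mistralai/Mistral-Nemo-Instruct-2407 checkpoint, prompt, and token budget are assumptions for illustration, not part of this repo:

```python
# Minimal sketch: low-temperature sampling with transformers (assumed checkpoint).
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="mistralai/Mistral-Nemo-Instruct-2407",  # assumed upstream instruct model
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short scene set in a night market."}]
result = pipe(
    messages,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.3,  # values much higher than this tend to derail Mistral-Nemo
)
# The pipeline returns the full chat; the last message is the model's reply.
print(result[0]["generated_text"][-1]["content"])
```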
Best Samplers:
I had the best success with the following settings for Starlight-V3-12B:
- Temperature: 0.7-1.2 (additional stopping strings will be necessary as you increase the temperature)
- Top K: -1
- Min P: 0.05
- Repetition Penalty: 1.03-1.1
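As a concrete example, here is a minimal sketch of those settings with llama-cpp-python; the GGUF filename, context size, and prompt are assumptions, and Top K is passed as 0 because llama.cpp treats a non-positive Top K as disabled:

```python
# Minimal sketch: the sampler settings above, applied via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Starlight-V3-12B-Q6_K.gguf",  # assumed local filename
    n_ctx=16384,                              # Mistral-Nemo supports up to 128k
)

out = llm.create_completion(
    prompt="Describe a starlit harbor at night.",  # apply your usual chat template here
    max_tokens=400,
    temperature=0.7,       # 0.7-1.2; add more stopping strings as you raise it
    top_k=0,               # non-positive Top K disables the filter (i.e. Top K: -1)
    min_p=0.05,
    repeat_penalty=1.05,   # anywhere in the 1.03-1.1 range
)
print(out["choices"][0]["text"])
```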
Why Version 3?
The other versions produced results bad enough that I didn't upload them; the version number is just my internal versioning.
Goal
The idea is to keep the strengths of anthracite-org/magnum-12b-v2 while adding some of the creativity that model seems to lack. Mistral-Nemo by itself behaves less sporadically because of the low temperature it needs, but that gets a bit repetitive. It is still the best foundational model I've used so far, mainly because Mistral-Nemo's native 128k context length is excellent.
Results
I am not entirely pleased with the result of the merge, but it seems okay, though base anthracite-org/magnum-12b-v2 might just be better on its own. I'll still experiment with different merge methods. Leakage of the training data used in both models becomes more apparent at higher temperature values, especially author's notes showing up around the system prompt. In general, I'd advise creating a stopping string for "```" to avoid generating training data, as in the sketch below.
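A minimal sketch of that stopping string, reusing the llama-cpp-python setup from above (frontends such as SillyTavern expose the same idea through their custom stopping strings setting):

```python
# Minimal sketch: cut generation at "```" so training-data artifacts are dropped.
out = llm.create_completion(
    prompt="Continue the scene in the harbor.",  # assumed prompt, for illustration
    max_tokens=400,
    temperature=1.0,        # leakage shows up more often at higher temperatures
    min_p=0.05,
    repeat_penalty=1.05,
    stop=["```"],           # stop before a code fence from the training data appears
)
print(out["choices"][0]["text"])
```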
Original Models:
- UsernameJustAnother/Nemo-12B-Marlin-v5 (Thank you so much for your work ♥)
- anthracite-org/magnum-12b-v2 (Thank you so much for your work ♥)
Official Quants:
PPL = Perplexity, lower is better
Comparisons are done as QX_X Llama-3-8B against FP16 Llama-3-8B; treat them as a guideline rather than exact figures for this model.
| Quant Type | Note | Size |
| --- | --- | --- |
| Q2_K | +3.5199 ppl @ Llama-3-8B | 4.79 GB |
| Q3_K_S | +1.6321 ppl @ Llama-3-8B | 5.53 GB |
| Q3_K_M | +0.6569 ppl @ Llama-3-8B | 6.08 GB |
| Q3_K_L | +0.5562 ppl @ Llama-3-8B | 6.56 GB |
| Q4_K_S | +0.2689 ppl @ Llama-3-8B | 7.12 GB |
| Q4_K_M | +0.1754 ppl @ Llama-3-8B | 7.48 GB |
| Q5_K_S | +0.1049 ppl @ Llama-3-8B | 8.52 GB |
| Q5_K_M | +0.0569 ppl @ Llama-3-8B | 8.73 GB |
| Q6_K | +0.0217 ppl @ Llama-3-8B | 10.1 GB |
| Q8_0 | +0.0026 ppl @ Llama-3-8B | 13.00 GB |
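To grab a single quant, here is a minimal sketch with huggingface_hub; the exact filename is an assumption, so check the repository's file list for the real name:

```python
# Minimal sketch: download one quant file from this repo (assumed filename).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="starble-dev/Starlight-V3-12B-GGUF",
    filename="Starlight-V3-12B-Q4_K_M.gguf",  # Q4_K_M: +0.1754 ppl at 7.48 GB
)
print(path)
```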
Original Model Licenses & This Model License: Apache 2.0