This is an 8.0bpw h8 exl2 quant of grimjim/kukulemon-7B, in which a merger of two similar models with strong reasoning (hopefully resulting in "dense" encoding of said reasoning) was merged with a model targeting roleplay.
I've tested with ChatML prompts at temperature=1.1 and minP=0.03; the model itself supports Alpaca format prompts. The model claims a context length of 32K, but in informal testing I found it lost coherence beyond 8K. For maximum coherence, I prefer to stick with 8.0bpw h8 exl2 or Q8_0 GGUF quants.
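Below is a minimal sketch of loading this quant with the exllamav2 Python library and generating with those sampler settings. The model directory path and the prompt are placeholders, and the calls follow exllamav2's basic generator API; adapt paths to your own download.

```python
# Minimal sketch: load an exl2 quant and generate with the settings above.
# "kukulemon-7B-8.0bpw-h8-exl2" is a placeholder local path, not an official name.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "kukulemon-7B-8.0bpw-h8-exl2"  # local download of this quant
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)                       # split across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# Sampler settings used in my testing: temperature 1.1, minP 0.03.
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 1.1
settings.min_p = 0.03

# ChatML-format prompt, matching the tested prompt format.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nBriefly explain SLERP merging.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(generator.generate_simple(prompt, settings, 200))
```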
Alternative downloads:
This is a merge of pre-trained language models created using mergekit.
This model was merged using the SLERP merge method.
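For intuition, here is a rough Python sketch of spherical linear interpolation as applied to two weight tensors. It illustrates the idea only and is not mergekit's actual implementation, which handles edge cases and tensor routing differently.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (sketch only)."""
    v0f, v1f = v0.flatten().float(), v1.flatten().float()
    # Angle between the two tensors, treated as vectors.
    u0, u1 = v0f / (v0f.norm() + eps), v1f / (v1f.norm() + eps)
    dot = torch.clamp(u0 @ u1, -1.0, 1.0)
    theta = torch.acos(dot)
    if theta.abs() < eps:
        # Near-parallel tensors: fall back to plain linear interpolation.
        return ((1 - t) * v0f + t * v1f).reshape(v0.shape).to(v0.dtype)
    s = torch.sin(theta)
    # Interpolate along the great-circle arc rather than the straight line.
    out = (torch.sin((1 - t) * theta) / s) * v0f + (torch.sin(t * theta) / s) * v1f
    return out.reshape(v0.shape).to(v0.dtype)
```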
The following models were included in the merge:
- grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B
- KatyTheCutie/LemonadeRP-4.5.3
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B
        layer_range: [0, 32]
      - model: KatyTheCutie/LemonadeRP-4.5.3
        layer_range: [0, 32]
# or, the equivalent models: syntax:
# models:
merge_method: slerp
base_model: KatyTheCutie/LemonadeRP-4.5.3
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16
```
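My reading of the t curves above (an assumption about how mergekit expands them, not confirmed against its source): each value list gives anchor points spread evenly across the 32 layers and linearly interpolated in between, so every layer gets its own blend fraction, with t=0 keeping the base model (LemonadeRP) and t=1 taking the other model. A hypothetical sketch of that expansion:

```python
import numpy as np

def expand_t_anchors(anchors: list[float], num_layers: int = 32) -> np.ndarray:
    """Expand an anchor list into one t value per layer (hypothetical reading).

    Anchors are assumed to sit at evenly spaced positions over the layer range
    and to be linearly interpolated for the layers in between.
    """
    anchor_pos = np.linspace(0.0, 1.0, num=len(anchors))
    layer_pos = np.linspace(0.0, 1.0, num=num_layers)
    return np.interp(layer_pos, anchor_pos, anchors)

# self_attn leans toward kuno-kunoichi late in the stack; mlp is the mirror image.
print(expand_t_anchors([0, 0.5, 0.3, 0.7, 1]))  # self_attn curve
print(expand_t_anchors([1, 0.5, 0.7, 0.3, 0]))  # mlp curve
```

The config itself is applied with mergekit's `mergekit-yaml` command, pointing it at this YAML file and an output directory.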
Base model: grimjim/kukulemon-7B