This is a merge of pre-trained language models created using mergekit.
This model was merged using the DARE TIES merge method, with unsloth/Mistral-Small-Instruct-2409 as the base.
The following models were included in the merge:

* anthracite-org/magnum-v4-22b
* TheDrummer/Cydonia-22B-v1.3
* TheDrummer/Cydonia-22B-v1.2
* TheDrummer/Cydonia-22B-v1.1
* Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
* allura-org/MS-Meadowlark-22B
* spow12/ChatWaifu_v2.0_22B
* Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
* crestf411/MS-sunfall-v0.7.0
* unsloth/Mistral-Small-Instruct-2409 + rAIfle/Acolyte-LORA
* InferenceIllusionist/SorcererLM-22B
* unsloth/Mistral-Small-Instruct-2409 + Kaoeiri/Moingooistrial-22B-V1-Lora
* ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: anthracite-org/magnum-v4-22b
    parameters:
      weight: 1.0    # Primary model for human-like writing
      density: 0.88  # Solid foundation for clear, balanced text generation
  - model: TheDrummer/Cydonia-22B-v1.3
    parameters:
      weight: 0.26   # Slightly reduced weight for creativity
      density: 0.7   # Matches revised influence for subtle creativity
  - model: TheDrummer/Cydonia-22B-v1.2
    parameters:
      weight: 0.16   # Reduced weight to dial back creativity overlap
      density: 0.68  # Harmonized with the roleplay model reductions
  - model: TheDrummer/Cydonia-22B-v1.1
    parameters:
      weight: 0.18   # Further reduced to minimize intrusive elements
      density: 0.68  # Balanced density for roleplay accuracy
  - model: Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
    parameters:
      weight: 0.28   # Reduced for less dominance of storytelling tropes
      density: 0.77  # Adjusted density for smoother integration
  - model: allura-org/MS-Meadowlark-22B
    parameters:
      weight: 0.3    # Retained for its balanced creativity
      density: 0.72  # Supports descriptive fluency and accuracy
  - model: spow12/ChatWaifu_v2.0_22B
    parameters:
      weight: 0.27   # Intact to retain anime-style RP nuance
      density: 0.7   # Unmodified for balance with other models
  - model: Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
    parameters:
      weight: 0.2    # Slight reduction to balance Japanese context influence
      density: 0.58  # Fine-tuned to support overall coherence
  - model: crestf411/MS-sunfall-v0.7.0
    parameters:
      weight: 0.25   # Reduced weight for a subtler dramatic tone
      density: 0.74  # Balanced density for smoother blending
  - model: unsloth/Mistral-Small-Instruct-2409+rAIfle/Acolyte-LORA
    parameters:
      weight: 0.24   # Slight reduction for subtler varied content inputs
      density: 0.7   # Aligned density for balanced integration
  - model: InferenceIllusionist/SorcererLM-22B
    parameters:
      weight: 0.23   # Reduced for a more cohesive stylistic approach
      density: 0.74  # Matches weight reduction for smoother outputs
  - model: unsloth/Mistral-Small-Instruct-2409+Kaoeiri/Moingooistrial-22B-V1-Lora
    parameters:
      weight: 0.26   # Slightly dialed back for monster and mythical content
      density: 0.72  # Balanced for seamless integration
  - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
    parameters:
      weight: 0.12   # Light touch to prevent overheating
      density: 0.65  # Low density to avoid conflict with roleplay-heavy models
merge_method: dare_ties  # Optimal for diverse and complex model blending
base_model: unsloth/Mistral-Small-Instruct-2409
parameters:
  density: 0.85  # Retained for logical and creative balance
  epsilon: 0.09  # Small step size for smooth blending
  lambda: 1.22   # Slightly adjusted scaling for refined sharpness
dtype: bfloat16
```
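
For context on the parameters above: in DARE TIES, each entry's `weight` scales that model's task vector (its parameter delta from the base), `density` is roughly the fraction of delta parameters kept after DARE's random dropping, and `lambda` rescales the combined deltas before they are added back to the base. If you want to reproduce the merge, below is a minimal sketch using mergekit's documented Python entry points (`MergeConfiguration`, `MergeOptions`, `run_merge`); the filename `config.yaml` and the output path are placeholder assumptions, not part of the original card.

```python
# Minimal sketch: reproduce this merge with mergekit's Python API.
# Assumes mergekit is installed (pip install mergekit) and the YAML
# configuration above is saved locally as config.yaml.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged-model",        # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use GPU if one is present
        lora_merge_cache="/tmp",         # needed for the base+LoRA entries
        copy_tokenizer=True,             # carry the base tokenizer over
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

Equivalently, the `mergekit-yaml` CLI can be pointed at the same config file.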
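
The merge output is a standard Hugging Face checkpoint, so it loads like any other causal LM. Here is a hedged usage sketch with transformers; the local path, prompt, and sampling settings are illustrative assumptions rather than recommendations from the original card.

```python
# Minimal sketch: load and sample from the merged model with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "./merged-model"  # output of the merge step above (placeholder)

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short scene set in a rainy harbor town."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```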