# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method

This model was merged using the DARE TIES merge method, with unsloth/Mistral-Small-Instruct-2409 as the base model.
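As a rough per-tensor illustration of what DARE TIES does (a minimal NumPy sketch, not mergekit's actual implementation): each fine-tuned model's delta from the base is randomly pruned at the configured `density` and rescaled (DARE), then a per-parameter sign election keeps only the weighted deltas that agree with the majority sign before they are summed back onto the base, optionally scaled by `lambda` (TIES). The function names and toy vectors below are illustrative, not part of mergekit's API.

```python
import numpy as np

def dare_prune(delta, density, rng):
    """DARE: randomly Drop delta entries, And REscale survivors by 1/density."""
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

def ties_merge(base, deltas, weights, lam=1.0):
    """TIES: elect a per-parameter majority sign, drop disagreeing deltas, sum."""
    weighted = [w * d for w, d in zip(weights, deltas)]
    elected = np.sign(sum(weighted))  # per-parameter majority sign
    agree = [np.where(np.sign(d) == elected, d, 0.0) for d in weighted]
    return base + lam * sum(agree)

rng = np.random.default_rng(0)
base = np.zeros(4)  # stand-in for one base-model tensor
deltas = [np.array([0.5, -0.2, 0.1, 0.0]),   # model A minus base
          np.array([0.4, 0.3, -0.1, 0.2])]   # model B minus base
pruned = [dare_prune(d, density=0.85, rng=rng) for d in deltas]
merged = ties_merge(base, pruned, weights=[1.0, 0.95], lam=1.5)
```

In the configuration below, each listed model carries its own `weight` and `density`, while the top-level `lambda` scales the final merged deltas.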
### Models Merged
The following models were included in the merge:
- Kaoeiri/MS-Quadrosiac-2409-22B
- DigitalSouls/BlackSheep-DigitalSoul-22B
- Kaoeiri/MS-Inky-2409-22B
- TheDrummer/Cydonia-22B-v1.1
- anthracite-org/magnum-v4-22b
- Darkknight535/MS-Moonlight-22B-v3
- Kaoeiri/MS_Moingooistral-2409-22B
- hf-100/Mistral-Small-Spellbound-StoryWriter-22B-instruct-0.2-chkpt-200-16-bit
- Envoid/Mistral-Small-NovusKyver
- Jellywibble/MistralSmall1500CTXDummy
- ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
- Kaoeiri/MS_a-coolyte-2409-22B
- Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
- Kaoeiri/MS-Magpantheonsel-lark-v4x1.6.2-Cydonia-vXXX-22B-5
- crestf411/MS-sunfall-v0.7.0
- Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
- TroyDoesAI/BlackSheep-MermaidMistral-22B
- InferenceIllusionist/SorcererLM-22B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  # Core Fiction and Character Detail Models (Increased Precision)
  - model: Kaoeiri/MS_Moingooistral-2409-22B # Monster fiction core
    parameters:
      weight: 0.40   # Increased for better monster/character detail
      density: 1.30  # Increased for richer character descriptions
  - model: Kaoeiri/MS-Magpantheonsel-lark-v4x1.6.2-Cydonia-vXXX-22B-5 # Main writing engine
    parameters:
      weight: 1.0    # Maximized for core writing
      density: 0.85  # Slightly increased for deeper character development
  - model: anthracite-org/magnum-v4-22b # Added for writing recap and precision
    parameters:
      weight: 0.95
      density: 0.84

  # World Building & Character Interaction
  - model: Kaoeiri/MS-Inky-2409-22B # Descriptive world dynamics
    parameters:
      weight: 0.45   # Increased for richer character environments
      density: 0.82  # Increased for better world-character integration
  - model: Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small # Rich interaction
    parameters:
      weight: 0.42   # Increased for deeper character interactions
      density: 0.78

  # Character Development Core
  - model: DigitalSouls/BlackSheep-DigitalSoul-22B # Combat and conflict
    parameters:
      weight: 0.30   # Increased for better character conflict
      density: 0.75

  # Magical Elements (Rebalanced NovusKyver Sister)
  - model: InferenceIllusionist/SorcererLM-22B
    parameters:
      weight: 0.15   # Increased for magical character traits
      density: 0.78  # Increased for better integration

  # Secondary Character Enhancement
  - model: TheDrummer/Cydonia-22B-v1.1
    parameters:
      weight: 0.15   # Slight increase for character depth
      density: 0.68
  - model: crestf411/MS-sunfall-v0.7.0
    parameters:
      weight: 0.18   # Increased for writing precision
      density: 0.72
  - model: Kaoeiri/MS_a-coolyte-2409-22B
    parameters:
      weight: 0.22   # Increased for fictional character detail
      density: 0.73

  # Character Personality Development
  - model: Kaoeiri/MS-Quadrosiac-2409-22B
    parameters:
      weight: 0.18
      density: 0.73

  # Enhanced Story and Character Building
  - model: hf-100/Mistral-Small-Spellbound-StoryWriter-22B-instruct-0.2-chkpt-200-16-bit
    parameters:
      weight: 0.25   # Increased for better character narrative
      density: 0.72
  - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
    parameters:
      weight: 0.15   # Increased for character roleplay
      density: 0.65
  - model: Darkknight535/MS-Moonlight-22B-v3
    parameters:
      weight: 0.25   # Increased for character detail
      density: 0.65

  # Reduced weight but maintained for diversity
  - model: Jellywibble/MistralSmall1500CTXDummy
    parameters:
      weight: 0.12
      density: 0.62

  # Cultural and Character Depth
  - model: Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
    parameters:
      weight: 0.22   # Increased for news and cultural character traits
      density: 0.62

  # Rebalanced NovusKyver (Modified to maintain benefits while reducing instruction avoidance)
  - model: Envoid/Mistral-Small-NovusKyver
    parameters:
      weight: 0.20   # Reduced to minimize instruction avoidance
      density: 0.74  # Increased for better integration
  - model: TroyDoesAI/BlackSheep-MermaidMistral-22B
    parameters:
      weight: 0.25   # Increased for character personality
      density: 0.73

merge_method: dare_ties
base_model: unsloth/Mistral-Small-Instruct-2409
parameters:
  density: 0.95   # Increased for maximum character detail
  epsilon: 0.035  # Reduced for more consistent character behavior
  lambda: 1.50    # Increased to enhance character creativity
dtype: bfloat16
tokenizer_source: union
```