---
base_model:
  - mistralai/Mistral-Nemo-Instruct-2407
  - ReadyArt/Forgotten-Safeword-12B-3.6
  - PocketDoc/Dans-SakuraKaze-V1.0.0-12b
  - mistralai/Mistral-Nemo-Base-2407
  - TheDrummer/Rocinante-12B-v1.1
library_name: transformers
tags:
  - mergekit
  - merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [mistralai/Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407) as the base.
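DARE TIES combines two ideas: DARE sparsifies each model's task vector (its weight delta from the base) by randomly dropping entries and rescaling the survivors by `1/density`, while TIES resolves sign conflicts between models by electing a majority sign per parameter and discarding disagreeing contributions. The sketch below illustrates that per-tensor logic under those definitions; it is a simplified illustration, not mergekit's implementation, and the `dare_ties` function name and signature are hypothetical.

```python
# Simplified per-tensor sketch of DARE TIES (illustration only, not mergekit's code).
import torch

def dare_ties(base: torch.Tensor,
              finetuned: list[torch.Tensor],
              densities: list[float],
              weights: list[float]) -> torch.Tensor:
    pruned = []
    for ft, density in zip(finetuned, densities):
        delta = ft - base                                  # task vector
        mask = torch.bernoulli(torch.full_like(delta, density))
        pruned.append(delta * mask / density)              # DARE: drop and rescale
    # TIES: elect a per-parameter sign from the weighted sum of sparsified deltas
    stacked = torch.stack([w * d for w, d in zip(weights, pruned)])
    elected = torch.sign(stacked.sum(dim=0))
    agree = torch.sign(stacked) == elected                 # agreeing contributions only
    merged = torch.where(agree, stacked, torch.zeros_like(stacked)).sum(dim=0)
    # With normalize: true, divide by the total weight that actually contributed
    w_tensor = torch.tensor(weights, dtype=base.dtype).view(-1, *([1] * base.dim()))
    denom = (agree.to(base.dtype) * w_tensor).sum(dim=0)
    return base + torch.where(denom > 0, merged / denom, merged)
```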

### Models Merged

The following models were included in the merge:

* [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)
* [TheDrummer/Rocinante-12B-v1.1](https://huggingface.co/TheDrummer/Rocinante-12B-v1.1)
* [ReadyArt/Forgotten-Safeword-12B-3.6](https://huggingface.co/ReadyArt/Forgotten-Safeword-12B-3.6)
* [PocketDoc/Dans-SakuraKaze-V1.0.0-12b](https://huggingface.co/PocketDoc/Dans-SakuraKaze-V1.0.0-12b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: mistralai/Mistral-Nemo-Base-2407
    # No parameters necessary for base model
  - model: mistralai/Mistral-Nemo-Instruct-2407
    parameters:
      density: 0.50  # Mid-level density for general instruction tuning
      weight: 0.25   # Moderate influence for balanced instruction-following
  - model: TheDrummer/Rocinante-12B-v1.1  # Highest influence (strong reasoning/language balance)
    parameters:
      density: 0.60  # Higher density for deeper reasoning and coherence
      weight: 0.35   # Primary influence model
  - model: ReadyArt/Forgotten-Safeword-12B-3.6  # Creativity & conversational nuance
    parameters:
      density: 0.50  # Balanced density for creative and nuanced responses
      weight: 0.20   # Mid-tier influence
  - model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b  # Second highest influence (natural conversation flow)
    parameters:
      density: 0.55  # Slightly high density for fluid conversation
      weight: 0.20   # Substantial influence in dialogue

merge_method: dare_ties
base_model: mistralai/Mistral-Nemo-Base-2407
parameters:
  normalize: true  # Ensures weight distribution remains balanced
  int8_mask: true  # Reduces memory usage while keeping precision
dtype: bfloat16  # Optimal balance between performance and efficiency
```
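
To reproduce the merge, this configuration can be saved as, e.g., `config.yaml` and passed to mergekit's `mergekit-yaml` CLI (`mergekit-yaml config.yaml ./output-model`). Once merged or downloaded, the result loads like any other `transformers` causal LM. A minimal usage sketch follows; the repo id is a hypothetical placeholder, not this model's actual id:

```python
# Minimal usage sketch; "your-username/merged-nemo-12b" is a hypothetical
# placeholder for wherever the merged weights end up.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/merged-nemo-12b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype
    device_map="auto",
)

prompt = "Describe a quiet morning on a space station."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```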