
# final_merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with `./storage2/input_models/Mistral-7B-v0.1_8133861` as the base model.
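As context for the configuration below, a rough sketch of what a DARE TIES merge does to a single weight tensor: each fine-tuned model's delta against the base is randomly sparsified and rescaled (DARE), the surviving deltas go through TIES-style sign election, and the agreeing contributions are combined in a normalized weighted sum. The snippet below is an illustration only, assuming PyTorch; the function and argument names are made up, and this is not mergekit's actual implementation.

```python
import torch

def dare_ties_merge(base, finetuned, densities, weights, seed=0):
    """Illustrative DARE TIES merge of one tensor (hypothetical helper).

    finetuned: list of tensors with the same shape as `base`
    densities: per-model keep probabilities (DARE)
    weights:   per-model merge weights (TIES weighted sum)
    """
    torch.manual_seed(seed)
    deltas = []
    for ft, density, weight in zip(finetuned, densities, weights):
        delta = ft - base  # this model's "task vector"
        # DARE: drop entries with probability (1 - density) and rescale
        # survivors by 1/density so the delta is unchanged in expectation
        mask = torch.bernoulli(torch.full_like(delta, density))
        deltas.append(weight * delta * mask / density)

    stacked = torch.stack(deltas)               # shape: (k, *base.shape)
    # TIES: elect a per-parameter sign from the weighted sum, then drop
    # contributions whose sign disagrees with the elected one
    elected = torch.sign(stacked.sum(dim=0))
    agree = torch.sign(stacked) == elected
    kept = torch.where(agree, stacked, torch.zeros_like(stacked))
    # normalize: 1.0 -> divide by the total weight of the agreeing models
    w = torch.tensor(weights).view(-1, *([1] * base.dim()))
    total_weight = (agree * w).sum(dim=0).clamp(min=1e-8)
    return base + kept.sum(dim=0) / total_weight
```

The per-slice `density` and `weight` values in the configuration below map directly onto these two steps; `int8_mask: 1.0` only affects how the masks are stored in memory, not the result.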

### Models Merged

The following models were included in the merge:

* `./storage2/input_models/WizardMath-7B-V1.1_2027605156`
* `./storage2/input_models/Abel-7B-002_121690448`
* `./storage2/input_models/shisa-gamma-7b-v1_4025154171`

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: ./storage2/input_models/Mistral-7B-v0.1_8133861
dtype: bfloat16
merge_method: dare_ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 8]
    model: ./storage2/input_models/shisa-gamma-7b-v1_4025154171
    parameters:
      density: 0.6699910985974532
      weight: 0.13529360500839205
  - layer_range: [0, 8]
    model: ./storage2/input_models/WizardMath-7B-V1.1_2027605156
    parameters:
      density: 0.8652557087160213
      weight: 0.6985440552740758
  - layer_range: [0, 8]
    model: ./storage2/input_models/Abel-7B-002_121690448
    parameters:
      density: 0.4323464491414452
      weight: 0.8179823325064868
  - layer_range: [0, 8]
    model: ./storage2/input_models/Mistral-7B-v0.1_8133861
- sources:
  - layer_range: [8, 16]
    model: ./storage2/input_models/shisa-gamma-7b-v1_4025154171
    parameters:
      density: 1.0
      weight: 0.03216719764341956
  - layer_range: [8, 16]
    model: ./storage2/input_models/WizardMath-7B-V1.1_2027605156
    parameters:
      density: 0.6967615831667242
      weight: 0.8043194027622319
  - layer_range: [8, 16]
    model: ./storage2/input_models/Abel-7B-002_121690448
    parameters:
      density: 0.7897142847167249
      weight: 0.09233872355906134
  - layer_range: [8, 16]
    model: ./storage2/input_models/Mistral-7B-v0.1_8133861
- sources:
  - layer_range: [16, 24]
    model: ./storage2/input_models/shisa-gamma-7b-v1_4025154171
    parameters:
      density: 1.0
      weight: 0.6740405166949244
  - layer_range: [16, 24]
    model: ./storage2/input_models/WizardMath-7B-V1.1_2027605156
    parameters:
      density: 0.5417954561416459
      weight: 0.308476065247547
  - layer_range: [16, 24]
    model: ./storage2/input_models/Abel-7B-002_121690448
    parameters:
      density: 0.7841601014052402
      weight: 0.02993327454595157
  - layer_range: [16, 24]
    model: ./storage2/input_models/Mistral-7B-v0.1_8133861
- sources:
  - layer_range: [24, 32]
    model: ./storage2/input_models/shisa-gamma-7b-v1_4025154171
    parameters:
      density: 0.5892764365325144
      weight: 0.7288214753840682
  - layer_range: [24, 32]
    model: ./storage2/input_models/WizardMath-7B-V1.1_2027605156
    parameters:
      density: 0.8133101423312465
      weight: 0.06233401147902682
  - layer_range: [24, 32]
    model: ./storage2/input_models/Abel-7B-002_121690448
    parameters:
      density: 0.9351019303077212
      weight: 0.008694459163933368
  - layer_range: [24, 32]
    model: ./storage2/input_models/Mistral-7B-v0.1_8133861
```
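To reproduce a merge like this one, the usual mergekit workflow is to save the YAML to a file and run the `mergekit-yaml` CLI. The filename and output directory below are placeholders, and the `./storage2/...` input paths are local to the original run, so they would need to point at copies of the models on your machine:

```sh
# install mergekit (from PyPI, or from the GitHub repo if needed)
pip install mergekit
# assumes the YAML above was saved as final_merge.yml;
# ./final_merge is an arbitrary output directory
mergekit-yaml final_merge.yml ./final_merge --cuda
```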