EstopianMaid

This is a merge of pre-trained language models, created by Katy using mergekit.

Merge Details

Merge Method

This model was merged using the task arithmetic merge method, with TheBloke/Llama-2-13B-fp16 as the base model.
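
For intuition, task arithmetic builds the merged weights as the base weights plus a weighted sum of "task vectors" (each fine-tuned model's weights minus the base weights). The snippet below is a minimal, hypothetical sketch of that formula over plain PyTorch state dicts; it is illustrative only and is not how this model was actually produced (mergekit was used, with the configuration shown further down).

import torch

# Illustrative sketch of task-arithmetic merging:
#   merged = base + sum_i(weight_i * (model_i - base)), applied tensor by tensor.
def task_arithmetic_merge(base_sd, finetuned_sds, weights):
    merged = {}
    for name, base_tensor in base_sd.items():
        delta = torch.zeros_like(base_tensor, dtype=torch.float32)
        for sd, w in zip(finetuned_sds, weights):
            # Task vector = fine-tuned weights minus base weights, scaled by its merge weight.
            delta += w * (sd[name].float() - base_tensor.float())
        merged[name] = (base_tensor.float() + delta).to(base_tensor.dtype)
    return merged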

Models Merged

The following models were included in the merge:

BlueNipples/TimeCrystal-l2-13B
cgato/Thespis-13b-DPO-v0.7
KoboldAI/LLaMA2-13B-Estopia
NeverSleep/Noromaid-13B-0.4-DPO
Doctor-Shotgun/cat-v1.0-13b

Configuration

The following YAML configuration was used to produce this model:

base_model: TheBloke/Llama-2-13B-fp16  # shared base; task vectors are computed relative to this model
dtype: float16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 40]
    model: TheBloke/Llama-2-13B-fp16
  - layer_range: [0, 40]
    model: BlueNipples/TimeCrystal-l2-13B
    parameters:
      weight: 0.75
  - layer_range: [0, 40]
    model: cgato/Thespis-13b-DPO-v0.7
    parameters:
      weight: 0.23
  - layer_range: [0, 40]
    model: KoboldAI/LLaMA2-13B-Estopia
    parameters:
      weight: 0.15
  - layer_range: [0, 40]
    model: NeverSleep/Noromaid-13B-0.4-DPO
    parameters:
      weight: 0.2
  - layer_range: [0, 40]
    model: Doctor-Shotgun/cat-v1.0-13b
    parameters:
      weight: 0.03
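
To reproduce the merge, the configuration above can be saved to a YAML file and passed to mergekit. Below is a minimal sketch, assuming mergekit is installed (pip install mergekit) and invoking its mergekit-yaml command-line entry point; the file and directory names are placeholders:

import subprocess

# Run mergekit on the configuration above (file and directory names are placeholders).
subprocess.run(
    [
        "mergekit-yaml",
        "estopianmaid.yaml",    # the YAML configuration shown above
        "./EstopianMaid-13B",   # output directory for the merged model
        "--cuda",               # optional: perform the merge on GPU if available
    ],
    check=True,
)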

GGUF quantizations

This repository (TheBigBlender/EstopianMaid-GGUF) provides the merged 13B-parameter llama-architecture model as GGUF files, with 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit quantizations available.
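
Because this repository ships GGUF files, the model can be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch using llama-cpp-python; the quantization file name and prompt format are placeholders, so substitute the file you actually downloaded and your preferred instruction template:

from llama_cpp import Llama

# Load one of the GGUF quantizations (file name is a placeholder).
llm = Llama(
    model_path="estopianmaid-13b.Q4_K_M.gguf",
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

# Simple completion call; adjust the prompt template to taste.
output = llm(
    "### Instruction:\nIntroduce yourself in character.\n\n### Response:\n",
    max_tokens=128,
)
print(output["choices"][0]["text"])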