Maiden-Unquirked-20B-gguf

This is a merge of pre-trained language models created using mergekit.

Merge Details

Details of the merge method, the component models, and the mergekit configuration are given below.

Merge Method

This model was merged using the DARE TIES merge method, with TeeZee/DarkForest-20B-v2.0 as the base model.

Models Merged

The following models were included in the merge:

- athirdpath/Harmonia-20B
- TeeZee/BigMaid-20B-v1.0

Ollama Modelfile

FROM "./model/Maiden-Unquirked-20B-Q5_K_M.gguf"
TEMPLATE """
### Instruction:
{{ .Prompt }}

### Response:
"""

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: TeeZee/DarkForest-20B-v2.0
  - model: athirdpath/Harmonia-20B
    parameters:
      weight: 0.5
      density: 1.0
  - model: TeeZee/BigMaid-20B-v1.0
    parameters:
      weight: 0.5
      density: 1.0
merge_method: dare_ties
base_model: TeeZee/DarkForest-20B-v2.0
parameters:
  int8_mask: true
dtype: bfloat16
name: maiden_unquirked
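
For reference, a configuration like this is normally executed with mergekit's mergekit-yaml command. The snippet below is a minimal sketch, assuming mergekit is installed and the YAML above is saved as maiden_unquirked.yaml; the output directory name is arbitrary. The merged weights can then be converted and quantized to GGUF (e.g. with llama.cpp's conversion tooling) to produce the files listed below.

import subprocess

# Equivalent to running `mergekit-yaml maiden_unquirked.yaml ./maiden-unquirked-20b`
# in a shell; mergekit resolves the listed models and writes the merged
# weights to the output directory.
subprocess.run(
    ["mergekit-yaml", "maiden_unquirked.yaml", "./maiden-unquirked-20b"],
    check=True,
)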
Quantized GGUF files are provided in 3-bit, 4-bit, and 5-bit variants (20B parameters, llama architecture).

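For local inference without Ollama, the quantized files can also be loaded directly with llama-cpp-python. This is a minimal sketch, assuming the Q5_K_M file referenced in the Modelfile above and applying the same Alpaca-style prompt format by hand; the prompt text is just an example.

from llama_cpp import Llama

# Load the 5-bit quantization referenced in the Modelfile above.
llm = Llama(model_path="./model/Maiden-Unquirked-20B-Q5_K_M.gguf", n_ctx=4096)

# The model expects the Alpaca-style Instruction/Response format.
prompt = "### Instruction:\nWrite a haiku about the sea at dusk.\n\n### Response:\n"
output = llm(prompt, max_tokens=128, stop=["### Instruction:"])
print(output["choices"][0]["text"])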