
How to use (16bit):

```shell
ollama run timpal0l/beaglecatmunin "Hejsan!"
```

This model is a DARE-TIES merge of timpal0l/Mistral-7B-v0.1-flashback-v2 and RJuro/munin-neuralbeagle-7b.

config.yaml:

```yaml
models:
  - model: timpal0l/Mistral-7B-v0.1-flashback-v2
    # No parameters necessary for base model
  - model: RJuro/munin-neuralbeagle-7b
    parameters:
      density: 0.53
      weight: 0.6
merge_method: dare_ties
base_model: timpal0l/Mistral-7B-v0.1-flashback-v2
parameters:
  int8_mask: true
dtype: bfloat16
```
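To illustrate what the `density` and `weight` parameters in the config do, here is a minimal toy sketch of a DARE-style merge on flat parameter lists: each task delta (fine-tuned minus base) is randomly dropped with probability `1 - density`, survivors are rescaled by `1 / density`, and the result is added to the base with the configured `weight`. This is a simplified illustration only, not mergekit's actual implementation (which also applies TIES sign resolution across models).

```python
import random

def dare_merge_sketch(base, finetuned, density=0.53, weight=0.6, seed=0):
    """Toy DARE-style merge of two flat parameter lists.

    Keeps each delta with probability `density`, rescales kept deltas
    by 1/density, and blends them into the base with `weight`.
    Simplified illustration of the config above, not mergekit code.
    """
    rng = random.Random(seed)
    merged = []
    for b, f in zip(base, finetuned):
        delta = f - b
        if rng.random() < density:
            # Keep this delta: rescale to preserve the expected update.
            merged.append(b + weight * delta / density)
        else:
            # Drop this delta: fall back to the base parameter.
            merged.append(b)
    return merged

base = [0.1, -0.2, 0.3, 0.0]
tuned = [0.2, -0.1, 0.1, 0.4]
print(dare_merge_sketch(base, tuned))
```

With a real config, the merge itself would be run with mergekit, e.g. `mergekit-yaml config.yaml ./merged-model` (assuming mergekit is installed).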
Model details:

- Format: GGUF
- Model size: 7.24B params
- Architecture: llama

Model tree for timpal0l/BeagleCatMunin-GGUF