
Quantization made by Richard Erkhov.

  • Github
  • Discord
  • Request more models

UnFimbulvetr-20B - GGUF

Original model description:

base_model: ["Sao10K/Fimbulvetr-11B-v2"]
library_name: transformers
tags:
  - mergekit
  - merge

UnFimbulvetr-20B

(Image: Waifu to catch your attention)

This is a merge of pre-trained language models created using mergekit.

NOTE: I've only tested this briefly. YMMV.

Next Day Tests...

I downloaded the GGUF model that someone quantized... And... nope. No.

Do not use this model.

Merge Details

Merge Method

This model was merged using the passthrough merge method.

Models Merged

The following models were included in the merge:

  • Sao10K/Fimbulvetr-11B-v2

Configuration

The following YAML configuration was used to produce this model:

slices:
  - sources:
    - model: FimbMagic
      layer_range: [0, 13]
  - sources:
    - model: FimbMagic
      layer_range: [8, 13]
  - sources:
    - model: FimbMagic
      layer_range: [12, 36]
  - sources:
    - model: FimbMagic
      layer_range: [12, 36]
  - sources:
    - model: FimbMagic
      layer_range: [36, 48]
  - sources:
    - model: FimbMagic
      layer_range: [36, 48]
merge_method: passthrough
dtype: bfloat16
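
For context, a passthrough merge stacks the listed layer ranges as-is, so the duplicated slices are what turn the 48-layer, ~10.7B base into a ~20B model. Below is a minimal sanity-check sketch of that arithmetic: the slice list is copied from the config above, while the per-layer shapes are assumptions based on the Mistral/Solar architecture that Fimbulvetr-11B-v2 inherits, so treat the output as an estimate.

# Sanity check: sum the stacked slices and estimate the resulting model size.
# The shapes (hidden 4096, intermediate 14336, GQA k/v dim 1024, vocab 32000)
# are assumptions based on the Mistral/Solar architecture of the base model.
slices = [(0, 13), (8, 13), (12, 36), (12, 36), (36, 48), (36, 48)]
total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 90 layers, up from 48 in the base model

hidden, intermediate, vocab = 4096, 14336, 32000
attn = 2 * hidden * hidden + 2 * hidden * 1024   # q/o projections plus k/v (GQA)
mlp = 3 * hidden * intermediate                  # gate/up/down projections
embeddings = 2 * vocab * hidden                  # input and output embeddings
params = total_layers * (attn + mlp) + embeddings
print(f"~{params / 1e9:.1f}B parameters")        # ~19.9B, matching the listed size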

Additional Notes

Fimbulvetr 11B is still a very good model. This model is for extreme trailblazers who want to test stuff!

Eval results? Don't bother.

Last one before I sleep: I'm so sorry Sao10K...

GGUF

  • Model size: 19.9B params
  • Architecture: llama
  • Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
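
If you do want to experiment with one of these quantizations anyway, here is a minimal loading sketch using llama-cpp-python; the filename is hypothetical, so substitute whichever bit-width file you actually downloaded.

from llama_cpp import Llama

# Hypothetical filename; use the GGUF file you downloaded from this repo.
llm = Llama(
    model_path="UnFimbulvetr-20B.Q4_K_M.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available; 0 keeps it on CPU
)

out = llm("Write a short greeting.", max_tokens=64, temperature=0.8)
print(out["choices"][0]["text"])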
