# merge

This is a merge of pre-trained language models created using mergekit.

Somehow, it's actually usable.

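For the skeptical, here is a minimal way to try it with the standard transformers API. The repo id comes from this page; the Alpaca-style prompt format is an assumption based on the "alpaca-instr" part of the model name, not something documented here:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Fischerboot/2b-tiny-llama-alpaca-instr"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

# Alpaca-style prompt; assumed from "alpaca-instr" in the repo name.
prompt = "### Instruction:\nName three uses for a paperclip.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
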
## Merge Details

### Merge Method

This model was merged using the passthrough merge method, which concatenates layer slices from the source model(s) end to end instead of averaging their weights.

### Models Merged

The following model was included in the merge (all four slices come from this single model):

* [concedo/KobbleTinyV2-1.1B](https://huggingface.co/concedo/KobbleTinyV2-1.1B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 16] # adjusted from [0, 24] to [0, 16]
    model: concedo/KobbleTinyV2-1.1B
- sources:
  - layer_range: [5, 16] # adjusted from [8, 24] to [5, 16]
    model: concedo/KobbleTinyV2-1.1B
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [5, 16] # adjusted from [8, 24] to [5, 16]
    model: concedo/KobbleTinyV2-1.1B
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [16, 22] # adjusted from [24, 32] to [16, 22]
    model: concedo/KobbleTinyV2-1.1B
```
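
The four slices stack 16 + 11 + 11 + 6 = 44 decoder layers from the 22-layer base, which is how a 1.1B model roughly doubles into the "2b" of the repo name. The `scale` entries zero the outputs of `o_proj` and `down_proj` in the duplicated slices, so each copied layer initially writes nothing to the residual stream and behaves as a near no-op. Below is a minimal sketch of the same operation done by hand with transformers; it is illustrative only (mergekit operates on the checkpoint tensors directly), and the attribute names assume the Llama architecture of the base model:

```python
import copy

import torch
from transformers import AutoModelForCausalLM

# Load the 22-layer base in bfloat16, matching `dtype` in the config above.
base = AutoModelForCausalLM.from_pretrained(
    "concedo/KobbleTinyV2-1.1B", torch_dtype=torch.bfloat16
)
layers = base.model.layers  # ModuleList of 22 Llama decoder layers


def copy_slice(start, end, zero_out=False):
    """Deep-copy decoder layers [start, end); optionally silence them."""
    block = [copy.deepcopy(layers[i]) for i in range(start, end)]
    if zero_out:
        for layer in block:
            # scale 0.0 on o_proj/down_proj: the duplicate adds nothing to
            # the residual stream until it is fine-tuned.
            layer.self_attn.o_proj.weight.data.zero_()
            layer.mlp.down_proj.weight.data.zero_()
    return block


# Mirror the four slices from the YAML: [0,16) + [5,16) + [5,16) + [16,22).
stacked = (
    copy_slice(0, 16)
    + copy_slice(5, 16, zero_out=True)
    + copy_slice(5, 16, zero_out=True)
    + copy_slice(16, 22)
)
base.model.layers = torch.nn.ModuleList(stacked)
base.config.num_hidden_layers = len(stacked)  # 44
for idx, layer in enumerate(base.model.layers):
    layer.self_attn.layer_idx = idx  # keep KV-cache indexing consistent
```

In practice the config is simply saved to a file and passed to mergekit's `mergekit-yaml` CLI, e.g. `mergekit-yaml config.yml ./output-model-directory`, which performs the slicing and scaling on the safetensors shards.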