---
base_model:
  - meta-llama/Llama-3.2-3B
  - PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B
  - meta-llama/Llama-3.2-3B-Instruct
library_name: transformers
tags:
  - mergekit
  - merge
---

# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the `breadcrumbs_ties` merge method, with meta-llama/Llama-3.2-3B as the base model.

### Models Merged

The following models were included in the merge:

- meta-llama/Llama-3.2-3B-Instruct
- PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: breadcrumbs_ties
base_model: meta-llama/Llama-3.2-3B
tokenizer_source: PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B
dtype: bfloat16
parameters:
  normalize: true
models:
  - model: meta-llama/Llama-3.2-3B-Instruct
    parameters:
      weight: 1
      density: 0.9
      gamma: 0.01
      normalize: true
      int8_mask: true
      random_seed: 0
      temperature: 0.5
      top_p: 0.65
      inference: true
      max_tokens: 999999999
      stream: true
      quantization:
        method: int8
        value: 100
      quantization:
        method: int4
        value: 100
  - model: PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B
    parameters:
      weight: 1
      density: 0.9
      gamma: 0.01
      normalize: true
      int8_mask: true
      random_seed: 0
      temperature: 0.5
      top_p: 0.65
      inference: true
      max_tokens: 999999999
      stream: true
      quantization:
        method: int8
        value: 100
      quantization:
        method: int4
        value: 100
```
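
A configuration like the one above is typically executed with mergekit's `mergekit-yaml` command. This is a minimal sketch, assuming mergekit is installed in a PyTorch-capable environment; the file and directory names are placeholders:

```shell
# Install mergekit (assumption: a recent pip environment with PyTorch available)
pip install mergekit

# Run the merge: config.yml holds the YAML configuration above,
# and ./merged-model is the output directory for the merged weights
mergekit-yaml config.yml ./merged-model --cuda
```

Note that the meta-llama/Llama-3.2-3B models are gated on the Hugging Face Hub, so authenticating first (for example via `huggingface-cli login`) is required before the source weights can be downloaded.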