---
base_model:
  - EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
  - SicariusSicariiStuff/Negative_LLAMA_70B
  - TheDrummer/Anubis-70B-v1
  - Sao10K/70B-L3.3-Cirrus-x1
  - Sao10K/L3.1-70B-Hanami-x1
  - nbeerbower/Llama-3.1-Nemotron-lorablated-70B
library_name: transformers
tags:
  - mergekit
  - merge
license: llama3.3
---

After a lot of positive feedback on Progenitor V1.1, I got some advice on a couple of settings I could tune for hopefully better results: mainly changing the tokenizer source and letting the merge compute at full float32 before scaling down to bfloat16 (the `tokenizer`, `dtype`, and `out_dtype` keys in the configuration below). With these two changes in place, and the rest identical to Progenitor V1.1, I present Progenitor V2.1!

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
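
For reference, a merge like this is normally produced by pointing mergekit at the YAML in the Configuration section below, either via the `mergekit-yaml` CLI or its Python entry point. Here is a minimal sketch, assuming mergekit's documented Python API; the `config.yaml` file name and the output directory are placeholders:

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# "config.yaml" is assumed to contain the YAML from the Configuration section below.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./progenitor-v2.1",        # hypothetical output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU for the merge if one is present
        copy_tokenizer=True,             # copy the tokenizer named in the config
    ),
)
```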

## Merge Details

### Merge Method

This model was merged using the Linear [DELLA](https://arxiv.org/abs/2406.11617) merge method, with [nbeerbower/Llama-3.1-Nemotron-lorablated-70B](https://huggingface.co/nbeerbower/Llama-3.1-Nemotron-lorablated-70B) as the base.

### Models Merged

The following models were included in the merge:

* [EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1)
* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
* [TheDrummer/Anubis-70B-v1](https://huggingface.co/TheDrummer/Anubis-70B-v1)
* [Sao10K/70B-L3.3-Cirrus-x1](https://huggingface.co/Sao10K/70B-L3.3-Cirrus-x1)
* [Sao10K/L3.1-70B-Hanami-x1](https://huggingface.co/Sao10K/L3.1-70B-Hanami-x1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Sao10K/L3.1-70B-Hanami-x1
    parameters:
      weight: 0.20
      density: 0.7
  - model: Sao10K/70B-L3.3-Cirrus-x1
    parameters:
      weight: 0.20
      density: 0.7
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
    parameters:
      weight: 0.20
      density: 0.7
  - model: TheDrummer/Anubis-70B-v1
    parameters:
      weight: 0.20
      density: 0.7
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
    parameters:
      weight: 0.20
      density: 0.7
merge_method: della_linear
base_model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
parameters:
  epsilon: 0.2
  lambda: 1.1
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: Sao10K/70B-L3.3-Cirrus-x1
```
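
Once merged (or downloaded), the model loads like any other Llama 3.3 70B checkpoint with transformers. A minimal sketch; the repo id below is a placeholder for wherever the merged weights are published:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tarek07/Progenitor-V2.1"  # hypothetical repo id; substitute the actual one

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are saved in bfloat16 (out_dtype above)
    device_map="auto",           # shard the 70B model across available GPUs
)

prompt = "Write the opening paragraph of a gothic mystery."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```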