---
tags:
  - merge
  - mergekit
  - lazymergekit
  - DiscoResearch/DiscoLM_German_7b_v1
  - DRXD1000/Phoenix
  - VAGOsolutions/SauerkrautLM-7b-v1-mistral
  - malteos/hermeo-7b
base_model:
  - DiscoResearch/DiscoLM_German_7b_v1
  - DRXD1000/Phoenix
  - VAGOsolutions/SauerkrautLM-7b-v1-mistral
  - malteos/hermeo-7b
---

# Wiedervereinigung-7b-dpo-laser


Some of the best German 7B-parameter models, combined in a dare_ties merge.

Since the original models are all based on Mistral, three of them on the brilliant German LeoLM/leo-mistral-hessianai-7b, they are reunited in this merged model, hence the name: Wiedervereinigung is German for reunification. To improve result quality, the merged model was DPO-trained on a German translation of oaast-dpo using our German fork of LLaMA-Factory, and afterwards given a laserRMT treatment; a sketch of the DPO step is shown below.
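
The card only names the training stack, so the following is a minimal, hedged sketch of such a DPO run using TRL's `DPOTrainer` rather than the authors' LLaMA-Factory fork. The dataset id `mayflowergmbh/oasst-dpo-de` is hypothetical and merely stands in for the German oaast-dpo translation; hyperparameters are illustrative, not the authors' settings.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# The pre-DPO merge published by the same authors.
base = "mayflowergmbh/Wiedervereinigung-7b"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# Hypothetical dataset id standing in for the German oaast-dpo translation;
# DPOTrainer expects "prompt", "chosen" and "rejected" columns.
dataset = load_dataset("mayflowergmbh/oasst-dpo-de", split="train")

training_args = TrainingArguments(
    output_dir="wiedervereinigung-7b-dpo",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    remove_unused_columns=False,  # keep the raw preference columns for DPOTrainer
)

trainer = DPOTrainer(
    model,
    ref_model=None,           # a frozen reference copy is created internally
    args=training_args,
    beta=0.1,                 # strength of the KL penalty toward the reference
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```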

Wiedervereinigung-7b itself is a LazyMergekit merge of:

- [DiscoResearch/DiscoLM_German_7b_v1](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1)
- [DRXD1000/Phoenix](https://huggingface.co/DRXD1000/Phoenix)
- [VAGOsolutions/SauerkrautLM-7b-v1-mistral](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral)
- [malteos/hermeo-7b](https://huggingface.co/malteos/hermeo-7b)

All the actual heavy lifting was done by the creators of these models.

## 🧩 Configuration

```yaml
models:
  - model: LeoLM/leo-mistral-hessianai-7b
    # No parameters necessary for base model
  - model: DiscoResearch/DiscoLM_German_7b_v1
    parameters:
      density: 0.6
      weight: 0.25
  - model: DRXD1000/Phoenix
    parameters:
      density: 0.6
      weight: 0.25
  - model: VAGOsolutions/SauerkrautLM-7b-v1-mistral
    parameters:
      density: 0.6
      weight: 0.25
  - model: malteos/hermeo-7b
    parameters:
      density: 0.6
      weight: 0.25
merge_method: dare_ties
base_model: LeoLM/leo-mistral-hessianai-7b
parameters:
  int8_mask: true
dtype: bfloat16
```
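
To reproduce the merge, this config can be fed to mergekit. Below is a minimal sketch using mergekit's Python entry point (the `mergekit-yaml` CLI wraps the same logic); the module paths and option names are assumptions based on the mergekit repository, not taken from this card.

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the dare_ties config shown above (saved as config.yaml).
with open("config.yaml", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Write the merged model to ./Wiedervereinigung-7b.
run_merge(
    config,
    out_path="./Wiedervereinigung-7b",
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=True),
)
```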

## mt-bench-de

The results are not bad, but additional investment in DPO fine-tuning would probably help a lot.

```json
{
    "first_turn": 6.4625,
    "second_turn": 5.6375,
    "categories": {
        "writing": 7.6,
        "roleplay": 7.5,
        "reasoning": 4.25,
        "math": 3.35,
        "coding": 3.1,
        "extraction": 8.15,
        "stem": 6.55,
        "humanities": 7.9
    },
    "average": 6.050000000000001
}
```
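
The overall average is simply the mean of the two turn scores: (6.4625 + 5.6375) / 2 = 6.05.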

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mayflowergmbh/Wiedervereinigung-7b-dpo"
messages = [{"role": "user", "content": "Was ist ein large language model?"}]

# Format the conversation with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model across available devices for generation.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```