---
language:
  - de
  - en
  - it
  - fr
  - pt
  - nl
  - ar
  - es
license: apache-2.0
tags:
  - spectrum
  - sft
  - mlx
base_model: VAGOsolutions/SauerkrautLM-v2-14b-SFT
model-index:
  - name: SauerkrautLM-v2-14b-SFT
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 69.64
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-SFT
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 45.82
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-SFT
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 29.23
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-SFT
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 11.41
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-SFT
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 11.07
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-SFT
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 46.73
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-SFT
          name: Open LLM Leaderboard
---

# stelterlab/SauerkrautLM-v2-14b-SFT-MLX

The model stelterlab/SauerkrautLM-v2-14b-SFT-MLX was converted to MLX format from [VAGOsolutions/SauerkrautLM-v2-14b-SFT](https://huggingface.co/VAGOsolutions/SauerkrautLM-v2-14b-SFT) using mlx-lm version 0.19.2.
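For reference, conversions like this are typically produced with the mlx-lm converter CLI. The exact invocation used for this repo is not recorded, so the following is only an illustrative sketch (the output path is a placeholder):

```bash
# Illustrative only: these are the converter's standard options,
# not the recorded command for this repo.
mlx_lm.convert --hf-path VAGOsolutions/SauerkrautLM-v2-14b-SFT \
    --mlx-path SauerkrautLM-v2-14b-SFT-MLX
```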

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the converted weights and tokenizer from the Hugging Face Hub.
model, tokenizer = load("stelterlab/SauerkrautLM-v2-14b-SFT-MLX")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
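As a quick smoke test, mlx-lm also installs a command-line generator; the prompt below is arbitrary:

```bash
mlx_lm.generate --model stelterlab/SauerkrautLM-v2-14b-SFT-MLX \
    --prompt "Warum ist Sauerkraut gesund?" --max-tokens 256
```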

Original weights by VAGOsolutions. The original model card follows:

# SauerkrautLM-v2-14b-SFT


Fine-tuned Model - Celebrating one year of SauerkrautLM with our most advanced model yet, showcasing two-phase Spectrum Fine-Tuning

Introducing SauerkrautLM-v2-14b-SFT – our latest Sauerkraut version based on Qwen/Qwen2.5-14B, celebrating the one-year anniversary of SauerkrautLM!

- Two-phase Spectrum Fine-Tuning approach
- Phase 1: 25% layer targeting with 0.6B tokens
- Phase 2: 20% layer targeting with 0.6B tokens
- Enhanced mathematical capabilities, function calling, and multilingual performance

## Table of Contents

1. Overview of all SauerkrautLM-v2-14b Models
2. Model Details
3. Evaluation
4. Disclaimer
5. Contact
6. Collaborations
7. Acknowledgement

## All SauerkrautLM-v2-14b

| Model | HF | EXL2 | GGUF | AWQ |
|-------|----|------|------|-----|
| SauerkrautLM-v2-14b-SFT | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-v2-14b-SFT) | coming soon | coming soon | coming soon |
| SauerkrautLM-v2-14b-DPO | Link | coming soon | coming soon | coming soon |

## Model Details

### SauerkrautLM-v2-14b-SFT

- **Model Type:** SauerkrautLM-v2-14b-SFT is a fine-tuned model based on Qwen/Qwen2.5-14B
- **Language(s):** German, English
- **License:** Apache 2.0
- **Contact:** VAGO solutions

### Training Procedure

This model represents a significant advancement in our fine-tuning methodology, utilizing a two-phase Spectrum Fine-Tuning approach (an illustrative sketch of the layer-selection idea follows the phase breakdown below):

Phase 1 (25% Layer Targeting):

- Training on 0.6B tokens with four distinct components:
  1. Mathematics data (curated using a proprietary classifier)
  2. English performance data (from Sauerkraut-v1)
  3. High-quality German training data (from Sauerkraut-v1)
  4. Function calling data (from Sauerkraut-v2)

Phase 2 (20% Layer Targeting):

- Training on an additional 0.6B tokens with partial overlap:
  1. New mathematics data (classifier-selected)
  2. New English performance data (from Sauerkraut-v2)
  3. New German training data (from Sauerkraut-v2)
  4. Function calling data (from Sauerkraut-v2)
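For readers unfamiliar with Spectrum, the core mechanic is to rank each weight matrix by a signal-to-noise measure and leave only the top-scoring fraction trainable (25% in Phase 1, 20% in Phase 2) while everything else stays frozen. Below is a minimal, hypothetical sketch of that idea; it is not the authors' implementation, and the `snr_score` proxy is an illustrative stand-in:

```python
# Hypothetical sketch of Spectrum-style layer targeting; NOT the authors' code.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-14B", torch_dtype=torch.bfloat16
)

def snr_score(weight: torch.Tensor) -> float:
    # Toy SNR proxy: mean over standard deviation of the singular values.
    s = torch.linalg.svdvals(weight.float())
    return (s.mean() / s.std()).item()

# Rank all 2-D weight matrices; keep only the top 25% trainable (Phase 1).
# Phase 2 would repeat the ranking with a 20% budget on fresh data.
scored = sorted(
    ((name, snr_score(p)) for name, p in model.named_parameters() if p.ndim == 2),
    key=lambda item: item[1],
    reverse=True,
)
trainable = {name for name, _ in scored[: len(scored) // 4]}

for name, p in model.named_parameters():
    p.requires_grad = name in trainable
```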

Dataset Composition:

- Carefully curated mathematical content using a proprietary classification model
- Premium multilingual data from both Sauerkraut-v1 and Sauerkraut-v2
- Specialized function calling training data
- High-quality German-English content across various domains

### Objective and Results

This release marks the one-year anniversary of SauerkrautLM, showcasing our most advanced training methodology to date. The two-phase Spectrum Fine-Tuning approach allows for more nuanced learning while maintaining efficiency in resource usage. The model demonstrates significant improvements in:

- Mathematical reasoning capabilities
- Function calling proficiency
- Multilingual performance
- Instruction following
- Common-sense reasoning

## Evaluation

The original card presents benchmark charts (images not reproduced here) for: AGIEVAL, GPT4ALL, TruthfulQA, Open LLM Leaderboard 2, MMLU (5-shot), and the Berkeley Function Calling Leaderboard.

Please note that our benchmark results in absolute numbers may differ from the Hugging Face Leaderboard due to variations in benchmark evaluation pipelines. However, the relative differences remain consistent.

## Disclaimer

We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out. However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.

## Contact

If you are interested in customized LLMs for business applications, please get in contact with us via our website. We are also grateful for your feedback and suggestions.

## Collaborations

We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions.

## Acknowledgement

Many thanks to Qwen for providing such a valuable model to the Open-Source community.