---
language:
- de
- en
- it
- fr
- pt
- nl
- ar
- es
license: apache-2.0
tags:
- spectrum
- sft
- dpo
base_model:
- VAGOsolutions/SauerkrautLM-v2-14b-SFT
datasets:
- VAGOsolutions/SauerkrautLM-Fermented-GER-DPO
- VAGOsolutions/SauerkrautLM-Fermented-Irrelevance-GER-DPO
model-index:
- name: SauerkrautLM-v2-14b-DPO
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 74.12
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-DPO
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 50.93
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-DPO
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 27.34
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-DPO
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 9.28
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-DPO
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 13.78
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-DPO
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 45.75
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-DPO
      name: Open LLM Leaderboard
---

![SauerkrautLM-v2-14b-DPO](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-3.png "SauerkrautLM-v2-14b-DPO")

## VAGO solutions SauerkrautLM-v2-14b-DPO

**DPO Fine-tuned Model** - *Enhanced DPO-tuned version focused on English performance and German function-calling irrelevance optimization*

Introducing **SauerkrautLM-v2-14b-DPO** – our advanced DPO-tuned version based on [SauerkrautLM-v2-14b-SFT](https://huggingface.co/VAGOsolutions/SauerkrautLM-v2-14b-SFT)!

- Three-phase training approach combining SFT and DPO
- Enhanced English language performance while maintaining German capabilities
- Optimized function calling with improved German irrelevance handling
- Comes with two new community datasets for custom training (releasing soon)

# Table of Contents
1. [Overview of all SauerkrautLM-v2-14b Models](#all-sauerkrautlm-v2-14b)
2. [Model Details](#model-details)
   - [Training procedure](#training-procedure)
3. [Released Datasets](#released-datasets)
4. [Evaluation](#evaluation)
5. [Disclaimer](#disclaimer)
6. [Contact](#contact)
7. [Collaborations](#collaborations)
8. [Acknowledgement](#acknowledgement)

## All SauerkrautLM-v2-14b

| Model | HF | EXL2 | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| SauerkrautLM-14b-v2-SFT | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-v2-14b-SFT) | coming soon | coming soon | coming soon |
| SauerkrautLM-14b-v2-DPO | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-v2-14b-DPO) | coming soon | coming soon | coming soon |

## Model Details

**SauerkrautLM-v2-14b-DPO**
- **Base Model:** [SauerkrautLM-v2-14b-SFT](https://huggingface.co/VAGOsolutions/SauerkrautLM-v2-14b-SFT)
- **Language(s):** English (primary), German
- **License:** Apache 2.0
- **Contact:** [VAGO solutions](https://vago-solutions.ai)
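The model can be used with the standard 🤗 Transformers text-generation workflow. Below is a minimal inference sketch, assuming a recent `transformers` release with chat-template support; the prompt, dtype, and sampling parameters are illustrative choices, not official recommendations:

```python
# Minimal inference sketch (assumed setup, not an official recommendation):
# load the model with Transformers and generate a reply via the chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VAGOsolutions/SauerkrautLM-v2-14b-DPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # illustrative; pick a dtype your hardware supports
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain the difference between SFT and DPO in two sentences."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```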
## Training Procedure

This model extends our two-phase SFT model with an additional DPO phase, creating a comprehensive three-phase training approach:

**Phase 1 & 2 (SFT)**:
- Identical to the SauerkrautLM-v2-14b-SFT training
- Phase 1: 25% layer targeting with 0.6B tokens
- Phase 2: 20% layer targeting with 0.6B tokens

**Phase 3 (DPO)**:
- Spectrum Fine-Tuning targeting 15% of layers
- Training on 80M tokens
- Focus on English performance optimization
- Integration of German performance preservation
- Enhanced German function-calling irrelevance handling

**Dataset Composition for DPO**:
- Extension of our previous DPO dataset
- New SauerkrautLM-Fermented-GER-DPO dataset (releasing soon)
- SauerkrautLM-Fermented-Irrelevance-GER-DPO dataset (releasing soon)
- Carefully balanced to maintain German language capabilities

## Released Datasets

As part of this release, we are making parts of two new datasets available to the community in the coming days:

**SauerkrautLM-Fermented-GER-DPO**:
- 3,300 high-quality German training samples
- Multiple judgment criteria for flexible filtering (see the loading sketch below)
- Enables customized training approaches
- Comprehensive metadata for sample selection

**SauerkrautLM-Fermented-Irrelevance-GER-DPO**:
- 2,000 specialized German training samples
- Focus on function-calling irrelevance optimization
- Multiple filtering criteria included
- Designed for community experimentation
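Once the datasets are published, their per-sample judgment metadata should make it easy to build custom preference mixes. The following is a minimal sketch with the 🤗 `datasets` library, assuming the dataset is live on the Hub; the split and column names (`judgement_score`, `prompt`, `chosen`, `rejected`) are hypothetical placeholders until the official schema is released:

```python
# Hypothetical filtering sketch: the dataset id comes from this card, but the
# split and column names ("judgement_score", "prompt", "chosen", "rejected")
# are placeholders until the official schema is published.
from datasets import load_dataset

ds = load_dataset("VAGOsolutions/SauerkrautLM-Fermented-GER-DPO", split="train")

# Keep only samples whose (hypothetical) judgment score clears a threshold.
high_quality = ds.filter(lambda sample: sample["judgement_score"] >= 8)

# Reduce to the preference-pair columns a DPO trainer typically expects.
dpo_pairs = high_quality.select_columns(["prompt", "chosen", "rejected"])
print(dpo_pairs)
```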
## Objective and Results

This DPO-enhanced version aims to:
- Optimize English language performance
- Maintain German language capabilities
- Improve German function-calling irrelevance handling
- Provide valuable training resources to the community

## Evaluation

(same diagrams as in the SauerkrautLM-v2-14b-SFT model card)

**AGIEVAL**
![SauerkrautLM-v2-14b-DPO-AGIEVAL](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-AGIEVAL.png "SauerkrautLM-v2-14b-DPO-AGIEVAL")

**GPT4ALL**
![SauerkrautLM-v2-14b-DPO-GPT4ALL](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-GPT4ALL.png "SauerkrautLM-v2-14b-DPO-GPT4ALL")

**TRUTHFULQA**
![SauerkrautLM-v2-14b-DPO-TRUTHFULQA](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-TRUTHFULQA.png "SauerkrautLM-v2-14b-DPO-TRUTHFULQA")

**OPENLEADERBOARD 2**
![SauerkrautLM-v2-14b-DPO-OPENLEADERBOARD](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-OPENLEADERBOARD.png "SauerkrautLM-v2-14b-DPO-OPENLEADERBOARD")

**MMLU 5-shot**
![SauerkrautLM-v2-14b-DPO-MMLU-5shot](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-MMLU-5shot.png "SauerkrautLM-v2-14b-DPO-MMLU-5shot")

**Berkeley Function Calling Leaderboard**
![SauerkrautLM-v2-14b-DPO-BERKELEY](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-BERKELEY.png "SauerkrautLM-v2-14b-DPO-BERKELEY")

Please note that our benchmark results in absolute numbers may differ from the Hugging Face Leaderboard due to variations in benchmark evaluation pipelines. However, the relative differences remain consistent.

## Disclaimer

Despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out, and we cannot guarantee consistently appropriate behavior. If you encounter any issues or come across inappropriate content, please inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not responsible for the actions of third parties who use our models.

## Contact

If you are interested in customized LLMs for business applications, please get in contact with us via our website. We are also grateful for your feedback and suggestions.

## Collaborations

We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.ai).

## Acknowledgement

Many thanks to [Qwen](https://huggingface.co/Qwen) for providing such a valuable base model, and to our community for their continued support and engagement.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_VAGOsolutions__SauerkrautLM-v2-14b-DPO)

| Metric              | Value |
|---------------------|------:|
| Avg.                | 36.87 |
| IFEval (0-Shot)     | 74.12 |
| BBH (3-Shot)        | 50.93 |
| MATH Lvl 5 (4-Shot) | 27.34 |
| GPQA (0-shot)       |  9.28 |
| MuSR (0-shot)       | 13.78 |
| MMLU-PRO (5-shot)   | 45.75 |