---
language:
- en
- ko
license: mit
datasets:
- We-Want-GPU/Yi-Ko-DPO-Orca-DPO-Pairs
pipeline_tag: text-generation
model-index:
- name: FusionNet_7Bx2_MoE_Ko_DPO_Adapter_Attach
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 73.89
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dddsaty/FusionNet_7Bx2_MoE_Ko_DPO_Adapter_Attach
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 88.94
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dddsaty/FusionNet_7Bx2_MoE_Ko_DPO_Adapter_Attach
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.03
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dddsaty/FusionNet_7Bx2_MoE_Ko_DPO_Adapter_Attach
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 71.24
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dddsaty/FusionNet_7Bx2_MoE_Ko_DPO_Adapter_Attach
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 87.61
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dddsaty/FusionNet_7Bx2_MoE_Ko_DPO_Adapter_Attach
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 69.83
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dddsaty/FusionNet_7Bx2_MoE_Ko_DPO_Adapter_Attach
      name: Open LLM Leaderboard
---

**Explanation**
- A DPO-trained adapter was attached to the base model
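The adapter-attach step above can be sketched as follows. This is a minimal, hypothetical sketch assuming the DPO adapter was trained and saved in PEFT (LoRA) format; the adapter repo id used below is an assumption, not confirmed by this card.

```python
def attach_dpo_adapter(base_id: str, adapter_id: str):
    """Load the base MoE model, then attach the DPO-trained adapter via PEFT.

    Imports are deferred so this sketch can be read and imported without
    the (large) dependencies installed.
    """
    from transformers import AutoModelForCausalLM  # base model loader
    from peft import PeftModel  # attaches a saved LoRA/PEFT adapter

    base = AutoModelForCausalLM.from_pretrained(
        base_id, torch_dtype="auto", device_map="auto"
    )
    # Wrap the base model with the adapter weights.
    return PeftModel.from_pretrained(base, adapter_id)


BASE_MODEL = "TomGrc/FusionNet_7Bx2_MoE_v0.1"
# Assumed adapter repo id for illustration only.
DPO_ADAPTER = "dddsaty/FusionNet_7Bx2_MoE_Ko_DPO_Adapter_Attach"
```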

**Base Model**
- [TomGrc/FusionNet_7Bx2_MoE_v0.1](https://huggingface.co/TomGrc/FusionNet_7Bx2_MoE_v0.1)

**Adapter Base Model**
- [yanolja/KoSOLAR-10.7B-v0.3](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.3)

**Adapter Corpus**
- [We-Want-GPU/Yi-Ko-DPO-Orca-DPO-Pairs](https://huggingface.co/datasets/We-Want-GPU/Yi-Ko-DPO-Orca-DPO-Pairs)
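For inference, the resulting model can be used like any other `text-generation` checkpoint. A minimal usage sketch, assuming the merged weights are published under this repo's model id (generation arguments are illustrative defaults):

```python
def generate(
    prompt: str,
    model_id: str = "dddsaty/FusionNet_7Bx2_MoE_Ko_DPO_Adapter_Attach",
    max_new_tokens: int = 128,
) -> str:
    """Run greedy text generation with the transformers pipeline.

    The import is deferred so this sketch can be inspected without
    transformers installed or the weights downloaded.
    """
    from transformers import pipeline

    pipe = pipeline("text-generation", model=model_id, device_map="auto")
    out = pipe(prompt, max_new_tokens=max_new_tokens, do_sample=False)
    return out[0]["generated_text"]
```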

**Score**
|Average|ARC|HellaSwag|MMLU|TruthfulQA|Winogrande|GSM8K|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|76.09|73.89|88.94|65.03|71.24|87.61|69.83|

**Log**
- 2024.02.13: Initial version uploaded

**LICENSE**
- MIT

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dddsaty__FusionNet_7Bx2_MoE_Ko_DPO_Adapter_Attach).

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |76.09|
|AI2 Reasoning Challenge (25-Shot)|73.89|
|HellaSwag (10-Shot)              |88.94|
|MMLU (5-Shot)                    |65.03|
|TruthfulQA (0-shot)              |71.24|
|Winogrande (5-shot)              |87.61|
|GSM8k (5-shot)                   |69.83|