---
license: llama3
model-index:
- name: LLaMA3-iterative-DPO-final
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 53.34
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=RLHFlow/LLaMA3-iterative-DPO-final
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 29.79
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=RLHFlow/LLaMA3-iterative-DPO-final
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 0.0
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=RLHFlow/LLaMA3-iterative-DPO-final
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 4.47
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=RLHFlow/LLaMA3-iterative-DPO-final
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 5.08
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=RLHFlow/LLaMA3-iterative-DPO-final
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 25.08
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=RLHFlow/LLaMA3-iterative-DPO-final
      name: Open LLM Leaderboard
---
# LLaMA3-iterative-DPO-final
## Introduction
We release an unofficial checkpoint of **LLaMA3-iterative-DPO-final**, a state-of-the-art instruct model in its size class.
On all three widely used instruct-model benchmarks (**Alpaca-Eval-V2**, **MT-Bench**, and **Chat-Arena-Hard**), our model outperforms all models of similar size (e.g., LLaMA-3-8B-it), most larger open-source models (e.g., Mixtral-8x7B-it),
and strong proprietary models (e.g., GPT-3.5-turbo-0613). The model is trained entirely on open-source datasets, without any additional human or GPT-4 labeling.
Even better, we provide a [detailed recipe](https://github.com/RLHFlow/Online-RLHF) to reproduce the model. Enjoy!
## Model Releases
See the [collection](https://huggingface.co/collections/RLHFlow/online-rlhf-663ae95fade1a39663dab218) for the training set, the reward/preference model, and the SFT model.
- [SFT model](https://huggingface.co/RLHFlow/LLaMA3-SFT)
- [Reward model](https://huggingface.co/sfairXC/FsfairX-LLaMA3-RM-v0.1)
- This reward model is closer to the concise version described in the report; we are still working on releasing the full version due to a licensing issue. A minimal scoring sketch follows below.
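The sketch below shows one way to query the reward model for a scalar score on a (prompt, response) pair. It assumes the model loads through `AutoModelForSequenceClassification` and emits a single logit per sequence, as is typical for Bradley-Terry reward models; check the reward model's own card for its exact interface.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

rm_name = "sfairXC/FsfairX-LLaMA3-RM-v0.1"
rm_tokenizer = AutoTokenizer.from_pretrained(rm_name)
# Assumption: the reward head produces one scalar logit per sequence.
rm = AutoModelForSequenceClassification.from_pretrained(rm_name).eval()

def score(prompt: str, response: str) -> float:
    """Return the scalar reward for a single (prompt, response) pair."""
    chat = [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": response},
    ]
    input_ids = rm_tokenizer.apply_chat_template(chat, return_tensors="pt")
    with torch.no_grad():
        return rm(input_ids).logits[0, 0].item()

print(score("What is 2 + 2?", "2 + 2 = 4."))
```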
## Dataset
- [Preference data mix](https://huggingface.co/datasets/hendrydong/preference_700K)
- [Prompt collection for RLHF training](https://huggingface.co/datasets/RLHFlow/prompt-collection-v0.1)
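Both datasets load through the standard `datasets` API. A minimal sketch follows; the `train` split name is an assumption, and the printed fields should be checked against the actual schemas.
```python
from datasets import load_dataset

# Preference pairs used for reward modeling and DPO updates.
prefs = load_dataset("hendrydong/preference_700K", split="train")
print(prefs.column_names)  # inspect the schema (e.g., chosen/rejected fields)

# Prompt pool from which on-policy responses are sampled during training.
prompts = load_dataset("RLHFlow/prompt-collection-v0.1", split="train")
print(len(prompts))
```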
## Training methods
We have developed a simple and efficient online RLHF recipe for training instruct LLMs. Because the recipe is DPO-based, it is much cheaper to run and simpler to tune than PPO-based approaches.
Unlike the widely used offline DPO, the online component of our approach effectively mitigates distribution shift during policy optimization.
For a detailed exposition, please refer to our accompanying technical report.
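To make the recipe concrete, below is a minimal sketch of its core ingredients: the standard DPO objective, plus the shape of one online iteration in comments. The pairing rule shown (best vs. worst of n sampled responses, ranked by the reward model) is a simplification for illustration; the exact recipe is in the linked repository and technical report.
```python
import torch
import torch.nn.functional as F

def dpo_loss(pi_chosen_logps: torch.Tensor, pi_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor, ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO objective over a batch of preference pairs.

    Each tensor holds the summed per-token log-probabilities of the chosen
    or rejected response under the current policy (pi_*) or the frozen
    reference model (ref_*).
    """
    pi_logratios = pi_chosen_logps - pi_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    # Push the policy to prefer chosen over rejected by a larger margin
    # than the reference model does, at inverse temperature beta.
    return -F.logsigmoid(beta * (pi_logratios - ref_logratios)).mean()

# One online iteration, schematically:
#   1. sample n responses per prompt from the current policy
#   2. score every response with the reward model
#   3. pair the best-scored against the worst-scored as (chosen, rejected)
#   4. run DPO updates on these fresh, on-policy pairs, then repeat
```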
## Chat Benchmarks
| **Model** | **Size** | **Method** | **LC Alpaca-Eval-V2** | **MT-Bench** | **Chat-Arena-Hard** |
|-------------------------|----------|-------------------|-----------------------|--------------|---------------------|
| **Small Open-Sourced Models** | | | | | |
| Gemma-7B-it | 7B | SFT | 10.4 | 6.38 | 7.5 |
| Zephyr-7B-beta | 7B | Vanilla DPO | 13.1 | 7.34 | - |
| Mistral-7B-v0.2-it | 7B | SFT | 17.1 | 7.51 | 12.6 |
| Open-Chat-0106 | 7B | SFT | 15.6 | 7.8 | - |
| Starling-7B-beta | 7B | PPO | 25.8 | 8.12 | 23.0 |
| LLaMA-3-8B-it | 8B | RS+DPO+PPO | 22.9 | 8.16 | 20.6 |
| **Ours** | | | | | |
| Ours (SFT baseline) | 8B | SFT | 10.2 | 7.69 | 5.6 |
| Ours (DPO baseline) | 8B | Vanilla DPO | 22.5 | 8.17 | 22.4 |
| Ours (Online RLHF) | 8B | Iterative DPO | **37.2** | **8.46** | **29.1** |
| **Large Open-Sourced Models** | | | | | |
| Vicuna-33b-v1.3 | 33B | SFT | 17.6 | 7.12 | 8.6 |
| Yi-34B-Chat | 34B | SFT | 27.2 | - | 23.1 |
| Mixtral-8x7B-it | 45B* | SFT | 23.7 | 8.30 | 23.4 |
| Tulu-2-DPO-70B | 70B | Vanilla DPO | 21.2 | 7.89 | 15.0 |
| LLaMA-3-70B-it | 70B | RS+DPO+PPO | 34.4 | 8.95 | 41.1 |
| Mixtral-8x22B-it | 141B* | SFT | 30.9 | 8.66 | 36.4 |
| **Proprietary Models** | | | | | |
| GPT-3.5-turbo-1106 | - | - | 19.3 | 8.35 | 18.9 |
| GPT-3.5-turbo-0613 | - | - | 22.7 | 8.39 | 24.8 |
| GPT-4-0613 | - | - | 30.2 | 9.18 | 37.9 |
| Claude-3-Opus | - | - | 40.5 | 9.00 | 60.4 |
| GPT-4 Turbo (04/09) | - | - | 55.0 | - | 82.6 |

\* Total parameter count of the mixture-of-experts models.
## Academic Benchmarks
| **Model** | **Size** | **Method** | **GSM-8K** | **MMLU** | **HumanEval** | **TruthfulQA** | **ARC** | **MBPP** |
|----------------------------|----------|-----------------|------------|----------|---------------|----------------|---------|----------|
| LLaMA-3-8B-it | 8B | RS+DPO+PPO | 79.6 | 66.0 | 61.6 | 43.9 | 59.5 | 61.1 |
| Ours (SFT baseline) | 8B | SFT | 74.2 | 64.7 | 65.2 | 53.4 | 61.4 | 62.3 |
| Ours (DPO baseline) | 8B | Vanilla DPO | 79.8 | 64.5 | 63.4 | 61.8 | 65.2 | 60.3 |
| Ours (Iterative RLHF) | 8B | Iterative DPO | 80.7 | 65.3 | 64.6 | 60.4 | 64.3 | 60.8 |
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"
model = AutoModelForCausalLM.from_pretrained(
    "RLHFlow/LLaMA3-iterative-DPO-final", torch_dtype=torch.bfloat16
).to(device)
tokenizer = AutoTokenizer.from_pretrained("RLHFlow/LLaMA3-iterative-DPO-final")

messages = [
    {"role": "user", "content": "I'm trying to teach myself to have nicer handwriting. Can you help?"},
]
# add_generation_prompt appends the assistant header so the model starts
# a fresh reply instead of continuing the user turn.
model_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

output_tokens = model.generate(model_inputs, max_new_tokens=1024, do_sample=True)
# Decode only the newly generated tokens, dropping the echoed prompt.
model_outputs = tokenizer.batch_decode(
    output_tokens[:, model_inputs.shape[-1]:], skip_special_tokens=True
)
print(model_outputs[0])
```
## Limitations
RLHFlow/LLaMA3-iterative-DPO-final is an unofficial checkpoint developed to illustrate the power of online iterative RLHF and is intended for research purposes. While safety and ethical considerations are integral to our alignment process,
the model may still generate offensive or unethical content, particularly under adversarial conditions.
We are committed to continuously improving our models to minimize such risks and encourage responsible usage.
## Citation
Please cite our technical report if you find our model useful for your research or product.
```
@misc{dong2024rlhf,
title={RLHF Workflow: From Reward Modeling to Online RLHF},
author={Hanze Dong and Wei Xiong and Bo Pang and Haoxiang Wang and Han Zhao and Yingbo Zhou and Nan Jiang and Doyen Sahoo and Caiming Xiong and Tong Zhang},
year={2024},
eprint={2405.07863},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
@misc{xiong2024iterative,
title={Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint},
author={Wei Xiong and Hanze Dong and Chenlu Ye and Ziqi Wang and Han Zhong and Heng Ji and Nan Jiang and Tong Zhang},
year={2024},
eprint={2312.11456},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_RLHFlow__LLaMA3-iterative-DPO-final).
| Metric |Value|
|-------------------|----:|
|Avg. |19.96|
|IFEval (0-Shot) |53.34|
|BBH (3-Shot) |29.79|
|MATH Lvl 5 (4-Shot)| 0.00|
|GPQA (0-shot) | 4.47|
|MuSR (0-shot) | 5.08|
|MMLU-PRO (5-shot) |25.08|