---
license: apache-2.0
language:
- en
---

# Starling-LM-7B-alpha-ExPO

The extrapolated (ExPO) model based on [`berkeley-nest/Starling-LM-7B-alpha`](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) and [`openchat/openchat_3.5`](https://huggingface.co/openchat/openchat_3.5), as in the "[Weak-to-Strong Extrapolation Expedites Alignment](https://arxiv.org/abs/2404.16792)" paper.

Specifically, we obtain this model by extrapolating with **alpha = 0.5** from the weights of the SFT checkpoint ([`openchat/openchat_3.5`](https://huggingface.co/openchat/openchat_3.5)) and the DPO/RLHF checkpoint ([`berkeley-nest/Starling-LM-7B-alpha`](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)), achieving superior alignment with human preference.
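
For illustration, the sketch below shows one way such weight extrapolation could be done with `transformers`, assuming the ExPO-style update `theta_expo = theta_rlhf + alpha * (theta_rlhf - theta_sft)` and that the two checkpoints share the same architecture; it is a minimal sketch, not the authors' exact script.

```python
# Minimal sketch of ExPO-style weight extrapolation (assumed update rule:
# theta_expo = theta_rlhf + alpha * (theta_rlhf - theta_sft)).
# Checkpoint names are from this card; alpha = 0.5 as stated above.
import torch
from transformers import AutoModelForCausalLM

alpha = 0.5
sft = AutoModelForCausalLM.from_pretrained(
    "openchat/openchat_3.5", torch_dtype=torch.bfloat16
)
rlhf = AutoModelForCausalLM.from_pretrained(
    "berkeley-nest/Starling-LM-7B-alpha", torch_dtype=torch.bfloat16
)

sft_state = sft.state_dict()
expo_state = {}
for name, w_rlhf in rlhf.state_dict().items():
    w_sft = sft_state[name]
    # Move further along the SFT -> RLHF direction by a factor of alpha.
    expo_state[name] = w_rlhf + alpha * (w_rlhf - w_sft)

rlhf.load_state_dict(expo_state)
rlhf.save_pretrained("Starling-LM-7B-alpha-ExPO")
```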