---
library_name: transformers
license: mit
datasets:
- hendrydong/preference_700K
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---

This is the bandit reward model introduced in the preprint **Segmenting Text and Learning Their Rewards for Improved RLHF in Language Models** (https://arxiv.org/abs/2501.02790). For more details, please visit our repository at https://github.com/yinyueqin/DenseRewardRLHF-PPO.