Model Card for Tulu V2.5 7B RM - UltraFeedback Value Model
Tulu is a series of language models trained to act as helpful assistants. Tulu V2.5 is a series of models trained with DPO and PPO, starting from the Tulu 2 suite. This is the value model produced during PPO training of the corresponding Tulu V2.5 PPO 7B model; it was initialised from the Tulu V2.5 7B UltraFeedback RM. We release the value model because it may provide a good starting point for additional research or for improved decoding with our released PPO models.
At the time of writing, you may have to install transformers from source to get the LlamaForTokenClassification class.
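As a quick orientation, here is a minimal usage sketch (not taken from the official documentation). It assumes the checkpoint loads directly with LlamaForTokenClassification and that the per-token outputs are value estimates with a single label dimension:

```python
# Minimal sketch: load the value model and score a formatted prompt.
# Assumes a transformers version that ships LlamaForTokenClassification.
from transformers import AutoTokenizer, LlamaForTokenClassification

model_name = "hamishivi/tulu-v2.5-7b-uf-mean-7b-uf-rm-value"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = LlamaForTokenClassification.from_pretrained(model_name)

prompt = "<|user|>\nHow are you?\n<|assistant|>\nI'm doing well, thanks!"
inputs = tokenizer(prompt, return_tensors="pt")
values = model(**inputs).logits  # shape: (batch, sequence_length, num_labels)
print(values.squeeze(-1))        # per-token value estimates (assumes num_labels == 1)
```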
For more details, read the paper: Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback.
Model description
- Model type: One model belonging to a suite of RLHF-tuned chat models trained on a mix of publicly available, synthetic, and human-created datasets.
- Language(s) (NLP): English
- License: Apache 2.0.
- Finetuned from model: meta-llama/Llama-2-7b-hf
Model Sources
- Repository: https://github.com/allenai/open-instruct
- Dataset: Prompts used to train this model can be found here - specifically the ultrafeedback_mean_aspects split.
- Model Family: The collection of related models can be found here.
Input Format
The model is trained to use the following format (note the newlines):
<|user|>
Your message here!
<|assistant|>
For best results, format all inputs in this manner. Make sure to include a newline after <|assistant|>; this can affect generation quality quite a bit.
We have included a chat template in the tokenizer implementing this template.
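For reference, a small sketch of how the bundled chat template could be applied. This assumes the template supports the standard add_generation_prompt argument for appending the assistant turn:

```python
# Sketch: build a prompt with the tokenizer's bundled chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hamishivi/tulu-v2.5-7b-uf-mean-7b-uf-rm-value")
messages = [{"role": "user", "content": "Your message here!"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
# Expected to match the format above:
# <|user|>
# Your message here!
# <|assistant|>
```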
Intended uses & limitations
The model was initially fine-tuned on a filtered and preprocessed version of the Tulu V2 mix dataset, which contains a diverse range of human-created instructions and synthetic dialogues generated primarily by other LLMs. We then further trained the model with a JAX PPO trainer built on EasyLM, using the dataset mentioned above. This model is intended as a research artefact.
Training hyperparameters
The following hyperparameters were used during overall PPO training:
- learning_rate: 1e-06
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
- KL penalty coefficient: 0.05
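For illustration only, the listed values translate roughly into the following Optax optimizer and schedule. This is a hedged sketch, not the actual EasyLM/JAX PPO trainer configuration, and the step count is a hypothetical placeholder:

```python
# Sketch of the listed optimizer/schedule settings in Optax (illustrative only).
import optax

num_training_steps = 1_000                    # hypothetical; set by dataset size, batch size 64, 1 epoch
warmup_steps = int(0.1 * num_training_steps)  # lr_scheduler_warmup_ratio = 0.1
kl_coef = 0.05                                # KL penalty coefficient, applied inside the PPO objective

# Linear warmup to the peak learning rate, then linear decay to zero.
schedule = optax.join_schedules(
    schedules=[
        optax.linear_schedule(0.0, 1e-6, warmup_steps),
        optax.linear_schedule(1e-6, 0.0, num_training_steps - warmup_steps),
    ],
    boundaries=[warmup_steps],
)
optimizer = optax.adam(learning_rate=schedule, b1=0.9, b2=0.999, eps=1e-8)
```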
Citation
If you find Tulu 2.5 useful in your work, please cite it with:
@misc{ivison2024unpacking,
  title={Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback},
  author={Hamish Ivison and Yizhong Wang and Jiacheng Liu and Ellen Wu and Valentina Pyatkin and Nathan Lambert and Yejin Choi and Noah A. Smith and Hannaneh Hajishirzi},
  year={2024},
  eprint={2406.09279},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}