Update README.md
README.md CHANGED
@@ -60,7 +60,7 @@ Tulu V2 70B is a fine-tuned version of Llama 2 that was trained on a mix of publ
 
 ## Intended uses & limitations
 
-The model was
+The model was fine-tuned on a filtered and preprocessed version of the [Tulu V2 mix dataset](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture), which contains a diverse range of human-created instructions and synthetic dialogues generated primarily by other LLMs.
 <!--We then further aligned the model with a [Jax DPO trainer](https://github.com/hamishivi/EasyLM/blob/main/EasyLM/models/llama/llama_train_dpo.py) built on [EasyLM](https://github.com/young-geng/EasyLM) on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions that are ranked by GPT-4.
 
 
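For context on the dataset the new line links to, here is a minimal sketch of inspecting it with the Hugging Face `datasets` library. This is not part of the commit; the split and field names follow the dataset card for allenai/tulu-v2-sft-mixture and should be treated as assumptions to verify there.

```python
# Minimal sketch (not part of this commit): peek at the Tulu V2 SFT mixture
# referenced in the added README line. Assumes `datasets` is installed and
# that the dataset exposes a "train" split with a "messages" field of
# {"role", "content"} dicts, as described on its dataset card.
from datasets import load_dataset

ds = load_dataset("allenai/tulu-v2-sft-mixture", split="train")

# Each record is a multi-turn conversation; print a truncated preview
# of the first example's turns.
for message in ds[0]["messages"]:
    print(message["role"], ":", message["content"][:80])
```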