yleo committed
Commit f484e7a · verified · 1 parent: 1b02c5f

Update README.md

Files changed (1):
1. README.md (+0, −1)
README.md CHANGED
@@ -8,7 +8,6 @@ tags:
 ---
 ---
 
-Update README.md
 # 🦜 EmertonOmniBeagle-7B-dpo
 
 EmertonOmniBeagle-7B-dpo is a DPO fine-tune of [mlabonne/OmniBeagle14-7B](https://huggingface.co/mlabonne/OmniBeagle-7B) using the [yleo/emerton_dpo_pairs](https://huggingface.co/datasets/yleo/emerton_dpo_pairs) preference dataset, created from [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) by replacing each GPT-3.5 answer with a GPT-4 Turbo answer. The GPT-4 Turbo answer is then marked as chosen, while the GPT-4 answer is marked as rejected.
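The pairing scheme described above (one preferred answer, one rejected answer per prompt) can be sketched as follows. This is a minimal illustration of the chosen/rejected structure a DPO preference dataset uses; the field names `prompt`, `chosen`, and `rejected` are the common convention, not a confirmed copy of this dataset's actual schema.

```python
def build_dpo_pair(prompt, gpt4_turbo_answer, gpt4_answer):
    """Form one DPO preference record: the GPT-4 Turbo answer is marked
    as chosen and the GPT-4 answer as rejected, per the description above."""
    return {
        "prompt": prompt,
        "chosen": gpt4_turbo_answer,
        "rejected": gpt4_answer,
    }

# Hypothetical example record (answers are placeholders, not real dataset rows)
pair = build_dpo_pair(
    "Explain what a preference dataset is.",
    "Answer produced by GPT-4 Turbo...",
    "Answer originally produced by GPT-4...",
)
print(sorted(pair.keys()))
```

A trainer such as TRL's `DPOTrainer` consumes records of exactly this shape, optimizing the model to prefer the `chosen` completion over the `rejected` one.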