JosephusCheung committed on
Commit 938ac99 · 1 Parent(s): 254116c

Update README.md

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -46,10 +46,10 @@ For details, please refer to the version without DPO training: [CausalLM/14B](ht
 | **CausalLM/14B-DPO-α** | **7.618868** |
 | **CausalLM/7B-DPO-α** | **7.038125** |
 
-Dec 2, 2023
-Rank **#2** non-base model, of its size on 🤗 Open LLM Leaderboard, outperforms ALL ~13B chat models including microsoft/Orca-2-13b.
+Dec 3, 2023
+Rank **#1** non-base model, of its size on 🤗 Open LLM Leaderboard, outperforms **ALL** ~13B chat models.
 
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/63468a143ea42ee2cb49ddd1/Df5BcU3Pxzt2oKjuzArvk.png)
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/63468a143ea42ee2cb49ddd1/8nV0yOTteP208bjbCv5MC.png)
 
 It should be noted that this is not a version that continues training on CausalLM/14B & 7B, but rather an optimized version that has undergone DPO training concurrently on a previous training branch, and some detailed parameters may have changed. You will still need to download the full model.