---
license: other
tags:
- yi
- moe
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
---

This is a DPO fine-tuned MoE model based on [TomGrc/FusionNet_34Bx2_MoE_v0.1](https://huggingface.co/TomGrc/FusionNet_34Bx2_MoE_v0.1).

```
DPO Trainer
TRL supports the DPO Trainer for training language models from preference data, as described in the paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al., 2023.
```
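
As a rough illustration only, the sketch below shows how a DPO fine-tune of the base model could be set up with TRL's `DPOTrainer`. The dataset name and hyperparameters are placeholders, not the recipe used for this model, and exact argument names (e.g. `tokenizer` vs. `processing_class`, or `beta` moving onto a `DPOConfig`) differ between TRL versions.

```python
# Rough sketch of DPO fine-tuning with TRL; dataset and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_model = "TomGrc/FusionNet_34Bx2_MoE_v0.1"
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Preference data needs "prompt", "chosen", and "rejected" columns (placeholder dataset name).
train_dataset = load_dataset("your-org/your-preference-dataset", split="train")

training_args = TrainingArguments(
    output_dir="dpo-output",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,          # with None, TRL manages a frozen reference copy of the policy
    args=training_args,
    beta=0.1,                # DPO temperature controlling deviation from the reference model
    train_dataset=train_dataset,
    tokenizer=tokenizer,     # newer TRL versions name this argument `processing_class`
)
trainer.train()
```

Leaving `ref_model=None` lets TRL handle the frozen reference policy itself, which is simpler than loading and freezing a second copy of the base model by hand.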