Update README.md

README.md
@@ -41,11 +41,6 @@ tags:
 </p>
 
 
-### Dataset:
-1. Selected from OpenOrca
-2. Intel Orca-DPO-Pairs
-3. Privately Crafted Dataset
-
 
 ## Outperformer GPT3.5turbo & Claude-v1
 
@@ -100,6 +95,16 @@ and you carefully consider each step before providing answers.
 \n\n### Instruction:\n{instruction}\n\n### Response:
 
 
+### Dataset:
+1. Selected from OpenOrca
+2. Intel Orca-DPO-Pairs
+3. Privately Crafted Dataset
+
+### Training:
+1. SFT with Mixed dataset from OpenOrca
+2. The Next DPO dataset made by xDAN-AI
+3. The Next DPO Training method by xDAN-AI
+
 ## Created By xDAN-AI at 2023-12-15
 ## Eval by FastChat: https://github.com/lm-sys/FastChat.git
 
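The second hunk shows the tail of the model's Alpaca-style prompt template, and the hunk header quotes the end of its system preamble ("...and you carefully consider each step before providing answers."). A minimal usage sketch follows; the repo id, the full system sentence, and the generation settings are illustrative assumptions, not taken from this diff.

```python
# Minimal sketch of applying the Alpaca-style template quoted in the diff.
# ASSUMPTIONS: the repo id and the full system sentence below are
# illustrative; check the actual model card for the exact values.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "xDAN-AI/xDAN-L1-Chat-RL-v1"  # hypothetical repo id

SYSTEM = ("You are a helpful assistant, and you carefully consider each "
          "step before providing answers.")  # tail quoted in the hunk header
TEMPLATE = SYSTEM + "\n\n### Instruction:\n{instruction}\n\n### Response:"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = TEMPLATE.format(instruction="Summarize the DPO training objective.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```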
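The added Training list names SFT followed by a DPO stage. The "Next DPO" dataset and training method are xDAN-AI's own and are not described in this commit; for orientation only, here is a sketch of the standard DPO objective (Rafailov et al., 2023) that such a stage builds on, not xDAN-AI's variant.

```python
# Generic DPO loss for reference. This is the standard objective, NOT the
# "Next DPO" variant named in the diff, which this commit does not describe.
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss from summed per-sequence log-probs.

    Each argument is a tensor of shape (batch,) holding the log-probability
    that the trained policy or the frozen reference model assigns to the
    chosen or rejected response of a preference pair.
    """
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Widen the chosen-vs-rejected margin relative to the reference model;
    # beta controls how far the policy may drift from the reference.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```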