yi-01-ai committed on
Commit • 3e28ee8
1 Parent(s): 8a75d12
Auto Sync from git://github.com/01-ai/Yi.git/commit/57b9e9f4e777740875aae331382d394997a97513
README.md CHANGED
@@ -133,7 +133,7 @@ pipeline_tag: text-generation
 >
 > The Yi series models adopt the same model architecture as Llama but are **NOT** derivatives of Llama.
 
-- Both Yi and Llama are
+- Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018.
 
 - Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi.
 
@@ -153,10 +153,15 @@ pipeline_tag: text-generation
 
 ## News
 
+<details open>
+<summary>🎯 <b>2024-03-08</b>: <a href="https://arxiv.org/abs/2403.04652">Yi Tech Report</a> is published! </summary>
+</details>
+
+
 <details open>
 <summary>🔔 <b>2024-03-07</b>: The long text capability of the Yi-34B-200K has been enhanced. </summary>
 <br>
-In the "Needle-in-a-Haystack" test, the Yi-34B-200K's performance is improved by 10.5%, rising from 89.3% to an impressive 99.8%. We continue to
+In the "Needle-in-a-Haystack" test, the Yi-34B-200K's performance is improved by 10.5%, rising from 89.3% to an impressive 99.8%. We continue to pre-train the model on 5B tokens long-context data mixture and demonstrate a near-all-green performance.
 </details>
 
 <details open>
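The "Needle-in-a-Haystack" test referenced in the second hunk plants a single distinctive fact (the needle) at varying depths inside filler text of varying lengths, then asks the model to retrieve it; the score is the fraction of (length, depth) cells answered correctly, usually rendered as a green/red grid, hence "near-all-green". The sketch below is a minimal Python illustration of that pattern, not the harness 01-ai used; the `chat` callable is a hypothetical stand-in for a real model API.

```python
NEEDLE = "The secret code for the experiment is 7421."
QUESTION = "What is the secret code for the experiment?"

def build_haystack(filler: str, n_sentences: int, depth: float) -> str:
    """Place the needle at a relative depth (0.0 = start, 1.0 = end) in filler text."""
    sentences = [filler] * n_sentences
    sentences.insert(int(depth * len(sentences)), NEEDLE)
    return " ".join(sentences)

def run_case(chat, n_sentences: int, depth: float) -> bool:
    """One cell of the evaluation grid; True if the model retrieves the needle."""
    context = build_haystack("The sky stayed a uniform grey all afternoon.", n_sentences, depth)
    prompt = f"{context}\n\nUsing only the text above, answer: {QUESTION}"
    return "7421" in chat(prompt)

if __name__ == "__main__":
    def chat(prompt: str) -> str:  # hypothetical stand-in for a model API call
        return "The secret code is 7421."

    # Sweep context lengths and needle depths; "near-all-green" means nearly
    # every (length, depth) cell comes back True.
    for n in (100, 1000, 10000):
        for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
            print(n, depth, run_case(chat, n, depth))
```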