Update README.md
README.md (CHANGED)
@@ -9,10 +9,10 @@ license_link: LICENSE
</h1>
</div>


-<div style="text-align: center;">
-[Switch to English](README.md) | [切换到中文](README_zh.md)
-</div>
+[Switch to English](https://huggingface.co/IndexTeam/Index-1.9B-32K/blob/main/README.md)
+
+[切换到中文](https://huggingface.co/IndexTeam/Index-1.9B-32K/blob/main/README_zh.md)


# Introduction
@@ -21,13 +21,13 @@ Index-1.9B-32K is a language model with only 1.9 billion parameters, yet it supp

**Despite its small size (about 2% of models like GPT-4), Index-1.9B-32K demonstrates excellent long-text processing capabilities**. Below are comparison results with GPT-4 and GPT-3.5-turbo-16k:
<div style="text-align: center;">
-<img src="
+<img src="z-attach-pic-pk-all.png" alt="" width="800">
<p><strong>Comparison of Index-1.9B-32K with GPT-4 and other models in long-text capability</strong></p>
</div>

In a 32K-length needle-in-a-haystack test, Index-1.9B-32K achieved excellent results, as shown in the figure below. The only exception was a small yellow spot (91.08 points) at (32K length, 10% depth); all other regions scored well, in mostly green zones.
<div style="text-align: center;">
-<img src="
+<img src="z-attach-pic-needle-bench-en.png" alt="" width="900">
<p><strong>NeedleBench Evaluation</strong></p>
</div>

@@ -64,7 +64,7 @@ CUDA_VISIBLE_DEVICES=0 python cli_long_text_demo.py --model_path '/path/to/model
```
- Run & Interaction Example (translation and summarization, in English, of the Bilibili financial report released on 2024.8.22 --- [original English report here](https://github.com/bilibili/Index-1.9B/tree/main/demo/data/user_long_text.txt)):
<div style="text-align: center;">
-<img src="
+<img src="z-attach-pic-qa-mark.png" alt="" width="1000">
<p><strong>Translation and Summary (Bilibili financial report released on 2024.8.22)</strong></p>
</div>

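The README content in these hunks centers on one technical claim: a 1.9B-parameter model handling a 32K-token context. Below is a minimal sketch of exercising that claim, assuming (this diff does not confirm it) that the IndexTeam/Index-1.9B-32K checkpoint loads through transformers' AutoModelForCausalLM with trust_remote_code=True and that a local user_long_text.txt stands in for the long input:

```python
# Minimal sketch, not the official usage: load Index-1.9B-32K and ask one
# question over a long document. The loading path (AutoModelForCausalLM +
# trust_remote_code) and generation settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IndexTeam/Index-1.9B-32K"  # repo id taken from the links in this diff
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 fits a 1.9B model on one GPU
    device_map="auto",
    trust_remote_code=True,
)

# Long input plus an instruction, kept under the 32K-token context window.
long_text = open("user_long_text.txt", encoding="utf-8").read()  # hypothetical local copy
prompt = long_text + "\n\nSummarize the report above in three sentences."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```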
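The needle-in-a-haystack paragraph in the second hunk reports scores without describing the setup. For readers new to the test, here is a self-contained sketch of the idea: splice one "needle" fact into roughly 32K tokens of filler at a chosen depth and check whether the model retrieves it. The filler, needle, and prompt below are illustrative assumptions, not the NeedleBench implementation:

```python
# Sketch of a needle-in-a-haystack probe: splice one "needle" fact into
# long filler text at a chosen depth, then ask the model to retrieve it.
# Filler, needle, and prompt are illustrative, not the NeedleBench suite.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IndexTeam/Index-1.9B-32K"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

def build_haystack(needle: str, target_tokens: int, depth_pct: float) -> str:
    """Repeat filler to ~target_tokens tokens, inserting the needle at depth_pct%."""
    filler = "The grass is green. The sky is blue. The sun is bright. "
    per_unit = len(tokenizer(filler, add_special_tokens=False)["input_ids"])
    haystack = filler * (target_tokens // per_unit + 1)
    cut = int(len(haystack) * depth_pct / 100.0)
    return haystack[:cut] + needle + " " + haystack[cut:]

needle = "The secret number for this test is 734215."
context = build_haystack(needle, target_tokens=31_000, depth_pct=10.0)
prompt = context + "\n\nWhat is the secret number for this test? Reply with the number only."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

The real benchmark grades answers over a grid of context lengths and insertion depths (hence the 91.08 cell at 32K length, 10% depth mentioned above); this sketch only eyeballs a single retrieval.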
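The third hunk's context line shows the demo invocation (CUDA_VISIBLE_DEVICES=0 python cli_long_text_demo.py --model_path '/path/to/model..., truncated by the diff view) and links the sample report it processes. The sketch below mirrors that run without the demo script; the raw-file URL is inferred from the GitHub tree/ link and the prompt wording is an assumption:

```python
# Sketch mirroring the "Run & Interaction Example" above: fetch the sample
# Bilibili report the demo uses and ask for a summary plus a Chinese
# translation. The raw-file URL is inferred from the GitHub tree/ link in
# the README, and the prompt wording is an assumption, not the demo's own.
import urllib.request

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IndexTeam/Index-1.9B-32K"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

RAW_URL = ("https://raw.githubusercontent.com/bilibili/Index-1.9B/"
           "main/demo/data/user_long_text.txt")  # inferred from the tree/ link above
report = urllib.request.urlopen(RAW_URL).read().decode("utf-8")

prompt = report + "\n\nSummarize the report above, then translate the summary into Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```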