# LongAlign-7B-64k

<p align="center">
🤗 <a href="https://huggingface.co/datasets/THUDM/LongAlign-10k" target="_blank">[LongAlign Dataset]</a> • 💻 <a href="https://github.com/THUDM/LongAlign" target="_blank">[GitHub Repo]</a> • 📃 <a href="https://arxiv.org/abs/2401.18058" target="_blank">[LongAlign Paper]</a>
</p>
**LongAlign** is the first full recipe for LLM alignment on long context. We propose the **LongAlign-10k** dataset, containing 10,000 long instruction-following examples of 8k-64k tokens in length. We investigate training strategies, namely **packing (with loss weighting) and sorted batching**, both of which are implemented in our code. For real-world long-context evaluation, we introduce **LongBench-Chat**, which evaluates instruction-following capability on queries of 10k-100k tokens in length.
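The sorted batching strategy mentioned above can be sketched roughly as follows — a minimal illustration of the general idea (grouping length-sorted examples so each batch contains similarly sized sequences, reducing padding waste), not LongAlign's actual implementation; the function name and layout here are our own:

```python
def sorted_batches(lengths, batch_size):
    """Group example indices into batches of similar sequence length.

    lengths: sequence length of each training example.
    Returns a list of batches, each a list of example indices.
    """
    # Sort example indices by their sequence length (ascending).
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    # Slice the sorted order into consecutive fixed-size batches.
    return [order[i:i + batch_size] for i in range(0, len(order), batch_size)]

# Example with lengths in the dataset's 8k-64k range:
lengths = [8000, 64000, 12000, 50000, 9000, 30000]
batches = sorted_batches(lengths, batch_size=2)
# Each batch now holds indices of comparably long examples,
# e.g. the two shortest together and the two longest together.
```

Because nearby examples have similar lengths, each batch needs little padding; in practice such schemes usually also shuffle the batch order each epoch to avoid a fixed curriculum.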