bys0318 committed
Commit 24038a8 • 1 Parent(s): cceaa2e

Update README.md

Files changed (1): README.md +1 -1
README.md CHANGED
@@ -13,7 +13,7 @@ license: apache-2.0
 # LongAlign-7B-64k
 
 <p align="center">
-🤗 <a href="https://huggingface.co/datasets/THUDM/LongAlign-10k" target="_blank">[LongAlign Dataset]</a> • 💻 <a href="https://github.com/THUDM/LongAlign" target="_blank">[Github Repo]</a> • 📃 <a href="https://arxiv.org/" target="_blank">[LongAlign Paper]</a>
+🤗 <a href="https://huggingface.co/datasets/THUDM/LongAlign-10k" target="_blank">[LongAlign Dataset]</a> • 💻 <a href="https://github.com/THUDM/LongAlign" target="_blank">[Github Repo]</a> • 📃 <a href="https://arxiv.org/abs/2401.18058" target="_blank">[LongAlign Paper]</a>
 </p>
 
 **LongAlign** is the first full recipe for LLM alignment on long context. We propose the **LongAlign-10k** dataset, containing 10,000 long instruction examples of 8k-64k tokens in length. We investigate training strategies, namely **packing (with loss weighting) and sorted batching**, which are both implemented in our code. For real-world long-context evaluation, we introduce **LongBench-Chat**, which evaluates instruction-following capability on queries of 10k-100k tokens in length.
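
The README names two training strategies, packing with loss weighting and sorted batching. As a rough illustration only, here is a minimal PyTorch sketch of both ideas; `packed_loss` and `sorted_batches` are hypothetical names for this sketch, not functions from the THUDM/LongAlign repository, which should be consulted for the actual implementation.

```python
import torch

def packed_loss(token_losses: torch.Tensor, seq_ids: torch.Tensor) -> torch.Tensor:
    """Loss-weighted packing (toy sketch): several sequences share one packed
    batch; averaging within each sequence first, then across sequences, keeps
    a short sequence from being drowned out by a long neighbor."""
    per_seq = [token_losses[seq_ids == i].mean() for i in seq_ids.unique()]
    return torch.stack(per_seq).mean()

def sorted_batches(lengths: list, batch_size: int) -> list:
    """Sorted batching (toy sketch): order example indices by length so each
    batch holds similarly long sequences, reducing padding waste."""
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    return [order[i:i + batch_size] for i in range(0, len(order), batch_size)]

# Toy usage: five token losses belonging to two packed sequences, then
# batching four examples whose lengths span the 8k-64k range mentioned above.
losses = torch.tensor([0.5, 0.7, 0.2, 0.4, 0.6])
ids = torch.tensor([0, 0, 0, 1, 1])
print(packed_loss(losses, ids))                        # ≈ 0.4833
print(sorted_batches([64000, 9000, 30000, 12000], 2))  # [[1, 3], [2, 0]]
```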