bys0318 committed
Commit e82b92f • 1 Parent(s): 9dd5b30

Update README.md

Files changed (1):
  README.md +1 -1
README.md CHANGED
@@ -18,7 +18,7 @@ pipeline_tag: text-generation
 🤗 <a href="https://huggingface.co/datasets/THUDM/LongAlign-10k" target="_blank">[LongAlign Dataset]</a> • 💻 <a href="https://github.com/THUDM/LongAlign" target="_blank">[Github Repo]</a> • 📃 <a href="https://arxiv.org/" target="_blank">[LongAlign Paper]</a>
 </p>
 
-**LongAlign** is the first full recipe for LLM alignment on long context. We propose the **LongAlign-10k** dataset, containing 10,000 long instruction-following examples of 8k-64k tokens in length. We investigate training strategies, namely **packing (with loss weighting)** and **sorted batching**, both of which are implemented in our code. For real-world long-context evaluation, we introduce **Chat-LongBench**, which evaluates instruction-following capability on queries of 10k-100k tokens in length.
+**LongAlign** is the first full recipe for LLM alignment on long context. We propose the **LongAlign-10k** dataset, containing 10,000 long instruction-following examples of 8k-64k tokens in length. We investigate training strategies, namely **packing (with loss weighting)** and **sorted batching**, both of which are implemented in our code. For real-world long-context evaluation, we introduce **LongBench-Chat**, which evaluates instruction-following capability on queries of 10k-100k tokens in length.
 
 ## All Models
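
The packing and sorted batching strategies named in the edited paragraph are implemented in the LongAlign repo. As a rough illustration of the loss-weighting idea behind packed training, here is a minimal PyTorch sketch; the function and variable names (`weighted_packed_loss`, `seq_ids`) are hypothetical, not the repo's actual API:

```python
# Minimal sketch of loss weighting for packed training, assuming several
# sequences are concatenated ("packed") into one row and `seq_ids` marks
# which sequence each token belongs to. All names are illustrative; see
# the LongAlign repo for the actual implementation.
import torch
import torch.nn.functional as F

def weighted_packed_loss(logits, labels, seq_ids, ignore_index=-100):
    # Per-token cross-entropy; ignored positions (e.g. prompt tokens)
    # contribute zero loss under reduction="none".
    token_loss = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        labels.view(-1),
        ignore_index=ignore_index,
        reduction="none",
    )
    labels, seq_ids = labels.view(-1), seq_ids.view(-1)
    target_mask = (labels != ignore_index).float()
    per_seq = []
    for sid in seq_ids.unique():
        sel = (seq_ids == sid).float() * target_mask
        n_tokens = sel.sum()
        if n_tokens > 0:
            # Normalize by this sequence's own target-token count so that
            # long sequences do not dominate the packed batch's loss.
            per_seq.append((token_loss * sel).sum() / n_tokens)
    # Average over sequences: each packed sequence contributes equally.
    return torch.stack(per_seq).mean()
```

Sorted batching, by contrast, groups examples of similar length into the same batch to reduce padding waste, typically shuffling the batch order so training does not follow a length-based curriculum.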