---
license: mit
language:
- en
metrics:
- perplexity
---
# CLEX: Continuous Length Extrapolation for Large Language Models
This repo stores the checkpoint of **CLEX-7B-Chat-16K**.
## Features and Highlights of CLEX
![CLEX_diagram](https://github.com/DAMO-NLP-SG/CLEX/assets/18526640/063ffe34-0116-4759-92bf-e22fc7264cdf)
- **Simple and Clear**: _MINIMAL_ code and architecture changes. Only one up-and-down projection layer is introduced (see the sketch after this list); _NO_ recurrent memory caching or sparse attention is required.
- **Train Short, Test Long**: _NO_ performance drop on sequences _4x–8x longer_ than the training ones (see [here](https://github.com/DAMO-NLP-SG/CLEX#language-modelling)).
- **Continuous Length Extrapolation**: Explicitly modeling the continuous dynamics of context window size during length extrapolation.
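For intuition, here is a minimal sketch of the kind of up-and-down projection block referenced above. The module name, dimensions, and activation are illustrative assumptions only, not the exact layer used in CLEX; see the official repo for the actual implementation.

```python
import torch
import torch.nn as nn

class UpDownProjection(nn.Module):
    """Hypothetical sketch of an up-and-down projection block.

    CLEX models the continuous dynamics of the context window with a
    small network of roughly this shape; names, dimensions, and the
    activation here are assumptions for illustration.
    """

    def __init__(self, dim: int = 64, hidden_dim: int = 256):
        super().__init__()
        self.up = nn.Linear(dim, hidden_dim)    # project up to a wider hidden space
        self.act = nn.SiLU()
        self.down = nn.Linear(hidden_dim, dim)  # project back down to the input size

    def forward(self, freqs: torch.Tensor) -> torch.Tensor:
        # Map per-dimension rotary frequencies to an update, e.g. their
        # rate of change as the context window scales continuously.
        return self.down(self.act(self.up(freqs)))
```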
More details about long-context modeling with CLEX can be found in the GitHub [repo](https://github.com/DAMO-NLP-SG/CLEX).
## Model Zoo
| Model Name | Model Type | Starting Point | Train Data |Train Length | MAX Test Length |
|:-----|:-----|:-----------|:-----------|:-----------|:-----------|
| CLEX-7B-4K | base | LLaMA-2-7B | [Redpajama-Book](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | 4K | 16K |
| CLEX-7B-Chat-4K | chat | CLEX-7B-4K | [UltraChat](https://github.com/thunlp/UltraChat) | 4K | 16K |
| CLEX-7B-16K | base | LLaMA-2-7B | [Redpajama-Book](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | 16K | 64K |
| **CLEX-7B-Chat-16K** (this checkpoint) | chat | CLEX-7B-16K | [UltraChat](https://github.com/thunlp/UltraChat) | 16K | 64K |
## How to Use
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# trust_remote_code is required because CLEX ships custom modeling code.
tokenizer = AutoTokenizer.from_pretrained("DAMO-NLP-SG/CLEX-7B-Chat-16K", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "DAMO-NLP-SG/CLEX-7B-Chat-16K",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

inputs = tokenizer("What is CLEX?", return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
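Because this checkpoint was trained at 16K and evaluated up to 64K tokens, it accepts much longer prompts than the base LLaMA-2. A sketch of long-input generation (the file path is a placeholder):

```python
# Sketch: generating from a long document (the path is a placeholder).
with open("long_document.txt") as f:
    long_text = f.read()

prompt = f"{long_text}\n\nSummarize the document above."
inputs = tokenizer(prompt, return_tensors="pt")
print(f"Prompt length: {inputs.input_ids.shape[1]} tokens")  # may exceed 16K, up to ~64K

sample = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(sample[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```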
## Citation
If you find our project useful, we hope you will star our repo and cite our paper as follows:
```bibtex
@article{damonlpsg2023clex,
author = {Chen, Guanzheng and Li, Xin and Meng, Zaiqiao and Liang, Shangsong and Bing, Lidong},
title = {CLEX: Continuous Length Extrapolation for Large Language Models},
year = 2023,
journal = {arXiv preprint arXiv:2310.16450},
url = {https://arxiv.org/abs/2310.16450}
}
```