---
metrics:
- perplexity
library_name: transformers
license: apache-2.0
language:
- en
---

# Model Card: Nous-Yarn-Llama-2-70b-32k

[Preprint (arXiv)](https://arxiv.org/abs/2309.00071)
[GitHub](https://github.com/jquesnelle/yarn)
![yarn](https://raw.githubusercontent.com/jquesnelle/yarn/70b/data/proofpile-long-small-32k-70b.csv.png)

## Model Description

Nous-Yarn-Llama-2-70b-32k is a state-of-the-art long-context language model, further pretrained on long-context data for 400 steps using the YaRN extension method.
It is an extension of [Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf) and supports a 32k token context window.
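
For readers curious what the YaRN extension actually changes: it rescales the rotary (RoPE) position frequencies before the brief fine-tune, interpolating low-frequency dimensions while leaving high-frequency ones intact. The sketch below is an illustrative reading of the preprint's "NTK-by-parts" interpolation, not this repository's exact code; the function name and default hyperparameters are assumptions (`scale=8` corresponds to extending a 4k context to 32k):

```python
import math
import torch

# Illustrative sketch of YaRN's "NTK-by-parts" frequency interpolation
# (https://arxiv.org/abs/2309.00071). Names and defaults are assumptions,
# not this model's verified training configuration.
def yarn_inv_freq(dim=128, base=10000.0, scale=8.0,
                  orig_ctx=4096, beta_fast=32.0, beta_slow=1.0):
    # Standard RoPE inverse frequencies, one per pair of head dimensions.
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))

    # Dimension index at which a frequency completes `n_rot` full
    # rotations over the original context length.
    def dim_for_rotations(n_rot):
        return (dim * math.log(orig_ctx / (n_rot * 2 * math.pi))
                / (2 * math.log(base)))

    low = max(math.floor(dim_for_rotations(beta_fast)), 0)
    high = min(math.ceil(dim_for_rotations(beta_slow)), dim // 2 - 1)

    # Ramp: 0 for high-frequency dims (kept as-is), 1 for low-frequency
    # dims (fully interpolated), linear in between.
    ramp = ((torch.arange(dim // 2).float() - low)
            / max(high - low, 1)).clamp(0, 1)

    # Blend original and position-interpolated (divided by scale) frequencies.
    return inv_freq * (1 - ramp) + (inv_freq / scale) * ramp
```

On top of this, the paper also scales the attention logits by a temperature that grows slowly with the scale factor s (it suggests sqrt(1/t) = 0.1 ln(s) + 1), which it reports improves long-context perplexity.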

To use, pass `trust_remote_code=True` when loading the model, for example:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("NousResearch/Yarn-Llama-2-70b-32k",
                                             use_flash_attention_2=True,
                                             torch_dtype=torch.bfloat16,
                                             device_map="auto",
                                             trust_remote_code=True)
```

In addition, you will need to install `transformers` from source until version 4.35 is released:
```sh
pip install git+https://github.com/huggingface/transformers
```
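
Once loaded, the model generates like any other `transformers` causal LM. A minimal sketch, reusing the `model` loaded above (the prompt text is arbitrary; prompts up to 32k tokens are supported):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Yarn-Llama-2-70b-32k")

# Move inputs to the device holding the first model shard.
inputs = tokenizer("Long-context language models are useful because",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```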

## Benchmarks

Long context benchmarks:

| Model | Context Window | 1k PPL | 2k PPL | 4k PPL | 8k PPL | 16k PPL | 32k PPL |
|-------|---------------:|-------:|-------:|-------:|-------:|--------:|--------:|
| [Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf) | 4k | - | - | - | - | - | - |
| [Yarn-Llama-2-70b-32k](https://huggingface.co/NousResearch/Yarn-Llama-2-70b-32k) | 32k | - | - | - | - | - | - |

Short context benchmarks showing that quality degradation is minimal:

| Model | Context Window | ARC-c | Hellaswag | MMLU | Truthful QA |
|-------|---------------:|------:|----------:|-----:|------------:|
| [Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf) | 4k | - | - | - | - |
| [Yarn-Llama-2-70b-32k](https://huggingface.co/NousResearch/Yarn-Llama-2-70b-32k) | 32k | - | - | - | - |
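
The perplexity columns above are typically filled by averaging token-level negative log-likelihood over documents truncated to each context length. A hedged sketch of one such protocol (the authors' exact evaluation corpus and windowing may differ; `texts` is a placeholder for a long-document dataset):

```python
import math
import torch

# Hypothetical perplexity-at-context-length helper; an illustration of
# the metric, not the authors' evaluation harness.
@torch.no_grad()
def perplexity_at(model, tokenizer, texts, ctx_len):
    nll_sum, token_count = 0.0, 0
    for text in texts:
        ids = tokenizer(text, return_tensors="pt").input_ids[:, :ctx_len].to(model.device)
        if ids.shape[1] < 2:
            continue
        out = model(ids, labels=ids)  # HF shifts labels internally
        n = ids.shape[1] - 1          # number of predicted tokens
        nll_sum += out.loss.item() * n
        token_count += n
    return math.exp(nll_sum / token_count)
```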

## Collaborators

- [bloc97](https://github.com/bloc97): Methods, paper, and evals
- [@theemozilla](https://twitter.com/theemozilla): Methods, paper, model training, and evals
- [@EnricoShippole](https://twitter.com/EnricoShippole): Model training
- [honglu2875](https://github.com/honglu2875): Paper and evals

The authors would like to thank LAION AI for providing compute support for this model.
It was trained on the [JUWELS](https://www.fz-juelich.de/en/ias/jsc/systems/supercomputers/juwels) supercomputer.