Update README.md
README.md CHANGED
@@ -15,7 +15,7 @@ Qwen2 is the new series of Qwen large language models. For Qwen2, we release a n

Compared with the state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.

- Qwen2-MoE-57B-A14B-Instruct supports a context length of up to …
+ Qwen2-MoE-57B-A14B-Instruct supports a context length of up to 65,536 tokens, enabling the processing of extensive inputs. Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2 for handling long texts.

For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/) and [GitHub](https://github.com/QwenLM/Qwen2).
<br>
@@ -73,7 +73,7 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

### Processing Long Texts

- To handle extensive inputs exceeding …
+ To handle extensive inputs exceeding 65,536 tokens, we utilize [YARN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.

For deployment, we recommend using vLLM. You can enable the long-context capabilities by following these steps:
@@ -90,7 +90,7 @@ For deployment, we recommend using vLLM. You can enable the long-context capabil
// add the following snippet
"rope_scaling": {
-     "factor": …
+     "factor": 2.0,
      "original_max_position_embeddings": 32768,
      "type": "yarn"
}
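
As a side note on applying this change by hand: the edit amounts to adding the `rope_scaling` block to `config.json` in the downloaded checkpoint, and the factor of 2.0 over the original 32,768 positions yields exactly the 65,536-token window advertised above. A minimal Python sketch of that patch, assuming the checkpoint lives in a local directory (the path below is a placeholder):

```python
import json
from pathlib import Path

# Placeholder path: wherever the checkpoint was downloaded locally.
config_path = Path("./Qwen2-57B-A14B-Instruct/config.json")

config = json.loads(config_path.read_text())

# YARN rope scaling, values taken from the diff: 32768 * 2.0 = 65536 tokens.
config["rope_scaling"] = {
    "factor": 2.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}

config_path.write_text(json.dumps(config, indent=2))
```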
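And a minimal vLLM sketch for serving the patched checkpoint, assuming the same placeholder directory; `max_model_len` mirrors the extended window, and the prompt and sampling settings are purely illustrative:

```python
from vllm import LLM, SamplingParams

# Placeholder path to the checkpoint with the patched config.json.
# A 57B MoE model will typically also need tensor_parallel_size > 1.
llm = LLM(model="./Qwen2-57B-A14B-Instruct", max_model_len=65536)

# Illustrative sampling settings; tune for your workload.
params = SamplingParams(temperature=0.7, max_tokens=512)

outputs = llm.generate(["Summarize this report: ..."], params)
print(outputs[0].outputs[0].text)
```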