Update README.md
README.md CHANGED
@@ -18,8 +18,7 @@ tags:
## Introduction

-Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen).
-As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. All of these models follow the Apache License (except for the 3B); Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
+Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. All of these models follow the Apache License (except for the 3B); Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:

- Significant improvements in **code generation**, **code reasoning**, and **code fixing**. Building on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**. It not only enhances coding capabilities but also maintains its strengths in mathematics and general competencies.

@@ -39,7 +38,7 @@ As of now, Qwen2.5-Coder has covered six mainstream model sizes, 0.5, 1.5, 3, 7,
**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or use this model for fill-in-the-middle tasks.

-For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
+For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).

## Requirements

@@ -74,7 +73,7 @@ We advise adding the `rope_scaling` configuration only when processing long cont
## Evaluation & Performance

-Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder/).
+Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).

For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
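The fill-in-the-middle use mentioned in the diff above can be made concrete with a short sketch. This is not taken from the model card itself: the model id is a placeholder, and the FIM special tokens (`<|fim_prefix|>`, `<|fim_suffix|>`, `<|fim_middle|>`) are assumed to follow the Qwen2.5-Coder family convention, so check the card's own quickstart for the authoritative snippet.

```python
# Minimal fill-in-the-middle (FIM) sketch for a Qwen2.5-Coder base model.
# Assumptions: placeholder model id; FIM tokens follow the Qwen2.5-Coder convention.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-7B"  # placeholder: use whichever size you actually run
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Code before and after the hole we want the model to fill.
prefix = "def quicksort(arr):\n    if len(arr) <= 1:\n        return arr\n    pivot = arr[len(arr) // 2]\n"
suffix = "\n    return quicksort(left) + middle + quicksort(right)\n"

# Base models are prompted with raw FIM tokens, not a chat template.
prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Keep only the newly generated middle span.
completion = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(completion)
```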
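The `rope_scaling` advice quoted in the last hunk header concerns extending the context window with YaRN for long inputs. As a rough illustration only (the exact values, and whether your serving stack reads them from `config.json`, are assumptions rather than something stated in this card), the change amounts to adding a `rope_scaling` block to the checkpoint's configuration:

```python
# Hedged sketch: enable YaRN rope scaling by editing a local checkpoint's config.json.
# Assumptions: placeholder checkpoint path; factor 4.0 over a 32768-token base is an
# illustrative choice for roughly 4x context extension. Adjust for your use case.
import json
from pathlib import Path

config_path = Path("path/to/Qwen2.5-Coder-checkpoint/config.json")  # placeholder path

config = json.loads(config_path.read_text())
config["rope_scaling"] = {
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}
config_path.write_text(json.dumps(config, indent=2))
```

As the hunk header itself notes, this is advisable only when you actually process long contexts, since static scaling can slightly degrade quality on short inputs.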