Update README.md
README.md (CHANGED)
@@ -41,7 +41,7 @@ As of now, Qwen2.5-Coder has covered six mainstream model sizes, 0.5, 1.5, 3, 7,

**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or fill in the middle tasks on this model.

-For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
+For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).

## Requirements

@@ -76,7 +76,7 @@ We advise adding the `rope_scaling` configuration only when processing long cont

## Evaluation & Performance

-Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder/).
+Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).

For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
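The README text touched by this diff notes that the base model is intended for post-training or fill-in-the-middle (FIM) tasks rather than chat. As a quick illustration of that FIM usage, here is a minimal sketch using the Hugging Face `transformers` API and the FIM special tokens documented for Qwen2.5-Coder (`<|fim_prefix|>`, `<|fim_suffix|>`, `<|fim_middle|>`); the specific checkpoint name, the quicksort snippet, and the generation settings are illustrative assumptions, not part of this commit.

```python
# Minimal fill-in-the-middle (FIM) sketch for a Qwen2.5-Coder *base* model.
# Assumptions: the 7B base checkpoint name and the FIM special tokens
# (<|fim_prefix|>, <|fim_suffix|>, <|fim_middle|>) as documented by Qwen;
# adjust the model size and prompt to your own setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-Coder-7B"  # base model, not the Instruct variant
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")

# The model is asked to generate the code that belongs between prefix and suffix.
prompt = (
    "<|fim_prefix|>def quicksort(arr):\n"
    "    if len(arr) <= 1:\n"
    "        return arr\n"
    "<|fim_suffix|>    return quicksort(left) + middle + quicksort(right)\n"
    "<|fim_middle|>"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)

# Decode only the newly generated middle section, dropping the prompt tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```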