Update README.md
README.md CHANGED
@@ -32,10 +32,6 @@ tags:
Ling-Coder-Lite is a MoE LLM provided and open-sourced by InclusionAI, with 16.8B total parameters and 2.75B activated parameters. The model demonstrates state-of-the-art performance on 12 coding benchmarks while offering competitive latency and throughput compared to code LLMs of similar size. In addition to open-sourcing the model itself, we also release a substantial amount of code-related data, including synthetic QA, SFT, and DPO datasets. More details are described in the technical report [Ling-Coder-TR](https://huggingface.co/papers/2503.17793).
-<p align="center">
-<img src="./data-accuracy-efficiency.png" width="1000"/>
-<p>
-
## Model Downloads
You can download the model that best fits your use case from the following table. If you are located in mainland China, we also provide the model on modelscope.cn to speed up the download process.
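
For readers who want a concrete starting point, here is a minimal download sketch. The repository ID `inclusionAI/Ling-Coder-lite` is an assumption inferred from the organization and model names above, not something stated in this excerpt:

```python
# Minimal download sketch. The repo ID "inclusionAI/Ling-Coder-lite" is an
# assumption inferred from the names above, not confirmed by this excerpt.
from huggingface_hub import snapshot_download

# Download all model files from the Hugging Face Hub.
local_dir = snapshot_download(repo_id="inclusionAI/Ling-Coder-lite")
print(f"Model files saved to: {local_dir}")

# Users in mainland China can use the ModelScope mirror instead
# (assuming the same repo ID on modelscope.cn):
# from modelscope import snapshot_download
# local_dir = snapshot_download("inclusionAI/Ling-Coder-lite")
```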
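
Once the files are local, a model like the one described above would typically be loaded through `transformers`. A minimal sketch, assuming the same repository ID and that the custom MoE architecture requires `trust_remote_code=True` (neither detail is confirmed by this excerpt):

```python
# Hypothetical inference sketch for a downloaded Ling-Coder-Lite checkpoint.
# The repo ID and the need for trust_remote_code=True are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "inclusionAI/Ling-Coder-lite"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # only 2.75B params are active per token,
    device_map="auto",           # but all 16.8B must fit in memory
    trust_remote_code=True,
)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```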