---
license: mit
datasets:
- HuggingFaceFW/fineweb-edu
- mlfoundations/dclm-baseline-1.0
- BAAI/CCI3-HQ
language:
- en
- zh
base_model:
- PLM-Team/PLM-1.8B-Base
---

<center>
<img src="https://www.cdeng.net/plm/plm_logo.png" alt="plm-logo" width="200"/>
<h2>🖲️ PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing</h2>
<a href='https://sites.google.com/view/project-plm'>👉 Project PLM Website</a>
</center>

<center>

|||||||
|:-:|:-:|:-:|:-:|:-:|:-:|
|<a href='https://arxiv.org/abs/'><img src='https://img.shields.io/badge/Paper-ArXiv-C71585'></a>|<a href='https://huggingface.co/PLM-Team/PLM-1.8B-Base'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Base-red'></a>|<a href='https://huggingface.co/PLM-Team/PLM-1.8B-Instruct'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Instruct-red'></a>|<a href='https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-gguf-red'></a>|<a href='https://huggingface.co/datasets/plm-team/scots'><img src='https://img.shields.io/badge/Data-plm%20mix-4169E1'></a>|<a><img src="https://img.shields.io/github/stars/plm-team/PLM"></a>|

</center>

---

The PLM (Peripheral Language Model) series introduces a novel model architecture to peripheral computing, delivering powerful language capabilities within the constraints of resource-limited devices. Through a model-system co-design strategy, PLM optimizes model performance while meeting edge-system requirements: it employs **Multi-head Latent Attention (MLA)** and **squared ReLU** activation to achieve sparsity, significantly reducing memory footprint and computational demands. Coupled with a meticulously crafted training regimen using curated datasets and a Warmup-Stable-Decay-Constant learning-rate scheduler, PLM demonstrates performance superior to existing small language models while maintaining the fewest activated parameters, making it well suited for deployment on diverse peripheral platforms such as mobile phones and the Raspberry Pi.
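
For reference, the squared ReLU activation mentioned above is simply the ReLU output squared; it preserves exact zeros for all non-positive pre-activations, which is what produces the activation sparsity PLM exploits:

$$\operatorname{ReLU}^2(x) = \left(\max(0,\, x)\right)^2$$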

---

## News

> The paper **"PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing"** has been released!

## PLM Roadmap

<center>
<img src="https://www.cdeng.net/plm/pipe.png" width="100%"/>
</center>

## PLM Highlights

PLM demonstrates highly competitive performance alongside a series of advantages that stem from its model-system co-design: impressive inference speed, extreme sparsity, and a reduced KV cache thanks to MLA. Together, these let it outperform models with the same number of layers on long-context inference tasks at certain sequence lengths.

- **Sparse** (fewer activated parameters, yet better performance)

<div align="center">
<img src="https://www.cdeng.net/plm/sparse_compare.png" width="50%"/>
</div>

- **High efficiency** (generates content with low latency while maintaining good quality)

<center>
<img src="https://www.cdeng.net/plm/latency/latency_all.png" width="100%"/>
</center>

- **Low KV cache** during long-context processing, which keeps latency low when running inference over long sequences (see the cache-size sketch after the figures below)

|||
|:-:|:-:|
|<img src="https://www.cdeng.net/plm/latency/prefill_eff.png"/>|<img src="https://www.cdeng.net/plm/latency/decode_eff.png"/>|

- **Higher efficiency** with layer-wise loading

|||
|:-:|:-:|
|<img src="https://www.cdeng.net/plm/latency/prefill_ngl.png"/>|<img src="https://www.cdeng.net/plm/latency/decode_ngl.png"/>|

## Performance

PLM-1.8B is a strong and reliable model, particularly in basic knowledge understanding, coding, and simple reasoning tasks.

<center>

| **Benchmarks** | PLM-Instruct | MiniCPM | Yulan-Mini | SmolLM2 | Qwen2.5 | Qwen2 | GLM-Edge |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| **ARC-C** | <u>51.14</u> | 43.86 | 50.51 | 50.29 | **53.41** | 43.90 | 24.15 |
| **ARC-E** | <u>78.18</u> | 55.51 | 69.87 | 77.78 | **79.13** | 62.21 | 36.83 |
| **MMLU** | 51.18 | 51.13 | 49.10 | 51.91 | **59.79** | <u>56.50</u> | 54.84 |
| **CMMLU** | 48.18 | 48.97 | 48.35 | 33.46 | <u>67.82</u> | **70.30** | 54.23 |
| **C-Eval** | 44.93 | 48.24 | 51.47 | 35.10 | <u>69.05</u> | **70.60** | 55.05 |
| **GSM8K** | 60.73 | 53.83 | <u>66.65</u> | 47.68 | **68.50** | 46.90 | 54.89 |
| **MathQA** | 33.23 | 30.59 | <u>34.84</u> | 34.30 | **35.14** | 31.66 | 33.94 |
| **HumanEval** | **64.60** | 50.00 | <u>61.60</u> | 23.35 | 37.20 | 34.80 | 1.21 |
| **MBPP** | <u>60.40</u> | 47.31 | **66.70** | 45.00 | 60.20 | 46.90 | 3.44 |
| **BoolQ** | <u>77.86</u> | 73.55 | 70.89 | 72.26 | 72.91 | 72.69 | 60.95 |
| **Hellaswag** | 68.17 | 53.06 | <u>71.47</u> | **71.48** | 67.73 | 65.41 | 29.39 |
| **LogiQA** | 30.12 | **31.64** | 29.65 | 29.65 | <u>31.03</u> | 31.02 | 22.73 |
| **PIQA** | 76.01 | 77.04 | 76.50 | 77.04 | **76.01** | <u>75.35</u> | 74.32 |
| **Average** | **57.29 (3rd)** | 51.13 | **57.51 (2nd)** | 49.95 | **59.84 (1st)** | 54.48 | 38.92 |

</center>

## How to use PLM

### llama.cpp

The original contribution to the llama.cpp framework is [Si1w/llama.cpp](https://github.com/Si1w/llama.cpp). Usage:

```bash
git clone https://github.com/Si1w/llama.cpp.git
cd llama.cpp
pip install -r requirements.txt
```

Then we can build for CPU or GPU (e.g., NVIDIA Orin). The build is based on `cmake`.

- For CPU

```bash
cmake -B build
cmake --build build --config Release
```

- For GPU

```bash
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
```
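
Once the build finishes, the prequantized weights from the [PLM-Team/PLM-1.8B-Instruct-gguf](https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf) repository can be run directly with `llama-cli`. A minimal sketch (the GGUF filename below is an assumption; check the repository's file list for the actual name):

```bash
# Fetch the quantized weights from the Hugging Face Hub
# (the filename is an assumption -- verify it in the gguf repository).
huggingface-cli download PLM-Team/PLM-1.8B-Instruct-gguf \
  PLM-1.8B-Instruct.Q4_K_M.gguf --local-dir ./models

# Chat with the model; --n-gpu-layers only takes effect on the CUDA build
# and offloads all layers to the GPU when set high enough.
./build/bin/llama-cli \
  -m ./models/PLM-1.8B-Instruct.Q4_K_M.gguf \
  -cnv -p "You are a helpful assistant." \
  --n-gpu-layers 99
```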

## Future works

- [ ] Release vLLM, SGLang, and PowerInfer inference scripts for PLM.
- [ ] Release a reasoning model trained on PLM.
- [ ] Release a vision model based on PLM.

## Acknowledgements

We sincerely thank DeepSeek for its contributions to the community through the MLA architecture, and the PowerInfer project for inspiring our model architecture design. We are grateful to Yixin Song, Yan Song, and Yang Li for their insightful suggestions throughout the project. We also acknowledge the ADC of the Hong Kong University of Science and Technology (Guangzhou) for providing essential computing resources. Finally, we extend our deepest appreciation to our team members for their dedication and contributions from September 2024 to the present.

## License

The code in this repository is released under the MIT License.

Limitations: While we strive to address safety concerns and promote the generation of ethical and lawful text, the probabilistic nature of language models may still produce unforeseen outputs. These may include biased, discriminatory, or otherwise harmful content. Users are advised not to disseminate such material. We disclaim any liability for consequences resulting from the distribution of harmful information.

## Citation

If you find **Project PLM** helpful for your research or applications, please cite it as follows:

```
@misc{cheng2025plm,
      title={PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing},
      author={Cheng Deng and Luoyang Sun and Jiwen Jiang and Yongcheng Zeng and Xinjian Wu and Wenxin Zhao and Qingfa Xiao and Jiachuan Wang and Lei Chen and Lionel M. Ni and Haifeng Zhang and Jun Wang},
      year={2025},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
}
```