Text Generation · Transformers · PyTorch · skywork · custom_code
zhao1iang committed · Commit c11f7e8 · 1 Parent(s): 76137eb

Update README.md

Files changed (1)
  1. README.md +39 -6
README.md CHANGED
@@ -12,7 +12,10 @@ license_link: >-
 <div align="center"><img src="misc/skywork_logo.jpeg" width="550"/></div>
 
 <p align="center">
-🤗 <a href="https://huggingface.co/Skywork" target="_blank">Hugging Face</a> • 🤖 <a href="https://modelscope.cn/organization/Skywork" target="_blank">ModelScope</a> • 💬 <a href="https://github.com/SkyworkAI/Skywork/blob/main/misc/wechat.png?raw=true" target="_blank">WeChat</a>• 📜<a href="https://arxiv.org/" target="_blank">Tech Report</a>• 🧮<a href="https://arxiv.org/" target="_blank">Skymath Paper</a>
+👨‍💻 <a href="https://github.com/SkyworkAI/Skywork" target="_blank">GitHub</a> • 🤗 <a href="https://huggingface.co/Skywork" target="_blank">Hugging Face</a> • 🤖 <a href="https://modelscope.cn/organization/Skywork" target="_blank">ModelScope</a> • 💬 <a href="https://github.com/SkyworkAI/Skywork/blob/main/misc/wechat.png?raw=true" target="_blank">WeChat</a> • 📜 <a href="https://arxiv.org/" target="_blank">Tech Report</a> • 🧮 <a href="https://arxiv.org/" target="_blank">Skymath Paper</a>
+• 🖼️ <a href="https://github.com/will-singularity/Skywork-MM/blob/main/skywork_mm.pdf" target="_blank">SkyworkMM Paper</a>
+
+
 </p>
 
 
@@ -40,10 +43,27 @@ license_link: >-
 **Skywork-13B-Math**: Skywork-13B-Math model has undergone specialized training to enhance its mathematical abilities. In the 13B-scale model, the Skywork-13B-Math model ranked first in the GSM8K evaluation, and it also performed exceptionally well on the MATH dataset and CMATH, placing it among the top-level 13B models.
 
 
-如果您希望了解更多的信息,如训练方案,评估方法,请参考我们的[技术报告](https://arxiv.org/skywork-tech-report)[Skywork-Math](https://arxiv.org/skywork-tech-report)论文。
-
-If you are interested in more training and evaluation details, please refer to our [technical report](https://arxiv.org/skywork-tech-report) and [Skywork-Math]((https://arxiv.org/skywork-tech-report)) paper.
+如果您希望了解更多的信息,如训练方案、评估方法,请参考我们的[技术报告](https://arxiv.org/skywork-tech-report)、[Skymath](https://arxiv.org/abs/2310.16713)论文和[SkyworkMM](https://github.com/will-singularity/Skywork-MM/blob/main/skywork_mm.pdf)论文。
+
+If you are interested in more training and evaluation details, please refer to our [technical report](https://arxiv.org/skywork-tech-report), the [Skymath](https://arxiv.org/abs/2310.16713) paper, and the [SkyworkMM](https://github.com/will-singularity/Skywork-MM/blob/main/skywork_mm.pdf) paper.
+
+# Skywork-13B-Math模型评估(Results)
+Skywork-13B-Math在数学能力上相对Base模型进一步加强。我们在主流的数学相关benchmark(GSM8K、MATH和CMATH)上进行了评估。结果显示,在13B规模模型中,我们的模型在**GSM8K和CMATH评测中得分第一**,MATH评测也处于前列。
+
+Skywork-13B-Math has further enhanced mathematical capabilities compared to the Base model. We evaluated it on the mainstream mathematical benchmarks GSM8K, MATH, and CMATH. The results show that, among 13B-scale models, our model ranks **first on the GSM8K and CMATH benchmarks** and is also near the top on MATH.
+
+| Model                   | GSM8K     | MATH  | CMATH     |
+|-------------------------|:---------:|:-----:|:---------:|
+| LLaMA-1-13B-Base        | 17.80     | 3.90  |     -     |
+| LLaMA-2-13B-Base        | 28.70     | 3.90  |     -     |
+| Baichuan-13B-Base       | 26.76     | 4.84  |   51.33   |
+| Baichuan-2-13B-Base     | 52.77     | 10.08 |     -     |
+| WizardMath-13B          | 63.90     | 14.00 |   50.83   |
+| GAIRMATH-Abel-13B       | 66.41     | 17.34 |     -     |
+| MetaMath-13B            | 72.30     | 22.40 |     -     |
+| Skywork-13B-Math (ours) | **72.33** | 16.98 | **77.27** |
 
 # 快速开始(Quickstart)
 我们将模型参数、配置文件、tokenizer等在huggingface和modelscope上进行了开源。
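The Quickstart lines in the hunk above note that the weights, configuration files, and tokenizer are published on Hugging Face and ModelScope. The following is a minimal, hedged sketch of loading them with 🤗 Transformers; the repo id `Skywork/Skywork-13B-Math` and the need for `trust_remote_code=True` (the model card is tagged `custom_code`) are assumptions, not details taken from this diff.

```python
# Hedged sketch: load an assumed Skywork checkpoint with Hugging Face Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Skywork/Skywork-13B-Math"  # assumed repo id, not confirmed by this diff

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # bf16 keeps the 13B weights' memory footprint manageable
    device_map="auto",            # requires the `accelerate` package; places weights on available devices
    trust_remote_code=True,       # the repo is assumed to ship custom modeling code
)

prompt = "Janet has 3 apples and buys 5 more. How many apples does she have?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Greedy decoding (`do_sample=False`) is used here simply to make the sketch deterministic; the README's actual Quickstart section may recommend different generation settings.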
@@ -226,9 +246,22 @@ If you find our work helpful, please feel free to cite our paper~
 
 ```
 @article{skyworkmath,
-title={},
-author={},
-journal={arXiv preprint arXiv:},
+title={SkyMath: Technical Report},
+author={Liu Yang and Haihua Yang and Wenjun Cheng and Lei Lin and Chenxia Li and Yifu Chen and Lunan Liu and Jianfei Pan and Tianwen Wei and Biye Li and Liang Zhao and Lijie Wang and Bo Zhu and Guoliang Li and Xuejie Wu and Xilin Luo and Rui Hu},
+journal={arXiv preprint arXiv:2310.16713},
+url={https://arxiv.org/abs/2310.16713},
 year={2023}
 }
 ```
+
+```
+@article{Skywork_Multi-Modal_Group_Empirical_Study_Towards_2023,
+author = {Skywork Multi-Modal Group},
+month = sep,
+title = {{Empirical Study Towards Building An Effective Multi-Modal Large Language Model}},
+year = {2023}
+}
+```
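The Results section added in this commit reports accuracy on GSM8K, MATH, and CMATH. Purely as an illustration of how GSM8K-style exact-match scoring is commonly computed, and not the authors' actual evaluation harness, here is a small sketch; the `generate_answer` hook and the GSM8K field names (`question`, and `answer` with the gold value after `####`) are assumptions.

```python
import re

def extract_last_number(text: str) -> str | None:
    """Return the last integer or decimal in the text, with thousands separators removed."""
    matches = re.findall(r"-?\d[\d,]*(?:\.\d+)?", text)
    return matches[-1].replace(",", "") if matches else None

def gsm8k_accuracy(examples, generate_answer):
    """Exact-match accuracy: compare the final number in the model's solution
    against the gold answer that follows '####' in the GSM8K reference."""
    correct = total = 0
    for ex in examples:
        gold = extract_last_number(ex["answer"].split("####")[-1])
        pred = extract_last_number(generate_answer(ex["question"]))
        correct += int(pred is not None and pred == gold)
        total += 1
    return correct / max(total, 1)

# Hypothetical usage with the Hugging Face `datasets` library:
#   from datasets import load_dataset
#   acc = gsm8k_accuracy(load_dataset("gsm8k", "main")["test"], generate_answer=my_model_fn)
```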