---
language: Chinese
widget:
- text: "北京上个月召开了两会"
---

# Chinese RoBERTa-Base Models for Text Classification

## Model description

This is a set of 5 Chinese RoBERTa base models fine-tuned with [UER-py](https://arxiv.org/abs/1909.05658).

You can download the 5 Chinese RoBERTa base models from the links below:

| Corpus        | Link                                                       |
| :-----------: | :--------------------------------------------------------: |
| **JD full**   | [**roberta-base-finetuned-jd-full-chinese**][JD_full]      |
| **JD binary** | [**roberta-base-finetuned-jd-binary-chinese**][JD_binary]  |
| **Dianping**  | [**roberta-base-finetuned-dianping-chinese**][Dianping]    |
| **Ifeng**     | [**roberta-base-finetuned-ifeng-chinese**][Ifeng]          |
| **Chinanews** | [**roberta-base-finetuned-chinanews-chinese**][Chinanews]  |
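
Each row of the table corresponds to a repository on the Hugging Face Hub, so any of the five checkpoints can be pulled by its repository id. A minimal sketch (the id list below simply mirrors the links above; nothing else is assumed):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Repository ids, mirroring the links in the table above.
repo_ids = [
    'uer/roberta-base-finetuned-jd-full-chinese',
    'uer/roberta-base-finetuned-jd-binary-chinese',
    'uer/roberta-base-finetuned-dianping-chinese',
    'uer/roberta-base-finetuned-ifeng-chinese',
    'uer/roberta-base-finetuned-chinanews-chinese',
]

for repo_id in repo_ids:
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForSequenceClassification.from_pretrained(repo_id)
    print(repo_id, model.config.num_labels)
```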

## How to use

You can use this model directly with a pipeline for text classification (taking the case of roberta-base-finetuned-chinanews-chinese):

```python
>>> from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
>>> model = AutoModelForSequenceClassification.from_pretrained('uer/roberta-base-finetuned-chinanews-chinese')
>>> tokenizer = AutoTokenizer.from_pretrained('uer/roberta-base-finetuned-chinanews-chinese')
>>> text_classification = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
>>> text_classification("北京上个月召开了两会")
[{'label': 'mainland China politics', 'score': 0.7211663722991943}]
```
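
The pipeline wraps tokenization, the forward pass, and label mapping. A minimal sketch of the same prediction done step by step (the softmax/argmax post-processing here is illustrative, not taken from the original card):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = 'uer/roberta-base-finetuned-chinanews-chinese'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Tokenize, run the forward pass, and map the best logit to its label name.
inputs = tokenizer("北京上个月召开了两会", return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, num_labels)
probs = logits.softmax(dim=-1)[0]
pred = int(probs.argmax())
print(model.config.id2label[pred], float(probs[pred]))
```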

## Training data

We use 5 Chinese text classification datasets collected by the [Glyph](https://github.com/zhangxiangxiao/glyph) project.

## Training procedure

Models are fine-tuned by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We fine-tune for three epochs with a sequence length of 512 on the basis of the pre-trained model [chinese_roberta_L-12_H-768](https://huggingface.co/uer/chinese_roberta_L-12_H-768). At the end of each epoch, the model is saved when the best performance on the development set is achieved.

Taking the case of roberta-base-finetuned-chinanews-chinese:

```
python3 run_classifier.py --pretrained_model_path models/cluecorpussmall_roberta_base_seq512_model.bin-250000 \
                          --vocab_path models/google_zh_vocab.txt \
                          --train_path Glyph/Chinanews_train.txt \
                          --dev_path Glyph/Chinanews_test.txt \
                          --output_model_path models/Chinanews_model.bin \
                          --learning_rate 3e-5 --batch_size 32 --epochs_num 3 \
                          --seq_length 512 --embedding word_pos_seg --encoder transformer --mask fully_visible
```

Finally, we convert the fine-tuned model into Huggingface's format:

```
python3 scripts/convert_bert_text_classification_from_uer_to_huggingface.py --input_model_path models/Chinanews_model.bin \
                                                                            --output_model_path pytorch_model.bin \
                                                                            --layers_num 12
```
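
As a quick sanity check after conversion, the resulting checkpoint can be loaded locally; this sketch assumes pytorch_model.bin is placed in a directory together with a matching config.json and vocab.txt (the directory name here is hypothetical):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical local directory containing pytorch_model.bin, config.json and vocab.txt.
local_dir = './chinanews-roberta-base-hf'
tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModelForSequenceClassification.from_pretrained(local_dir)
print(model.config.num_labels)
```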

### BibTeX entry and citation info

```
@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal={EMNLP-IJCNLP 2019},
  pages={241},
  year={2019}
}
```

[JD_full]:https://huggingface.co/uer/roberta-base-finetuned-jd-full-chinese
[JD_binary]:https://huggingface.co/uer/roberta-base-finetuned-jd-binary-chinese
[Dianping]:https://huggingface.co/uer/roberta-base-finetuned-dianping-chinese
[Ifeng]:https://huggingface.co/uer/roberta-base-finetuned-ifeng-chinese
[Chinanews]:https://huggingface.co/uer/roberta-base-finetuned-chinanews-chinese