---
datasets:
- lst20
language:
- th
widget:
- text: วัน ที่ _ 12 _ มีนาคม นี้ _ ฉัน จะ ไป เที่ยว วัดพระแก้ว _ ที่ กรุงเทพ
library_name: transformers
---
# HoogBERTa

This repository includes the Thai pretrained language representation (HoogBERTa_base) and the fine-tuned model for multi-task sequence labeling.
# Documentation
## Prerequisite
Since we use subword-nmt BPE encoding, input text needs to be pre-tokenized following the [BEST](https://huggingface.co/datasets/best2009) standard before it is fed into HoogBERTa. Install [attacut](https://github.com/PyThaiNLP/attacut) to perform this step:
```
pip install attacut
```
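A minimal sketch of this pre-tokenization step, assuming attacut's `tokenize` function. The model ID passed to `from_pretrained` and the `[!und:]` underscore-escape token are placeholders/assumptions, not confirmed by this card:

``` python
from attacut import tokenize
from transformers import AutoModel, AutoTokenizer

# Placeholder model ID; substitute this repository's actual ID.
MODEL_ID = "lst-nlp/HoogBERTa"

sentence = "วันที่ 12 มีนาคมนี้ ฉันจะไปเที่ยววัดพระแก้ว ที่กรุงเทพ"

# Pre-tokenize each space-delimited chunk with attacut, escape any
# literal underscores (assumed escape token), then rejoin the chunks
# with " _ " so that "_" marks the original spaces, matching the
# format shown in the widget example above.
all_sent = []
for sent in sentence.split(" "):
    all_sent.append(" ".join(tokenize(sent)).replace("_", "[!und:]"))
pretokenized = " _ ".join(all_sent)
# e.g. "วัน ที่ _ 12 _ มีนาคม นี้ _ ฉัน จะ ไป เที่ยว วัดพระแก้ว _ ที่ กรุงเทพ"

# Feed the pre-tokenized text to the model to extract contextual features.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

inputs = tokenizer(pretokenized, return_tensors="pt")
features = model(**inputs).last_hidden_state  # per-token embeddings
```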
# Citation
Please cite as:
``` bibtex
@inproceedings{porkaew2021hoogberta,
  title     = {HoogBERTa: Multi-task Sequence Labeling using Thai Pretrained Language Representation},
  author    = {Peerachet Porkaew and Prachya Boonkwan and Thepchai Supnithi},
  booktitle = {The Joint International Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP 2021)},
  year      = {2021},
  address   = {Online}
}
```
Download the full-text [PDF](https://drive.google.com/file/d/1hwdyIssR5U_knhPE2HJigrc0rlkqWeLF/view?usp=sharing).

Check out the code on [GitHub](https://github.com/lstnlp/HoogBERTa).