KR-BERT: A Small-Scale Korean-Specific Language Model
Abstract
Since the appearance of BERT, recent works including XLNet and RoBERTa utilize sentence embedding models pre-trained on large corpora with a large number of parameters. Because such models require large hardware and a huge amount of data, they take a long time to pre-train. Therefore it is important to attempt to make smaller models that perform comparably. In this paper, we trained a Korean-specific model, KR-BERT, utilizing a smaller vocabulary and dataset. Since Korean is a morphologically rich, low-resource language written in a non-Latin alphabet, it is also important to capture language-specific linguistic phenomena that the Multilingual BERT model misses. We tested several tokenizers, including our BidirectionalWordPiece Tokenizer, and adjusted the minimal span of tokens for tokenization, ranging from the sub-character level to the character level, to construct a better vocabulary for our model. With those adjustments, our KR-BERT model performed comparably to, and in some cases better than, other existing pre-trained models while using a corpus about 1/10 the size.
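The abstract contrasts sub-character-level and character-level tokenization units for Korean. The sketch below is not the paper's code; it is a minimal, hedged illustration (the function name to_subcharacters is ours) of what decomposing precomposed Hangul syllables into jamo sub-characters looks like, compared with treating each syllable as a single character.

```python
# Minimal sketch: character-level vs. sub-character (jamo) level units for Korean.
# Assumes standard Unicode Hangul syllable composition (U+AC00-U+D7A3).

def to_subcharacters(text: str) -> str:
    """Decompose precomposed Hangul syllables into leading/vowel/trailing jamo."""
    LEADS = [chr(0x1100 + i) for i in range(19)]    # leading consonants
    VOWELS = [chr(0x1161 + i) for i in range(21)]   # vowels
    TAILS = [""] + [chr(0x11A8 + i) for i in range(27)]  # optional trailing consonants
    out = []
    for ch in text:
        code = ord(ch)
        if 0xAC00 <= code <= 0xD7A3:
            idx = code - 0xAC00
            out.append(LEADS[idx // 588])
            out.append(VOWELS[(idx % 588) // 28])
            tail = TAILS[idx % 28]
            if tail:
                out.append(tail)
        else:
            out.append(ch)
    return "".join(out)

word = "한국어"
print(list(word))                    # character-level units: ['한', '국', '어']
print(list(to_subcharacters(word)))  # sub-character (jamo) units
```

A vocabulary built over jamo units is smaller and handles rare syllables gracefully, while character-level units keep sequences shorter; the paper tunes this minimal token span when building the KR-BERT vocabulary.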