This is an [ELECTRA](https://github.com/google-research/electra) Small model for Japanese, pretrained on 354 million sentences (5.6 billion words) of the [YACIS](https://github.com/ptaszynski/yacis-corpus) blog corpus.
The corpus was tokenized with [MeCab](https://taku910.github.io/mecab/).
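As an illustration, the sketch below shows MeCab-style morphological tokenization via the `fugashi` bindings. The dictionary used for pretraining is not stated here, so the default `unidic-lite` dictionary in this example is an assumption and may differ from the one actually used.

```python
# Minimal sketch of MeCab tokenization, assuming fugashi + unidic-lite:
#   pip install fugashi unidic-lite
from fugashi import Tagger

tagger = Tagger()  # loads the default dictionary (unidic-lite here)

text = "日本語のブログ記事を形態素に分割します。"
tokens = [word.surface for word in tagger(text)]
print(tokens)
```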