---
language: ja
license: cc-by-sa-4.0
datasets:
- YACIS corpus
---
# yacis-electra-small
This is an [ELECTRA](https://github.com/google-research/electra) Small model for Japanese, pretrained on 354 million sentences / 5.6 billion words of the [YACIS](https://github.com/ptaszynski/yacis-corpus) blog corpus.
The corpus was tokenized for pretraining with [MeCab](https://taku910.github.io/mecab/), and subword tokenization was performed with WordPiece.
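The MeCab + WordPiece pipeline described above corresponds to `BertJapaneseTokenizer` in the `transformers` library. The snippet below is a minimal sketch of tokenizing text the same way; the model identifier `yacis-electra-small` is a placeholder assumption (use the actual Hub ID or a local directory containing the vocabulary), and MeCab support additionally requires `fugashi` and a MeCab dictionary package to be installed.

```python
# Minimal sketch: MeCab word segmentation followed by WordPiece subwords,
# matching the preprocessing described above.
# "yacis-electra-small" is an assumed identifier -- replace it with this
# model's actual Hugging Face Hub ID or a local path with vocab.txt.
from transformers import BertJapaneseTokenizer

tokenizer = BertJapaneseTokenizer.from_pretrained(
    "yacis-electra-small",                # assumed ID / local directory
    word_tokenizer_type="mecab",          # MeCab for word segmentation
    subword_tokenizer_type="wordpiece",   # WordPiece for subword units
)

tokens = tokenizer.tokenize("日本語のブログコーパスで事前学習したモデルです。")
print(tokens)
```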
## Model architecture
This model follows the original ELECTRA Small configuration: 12 layers, a hidden state size of 128, and 12 attention heads.
The vocabulary size is 32,000 tokens.
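As a rough usage sketch, the model can be loaded with `transformers` to obtain contextual embeddings; the last dimension of the output should match the hidden state size listed above. The Hub ID below is again an assumption, not confirmed by this card.

```python
# Minimal usage sketch: extracting contextual hidden states from this model.
# "yacis-electra-small" is an assumed identifier -- substitute the actual
# Hub ID or a local checkpoint path.
import torch
from transformers import AutoTokenizer, ElectraModel

model_id = "yacis-electra-small"  # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ElectraModel.from_pretrained(model_id)

inputs = tokenizer("これはテストの文章です。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Shape: (batch size, sequence length, hidden size)
print(outputs.last_hidden_state.shape)
```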
## Licenses
The pretrained model and all attached files are distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/deed.en) license.