---
language: ja
license: cc-by-sa-4.0
datasets:
- YACIS corpus
---
|
|
|
# yacis-electra-small
|
|
|
This is an [ELECTRA](https://github.com/google-research/electra) Small model for Japanese, pretrained on 354 million sentences (5.6 billion words) from the [YACIS](https://github.com/ptaszynski/yacis-corpus) blog corpus.
|
|
|
The corpus was tokenized for pretraining with [MeCab](https://taku910.github.io/mecab/); subword tokenization was performed with WordPiece.
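
The following is a minimal sketch of this two-step tokenization using the `transformers` library's `BertJapaneseTokenizer`, which combines MeCab word segmentation with WordPiece subwords. The hub identifier `ptaszynski/yacis-electra-small-japanese` is an assumption and may not match the actual repository name.

```python
# A minimal sketch of MeCab word segmentation followed by WordPiece subword
# tokenization via transformers' BertJapaneseTokenizer.
# Requires: pip install transformers fugashi ipadic
from transformers import BertJapaneseTokenizer

tokenizer = BertJapaneseTokenizer.from_pretrained(
    "ptaszynski/yacis-electra-small-japanese",  # assumed repository name
    word_tokenizer_type="mecab",         # word segmentation with MeCab
    subword_tokenizer_type="wordpiece",  # subword segmentation with WordPiece
)

tokens = tokenizer.tokenize("日本語のブログ記事を解析する。")
print(tokens)
```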
|
|
|
## Model architecture
|
|
|
This model follows the original ELECTRA Small configuration: 12 layers, a hidden-state dimension of 128, and 12 attention heads.
|
|
|
The vocabulary size is 32,000 tokens.
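
As a usage sketch, the pretrained weights could be loaded through the `transformers` ELECTRA classes as below. The hub identifier is the same assumption as above; the example only extracts last-layer hidden states.

```python
# A minimal usage sketch, assuming the model is published on the Hugging Face
# hub under the (assumed) identifier below.
import torch
from transformers import AutoTokenizer, ElectraModel

model_name = "ptaszynski/yacis-electra-small-japanese"  # assumed repository name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = ElectraModel.from_pretrained(model_name)

inputs = tokenizer("日本語のブログ記事を解析する。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Last-layer hidden states: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```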
|
|
|
## Licenses
|
|
|
The pretrained model and all attached files are distributed under the terms of the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.en) license.
|
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>. |