---
language: ja
license: cc-by-sa-4.0
datasets:
- YACIS corpus
---

# yacis-electra-small

This is an [ELECTRA](https://github.com/google-research/electra) Small model for Japanese, pretrained on 354 million sentences / 5.6 billion words of the [YACIS](https://github.com/ptaszynski/yacis-corpus) blog corpus.

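The checkpoint can be loaded with the Hugging Face `transformers` library, for example for feature extraction. The snippet below is a minimal sketch: the repository ID `ptaszynski/yacis-electra-small-japanese` is an assumption and should be replaced with the actual ID of this checkpoint, and MeCab-based tokenization additionally requires the `fugashi` package and a MeCab dictionary.

```python
# Minimal sketch: load the pretrained ELECTRA Small encoder and extract features.
# The repository ID below is an assumption, not confirmed by this model card.
from transformers import AutoTokenizer, ElectraModel

model_name = "ptaszynski/yacis-electra-small-japanese"  # hypothetical repo ID
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = ElectraModel.from_pretrained(model_name)

text = "日本語のブログ記事を解析する。"  # "Analyze a Japanese blog post."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```
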
The corpus was tokenized for pretraining with [MeCab](https://taku910.github.io/mecab/). Subword tokenization was performed with WordPiece.
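
The same two-step pipeline (MeCab word segmentation followed by WordPiece subword segmentation) can be approximated with `BertJapaneseTokenizer`, as sketched below. The vocabulary path is hypothetical, and the `mecab` word tokenizer requires `fugashi` to be installed.

```python
# Sketch of the MeCab + WordPiece tokenization pipeline described above.
# "vocab.txt" is a hypothetical path to the WordPiece vocabulary from pretraining.
from transformers import BertJapaneseTokenizer

tokenizer = BertJapaneseTokenizer(
    vocab_file="vocab.txt",              # hypothetical local vocabulary file
    word_tokenizer_type="mecab",         # word-level segmentation with MeCab
    subword_tokenizer_type="wordpiece",  # subword segmentation with WordPiece
)

print(tokenizer.tokenize("日本語のブログ記事"))
```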