Japanese ELECTRA-small
We provide a Japanese ELECTRA-small model, as described in ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.
Pretraining uses subword units derived from Japanese Wikipedia via Byte-Pair Encoding, built on an initial tokenization with mecab-ipadic-NEologd. For best results, make sure your MeCab dictionary is set appropriately, as in the example below.
How to use the discriminator in transformers
```python
from transformers import BertJapaneseTokenizer, ElectraForPreTraining

# Point MeCab at the mecab-ipadic-NEologd dictionary used during pretraining
# (adjust the path to wherever the dictionary is installed on your system).
tokenizer = BertJapaneseTokenizer.from_pretrained(
    'Cinnamon/electra-small-japanese-discriminator',
    mecab_kwargs={"mecab_option": "-d /usr/lib/x86_64-linux-gnu/mecab/dic/mecab-ipadic-neologd"}
)

# Load the ELECTRA discriminator head for replaced-token detection.
model = ElectraForPreTraining.from_pretrained('Cinnamon/electra-small-japanese-discriminator')
```
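Once loaded, the discriminator can be run on a sentence to flag which tokens it believes were replaced by a generator. The following is a minimal sketch, assuming the tokenizer and model from the snippet above; the example sentence is illustrative, not from the original card.

```python
import torch

# Any Japanese sentence works here; this one is just for illustration.
sentence = "東京で友達とお寿司を食べました。"
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# ElectraForPreTraining returns one logit per token: positive logits mean
# the discriminator thinks the token was replaced, negative means original.
predictions = (outputs.logits[0] > 0).long().tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())

for token, label in zip(tokens, predictions):
    print(f"{token}\t{'replaced' if label else 'original'}")
```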