bert-large-japanese-char / tokenizer_config.json
Commit b0da028: First commit of the bert-large-japanese-char model and tokenizer.
{
  "do_lower_case": false,
  "word_tokenizer_type": "mecab",
  "subword_tokenizer_type": "character",
  "mecab_kwargs": {
    "mecab_dic": "unidic_lite"
  }
}
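
This config tells the transformers BertJapaneseTokenizer to segment text into words with MeCab (using the unidic_lite dictionary), keep the original casing, and then split each word into single characters. Below is a minimal loading sketch; the repository id "cl-tohoku/bert-large-japanese-char" is an assumption (substitute the actual hub id or a local path containing this file), and MeCab-based word tokenization requires the fugashi and unidic-lite packages to be installed.

# Minimal sketch: load the tokenizer and apply character-level tokenization.
# The hub id below is assumed; adjust it to the actual repository or a local path.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-large-japanese-char")

# Words are first segmented by MeCab (unidic_lite), then split into single
# characters, as specified by word_tokenizer_type and subword_tokenizer_type.
tokens = tokenizer.tokenize("自然言語処理を学ぶ")
print(tokens)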