kobert / tokenizer_config.json
commit 7e56454: feat: update on tokenizer_config
{
  "model_max_length": 512,
  "max_len": 512,
  "do_lower_case": false,
  "tokenizer_class": "KoBertTokenizer",
  "auto_map": {
    "AutoTokenizer": [
      "tokenization_kobert.KoBertTokenizer",
      "tokenization_kobert.KoBertTokenizer"
    ]
  }
}
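As a sketch of what this config does: the `auto_map` entry tells the transformers `AutoTokenizer` which custom class to load from the repo's `tokenization_kobert.py` module, with one list slot for the slow tokenizer and one for the fast tokenizer (here both point at `KoBertTokenizer`). The snippet below parses the config above and pulls those fields out; the embedded JSON string is just a local copy for illustration.

```python
import json

# A local copy of the tokenizer_config.json shown above, for illustration.
config_text = """
{
  "model_max_length": 512,
  "max_len": 512,
  "do_lower_case": false,
  "tokenizer_class": "KoBertTokenizer",
  "auto_map": {
    "AutoTokenizer": [
      "tokenization_kobert.KoBertTokenizer",
      "tokenization_kobert.KoBertTokenizer"
    ]
  }
}
"""

config = json.loads(config_text)

# The two entries under "auto_map" -> "AutoTokenizer" are the slow- and
# fast-tokenizer slots; both resolve to the custom class defined in the
# repo file tokenization_kobert.py.
slow_cls, fast_cls = config["auto_map"]["AutoTokenizer"]
print(slow_cls)                    # tokenization_kobert.KoBertTokenizer
print(config["model_max_length"])  # 512
```

Because the class lives in repo code rather than in the transformers library itself, loading it requires opting in to remote code, e.g. `AutoTokenizer.from_pretrained("monologg/kobert", trust_remote_code=True)`.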