
This repository contains only a tokenizer; there are no model weights.

- This tokenizer is a `PreTrainedTokenizerFast` trained on the raygx/Nepali-Extended-Corpus dataset.
- It was trained from scratch using the Tokenizers library.
- The tokenizer is configured as follows (a construction sketch appears after this list):
  - Model: `Tokenizer(WordPiece(unk_token="[UNK]"))`
  - Normalizer: `normalizers.Sequence([NFD(), Strip()])`
  - Pre-tokenizer: `pre_tokenizers.Sequence([Whitespace(), Digits(individual_digits=True), Punctuation()])`
  - Post-processor: `BertProcessing`
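
The components listed above map directly onto the Tokenizers API. Below is a minimal sketch of how such a tokenizer could be built and trained; the corpus file name, `vocab_size`, and special-token list are assumptions, since the card does not state them.

```python
from tokenizers import Tokenizer, normalizers, pre_tokenizers
from tokenizers.models import WordPiece
from tokenizers.normalizers import NFD, Strip
from tokenizers.pre_tokenizers import Whitespace, Digits, Punctuation
from tokenizers.processors import BertProcessing
from tokenizers.trainers import WordPieceTrainer

# WordPiece model with an explicit unknown token, as listed above.
tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))

# Normalizer: Unicode NFD decomposition followed by whitespace stripping.
tokenizer.normalizer = normalizers.Sequence([NFD(), Strip()])

# Pre-tokenizer: split on whitespace, isolate each digit, split punctuation.
tokenizer.pre_tokenizer = pre_tokenizers.Sequence(
    [Whitespace(), Digits(individual_digits=True), Punctuation()]
)

# Train on a text corpus; the file path and vocab_size are illustrative.
trainer = WordPieceTrainer(
    vocab_size=30_000,  # assumption: the actual vocabulary size is not stated
    special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"],
)
tokenizer.train(files=["nepali_corpus.txt"], trainer=trainer)  # hypothetical file

# Post-processor: BERT-style [CLS] ... [SEP] wrapping of encoded sequences.
tokenizer.post_processor = BertProcessing(
    sep=("[SEP]", tokenizer.token_to_id("[SEP]")),
    cls=("[CLS]", tokenizer.token_to_id("[CLS]")),
)
```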

Code is available here.
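
Once trained, a `tokenizers.Tokenizer` object can be wrapped as the `PreTrainedTokenizerFast` named above for use with the Transformers library. The sketch below assumes the special-token assignments from the training example; the repository id is not shown here because the card does not state it.

```python
from transformers import PreTrainedTokenizerFast

# Wrap the in-memory Tokenizers object so it exposes the transformers API.
fast_tokenizer = PreTrainedTokenizerFast(
    tokenizer_object=tokenizer,
    unk_token="[UNK]",
    cls_token="[CLS]",
    sep_token="[SEP]",
    pad_token="[PAD]",
    mask_token="[MASK]",
)

# Example: tokenize a short Nepali phrase ("hello world").
print(fast_tokenizer.tokenize("नमस्ते संसार"))
```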
