---
license: mit
---
This is a RoBERTa base configuration for Bodo.
It does not contain checkpoints for a pretrained model. It has only two things:
- A Byte-Level BPE tokenizer for Bodo
- A RoBERTa base configuration
# Uses
You can use the tokenizer as follows:
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('alayaran/bodo-roberta-base')
t = tokenizer("कौटि नख'राव दैनि कानेक्सन होबाय")
# {'input_ids': [310, 294, 313, 267, 503, 11, 268, 263, 277, 298, 287, 265, 267, 321, 263, 265, 272, 310, 273, 378, 295, 266, 271, 263, 269], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
# to decode
tokenizer.decode(t['input_ids'], skip_special_tokens=True)
# "कौटि नख'राव दैनि कानेक्सन होबाय"
```
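
Since the repository ships only the configuration (no checkpoints), a minimal sketch of initializing an untrained model from it could look like the following. The choice of `RobertaForMaskedLM` is an assumption, picked here because masked language modeling is the usual pretraining objective for RoBERTa:

```
from transformers import AutoConfig, RobertaForMaskedLM

# Load the RoBERTa base configuration stored in this repo
config = AutoConfig.from_pretrained('alayaran/bodo-roberta-base')

# Build a freshly initialized (untrained) model from that configuration,
# e.g. as a starting point for pretraining on a Bodo corpus
model = RobertaForMaskedLM(config)
```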