---
language:
- ar
- en
datasets:
- fka/awesome-chatgpt-prompts
- open-r1/codeforces
license: mit
---
## Miscovery Tokenizer
A SentencePiece unigram tokenizer trained on a mix of Arabic and English text, with a vocabulary size of 100,000 tokens.
## Training Data
This tokenizer was trained on:
- The Arabic Quran
- fka/awesome-chatgpt-prompts
- open-r1/codeforces
## Usage
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("miscovery/tokenizer")

# Example usage: encode the Basmala ("In the name of God, the Most Gracious, the Most Merciful")
text = "بسم الله الرحمن الرحيم"
encoded = tokenizer(text)
print(encoded)  # input_ids and attention_mask
```
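To inspect the subword pieces the unigram model actually produces, and to verify the round trip back to text, you can use the standard `convert_ids_to_tokens` and `decode` methods (a minimal sketch; the exact token splits depend on the trained vocabulary):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("miscovery/tokenizer")
encoded = tokenizer("بسم الله الرحمن الرحيم")

# Show the subword pieces behind the ids
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))

# Round-trip: decode the ids back to text
print(tokenizer.decode(encoded["input_ids"], skip_special_tokens=True))
```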
## Features
- Vocabulary size: 100,000
- Model type: Unigram
- Model max length: 512 (see the batching sketch below)
- Handles both Arabic and English text
- Supports Arabic normalization
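Because the model max length is 512, longer inputs need to be truncated when batching. A minimal sketch using the standard `transformers` padding/truncation arguments (the example strings here are placeholders):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("miscovery/tokenizer")

# Mixed Arabic/English batch
batch = tokenizer(
    ["بسم الله الرحمن الرحيم", "Hello, world!"],
    padding=True,     # pad shorter sequences to the longest in the batch
    truncation=True,  # cut sequences down to model_max_length (512)
)
print([len(ids) for ids in batch["input_ids"]])
```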