DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew

DictaBERT is a state-of-the-art language model suite for Modern Hebrew.

This is the fine-tuned model for the prefix segmentation task.

For the bert-base models fine-tuned for other tasks, see the DictaBERT collection.

Sample usage:

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('dicta-il/dictabert-seg')
# trust_remote_code is required: the segmentation head and the predict() method
# are defined in custom code shipped with the model repository.
model = AutoModel.from_pretrained('dicta-il/dictabert-seg', trust_remote_code=True)

model.eval()

sentence = 'בשנת 1948 השלים אפרים קישון את לימודיו בפיסול מתכת ובתולדות האמנות והחל לפרסם מאמרים הומוריסטיים'
print(model.predict([sentence], tokenizer))

Output:

[
    [
        [ "[CLS]" ],
        [ "ב","שנת" ],
        [ "1948" ],
        [ "השלים" ],
        [ "אפרים" ],
        [ "קישון" ],
        [ "את" ],
        [ "לימודיו" ],
        [ "ב","פיסול" ],
        [ "מתכת" ],
        [ "וב","תולדות" ],
        [ "ה","אמנות" ],
        [ "ו","החל" ],
        [ "לפרסם" ],
        [ "מאמרים" ],
        [ "הומוריסטיים" ],
        [ "[SEP]" ]
    ]
]
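Each inner list above groups the segments of one word, so a prefixed word such as בשנת comes back as ["ב","שנת"]. As a minimal post-processing sketch (the helper name and the '|' separator are my own choices, not part of the model's API), the nested output can be flattened into a readable string, marking prefix boundaries and dropping the special tokens:

```python
# Sample output of model.predict(), copied from the example above.
prediction = [
    [
        ["[CLS]"],
        ["ב", "שנת"],
        ["1948"],
        ["השלים"],
        ["אפרים"],
        ["קישון"],
        ["את"],
        ["לימודיו"],
        ["ב", "פיסול"],
        ["מתכת"],
        ["וב", "תולדות"],
        ["ה", "אמנות"],
        ["ו", "החל"],
        ["לפרסם"],
        ["מאמרים"],
        ["הומוריסטיים"],
        ["[SEP]"],
    ]
]

def format_segmentation(sent_pred):
    """Join each word's segments with '|' and skip the [CLS]/[SEP] tokens."""
    words = ["|".join(p) for p in sent_pred if p[0] not in ("[CLS]", "[SEP]")]
    return " ".join(words)

print(format_segmentation(prediction[0]))
```

This prints the sentence with segment boundaries made explicit, e.g. ב|שנת for בשנת and וב|תולדות for ובתולדות.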

Citation

If you use DictaBERT in your research, please cite the paper "DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew".

BibTeX:

@misc{shmidman2023dictabert,
      title={DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew}, 
      author={Shaltiel Shmidman and Avi Shmidman and Moshe Koppel},
      year={2023},
      eprint={2308.16687},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

License


This work is licensed under a Creative Commons Attribution 4.0 International License.


Model size: 185M parameters (F32, stored as Safetensors).