---
language:
  - he
datasets:
  - HeNLP/HeDC4
---

# Hebrew Language Model

State-of-the-art RoBERTa language model for Hebrew.

## How to use


```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('HeNLP/HeRo')
model = AutoModelForMaskedLM.from_pretrained('HeNLP/HeRo')
```
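
The loaded model can be used for masked-token prediction. Below is a minimal sketch, not part of the original card, using the `fill-mask` pipeline; the example sentence and `top_k` value are illustrative assumptions.

```python
# Minimal masked-token prediction sketch (illustrative, not from the official docs).
from transformers import pipeline

fill_mask = pipeline('fill-mask', model='HeNLP/HeRo')

# Build a Hebrew sentence containing the model's mask token
# ("Jerusalem is the <mask> of Israel.") -- example sentence is an assumption.
sentence = f"ירושלים היא {fill_mask.tokenizer.mask_token} של ישראל."

# Print the top predicted tokens and their scores.
for prediction in fill_mask(sentence, top_k=5):
    print(prediction['token_str'], prediction['score'])
```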

## Citing

If you use HeRo in your research, please cite HeRo: RoBERTa and Longformer Hebrew Language Models.

```bibtex
@article{shalumov2023hero,
      title={HeRo: RoBERTa and Longformer Hebrew Language Models},
      author={Vitaly Shalumov and Harel Haskey},
      year={2023},
      journal={arXiv:2304.11077},
}
```