---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
---

# TensorFlow XLM-RoBERTa
In this repository you will find different versions of the XLM-RoBERTa model for TensorFlow.
## XLM-RoBERTa
XLM-RoBERTa is a scaled cross-lingual sentence encoder. It is trained on 2.5TB of filtered CommonCrawl data covering 100 languages. XLM-R achieves state-of-the-art results on multiple cross-lingual benchmarks.
## Model Weights
| Model | Downloads |
|---|---|
| jplu/tf-xlm-roberta-base | config.json • tf_model.h5 |
| jplu/tf-xlm-roberta-large | config.json • tf_model.h5 |
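The checkpoint files listed above can also be fetched individually, without instantiating the model. A minimal sketch, assuming the separate `huggingface_hub` client library (not mentioned in the original card):

```python
from huggingface_hub import hf_hub_download

# Download individual files from the model repository; each call
# returns the local path of the cached copy.
config_path = hf_hub_download(repo_id="jplu/tf-xlm-roberta-base", filename="config.json")
weights_path = hf_hub_download(repo_id="jplu/tf-xlm-roberta-base", filename="tf_model.h5")
```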
## Usage
With Transformers >= 2.4, the TensorFlow models of XLM-RoBERTa can be loaded as follows:

```python
from transformers import TFXLMRobertaModel

model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-base")
```

Or:

```python
model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-large")
```
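For inference, the model expects token IDs produced by the matching SentencePiece tokenizer, which ships with the same Transformers library. A minimal sketch (the example sentence is illustrative):

```python
from transformers import TFXLMRobertaModel, XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("jplu/tf-xlm-roberta-base")
model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-base")

# Encode a sentence in any of the supported languages into token IDs.
input_ids = tokenizer.encode("Bonjour, le monde !", return_tensors="tf")

# The first output holds one hidden-state vector per input token.
outputs = model(input_ids)
last_hidden_states = outputs[0]  # shape: (1, sequence_length, hidden_size)
```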
## Hugging Face model hub
All models are available on the Hugging Face model hub.
## Acknowledgments
Thanks to the Hugging Face team for their support and their amazing library!