Abstract
General-purpose multilingual vector representations, used in retrieval, regression and classification, are traditionally obtained from bidirectional encoder models. Despite their wide applicability, encoders have recently been overshadowed by advances in generative decoder-only models. However, many innovations driving this progress are not inherently tied to decoders. In this paper, we revisit the development of multilingual encoders through the lens of these advances, and introduce EuroBERT, a family of multilingual encoders covering European and widely spoken global languages. Our models outperform existing alternatives across a diverse range of tasks, spanning multilingual capabilities, mathematics, and coding, and natively support sequences of up to 8,192 tokens. We also examine the design decisions behind EuroBERT, offering insights into our dataset composition and training pipeline. We publicly release the EuroBERT models, including intermediate training checkpoints, together with our training framework.
Community
Very interesting that ModernBERT, NeoBERT, and now EuroBERT do not present results on token classification tasks.
I ran them for ModernBERT and NeoBERT and the results are pretty bad, so I'm wondering when we will see papers tackling this. I have some ideas why, and I'm curious to see EuroBERT evaluated on the CoNLL-2002 and CoNLL-2003 family.
@stefan-it We are currently running experiments on NER. They will come with a v1.5 update of the paper for our conference submission 👌
We can discuss it offline, @stefan-it. Nicolas is currently skiing in the Alps :D but we could get in touch if you wish.
Happy ⛷️ @Nicolas-BZRD! Many thanks @PierreColombo, I would be highly interested in that. Please write me a message on LinkedIn - I am really looking forward to it!
Awesome model, can't wait to see what the community does with it!
Would you consider adding the results on the Massive Text Embedding Benchmark (MTEB)?
As a heads up, the 3 EuroBERT models released today are very much "base" models, i.e. they're not finetuned for specific tasks like retrieval yet.
For evaluation, they simply reran the same training script with the various base models to showcase that finetuned EuroBERT is generally stronger than, e.g., finetuned XLM-RoBERTa.
I really hope that some of the excellent labs/companies that finetune embedding models (Nomic, BAAI, Mixedbread, Jina, Alibaba, Snowflake, IBM, etc.) pick this up and release a strong embedding model for retrieval.
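In the meantime, here is a minimal sketch of how one might pull sentence embeddings out of such a base checkpoint before any task-specific finetuning. The model ID `EuroBERT/EuroBERT-210m`, the `trust_remote_code` flag, and the mean-pooling choice are assumptions on my part, not an official recipe from the authors:

```python
# Minimal sketch: sentence embeddings from an EuroBERT base checkpoint via mean pooling.
# The model ID and pooling strategy are assumptions; the released checkpoints are not
# finetuned for retrieval, so treat these vectors as a starting point only.
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "EuroBERT/EuroBERT-210m"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
model.eval()

sentences = [
    "EuroBERT natively supports sequences of up to 8,192 tokens.",
    "Multilingual encoders remain useful for information retrieval.",
]

inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    # Assumes the remote-code model exposes last_hidden_state: (batch, seq_len, dim)
    hidden = model(**inputs).last_hidden_state

# Mean pooling over non-padding tokens, then L2-normalize for cosine similarity.
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
embeddings = torch.nn.functional.normalize(embeddings, dim=-1)
print(embeddings.shape)
```

A retrieval-oriented release would typically replace this generic pooling with contrastive finetuning on paired data, which is exactly the kind of follow-up work mentioned above.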