---
license: mit
language:
- it
---
# Cross Encoder: MiniLM (Italian)
## Model description
This is a MiniLMv2 [1] model for the Italian language, obtained using [mmarco-mMiniLMv2-L12-H384-v1](https://huggingface.co/nreimers/mmarco-mMiniLMv2-L12-H384-v1) as a starting point and specialized for Italian by modifying the embedding layer (as in [2], computing document-level token frequencies over the Wikipedia dataset).

The resulting model has 33M parameters, a vocabulary of 30,498 tokens, and a size of ~130 MB.
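The embedding-layer modification can be sketched as follows: rank the tokens of the source vocabulary by document-level frequency, keep only the most frequent ones, and slice the embedding matrix to match. This is a minimal illustration of the idea in [2], with a hypothetical toy vocabulary and made-up frequency counts, not the actual procedure used to produce this model:

```python
def reduce_embedding_layer(embeddings, vocab, doc_freqs, keep_size):
    """Keep the `keep_size` most document-frequent tokens and slice the
    embedding matrix accordingly (sketch of the vocabulary reduction in [2])."""
    # Rank token ids by document frequency, descending
    ranked = sorted(range(len(vocab)), key=lambda i: doc_freqs[i], reverse=True)
    kept_ids = sorted(ranked[:keep_size])  # keep original token ordering
    new_vocab = [vocab[i] for i in kept_ids]
    new_embeddings = [embeddings[i] for i in kept_ids]  # smaller embedding matrix
    return new_vocab, new_embeddings

# Toy example: hypothetical 5-token vocabulary with 2-dim embedding rows
vocab = ["[PAD]", "ciao", "mondo", "zzz", "lingua"]
doc_freqs = [10**6, 500, 400, 2, 300]  # hypothetical document counts
embeddings = [[0.0, 0.1], [0.2, 0.3], [0.4, 0.5], [0.6, 0.7], [0.8, 0.9]]

new_vocab, new_emb = reduce_embedding_layer(embeddings, vocab, doc_freqs, keep_size=4)
print(new_vocab)      # the 4 most document-frequent tokens, original order kept
print(len(new_emb))   # 4 embedding rows remain
```

In the real procedure the kept tokens are those frequent in Italian Wikipedia documents, which is how the vocabulary shrinks to the 30,498 tokens reported above.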
## References
[1] https://arxiv.org/abs/2012.15828
[2] https://arxiv.org/abs/2010.05609
## License

The model is released under the MIT license.