|
--- |
|
license: apache-2.0 |
|
language: |
|
- gl |
|
- ca |
|
metrics: |
|
- bleu |
|
library_name: fairseq |
|
--- |
|
## Projecte Aina’s Galician-Catalan machine translation model |
|
|
|
## Model description |
|
|
|
This model was trained from scratch using the [Fairseq toolkit](https://fairseq.readthedocs.io/en/latest/) on a combination of Galician-Catalan datasets totalling approximately 75 million sentence pairs. The data comprises Catalan-Galician pairs sourced from [Opus](https://opus.nlpl.eu/) and synthetic Galician-Catalan pairs created by applying the Spanish-Galician translator of [Proxecto Nós](https://huggingface.co/proxectonos/Nos_MT-OpenNMT-es-gl) to the Spanish side of the Projecte Aina Spanish-Catalan corpus.

The model was evaluated on the Flores and NTREX evaluation datasets.
|
|
|
## Intended uses and limitations |
|
|
|
You can use this model for machine translation from Galician to Catalan. |
|
|
|
## How to use |
|
|
|
### Usage |
|
Required libraries: |
|
|
|
```bash |
|
pip install ctranslate2 pyonmttok huggingface_hub
|
``` |
|
|
|
Translate a sentence using Python:
|
```python
import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download

# Download the CTranslate2 model and the SentencePiece tokenizer from the Hub
model_dir = snapshot_download(repo_id="projecte-aina/aina-translator-gl-ca", revision="main")

# Tokenize the source sentence with the bundled SentencePiece model
tokenizer = pyonmttok.Tokenizer(mode="none", sp_model_path=model_dir + "/spm.model")
tokenized = tokenizer.tokenize("Benvido ao proxecto Ilenia.")

# Translate and print the detokenized best hypothesis
translator = ctranslate2.Translator(model_dir)
translated = translator.translate_batch([tokenized[0]])
print(tokenizer.detokenize(translated[0][0]["tokens"]))
```
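
The snippet above translates a single sentence. Several sentences can be translated in one call by reusing the same `tokenizer` and `translator` objects; the sketch below is illustrative, and the example sentences and `beam_size` value are assumptions rather than recommended settings.

```python
# Minimal batch-translation sketch reusing the tokenizer and translator
# objects created above; sentences and beam_size are illustrative.
sentences = [
    "Benvido ao proxecto Ilenia.",
    "Este modelo traduce do galego ao catalán.",
]

# pyonmttok returns (tokens, features); keep only the token lists
batch_tokens = [tokenizer.tokenize(s)[0] for s in sentences]

results = translator.translate_batch(batch_tokens, beam_size=4)

for result in results:
    # The first hypothesis is the highest-scoring translation for each sentence
    print(tokenizer.detokenize(result.hypotheses[0]))
```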
|
|
|
## Limitations and bias |
|
At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. |
|
However, we are well aware that our models may be biased. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. |
|
|
|
## Training |
|
|
|
### Training data |
|
|
|
The Catalan-Galician data is a combination of publicly available bilingual datasets collected from [Opus](https://opus.nlpl.eu/) and synthetic data created by translating the Spanish side of the Projecte Aina Spanish-Catalan corpus using the Spanish-Galician translator of

[Proxecto Nós](https://huggingface.co/proxectonos/Nos_MT-OpenNMT-es-gl).
|
|
|
|
|
|
|
### Training procedure |
|
|
|
#### Data preparation
|
|
|
All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.

This is done using sentence embeddings calculated with [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).

The filtered datasets are then concatenated to form the final training corpus. Before training, punctuation is normalized using a

modified version of the join-single-file.py script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py).
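
The exact preprocessing scripts are not distributed with this model. Purely as an illustration, the minimal sketch below shows how a LaBSE-based similarity filter with the 0.75 threshold described above could be implemented with the `sentence-transformers` package; the function name and data handling are assumptions, not the project's actual pipeline.

```python
# Illustrative sketch of LaBSE-based similarity filtering (not the project's actual script).
# Keeps only sentence pairs whose embeddings have cosine similarity >= 0.75.
import numpy as np
from sentence_transformers import SentenceTransformer

labse = SentenceTransformer("sentence-transformers/LaBSE")

def filter_pairs(gl_sentences, ca_sentences, threshold=0.75):
    # With normalized embeddings, the dot product equals the cosine similarity
    gl_emb = labse.encode(gl_sentences, normalize_embeddings=True)
    ca_emb = labse.encode(ca_sentences, normalize_embeddings=True)
    similarities = np.sum(gl_emb * ca_emb, axis=1)
    return [
        (gl, ca)
        for gl, ca, sim in zip(gl_sentences, ca_sentences, similarities)
        if sim >= threshold
    ]
```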
|
|
|
|
|
#### Tokenization |
|
|
|
All data is tokenized using SentencePiece, with a 50,000-token SentencePiece model learned from the combination of all filtered training data.

This SentencePiece model (spm.model) is included in the repository.
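
As an illustration of how such a vocabulary can be built, the sketch below uses the `sentencepiece` Python API with the 50,000-piece vocabulary size mentioned above; the input file name and the remaining options are assumptions, not the exact settings used for the released `spm.model`.

```python
# Illustrative SentencePiece training sketch; "filtered_corpus.txt" is a
# placeholder for the concatenated filtered training data.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="filtered_corpus.txt",  # one sentence per line, both languages combined
    model_prefix="spm",           # writes spm.model and spm.vocab
    vocab_size=50000,             # 50,000-token vocabulary as described above
)
```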
|
|
|
#### Hyperparameters |
|
|
|
The model is based on the Transformer-XLarge proposed by [Subramanian et al.](https://aclanthology.org/2021.wmt-1.18.pdf) |
|
The following hyperparameters were set on the Fairseq toolkit: |
|
|
|
| Hyperparameter                     | Value                             |
|------------------------------------|-----------------------------------|
| Architecture                       | transformer_vaswani_wmt_en_de_big |
| Embedding size                     | 1024                              |
| Feedforward size                   | 4096                              |
| Number of heads                    | 16                                |
| Encoder layers                     | 24                                |
| Decoder layers                     | 6                                 |
| Normalize before attention         | True                              |
| --share-decoder-input-output-embed | True                              |
| --share-all-embeddings             | True                              |
| Effective batch size               | 48,000                            |
| Optimizer                          | adam                              |
| Adam betas                         | (0.9, 0.980)                      |
| Clip norm                          | 0.0                               |
| Learning rate                      | 5e-4                              |
| Learning rate scheduler            | inverse sqrt                      |
| Warmup updates                     | 8000                              |
| Dropout                            | 0.1                               |
| Label smoothing                    | 0.1                               |
|
|
|
The model was trained for 24,000 updates on the parallel data collected from the web.

This data was then concatenated with the synthetic parallel data and training continued for a total of 34,000 updates.

Weights were saved every 1,000 updates and the reported results are the average of the last 4 checkpoints.
|
|
|
## Evaluation |
|
|
|
### Variables and metrics
|
|
|
We use the BLEU score for evaluation on the [Flores-200](https://github.com/facebookresearch/flores/tree/main/flores200)

and [NTREX](https://github.com/MicrosoftTranslator/NTREX) test sets.
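
As a reference for how the metric can be reproduced, BLEU scores of this kind can be computed with the `sacrebleu` package; the sketch below is a generic example with placeholder file names, not the exact evaluation setup used here.

```python
# Generic BLEU computation with sacreBLEU; file names are placeholders.
import sacrebleu

with open("hypotheses.ca", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("references.ca", encoding="utf-8") as f:
    references = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU: {bleu.score:.1f}")
```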
|
|
|
### Evaluation results |
|
|
|
Below are the evaluation results for machine translation from Galician to Catalan, compared to [Google Translate](https://translate.google.com/),

[M2M100 1.2B](https://huggingface.co/facebook/m2m100_1.2B), [NLLB-200 3.3B](https://huggingface.co/facebook/nllb-200-3.3B) and

[NLLB-200's distilled 1.3B variant](https://huggingface.co/facebook/nllb-200-distilled-1.3B):
|
|
|
| Test set           | Google Translate | M2M100 1.2B | NLLB-200 1.3B | NLLB-200 3.3B | aina-translator-gl-ca |
|--------------------|------------------|-------------|---------------|---------------|-----------------------|
| Flores 101 devtest | **36.4**         | 32.6        | 22.3          | 34.3          | 32.4                  |
| NTREX              | **34.7**         | 34.0        | 20.4          | 34.2          | 33.7                  |
| Average            | **35.6**         | 33.3        | 21.4          | 34.3          | 33.1                  |
|
|
|
|
|
## Additional information |
|
|
|
### Author |
|
The Language Technologies Unit at the Barcelona Supercomputing Center.
|
|
|
### Contact |
|
For further information, please send an email to <[email protected]>. |
|
|
|
### Copyright |
|
Copyright (c) 2023 by the Language Technologies Unit, Barcelona Supercomputing Center.
|
|
|
### License |
|
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
|
|
|
### Funding |
|
This work is funded by the Ministerio para la Transformación Digital y de la Función Pública and the European Union through NextGenerationEU,

within the framework of the [ILENIA project](https://proyectoilenia.es/)

with reference 2022/TL22/00215337.
|
|
|
### Disclaimer |
|
|
|
<details> |
|
<summary>Click to expand</summary> |
|
|
|
The model published in this repository is intended for a generalist purpose and is available to third parties under a permissive Apache License, Version 2.0. |
|
|
|
Be aware that the model may have biases and/or any other undesirable distortions. |
|
|
|
When third parties deploy or provide systems and/or services to other parties using this model (or any system based on it) |
|
or become users of the model, they should note that it is their responsibility to mitigate the risks arising from its use and, |
|
in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. |
|
|
|
In no event shall the owner and creator of the model (Barcelona Supercomputing Center) |
|
be liable for any results arising from the use made by third parties. |
|
|
|
</details> |