license: apache-2.0
language:
- ca
- es
metrics:
- bleu
library_name: fairseq
Aina Project's Catalan-Spanish machine translation model
Model description
This model was trained from scratch using the Fairseq toolkit on a combination of Catalan-Spanish datasets, totalling around 92 million sentences before cleaning and filtering. Additionally, the model is evaluated on several public datasets covering five different domains (general, administrative, technology, biomedical, and news).
Intended uses and limitations
You can use this model for machine translation from Catalan to Spanish.
How to use
Usage
Required libraries:
pip install ctranslate2 pyonmttok
Translate a sentence using Python:
import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download

# Download the model files from the Hugging Face Hub
model_dir = snapshot_download(repo_id="projecte-aina/aina-translator-ca-es", revision="main")

# Tokenize the input with the bundled SentencePiece model
tokenizer = pyonmttok.Tokenizer(mode="none", sp_model_path=model_dir + "/spm.model")
tokenized = tokenizer.tokenize("Benvingut al projecte Aina!")  # returns (tokens, features)

# Translate and detokenize
translator = ctranslate2.Translator(model_dir)
translated = translator.translate_batch([tokenized[0]])
print(tokenizer.detokenize(translated[0][0]["tokens"]))
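The same objects can also translate several sentences at once, since translate_batch accepts a list of token lists. A short sketch reusing the tokenizer and translator from above (the example sentences are our own):

# Translate several sentences in one batch
sentences = ["Bon dia!", "Com va tot?"]
# tokenize() returns (tokens, features); keep only the token lists
batch = [tokenizer.tokenize(s)[0] for s in sentences]
results = translator.translate_batch(batch)
for result in results:
    print(tokenizer.detokenize(result[0]["tokens"]))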
Limitations and bias
At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased. We intend to conduct research in these areas in the future, and this model card will be updated if that work is completed.
Training
Training data
The model was trained on a combination of several datasets, totalling around 92 million parallel sentences before filtering and cleaning. The training data includes corpora collected from OPUS, internally created parallel datasets, and corpora from other sources.
Training procedure
Data preparation
All datasets are concatenated and filtered using the mBERT Gencata parallel filter, then cleaned using the clean-corpus-n.pl script from Moses, keeping sentences of between 5 and 150 words.
Before training, punctuation is normalized using a modified version of the join-single-file.py script from Softcatalà.
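For illustration, the word-count filter applied in the cleaning step could look like the following Python sketch (file names are hypothetical, and the real Moses script also applies additional checks, such as a length-ratio limit):

# Minimal sketch of the 5-150 word filter (file names are hypothetical)
def keep(ca_line: str, es_line: str) -> bool:
    return all(5 <= len(line.split()) <= 150 for line in (ca_line, es_line))

with open("corpus.ca") as ca_f, open("corpus.es") as es_f, \
     open("clean.ca", "w") as ca_out, open("clean.es", "w") as es_out:
    for ca_line, es_line in zip(ca_f, es_f):
        if keep(ca_line, es_line):
            ca_out.write(ca_line)
            es_out.write(es_line)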
Tokenization
All data is tokenized using SentencePiece, with a 50k-token SentencePiece model learned from the combination of all filtered training data. This model is included in the repository.
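As a rough illustration, a SentencePiece model with a 50k vocabulary can be trained as follows (the input file name is hypothetical, and the exact training options used here are not documented):

import sentencepiece as spm

# Hypothetical input: all filtered Catalan and Spanish training sentences, one per line
spm.SentencePieceTrainer.train(
    input="all_filtered_data.txt",
    model_prefix="spm",
    vocab_size=50000,
)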
Hyperparameters
The model is based on the Transformer-XLarge proposed by Subramanian et al. The following hyperparameters were set in the Fairseq toolkit:
Hyperparameter | Value |
---|---|
Architecture | transformer_vaswani_wmt_en_de_big |
Embedding size | 1024 |
Feedforward size | 4096 |
Number of heads | 16 |
Encoder layers | 24 |
Decoder layers | 6 |
Normalize before attention | True |
--share-decoder-input-output-embed | True |
--share-all-embeddings | True |
Effective batch size | 96,000 |
Optimizer | adam |
Adam betas | (0.9, 0.980) |
Clip norm | 0.0 |
Learning rate | 1e-3 |
LR scheduler | inverse sqrt |
Warmup updates | 4000 |
Dropout | 0.1 |
Label smoothing | 0.1 |
The model was trained using shards of 10 million sentences, for a total of 13,000 updates. Weights were saved every 1,000 updates, and the reported results are the average of the last 6 checkpoints.
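Fairseq ships an average_checkpoints.py script for this; a minimal PyTorch sketch of the same idea (checkpoint file names are hypothetical) is:

import torch

# Hypothetical names for the last 6 checkpoints (saved every 1,000 updates)
paths = [f"checkpoint_{step}.pt" for step in range(8000, 14000, 1000)]

avg_state = None
for path in paths:
    state = torch.load(path, map_location="cpu")["model"]  # Fairseq stores weights under "model"
    if avg_state is None:
        avg_state = {k: v.clone().float() for k, v in state.items()}
    else:
        for k, v in state.items():
            avg_state[k] += v.float()

for k in avg_state:  # divide by the number of checkpoints
    avg_state[k] /= len(paths)

torch.save({"model": avg_state}, "checkpoint_avg.pt")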
Evaluation
Variables and metrics
We use the BLEU score to evaluate the model on the following test sets: Flores-101, TaCon, United Nations, Cybersecurity, the WMT19 biomedical test set, and the WMT13 news test set.
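BLEU can be computed, for instance, with the sacrebleu package. This is only an illustrative sketch: the card does not state which BLEU implementation was used, and the example strings are hypothetical.

import sacrebleu

# Hypothetical system outputs and reference translations
hyps = ["¡Bienvenido al proyecto Aina!"]
refs = [["¡Bienvenido al proyecto Aina!"]]  # one reference stream, aligned with hyps

bleu = sacrebleu.corpus_bleu(hyps, refs)
print(f"BLEU = {bleu.score:.1f}")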
Evaluation results
Below are the evaluation results for machine translation from Catalan to Spanish, compared to Softcatalà and Google Translate:
Test set | Softcatalà | Google Translate | aina-translator-ca-es |
---|---|---|---|
Spanish Constitution | 70.7 | 77.1 | 83.3 |
United Nations | 78.1 | 84.3 | 87.3 |
Flores-101 dev | 23.5 | 24.0 | 24.2 |
Flores-101 devtest | 24.1 | 24.2 | 24.7 |
Cybersecurity | 67.3 | 76.9 | 78.2 |
WMT19 biomedical | 60.4 | 62.7 | 64.1 |
WMT13 news | 22.5 | 23.1 | 23.7 |
Average | 53.4 | 53.2 | 55.1 |
Additional information
Author
The Language Technologies Unit from Barcelona Supercomputing Center.
Contact
For further information, please send an email to [email protected].
Copyright
Copyright (c) 2023 by Language Technologies Unit, Barcelona Supercomputing Center.
License
Apache License, Version 2.0
Funding
This work has been promoted and financed by the Generalitat de Catalunya through the Aina project.
Disclaimer
The model published in this repository is intended for a generalist purpose and is available to third parties under a permissive Apache License, Version 2.0.
Be aware that the model may have biases and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using this model (or any system based on it) or become users of the model, they should note that it is their responsibility to mitigate the risks arising from its use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the model (Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties.