We employ two training stages using a multilingual T5-small model. We chose this model because it can handle different vocabularies and prefixes, and it is pretrained on several tasks and languages (French, Romanian, English, German).
### Training stage 1 (learning Spanish)
In training stage 1 we first introduce Spanish to the model. The goal is to learn a new, data-rich language (Spanish) without losing the previously acquired knowledge. We use the English-Spanish [Anki](https://www.manythings.org/anki/) dataset, which consists of 118,964 text pairs. We train the model until convergence, prepending the prefix "Translate Spanish to English: " to every input.
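As a sketch, the stage-1 preprocessing amounts to prepending the task prefix to each Spanish input and pairing it with its English target. The helper and the `(english, spanish)` pair format below are illustrative assumptions, not the actual training code:

```python
# Assumed pair format: the Anki dataset loaded as (english, spanish) tuples.
PREFIX = "Translate Spanish to English: "

def make_examples(pairs):
    """Turn (english, spanish) pairs into prefixed input/target strings for T5."""
    return [
        {"input": PREFIX + spanish,  # T5 conditions on a task prefix
         "target": english}
        for english, spanish in pairs
    ]

pairs = [("Hello.", "Hola."), ("Thank you.", "Gracias.")]
examples = make_examples(pairs)
print(examples[0]["input"])  # Translate Spanish to English: Hola.
```

The resulting input/target strings can then be tokenized and fed to any seq2seq fine-tuning loop.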
### Training stage 2 (learning Nahuatl)
We use the pretrained Spanish-English model to learn Spanish-Nahuatl. Since the number of Nahuatl pairs is limited, we also add 20,000 samples from the English-Spanish Anki dataset to the training set. This two-task training avoids overfitting and makes the model more robust.
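The two-task mix in stage 2 can be sketched as follows. The Nahuatl prefix wording, function name, and `(source, target)` pair format are assumptions for illustration; the card does not show the actual preprocessing code:

```python
import random

PREFIX_EN = "Translate Spanish to English: "
PREFIX_NAH = "Translate Spanish to Nahuatl: "  # assumed wording, not confirmed by the card

def build_stage2_mix(nahuatl_pairs, anki_pairs, n_anki=20_000, seed=0):
    """Mix all Spanish-Nahuatl pairs with a fixed-size sample of Spanish-English Anki pairs."""
    rng = random.Random(seed)
    anki_subset = rng.sample(anki_pairs, min(n_anki, len(anki_pairs)))
    mixed = [{"input": PREFIX_NAH + es, "target": nah} for es, nah in nahuatl_pairs]
    mixed += [{"input": PREFIX_EN + es, "target": en} for es, en in anki_subset]
    rng.shuffle(mixed)  # interleave the two translation tasks within each epoch
    return mixed

# Toy data in (spanish, target) order, purely for illustration.
nahuatl = [("agua", "atl"), ("casa", "calli")]
anki = [("Hola.", "Hello."), ("Gracias.", "Thank you."), ("Adiós.", "Goodbye.")]
mix = build_stage2_mix(nahuatl, anki, n_anki=2)
print(len(mix))  # 4
```

Sampling a capped Anki subset keeps the high-resource task from drowning out the scarce Nahuatl pairs, while the shuffle interleaves both tasks during training.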