rockdrigoma committed
Commit ad71c0f · 1 Parent(s): 5151527

Update app.py

Files changed (1)
  1. app.py +2 -0
app.py CHANGED
@@ -2,6 +2,8 @@ import os
 import gradio as gr
 from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
 
+os.environ["TOKENIZERS_PARALLELISM"] = "false"
+
 article='''
 # Spanish Nahuatl Automatic Translation
 Nahuatl is the most widely spoken indigenous language in Mexico. However, training a neural network for the neural machine translation task is hard due to the lack of structured data. The most popular datasets such as the Axolot dataset and the bible-corpus only consist of ~16,000 and ~7,000 samples respectively. Moreover, there are multiple variants of Nahuatl, which makes this task even more difficult. For example, a single word from the Axolot dataset can be found written in more than three different ways. Therefore, in this work, we leverage the T5 text-to-text prefix training strategy to compensate for the lack of data. We first teach the multilingual model Spanish using English, then we make the transition to Spanish-Nahuatl. The resulting model successfully translates short sentences from Spanish to Nahuatl. We report Chrf and BLEU results.
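
For context: the added `os.environ["TOKENIZERS_PARALLELISM"] = "false"` line silences the fork-safety warning that the Hugging Face `tokenizers` library emits when a process forks after a tokenizer has been used, which is common when serving a model behind Gradio. Below is a minimal sketch of how an app like this might load the checkpoint and translate with a T5-style task prefix. The model id `hackathon-pln-es/t5-small-spanish-nahuatl` and the exact prefix string are assumptions, not shown in this diff.

```python
import os

import gradio as gr
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Avoid the tokenizers fork-safety warning, as in the commit above.
os.environ["TOKENIZERS_PARALLELISM"] = "false"

# Assumed checkpoint; the diff does not show which model app.py loads.
MODEL_ID = "hackathon-pln-es/t5-small-spanish-nahuatl"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)

def translate(text: str) -> str:
    # T5 text-to-text style: prepend a task prefix (assumed wording),
    # then generate and decode the target-language tokens.
    inputs = tokenizer("translate Spanish to Nahuatl: " + text,
                       return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

demo = gr.Interface(fn=translate, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()
```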