entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | semenov-bojar-2022-automated | Automated Evaluation Metric for Terminology Consistency in {MT} | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.41/ | Semenov, Kirill and Bojar, Ond{\v{r}}ej | Proceedings of the Seventh Conference on Machine Translation (WMT) | 450--457 | The most widely used metrics for machine translation tackle sentence-level evaluation. However, at least for professional domains such as legal texts, it is crucial to measure the consistency of the translation of the terms throughout the whole text. This paper introduces an automated metric for the term consistency evaluation in machine translation (MT). To demonstrate the metric's performance, we used the Czech-to-English translated texts from the ELITR 2021 agreement corpus and the outputs of the MT systems that took part in WMT21 News Task. We show different modes of our evaluation algorithm and try to interpret the differences in the ranking of the translation systems based on sentence-level metrics and our approach. We also demonstrate that the proposed metric scores significantly differ from the widespread automated metric scores, and correlate with the human assessment. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,091 |
inproceedings | weller-di-marco-fraser-2022-test | Test Suite Evaluation: Morphological Challenges and Pronoun Translation | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.42/ | Weller-Di Marco, Marion and Fraser, Alexander | Proceedings of the Seventh Conference on Machine Translation (WMT) | 458--468 | This paper summarizes the results of our test suite evaluation with a main focus on morphology for the language pairs English to/from German. We look at the translation of morphologically complex words (DE{--}EN), and evaluate whether English noun phrases are translated as compounds vs. phrases into German. Furthermore, we investigate the preservation of morphological features (gender in EN{--}DE pronoun translation and number in morpho-syntactically complex structures for DE{--}EN). Our results indicate that systems are able to interpret linguistic structures to obtain relevant information, but also that translation becomes more challenging with increasing complexity, as seen, for example, when translating words with negation or non-concatenative properties, and for the more complex cases of the pronoun translation task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,092 |
inproceedings | alves-etal-2022-robust | Robust {MT} Evaluation with Sentence-level Multilingual Augmentation | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.43/ | Alves, Duarte and Rei, Ricardo and Farinha, Ana C and C. de Souza, Jos{\'e} G. and Martins, Andr{\'e} F. T. | Proceedings of the Seventh Conference on Machine Translation (WMT) | 469--478 | Automatic translations with critical errors may lead to misinterpretations and pose several risks for the user. As such, it is important that Machine Translation (MT) Evaluation systems are robust to these errors in order to increase the reliability and safety of Machine Translation systems. Here we introduce SMAUG, a novel Sentence-level Multilingual AUGmentation approach for generating translations with critical errors, and apply this approach to create a test set to evaluate the robustness of MT metrics to these errors. We show that current State-of-the-Art metrics are improving their capability to distinguish translations with and without critical errors and to penalize the former accordingly. We also show that metrics tend to struggle with errors related to named entities and numbers and that there is a high variance in the robustness of current methods to translations with critical errors. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,093 |
inproceedings | amrhein-etal-2022-aces | {ACES}: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.44/ | Amrhein, Chantal and Moghe, Nikita and Guillou, Liane | Proceedings of the Seventh Conference on Machine Translation (WMT) | 479--513 | As machine translation (MT) metrics improve their correlation with human judgement every year, it is crucial to understand the limitations of these metrics at the segment level. Specifically, it is important to investigate metric behaviour when facing accuracy errors in MT because these can have dangerous consequences in certain contexts (e.g., legal, medical). We curate ACES, a translation accuracy challenge set, consisting of 68 phenomena ranging from simple perturbations at the word/character level to more complex errors based on discourse and real-world knowledge. We use ACES to evaluate a wide range of MT metrics including the submissions to the WMT 2022 metrics shared task and perform several analyses leading to general recommendations for metric developers. We recommend: a) combining metrics with different strengths, b) developing metrics that give more weight to the source and less to surface-level overlap with the reference and c) explicitly modelling additional language-specific information beyond what is available via multilingual embeddings. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,094 |
inproceedings | avramidis-macketanz-2022-linguistically | Linguistically Motivated Evaluation of Machine Translation Metrics Based on a Challenge Set | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.45/ | Avramidis, Eleftherios and Macketanz, Vivien | Proceedings of the Seventh Conference on Machine Translation (WMT) | 514--529 | We employ a linguistically motivated challenge set in order to evaluate the state-of-the-art machine translation metrics submitted to the Metrics Shared Task of the 7th Conference for Machine Translation. The challenge set includes about 20,000 items extracted from 145 MT systems for two language directions (German-English, English-German), covering more than 100 linguistically-motivated phenomena organized in 14 categories. The best performing metrics are YiSi-1, BERTScore and COMET-22 for German-English, and UniTE, UniTE-ref, XL-DA and xxl-DA19 for English-German. Metrics in both directions are performing worst when it comes to named-entities {\&} terminology and particularly measuring units. Particularly in German-English they are weak at detecting issues at punctuation, polar questions, relative clauses, dates and idioms. In English-German, they perform worst at present progressive of transitive verbs, future II progressive of intransitive verbs, simple present perfect of ditransitive verbs and focus particles. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,095 |
inproceedings | chen-etal-2022-exploring | Exploring Robustness of Machine Translation Metrics: A Study of Twenty-Two Automatic Metrics in the {WMT}22 Metric Task | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.46/ | Chen, Xiaoyu and Wei, Daimeng and Shang, Hengchao and Li, Zongyao and Wu, Zhanglin and Yu, Zhengzhe and Zhu, Ting and Zhu, Mengli and Xie, Ning and Lei, Lizhi and Tao, Shimin and Yang, Hao and Qin, Ying | Proceedings of the Seventh Conference on Machine Translation (WMT) | 530--540 | Contextual word embeddings extracted from pre-trained models have become the basis for many downstream NLP tasks, including machine translation automatic evaluations. Metrics that leverage embeddings claim better capture of synonyms and changes in word orders, and thus better correlation with human ratings than surface-form matching metrics (e.g. BLEU). However, few studies have been done to examine robustness of these metrics. This report uses a challenge set to uncover the brittleness of reference-based and reference-free metrics. Our challenge set aims at examining metrics' capability to correlate synonyms in different areas and to discern catastrophic errors at both word- and sentence-levels. The results show that although embedding-based metrics perform relatively well on discerning sentence-level negation/affirmation errors, their performances on relating synonyms are poor. In addition, we find that some metrics are susceptible to text styles so their generalizability is compromised. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,096 |
inproceedings | kocmi-etal-2022-ms | {MS}-{COMET}: More and Better Human Judgements Improve Metric Performance | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.47/ | Kocmi, Tom and Matsushita, Hitokazu and Federmann, Christian | Proceedings of the Seventh Conference on Machine Translation (WMT) | 541--548 | We develop two new metrics that build on top of the COMET architecture. The main contribution is collecting a ten-times larger corpus of human judgements than COMET and investigating how to filter out problematic human judgements. We propose filtering human judgements where human reference is statistically worse than machine translation. Furthermore, we average scores of all equal segments evaluated multiple times. The results comparing automatic metrics on source-based DA and MQM-style human judgement show state-of-the-art performance on a system-level pair-wise system ranking. We release both of our metrics for public use. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,097 |
inproceedings | liu-etal-2022-partial | Partial Could Be Better than Whole. {HW}-{TSC} 2022 Submission for the Metrics Shared Task | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.48/ | Liu, Yilun and Qiao, Xiaosong and Wu, Zhanglin and Chang, Su and Zhang, Min and Zhao, Yanqing and Peng, Song and Tao, Shimin and Yang, Hao and Qin, Ying and Guo, Jiaxin and Wang, Minghan and Li, Yinglu and Li, Peng and Zhao, Xiaofeng | Proceedings of the Seventh Conference on Machine Translation (WMT) | 549--557 | In this paper, we present the contribution of HW-TSC to WMT 2022 Metrics Shared Task. We propose one reference-based metric, HWTSC-EE-BERTScore*, and four reference-free metrics including HWTSC-Teacher-Sim, HWTSC-TLM, KG-BERTScore and CROSS-QE. Among these metrics, HWTSC-Teacher-Sim and CROSS-QE are supervised, whereas HWTSC-EE-BERTScore*, HWTSC-TLM and KG-BERTScore are unsupervised. We use these metrics in the segment-level and system-level tracks. Overall, our systems achieve strong results for all language pairs on previous test sets and a new state-of-the-art in many sys-level case sets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,098 |
inproceedings | mukherjee-shrivastava-2022-unsupervised | Unsupervised Embedding-based Metric for {MT} Evaluation with Improved Human Correlation | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.49/ | Mukherjee, Ananya and Shrivastava, Manish | Proceedings of the Seventh Conference on Machine Translation (WMT) | 558--563 | In this paper, we describe our submission to the WMT22 metrics shared task. Our metric focuses on computing contextual and syntactic equivalences along with lexical, morphological, and semantic similarity. The intent is to capture the fluency and context of the MT outputs along with their adequacy. Fluency is captured using syntactic similarity and context is captured using sentence similarity leveraging sentence embeddings. The final sentence translation score is the weighted combination of three similarity scores: a) Syntactic Similarity b) Lexical, Morphological and Semantic Similarity, and c) Contextual Similarity. This paper outlines two improved versions of MEE i.e., MEE2 and MEE4. Additionally, we report our experiments on language pairs of en-de, en-ru and zh-en from the WMT17-19 test sets and further depict the correlation with human assessments. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,099 |
inproceedings | mukherjee-shrivastava-2022-reuse | {REUSE}: {RE}ference-free {U}n{S}upervised Quality Estimation Metric | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.50/ | Mukherjee, Ananya and Shrivastava, Manish | Proceedings of the Seventh Conference on Machine Translation (WMT) | 564--568 | This paper describes our submission to the WMT2022 shared metrics task. Our unsupervised metric estimates the translation quality at chunk-level and sentence-level. Source and target sentence chunks are retrieved by using a multi-lingual chunker. The chunk-level similarity is computed by leveraging BERT contextual word embeddings and sentence similarity scores are calculated by leveraging sentence embeddings of Language-Agnostic BERT models. The final quality estimation score is obtained by mean pooling the chunk-level and sentence-level similarity scores. This paper outlines our experiments and also reports the correlation with human judgements for en-de, en-ru and zh-en language pairs of WMT17, WMT18 and WMT19 test sets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,100 |
inproceedings | perrella-etal-2022-matese | {M}a{TES}e: Machine Translation Evaluation as a Sequence Tagging Problem | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.51/ | Perrella, Stefano and Proietti, Lorenzo and Scir{\`e}, Alessandro and Campolungo, Niccol{\`o} and Navigli, Roberto | Proceedings of the Seventh Conference on Machine Translation (WMT) | 569--577 | Starting from last year, WMT human evaluation has been performed within the Multidimensional Quality Metrics (MQM) framework, where human annotators are asked to identify error spans in translations, alongside an error category and a severity. In this paper, we describe our submission to the WMT 2022 Metrics Shared Task, where we propose using the same paradigm for automatic evaluation: we present the MaTESe metrics, which reframe machine translation evaluation as a sequence tagging problem. Our submission also includes a reference-free metric, denominated MaTESe-QE. Despite the paucity of the openly available MQM data, our metrics obtain promising results, showing high levels of correlation with human judgements, while also enabling an evaluation that is interpretable. Moreover, MaTESe-QE can also be employed in settings where it is infeasible to curate reference translations manually. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,101 |
inproceedings | rei-etal-2022-comet | {COMET}-22: Unbabel-{IST} 2022 Submission for the Metrics Shared Task | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.52/ | Rei, Ricardo and C. de Souza, Jos{\'e} G. and Alves, Duarte and Zerva, Chrysoula and Farinha, Ana C and Glushkova, Taisiya and Lavie, Alon and Coheur, Luisa and Martins, Andr{\'e} F. T. | Proceedings of the Seventh Conference on Machine Translation (WMT) | 578--585 | In this paper, we present the joint contribution of Unbabel and IST to the WMT 2022 Metrics Shared Task. Our primary submission {--} dubbed COMET-22 {--} is an ensemble between a COMET estimator model trained with Direct Assessments and a newly proposed multitask model trained to predict sentence-level scores along with OK/BAD word-level tags derived from Multidimensional Quality Metrics error annotations. These models are ensembled together using a hyper-parameter search that weights different features extracted from both evaluation models and combines them into a single score. For the reference-free evaluation, we present CometKiwi. Similarly to our primary submission, CometKiwi is an ensemble between two models: a traditional predictor-estimator model inspired by OpenKiwi, and our new multitask model trained on Multidimensional Quality Metrics, which can also be used without references. Both our submissions show improved correlations compared to state-of-the-art metrics from last year as well as increased robustness to critical errors. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,102 |
inproceedings | wan-etal-2022-alibaba | {A}libaba-Translate {C}hina's Submission for {WMT}2022 Metrics Shared Task | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.53/ | Wan, Yu and Bao, Keqin and Liu, Dayiheng and Yang, Baosong and Wong, Derek F. and Chao, Lidia S. and Lei, Wenqiang and Xie, Jun | Proceedings of the Seventh Conference on Machine Translation (WMT) | 586--592 | In this report, we present our submission to the WMT 2022 Metrics Shared Task. We build our system based on the core idea of UNITE (Unified Translation Evaluation), which unifies source-only, reference-only, and source-reference-combined evaluation scenarios into one single model. Specifically, during the model pre-training phase, we first apply the pseudo-labeled data examples to continuously pre-train UNITE. Notably, to reduce the gap between pre-training and fine-tuning, we use data cropping and a ranking-based score normalization strategy. During the fine-tuning phase, we use both Direct Assessment (DA) and Multidimensional Quality Metrics (MQM) data from past years' WMT competitions. Specifically, we collect the results from models with different pre-trained language model backbones, and use different ensembling strategies for involved translation directions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,103 |
inproceedings | agrawal-etal-2022-quality | Quality Estimation via Backtranslation at the {WMT} 2022 Quality Estimation Task | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.54/ | Agrawal, Sweta and Mehandru, Nikita and Salehi, Niloufar and Carpuat, Marine | Proceedings of the Seventh Conference on Machine Translation (WMT) | 593--596 | This paper describes our submission to the WMT 2022 Quality Estimation shared task (Task 1: sentence-level quality prediction). We follow a simple and intuitive approach, which consists of estimating MT quality by automatically back-translating hypotheses into the source language using a multilingual MT system. We then compare the resulting backtranslation with the original source using standard MT evaluation metrics. We find that even the best-performing backtranslation-based scores perform substantially worse than supervised QE systems, including the organizers' baseline. However, combining backtranslation-based metrics with off-the-shelf QE scorers improves correlation with human judgments, suggesting that they can indeed complement a supervised QE system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,104 |
inproceedings | bao-etal-2022-alibaba | {A}libaba-Translate {C}hina's Submission for {WMT} 2022 Quality Estimation Shared Task | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.55/ | Bao, Keqin and Wan, Yu and Liu, Dayiheng and Yang, Baosong and Lei, Wenqiang and He, Xiangnan and Wong, Derek F. and Xie, Jun | Proceedings of the Seventh Conference on Machine Translation (WMT) | 597--605 | In this paper, we present our submission to the sentence-level MQM benchmark at the Quality Estimation Shared Task, named UniTE (Unified Translation Evaluation). Specifically, our systems employ the framework of UniTE, which combines three types of input formats during training with a pre-trained language model. First, we apply the pseudo-labeled data examples for the continuous pre-training phase. Notably, to reduce the gap between pre-training and fine-tuning, we use data cropping and a ranking-based score normalization strategy. For the fine-tuning phase, we use both Direct Assessment (DA) and Multidimensional Quality Metrics (MQM) data from past years' WMT competitions. Finally, we collect the source-only evaluation results, and ensemble the predictions generated by two UniTE models, whose backbones are XLM-R and infoXLM, respectively. Results show that our models reach 1st overall ranking in the Multilingual and English-Russian settings, and 2nd overall ranking in English-German and Chinese-English settings, showing relatively strong performances in this year's quality estimation competition. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,105 |
inproceedings | eo-etal-2022-ku | {KU} {X} Upstage's Submission for the {WMT}22 Quality Estimation: Critical Error Detection Shared Task | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.56/ | Eo, Sugyeong and Park, Chanjun and Moon, Hyeonseok and Seo, Jaehyung and Lim, Heuiseok | Proceedings of the Seventh Conference on Machine Translation (WMT) | 606--614 | This paper presents KU X Upstage's submission to the quality estimation (QE): critical error detection (CED) shared task in WMT22. We leverage the XLM-RoBERTa large model without utilizing any additional parallel data. To the best of our knowledge, we apply prompt-based fine-tuning to the QE task for the first time. To maximize the model's language understanding capability, we reformulate the CED task to be similar to the masked language model objective, which is a pre-training strategy of the language model. We design intuitive templates and label words, and include auxiliary descriptions such as demonstration or Google Translate results in the input sequence. We further improve the performance through the template ensemble, and as a result of the shared task, our approach achieves the best performance for both English-German and Portuguese-English language pairs in an unconstrained setting. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,106 |
inproceedings | geng-etal-2022-njunlps | {NJUNLP}'s Participation for the {WMT}2022 Quality Estimation Shared Task | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.57/ | Geng, Xiang and Zhang, Yu and Huang, Shujian and Tao, Shimin and Yang, Hao and Chen, Jiajun | Proceedings of the Seventh Conference on Machine Translation (WMT) | 615--620 | This paper presents the submissions of the NJUNLP team in the WMT 2022 Quality Estimation shared task 1, where the goal is to predict the sentence-level and word-level quality for target machine translations. Our system explores pseudo data and multi-task learning. We propose several novel methods to generate pseudo data for different annotations using the conditional masked language model and the neural machine translation model. The proposed methods control the decoding process to generate more real pseudo translations. We pre-train the XLMR-large model with pseudo data and then fine-tune this model with real data, in both cases in a multi-task learning fashion. We jointly learn sentence-level scores (with regression and rank tasks) and word-level tags (with a sequence tagging task). Our system obtains competitive results on different language pairs and ranks first place on both sentence- and word-level sub-tasks of the English-German language pair. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,107 |
inproceedings | huang-etal-2022-bjtu | {BJTU}-Toshiba's Submission to {WMT}22 Quality Estimation Shared Task | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.58/ | Huang, Hui and Di, Hui and Li, Chunyou and Wu, Hanming and Ouchi, Kazushige and Chen, Yufeng and Liu, Jian and Xu, Jinan | Proceedings of the Seventh Conference on Machine Translation (WMT) | 621--626 | This paper presents the BJTU-Toshiba joint submission for WMT 2022 quality estimation shared task. We only participate in Task 1 (quality prediction) of the shared task, focusing on the sentence-level MQM prediction. The techniques we experimented with include the integration of monolingual language models and the pre-finetuning of pre-trained representations. We tried two styles of pre-finetuning, namely Translation Language Modeling and Replaced Token Detection. We demonstrate the competitiveness of our system compared to the widely adopted XLM-RoBERTa baseline. Our system is also the top-ranking system on the Sentence-level MQM Prediction for the English-German language pair. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,108 |
inproceedings | lim-park-2022-papagos | Papago's Submission to the {WMT}22 Quality Estimation Shared Task | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.59/ | Lim, Seunghyun and Park, Jeonghyeok | Proceedings of the Seventh Conference on Machine Translation (WMT) | 627--633 | This paper describes Papago's submission to the WMT 2022 Quality Estimation shared task. We participate in Task 1: Quality Prediction for both sentence and word-level quality prediction tasks. Our system is a multilingual and multi-task model, whereby a single system can infer both sentence and word-level quality on multiple language pairs. Our system's architecture consists of a Pretrained Language Model (PLM) and task layers, and is jointly optimized for both sentence and word-level quality prediction tasks using a multilingual dataset. We propose novel auxiliary tasks for training and explore diverse sources of additional data to demonstrate further improvements on performance. Through an ablation study, we examine the effectiveness of proposed components and find optimal configurations to train our submission systems under each language pair and task settings. Finally, submission systems are trained and inferenced using a K-fold ensemble. Our systems greatly outperform the task organizer's baseline and achieve comparable performance against other participants' submissions in both sentence and word-level quality prediction tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,109 |
inproceedings | rei-etal-2022-cometkiwi | {C}omet{K}iwi: {IST}-Unbabel 2022 Submission for the Quality Estimation Shared Task | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.60/ | Rei, Ricardo and Treviso, Marcos and Guerreiro, Nuno M. and Zerva, Chrysoula and Farinha, Ana C and Maroti, Christine and C. de Souza, Jos{\'e} G. and Glushkova, Taisiya and Alves, Duarte and Coheur, Luisa and Lavie, Alon and Martins, Andr{\'e} F. T. | Proceedings of the Seventh Conference on Machine Translation (WMT) | 634--645 | We present the joint contribution of IST and Unbabel to the WMT 2022 Shared Task on Quality Estimation (QE). Our team participated in all three subtasks: (i) Sentence and Word-level Quality Prediction; (ii) Explainable QE; and (iii) Critical Error Detection. For all tasks we build on top of the COMET framework, connecting it with the predictor-estimator architecture of OpenKiwi, and equipping it with a word-level sequence tagger and an explanation extractor. Our results suggest that incorporating references during pretraining improves performance across several language pairs on downstream tasks, and that jointly training with sentence and word-level objectives yields a further boost. Furthermore, combining attention and gradient information proved to be the top strategy for extracting good explanations of sentence-level QE models. Overall, our submissions achieved the best results for all three tasks for almost all language pairs by a considerable margin. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,110 |
inproceedings | tao-etal-2022-crossqe | {C}ross{QE}: {HW}-{TSC} 2022 Submission for the Quality Estimation Shared Task | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.61/ | Tao, Shimin and Chang, Su and Miaomiao, Ma and Yang, Hao and Geng, Xiang and Huang, Shujian and Zhang, Min and Guo, Jiaxin and Wang, Minghan and Li, Yinglu | Proceedings of the Seventh Conference on Machine Translation (WMT) | 646--652 | Quality estimation (QE) investigates automatic methods for estimating the quality of machine translation results without reference translations. This paper presents Huawei Translation Services Center's (HW-TSC's) work called CrossQE in WMT 2022 QE shared tasks 1 and 2, namely sentence- and word-level quality prediction and explainable QE. CrossQE employs the framework of predictor-estimator for task 1, concretely with a pre-trained cross-lingual XLM-RoBERTa large as predictor and task-specific classifier or regressor as estimator. An extensive set of experimental results show that after adding a bottleneck adapter layer, mean teacher loss, masked language modeling task loss and MC dropout methods to CrossQE, the performance has improved to a certain extent. For task 2, CrossQE calculated the cosine similarity between each word feature in the target and each word feature in the source using the task 1 sentence-level QE system's predictor, and used the inverse value of the maximum similarity between each word in the target and the source as the word translation error risk value. Moreover, CrossQE has outstanding performance on the QE test sets of WMT 2022. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,111 |
inproceedings | zafeiridou-sofianopoulos-2022-welocalize | Welocalize-{ARC}/{NKUA}'s Submission to the {WMT} 2022 Quality Estimation Shared Task | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.62/ | Zafeiridou, Eirini and Sofianopoulos, Sokratis | Proceedings of the Seventh Conference on Machine Translation (WMT) | 653--660 | This paper presents our submission to the WMT 2022 quality estimation shared task and more specifically to the quality prediction sentence-level direct assessment (DA) subtask. We build a multilingual system based on the predictor{--}estimator architecture by using the XLM-RoBERTa transformer for feature extraction and a regression head on top of the final model to estimate the $z$-standardized DA labels. Furthermore, we use pretrained models to extract useful knowledge that reflects various criteria of quality assessment and demonstrates good correlation with human judgements. We optimize the performance of our model by incorporating this information as additional external features in the input data and by applying Monte Carlo dropout during both training and inference. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,112 |
inproceedings | bogoychev-etal-2022-edinburghs | {E}dinburgh's Submission to the {WMT} 2022 Efficiency Task | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.63/ | Bogoychev, Nikolay and Behnke, Maximiliana and Van Der Linde, Jelmer and Nail, Graeme and Heafield, Kenneth and Zhang, Biao and Kashyap, Sidharth | Proceedings of the Seventh Conference on Machine Translation (WMT) | 661--667 | We participated in all tracks of the WMT 2022 efficient machine translation task: single-core CPU, multi-core CPU, and GPU hardware with throughput and latency conditions. Our submissions explore several efficiency strategies: knowledge distillation, a simpler simple recurrent unit (SSRU) decoder with one or two layers, shortlisting, deep encoder, shallow decoder, pruning and bidirectional decoder. For the CPU track, we used quantized 8-bit models. For the GPU track, we used FP16 quantisation. We explored various pruning strategies and combinations of one or more of the above methods. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,113 |
inproceedings | helcl-2022-cuni | {CUNI} Non-Autoregressive System for the {WMT} 22 Efficient Translation Shared Task | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.64/ | Helcl, Jind{\v{r}}ich | Proceedings of the Seventh Conference on Machine Translation (WMT) | 668--670 | We present a non-autoregressive system submission to the WMT 22 Efficient Translation Shared Task. Our system was used by Helcl et al. (2022) in an attempt to provide a fair comparison between non-autoregressive and autoregressive models. This submission is an effort to establish solid baselines along with sound evaluation methodology, particularly in terms of measuring the decoding speed. The model itself is a 12-layer Transformer model trained with connectionist temporal classification on a dataset knowledge-distilled from a strong autoregressive teacher model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,114 |
inproceedings | qin-etal-2022-royalflush | The {R}oyal{F}lush System for the {WMT} 2022 Efficiency Task | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.65/ | Qin, Bo and Jia, Aixin and Wang, Qiang and Lu, Jianning and Pan, Shuqin and Wang, Haibo and Chen, Ming | Proceedings of the Seventh Conference on Machine Translation (WMT) | 671--676 | This paper describes the submission of the RoyalFlush neural machine translation system for the WMT 2022 translation efficiency task. Unlike the commonly used autoregressive translation system, we adopted a two-stage translation paradigm called Hybrid Regression Translation (HRT) to combine the advantages of autoregressive and non-autoregressive translation. Specifically, HRT first autoregressively generates a discontinuous sequence (e.g., make a prediction every k tokens, k > 1) and then fills in all previously skipped tokens at once in a non-autoregressive manner. Thus, we can easily trade off the translation quality and speed by adjusting k. In addition, by integrating other modeling techniques (e.g., sequence-level knowledge distillation and deep-encoder-shallow-decoder layer allocation strategy) and a mass of engineering efforts, HRT improves inference speed by 80{\%} and achieves translation performance equivalent to the same-capacity AT counterpart. Our fastest system reaches 6k+ words/second on the GPU latency setting, estimated to be about 3.1x faster than last year's winner. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,115 |
inproceedings | shang-etal-2022-hw | {HW}-{TSC}'s Submission for the {WMT}22 Efficiency Task | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.66/ | Shang, Hengchao and Hu, Ting and Wei, Daimeng and Li, Zongyao and Yu, Xianzhi and Feng, Jianfei and Zhu, Ting and Lei, Lizhi and Tao, Shimin and Yang, Hao and Qin, Ying and Yang, Jinlong and Rao, Zhiqiang and Yu, Zhengzhe | Proceedings of the Seventh Conference on Machine Translation (WMT) | 677--681 | This paper presents the submission of Huawei Translation Services Center (HW-TSC) to the WMT 2022 Efficiency Shared Task. For this year's task, we still apply a sentence-level distillation strategy to train small models with different configurations. Then, we integrate the average attention mechanism into the lightweight RNN model to pursue more efficient decoding. We tried adding a retrain step to our 8-bit and 4-bit models to achieve a balance between model size and quality. We still use Huawei Noah's Bolt for INT8 inference and 4-bit storage. Coupled with Bolt's support for batch inference and multi-core parallel computing, we finally submit models with different configurations to the CPU latency and throughput tracks to explore the Pareto frontiers. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,116 |
inproceedings | deoghare-bhattacharyya-2022-iit | {IIT} {B}ombay's {WMT}22 Automatic Post-Editing Shared Task Submission | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.67/ | Deoghare, Sourabh and Bhattacharyya, Pushpak | Proceedings of the Seventh Conference on Machine Translation (WMT) | 682--688 | This paper describes IIT Bombay's submission to the WMT22 Automatic Post-Editing (APE) shared task for the English-Marathi (En-Mr) language pair. We follow the curriculum training strategy to train our APE system. First, we train an encoder-decoder model to perform translation from English to Marathi. Next, we add another encoder to the model and train the resulting \textit{dual-encoder single-decoder} model for the APE task. This involves training the model using the synthetic APE data in multiple training stages and then fine-tuning it using the real APE data. We use the LaBSE technique to ensure the quality of the synthetic APE data. For data augmentation, along with using candidates obtained from an external machine translation (MT) system, we also use the phrase-level APE triplets generated using phrase table injection. As APE systems are prone to the problem of {\textquoteleft}over-correction{\textquoteright}, we use a sentence-level quality estimation (QE) system to select the final output between an original translation and the corresponding output generated by the APE model. Our approach improves the TER and BLEU scores on the development set by -3.92 and +4.36 points, respectively. Also, the final results on the test set show that our APE system outperforms the baseline system by -3.49 TER points and +5.37 BLEU points. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,117 |
inproceedings | huang-etal-2022-luls | {LUL}`s {WMT}22 Automatic Post-Editing Shared Task Submission | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.68/ | Huang, Xiaoying and Lou, Xingrui and Zhang, Fan and Mei, Tu | Proceedings of the Seventh Conference on Machine Translation (WMT) | 689--693 | By learning from human post-edits, automatic post-editing (APE) models are often used to modify the output of a machine translation (MT) system to make it as close as possible to a human translation. We introduce the system used in our submission to the WMT`22 Automatic Post-Editing (APE) English-Marathi (En-Mr) shared task. In this task, we first train an En-Mr MT system to generate additional machine-translated sentences. Then we use the additional triplets to build our APE model and use the APE dataset for further fine-tuning. Inspired by the mixture of experts (MoE), we use the GMM algorithm to roughly divide the text of the APE dataset into three categories. After that, the experts are added to the APE model and different domain data are sent to different experts. Finally, we ensemble the models to get better performance. Our APE system significantly improves the provided MT results by -2.848 and +3.74 on the development dataset in terms of TER and BLEU, respectively. Finally, the TER and BLEU scores are improved by -1.22 and +2.41 respectively on the blind test set. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,118
inproceedings | neves-etal-2022-findings | Findings of the {WMT} 2022 Biomedical Translation Shared Task: Monolingual Clinical Case Reports | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.69/ | Neves, Mariana and Jimeno Yepes, Antonio and Siu, Amy and Roller, Roland and Thomas, Philippe and Vicente Navarro, Maika and Yeganova, Lana and Wiemann, Dina and Di Nunzio, Giorgio Maria and Vezzani, Federica and Gerardin, Christel and Bawden, Rachel and Estrada, Darryl Johan and Lima-lopez, Salvador and Farre-maduel, Eulalia and Krallinger, Martin and Grozea, Cristian and Neveol, Aurelie | Proceedings of the Seventh Conference on Machine Translation (WMT) | 694--723 | In the seventh edition of the WMT Biomedical Task, we addressed a total of seven language pairs, namely English/German, English/French, English/Spanish, English/Portuguese, English/Chinese, English/Russian, English/Italian. This year`s test sets covered three types of biomedical text genres. In addition to the scientific abstracts and terminology items used in previous editions, we released test sets of clinical cases. The evaluation of clinical case translations was given special attention by involving clinicians in the preparation of reference translations and the manual evaluation. For the main MEDLINE test sets, we received a total of 609 submissions from 37 teams. For the ClinSpEn sub-task, we had the participation of five teams. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,119
inproceedings | farinha-etal-2022-findings | Findings of the {WMT} 2022 Shared Task on Chat Translation | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.70/ | Farinha, Ana C and Farajian, M. Amin and Buchicchio, Marianna and Fernandes, Patrick and C. de Souza, Jos{\'e} G. and Moniz, Helena and Martins, Andr{\'e} F. T. | Proceedings of the Seventh Conference on Machine Translation (WMT) | 724--743 | This paper reports the findings of the second edition of the Chat Translation Shared Task. Similarly to the previous WMT 2020 edition, the task consisted of translating bilingual customer support conversational text. However, unlike the previous edition, in which the bilingual data was created from a synthetic monolingual English corpus, this year we used a portion of the newly released Unbabel`s MAIA corpus, which contains genuine bilingual conversations between agents and customers. We also expanded the language pairs to English{\ensuremath{\leftrightarrow}}German (en{\ensuremath{\leftrightarrow}}de), English{\ensuremath{\leftrightarrow}}French (en{\ensuremath{\leftrightarrow}}fr), and English{\ensuremath{\leftrightarrow}}Brazilian Portuguese (en{\ensuremath{\leftrightarrow}}pt-br).Given that the main goal of the shared task is to translate bilingual conversations, participants were encouraged to train and test their models specifically for this environment. In total, we received 18 submissions from 4 different teams. All teams participated in both directions of en{\ensuremath{\leftrightarrow}}de. One of the teams also participated in en{\ensuremath{\leftrightarrow}}fr and en{\ensuremath{\leftrightarrow}}pt-br. We evaluated the submissions with automatic metrics as well as human judgments via Multidimensional Quality Metrics (MQM) on both directions. The official ranking of the systems is based on the overall MQM scores of the participating systems on both directions, i.e. agent and customer. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,120 |
inproceedings | muller-etal-2022-findings | Findings of the First {WMT} Shared Task on Sign Language Translation ({WMT}-{SLT}22) | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.71/ | M{\"u}ller, Mathias and Ebling, Sarah and Avramidis, Eleftherios and Battisti, Alessia and Berger, Mich{\`e}le and Bowden, Richard and Braffort, Annelies and Cihan Camg{\"o}z, Necati and Espa{\~n}a-bonet, Cristina and Grundkiewicz, Roman and Jiang, Zifan and Koller, Oscar and Moryossef, Amit and Perrollaz, Regula and Reinhard, Sabine and Rios, Annette and Shterionov, Dimitar and Sidler-miserez, Sandra and Tissi, Katja | Proceedings of the Seventh Conference on Machine Translation (WMT) | 744--772 | This paper presents the results of the First WMT Shared Task on Sign Language Translation (WMT-SLT22). This shared task is concerned with automatic translation between signed and spoken languages. The task is novel in the sense that it requires processing visual information (such as video frames or human pose estimation) beyond the well-known paradigm of text-to-text machine translation (MT). The task featured two tracks, translating from Swiss German Sign Language (DSGS) to German and vice versa. Seven teams participated in this first edition of the task, all submitting to the DSGS-to-German track. Besides a system ranking and system papers describing state-of-the-art techniques, this shared task makes the following scientific contributions: novel corpora, reproducible baseline systems and new protocols and software for human evaluation. Finally, the task also resulted in the first publicly available set of system outputs and human evaluation scores for sign language translation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,121
inproceedings | adelani-etal-2022-findings | Findings of the {WMT}`22 Shared Task on Large-Scale Machine Translation Evaluation for {A}frican Languages | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.72/ | Adelani, David Ifeoluwa and Alam, Md Mahfuz Ibn and Anastasopoulos, Antonios and Bhagia, Akshita and Costa-juss{\`a}, Marta R. and Dodge, Jesse and Faisal, Fahim and Federmann, Christian and Fedorova, Natalia and Guzm{\'a}n, Francisco and Koshelev, Sergey and Maillard, Jean and Marivate, Vukosi and Mbuya, Jonathan and Mourachko, Alexandre and Saleem, Safiyyah and Schwenk, Holger and Wenzek, Guillaume | Proceedings of the Seventh Conference on Machine Translation (WMT) | 773--800 | We present the results of the WMT`22 Shared Task on Large-Scale Machine Translation Evaluation for African Languages. The shared task included both a data and a systems track, along with additional innovations, such as a focus on African languages and extensive human evaluation of submitted systems. We received 14 system submissions from 8 teams, as well as 6 data track contributions. We report large progress in the quality of translation for African languages since the last iteration of this shared task: there is an increase of about 7.5 BLEU points across 72 language pairs, and the average BLEU score went from 15.09 to 22.60. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,122
inproceedings | weller-di-marco-fraser-2022-findings | Findings of the {WMT} 2022 Shared Tasks in Unsupervised {MT} and Very Low Resource Supervised {MT} | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.73/ | Weller-Di Marco, Marion and Fraser, Alexander | Proceedings of the Seventh Conference on Machine Translation (WMT) | 801--805 | We present the findings of the WMT 2022 Shared Tasks in Unsupervised MT and Very Low Resource Supervised MT, with experiments on the language pairs German to/from Upper Sorbian, German to/from Lower Sorbian and Lower Sorbian to/from Upper Sorbian. Upper and Lower Sorbian are minority languages spoken in the eastern parts of Germany. There are active language communities working on the preservation of the languages, who also made the data used in this Shared Task available. In total, four teams participated in this Shared Task, with submissions from three teams for the unsupervised sub-task, and submissions from all four teams for the supervised sub-task. In this overview paper, we present and discuss the results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,123
inproceedings | srivastava-singh-2022-overview | Overview and Results of {M}ix{MT} Shared-Task at {WMT} 2022 | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.74/ | Srivastava, Vivek and Singh, Mayank | Proceedings of the Seventh Conference on Machine Translation (WMT) | 806--811 | In this paper, we present an overview of the WMT 2022 shared task on code-mixed machine translation (MixMT). In this shared task, we hosted two code-mixed machine translation subtasks in the following settings: (i) monolingual to code-mixed translation and (ii) code-mixed to monolingual translation. In both the subtasks, we received registration and participation from teams across the globe showing an interest and need to immediately address the challenges with machine translation involving code-mixed and low-resource languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,124 |
inproceedings | casacuberta-etal-2022-findings | Findings of the Word-Level {A}uto{C}ompletion Shared Task in {WMT} 2022 | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.75/ | Casacuberta, Francisco and Foster, George and Huang, Guoping and Koehn, Philipp and Kovacs, Geza and Liu, Lemao and Shi, Shuming and Watanabe, Taro and Zong, Chengqing | Proceedings of the Seventh Conference on Machine Translation (WMT) | 812--820 | Recent years have witnessed rapid advancements in machine translation, but state-of-the-art machine translation systems still cannot satisfy the high requirements of some rigorous translation scenarios. Computer-aided translation (CAT) provides a promising solution for yielding high-quality translations with a guarantee. Unfortunately, due to the lack of popular benchmarks, research on CAT is not as well developed as machine translation. This year, we hold a new shared task called Word-level AutoCompletion (WLAC) for CAT in WMT. Specifically, we introduce some resources to train a WLAC model, and in particular we collect data from CAT systems as a part of the test data for this shared task. In addition, we employ both automatic and human evaluations to measure the performance of the submitted systems, and our final evaluation results reveal some findings for the WLAC task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,125
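As a concrete illustration of the WLAC setting described above, here is a minimal sketch of the word-level decision rule: given the characters a translator has typed, return the most probable full word consistent with that prefix. The scoring dictionary is a stand-in; real WLAC systems condition on the source sentence and the surrounding translation context.

```python
# A minimal sketch of word-level autocompletion over a scored vocabulary.
def complete(typed, word_probs):
    """Return the most probable full word consistent with the typed prefix,
    or None if no vocabulary item matches."""
    candidates = {w: p for w, p in word_probs.items() if w.startswith(typed)}
    return max(candidates, key=candidates.get) if candidates else None

# Example with a toy, already-scored candidate set.
print(complete("tra", {"translation": 0.6, "transfer": 0.3, "train": 0.1}))
```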
inproceedings | yang-etal-2022-findings | Findings of the {WMT} 2022 Shared Task on Translation Suggestion | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.76/ | Yang, Zhen and Meng, Fandong and Zhang, Yingxue and Li, Ernan and Zhou, Jie | Proceedings of the Seventh Conference on Machine Translation (WMT) | 821--829 | We report the results of the first edition of the WMT shared task on Translation Suggestion (TS). The task aims to provide alternatives for specific words or phrases given the entire documents generated by machine translation (MT). It consists of two sub-tasks, namely, naive translation suggestion and translation suggestion with hints. The main difference is that some hints are provided in sub-task two; therefore, it is easier for the model to generate more accurate suggestions. For sub-task one, we provide the corpus for the language pairs English-German and English-Chinese. Only an English-Chinese corpus is provided for sub-task two. We received 92 submissions from 5 participating teams in sub-task one and 6 submissions for sub-task two, most of them covering all of the translation directions. We used the automatic metric BLEU for evaluating the performance of each submission. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,126
inproceedings | lupo-etal-2022-focused | Focused Concatenation for Context-Aware Neural Machine Translation | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.77/ | Lupo, Lorenzo and Dinarelli, Marco and Besacier, Laurent | Proceedings of the Seventh Conference on Machine Translation (WMT) | 830--842 | A straightforward approach to context-aware neural machine translation consists in feeding the standard encoder-decoder architecture with a window of consecutive sentences, formed by the current sentence and a number of sentences from its context concatenated to it. In this work, we propose an improved concatenation approach that encourages the model to focus on the translation of the current sentence, discounting the loss generated by the target context. We also propose an additional improvement that strengthens the notion of sentence boundaries and of relative sentence distance, facilitating model compliance with the context-discounted objective. We evaluate our approach with both average translation quality metrics and contrastive test sets for the translation of inter-sentential discourse phenomena, proving its superiority over the vanilla concatenation approach and other sophisticated context-aware systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,127
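A minimal sketch of a context-discounted training objective for the concatenation approach above: the per-token losses of context tokens are down-weighted so the model focuses on the current sentence. The weighting scheme shown here is an assumption for illustration, not the paper`s exact formulation.

```python
# A minimal sketch of a context-discounted cross-entropy loss.
import torch
import torch.nn.functional as F

def context_discounted_loss(logits, targets, is_context, lambda_ctx=0.5):
    """
    logits:     (batch, seq, vocab) decoder outputs over the concatenated window
    targets:    (batch, seq) gold token ids
    is_context: (batch, seq) bool mask, True for tokens of the context sentences
    """
    token_loss = F.cross_entropy(
        logits.transpose(1, 2), targets, reduction="none")  # (batch, seq)
    weights = torch.where(is_context,
                          torch.full_like(token_loss, lambda_ctx),
                          torch.ones_like(token_loss))
    return (weights * token_loss).sum() / weights.sum()
```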
inproceedings | wicks-post-2022-sentence | Does Sentence Segmentation Matter for Machine Translation? | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.78/ | Wicks, Rachel and Post, Matt | Proceedings of the Seventh Conference on Machine Translation (WMT) | 843--854 | For the most part, NLP applications operate at the sentence level. Since sentences occur most naturally in documents, they must be extracted and segmented via the use of a segmenter, of which there are a handful of options. There has been some work evaluating the performance of segmenters on intrinsic metrics that look at their ability to recover human-segmented sentence boundaries, but there has been no work looking at the effect of segmenters on downstream tasks. We ask the question, {\textquotedblleft}does segmentation matter?{\textquotedblright} and attempt to answer it on the task of machine translation. We consider two settings: the application of segmenters to a black-box system whose training segmentation is mostly unknown, as well as the variation in performance when segmenters are applied to the training process, too. We find that the choice of segmenter largely does not matter, so long as its behavior is not one of extreme under- or over-segmentation. For such settings, we provide some qualitative analysis examining their harms, and point the way towards document-level processing. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,128
inproceedings | hoang-etal-2022-revisiting | Revisiting Locality Sensitive Hashing for Vocabulary Selection in Fast Neural Machine Translation | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.79/ | Hoang, Hieu and Junczys-dowmunt, Marcin and Grundkiewicz, Roman and Khayrallah, Huda | Proceedings of the Seventh Conference on Machine Translation (WMT) | 855--869 | Neural machine translation models often contain large target vocabularies. The calculation of logits, softmax and beam search is computationally costly over so many classes. We investigate the use of locality sensitive hashing (LSH) to reduce the number of vocabulary items that must be evaluated and explore the relationship between the hashing algorithm, translation speed and quality. Compared to prior work, our LSH-based solution does not require additional augmentation via word-frequency lists or alignments. We propose a training procedure that produces models which, when combined with our LSH inference algorithm, increase translation speed by up to 87{\%} over the baseline, while maintaining translation quality as measured by BLEU. Apart from just using BLEU, we focus on minimizing search errors compared to the full softmax, a much harsher quality criterion. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,129
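A minimal sketch of random-hyperplane LSH for vocabulary shortlisting, the core idea above: hash the decoder hidden state and compute logits only over vocabulary items falling into the same bucket. The bit width and the single-bucket lookup are simplifying assumptions (practical systems typically union several hash tables to raise recall).

```python
# A minimal sketch of LSH-based vocabulary shortlisting.
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab, n_bits = 64, 10_000, 12
W_out = rng.standard_normal((vocab, d_model))    # output embedding matrix
planes = rng.standard_normal((n_bits, d_model))  # random LSH hyperplanes

# Precompute a hash bucket for every vocabulary item.
codes = ((W_out @ planes.T) > 0) @ (1 << np.arange(n_bits))
buckets = {}
for idx, code in enumerate(codes):
    buckets.setdefault(int(code), []).append(idx)

def shortlist_logits(h):
    """Logits only over vocabulary items hashing to the same bucket as h."""
    code = int(((planes @ h) > 0) @ (1 << np.arange(n_bits)))
    cand = buckets.get(code, [])
    return cand, W_out[cand] @ h

cand, logits = shortlist_logits(rng.standard_normal(d_model))
```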
inproceedings | diddee-etal-2022-brittle | Too Brittle to Touch: Comparing the Stability of Quantization and Distillation towards Developing Low-Resource {MT} Models | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.80/ | Diddee, Harshita and Dandapat, Sandipan and Choudhury, Monojit and Ganu, Tanuja and Bali, Kalika | Proceedings of the Seventh Conference on Machine Translation (WMT) | 870--885 | Leveraging shared learning through Massively Multilingual Models, state-of-the-art Machine translation (MT) models are often able to adapt to the paucity of data for low-resource languages. However, this performance comes at the cost of significantly bloated models which aren`t practically deployable. Knowledge Distillation is one popular technique to develop competitive lightweight models: In this work, we first evaluate its use in compressing MT models, focusing specifically on languages with extremely limited training data. Through our analysis across 8 languages, we find that the variance in the performance of the distilled models due to their dependence on priors including the amount of synthetic data used for distillation, the student architecture, training hyper-parameters and confidence of the teacher models, makes distillation a brittle compression mechanism. To mitigate this, we further explore the use of post-training quantization for the compression of these models. Here, we find that while Distillation provides gains across some low-resource languages, Quantization provides more consistent performance trends for the entire range of languages, especially the lowest-resource languages in our target set. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,130 |
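The post-training quantization compared above can be sketched with PyTorch`s dynamic INT8 quantization; the toy feed-forward model is a stand-in for an actual low-resource MT model, and this is not the authors` exact setup.

```python
# A minimal sketch of post-training dynamic (INT8) quantization in PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512))
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)  # weights stored in 8 bits

x = torch.randn(1, 512)
print(quantized(x).shape)  # inference runs with int8 weight matmuls
```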
inproceedings | ryu-etal-2022-data | Data Augmentation for Inline Tag-Aware Neural Machine Translation | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.81/ | Ryu, Yonghyun and Choi, Yoonjung and Kim, Sangha | Proceedings of the Seventh Conference on Machine Translation (WMT) | 886--894 | Despite the wide use of inline formatting, not much work has studied the translation of sentences with inline formatted tags. The detag-and-project approach using word alignments is one solution to translating a tagged sentence. However, the method has a limitation: tag reinsertion is not considered in the translation process. Another solution is to use an end-to-end model which takes text with inline tags as input and translates it into a tagged sentence. This approach can alleviate the problems of the aforementioned method, but there is no sufficient parallel corpus dedicated to such a task. To solve this problem, an automatic data augmentation method by tag injection has been suggested, but it is computationally expensive and augmentation is limited since the model is based on isolated translation for all fragments. In this paper, we propose an efficient and effective tag augmentation method based on word alignment. Our experiments show that our approach outperforms the detag-and-project methods. We also introduce a metric to evaluate the placement of tags and show that the suggested metric is reasonable for our task. We further analyze the effectiveness of each implementation detail. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,131
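A minimal sketch of alignment-based tag injection in the spirit of the method above: wrap a source span in a tag pair and project the same tags onto the aligned target span, yielding tagged parallel training data. The tag pair, span choice and handling of unaligned spans are illustrative assumptions rather than the paper`s exact procedure.

```python
# A minimal sketch of alignment-based inline-tag injection.
def inject_tags(src_toks, tgt_toks, align, src_span, tag=("<b>", "</b>")):
    """
    align:    set of (src_idx, tgt_idx) word-alignment links
    src_span: (start, end) token indices (inclusive) to wrap on the source side
    Returns the tagged source and target token lists, or None if unaligned.
    """
    s, e = src_span
    tgt_idx = sorted(j for i, j in align if s <= i <= e)
    if not tgt_idx:  # unaligned span: skip this example
        return None
    ts, te = tgt_idx[0], tgt_idx[-1]
    src = src_toks[:s] + [tag[0]] + src_toks[s:e + 1] + [tag[1]] + src_toks[e + 1:]
    tgt = tgt_toks[:ts] + [tag[0]] + tgt_toks[ts:te + 1] + [tag[1]] + tgt_toks[te + 1:]
    return src, tgt

pair = inject_tags("das ist ein Test".split(), "this is a test".split(),
                   {(0, 0), (1, 1), (2, 2), (3, 3)}, (3, 3))
```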
inproceedings | ballier-etal-2022-spectrans | The {SPECTRANS} System Description for the {WMT}22 Biomedical Task | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.82/ | Ballier, Nicolas and Yun{\`e}s, Jean-baptiste and Wisniewski, Guillaume and Zhu, Lichao and Zimina, Maria | Proceedings of the Seventh Conference on Machine Translation (WMT) | 895--900 | This paper describes the SPECTRANS submission for the WMT 2022 biomedical shared task. We present the results of our experiments using the training corpora and the JoeyNMT (Kreutzer et al., 2019) and SYSTRAN Pure Neural Server/Advanced Model Studio toolkits for the language directions English to French and French to English. We compare the predictions of the different toolkits. We also use JoeyNMT to fine-tune the model with a selection of texts from WMT, Khresmoi and UFAL data sets. We report our results and assess the respective merits of the different translated texts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,132
inproceedings | choi-etal-2022-srts | {SRT}`s Neural Machine Translation System for {WMT}22 Biomedical Translation Task | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.83/ | Choi, Yoonjung and Shin, Jiho and Ryu, Yonghyun and Kim, Sangha | Proceedings of the Seventh Conference on Machine Translation (WMT) | 901--907 | This paper describes the Samsung Research Translation system (SRT) submitted to the WMT22 biomedical translation task in two language directions: English to Spanish and Spanish to English. To improve the overall quality, we adopt the deep transformer architecture and employ the back-translation strategy for the monolingual corpus. One of the issues in domain translation is translating domain-specific terminology well. To address this issue, we apply soft-constrained terminology translation based on biomedical terminology dictionaries. In this paper, we report the performance of our system on the WMT20 and WMT21 biomedical test sets. Compared to the best models in WMT20 and WMT21, our system shows equal or better performance. According to the official evaluation results in terms of BLEU scores, our systems get the highest scores in both directions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,133
inproceedings | han-etal-2022-examining | Examining Large Pre-Trained Language Models for Machine Translation: What You Don`t Know about It | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.84/ | Han, Lifeng and Erofeev, Gleb and Sorokina, Irina and Gladkoff, Serge and Nenadic, Goran | Proceedings of the Seventh Conference on Machine Translation (WMT) | 908--919 | Pre-trained language models (PLMs) often take advantage of the monolingual and multilingual datasets that are freely available online to acquire general or mixed domain knowledge before deployment into specific tasks. Extra-large PLMs (xLPLMs) have been proposed very recently and claim superior performance over smaller-sized PLMs, such as in machine translation (MT) tasks. These xLPLMs include Meta-AI`s wmt21-dense-24-wide-en-X (2021) and NLLB (2022). In this work, we examine whether xLPLMs are absolutely superior to smaller-sized PLMs in fine-tuning toward domain-specific MT. We use two in-domain datasets of different sizes: commercial automotive in-house data and clinical shared task data from the ClinSpEn2022 challenge at WMT2022. We choose the popular Marian Helsinki as the smaller-sized PLM and two massive-sized Mega-Transformers from Meta-AI as xLPLMs. Our experimental investigation shows that 1) on the smaller-sized in-domain commercial automotive data, the xLPLM wmt21-dense-24-wide-en-X indeed shows much better evaluation scores using SacreBLEU and hLEPOR metrics than the smaller-sized Marian, even though its score increase rate is lower than Marian`s after fine-tuning; 2) when fine-tuning on the relatively larger, well-prepared clinical data, the xLPLM NLLB tends to lose its advantage over the smaller-sized Marian on two sub-tasks (clinical terms and ontology concepts) using the ClinSpEn-offered metrics METEOR, COMET, and ROUGE-L, and loses entirely to Marian on Task-1 (clinical cases) on all official metrics including SacreBLEU and BLEU; 3) metrics do not always agree with each other on the same tasks using the same model outputs; 4) clinic-Marian ranked No. 2 on Task-1 (via SacreBLEU/BLEU) and Task-3 (via METEOR and ROUGE) among all submissions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,134
inproceedings | li-etal-2022-summer | Summer: {W}e{C}hat Neural Machine Translation Systems for the {WMT}22 Biomedical Translation Task | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.85/ | Li, Ernan and Meng, Fandong and Zhou, Jie | Proceedings of the Seventh Conference on Machine Translation (WMT) | 920--924 | This paper introduces WeChat`s participation in the WMT 2022 biomedical translation shared task on Chinese{\textrightarrow}English. Our systems are based on the Transformer (Vaswani et al., 2017), and use several different Transformer structures to improve the quality of translation. In our experiments, we employ data filtering, data generation, several variants of Transformer, fine-tuning and model ensemble. Our Chinese{\textrightarrow}English system, named Summer, achieves the highest BLEU score among all submissions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,135
inproceedings | manchanda-bhagwat-2022-optums | Optum`s Submission to {WMT}22 Biomedical Translation Tasks | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.86/ | Manchanda, Sahil and Bhagwat, Saurabh | Proceedings of the Seventh Conference on Machine Translation (WMT) | 925--929 | This paper describes Optum`s submission to the Biomedical Translation task of the seventh conference on Machine Translation (WMT22). The task aims at promoting the development and evaluation of machine translation systems in their ability to handle challenging domain-specific biomedical data. We made submissions to two sub-tracks of ClinSpEn 2022, namely, ClinSpEn-CC (clinical cases) and ClinSpEn-OC (ontology concepts). These sub-tasks aim to test translation from English to Spanish. Our approach involves fine-tuning a pre-trained transformer model using in-house clinical domain data and the biomedical data provided by WMT. The fine-tuned model results in a test BLEU score of 38.12 in the ClinSpEn-CC (clinical cases) subtask, which is a gain of 1.23 BLEU compared to the pre-trained model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,136 |
inproceedings | wang-etal-2022-huawei | Huawei {B}abel{T}ar {NMT} at {WMT}22 Biomedical Translation Task: How We Further Improve Domain-specific {NMT} | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.87/ | Wang, Weixuan and Meng, Xupeng and Yan, Suqing and Tian, Ye and Peng, Wei | Proceedings of the Seventh Conference on Machine Translation (WMT) | 930--935 | This paper describes Huawei Artificial Intelligence Application Research Center`s neural machine translation system ({\textquotedblleft}BabelTar{\textquotedblright}). Our submission to the WMT22 biomedical translation shared task covers language directions between English and the other seven languages (French, German, Italian, Spanish, Portuguese, Russian, and Chinese). During the past four years, our participation in this domain-specific track has witnessed a paradigm shift of methodology from a purely data-driven focus to embracing diversified techniques, including pre-trained multilingual NMT models, homograph disambiguation, ensemble learning, and preprocessing methods. We illustrate practical insights and measured performance improvements relating to how we further improve our domain-specific NMT system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,137 |
inproceedings | wu-etal-2022-hw | {HW}-{TSC} Translation Systems for the {WMT}22 Biomedical Translation Task | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.88/ | Wu, Zhanglin and Yang, Jinlong and Rao, Zhiqiang and Yu, Zhengzhe and Wei, Daimeng and Chen, Xiaoyu and Li, Zongyao and Shang, Hengchao and Li, Shaojun and Zhu, Ming and Luo, Yuanchang and Xie, Yuhao and Ma, Miaomiao and Zhu, Ting and Lei, Lizhi and Peng, Song and Yang, Hao and Qin, Ying | Proceedings of the Seventh Conference on Machine Translation (WMT) | 936--942 | This paper describes the translation systems trained by Huawei translation services center (HW-TSC) for the WMT22 biomedical translation task in five language pairs: English{\ensuremath{\leftrightarrow}}German (en{\ensuremath{\leftrightarrow}}de), English{\ensuremath{\leftrightarrow}}French (en{\ensuremath{\leftrightarrow}}fr), English{\ensuremath{\leftrightarrow}}Chinese (en{\ensuremath{\leftrightarrow}}zh), English{\ensuremath{\leftrightarrow}}Russian (en{\ensuremath{\leftrightarrow}}ru) and Spanish{\textrightarrow}English (es{\textrightarrow}en). Our primary systems are built on deep Transformer with a large filter size. We also utilize R-Drop, data diversification, forward translation, back translation, data selection, finetuning and ensemble to improve the system performance. According to the official evaluation results in OCELoT or CodaLab, our unconstrained systems in en{\textrightarrow}de, de{\textrightarrow}en, en{\textrightarrow}fr, fr{\textrightarrow}en, en{\textrightarrow}zh and es{\textrightarrow}en (clinical terminology sub-track) get the highest BLEU scores among all submissions for the WMT22 biomedical translation task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,138 |
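Among the techniques listed in the abstract above, R-Drop (Liang et al., 2021) is easy to sketch: run the model twice under dropout and add a symmetric KL term between the two predicted distributions. The toy classifier below stands in for an NMT decoder, and alpha is an assumed hyper-parameter.

```python
# A minimal sketch of R-Drop regularization.
import torch
import torch.nn as nn
import torch.nn.functional as F

def r_drop_loss(model, x, targets, alpha=5.0):
    logits1, logits2 = model(x), model(x)  # two stochastic forward passes
    ce = F.cross_entropy(logits1, targets) + F.cross_entropy(logits2, targets)
    p = F.log_softmax(logits1, dim=-1)
    q = F.log_softmax(logits2, dim=-1)
    kl = 0.5 * (F.kl_div(p, q, reduction="batchmean", log_target=True)
                + F.kl_div(q, p, reduction="batchmean", log_target=True))
    return ce + alpha * kl

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Dropout(0.3), nn.Linear(32, 8))
x, y = torch.randn(4, 16), torch.randint(0, 8, (4,))
loss = r_drop_loss(model, x, y)  # model must be in train mode for dropout
```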
inproceedings | alves-etal-2022-unbabel | Unbabel-{IST} at the {WMT} Chat Translation Shared Task | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.89/ | Alves, Jo{\~a}o and Martins, Pedro Henrique and C. de Souza, Jos{\'e} G. and Farajian, M. Amin and Martins, Andr{\'e} F. T. | Proceedings of the Seventh Conference on Machine Translation (WMT) | 943--948 | We present the joint contribution of IST and Unbabel to the WMT 2022 Chat Translation Shared Task. We participated in all six language directions (English {\ensuremath{\leftrightarrow}} German, English {\ensuremath{\leftrightarrow}} French, English {\ensuremath{\leftrightarrow}} Brazilian Portuguese). Due to the lack of domain-specific data, we use mBART50, a large pretrained language model trained on millions of sentence-pairs, as our base model. We fine-tune it using a two step fine-tuning process. In the first step, we fine-tune the model on publicly available data. In the second step, we use the validation set. After having a domain specific model, we explore the use of kNN-MT as a way of incorporating domain-specific data at decoding time. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,139 |
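The kNN-MT mechanism mentioned above (Khandelwal et al., 2021) interpolates the base model`s next-token distribution with a distribution induced by neighbours retrieved from a datastore of (hidden state, target token) pairs. A minimal sketch follows; the interpolation weight, temperature and retrieval step are assumed, and the datastore lookup itself (e.g. via FAISS) is omitted.

```python
# A minimal sketch of kNN-MT interpolation at decoding time.
import numpy as np

def knn_mt_distribution(p_model, neighbor_tokens, neighbor_dists,
                        vocab_size, lam=0.5, temperature=10.0):
    """Mix the base model distribution with one built from retrieved
    datastore neighbours, weighted by softmax over negative distances."""
    w = np.exp(-np.asarray(neighbor_dists) / temperature)
    p_knn = np.zeros(vocab_size)
    for tok, weight in zip(neighbor_tokens, w):
        p_knn[tok] += weight
    p_knn /= p_knn.sum()
    return lam * p_knn + (1 - lam) * p_model

# Example with a toy 5-word vocabulary and 3 retrieved neighbours.
p = knn_mt_distribution(np.full(5, 0.2), [1, 1, 3], [0.1, 0.4, 2.0], 5)
```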
inproceedings | gain-etal-2022-investigating | Investigating Effectiveness of Multi-Encoder for Conversational Neural Machine Translation | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.90/ | Gain, Baban and Appicharla, Ramakrishna and Chennabasavaraj, Soumya and Garera, Nikesh and Ekbal, Asif and Chelliah, Muthusamy | Proceedings of the Seventh Conference on Machine Translation (WMT) | 949--954 | Multilingual chatbots are the need of the hour for modern business. There is increasing demand for such systems all over the world. A multilingual chatbot can help to connect distant parts of the world together, without sharing a common language. We participated in the WMT22 Chat Translation Shared Task. In this paper, we describe the methodologies used for our participation. We submit outputs from a multi-encoder based transformer model, where one encoder is for the context and another for the source utterance. We consider one previous utterance as context. We obtain COMET scores of 0.768 and 0.907 on the English-to-German and German-to-English directions, respectively. We also submitted outputs without using context at all, which generated worse results in the English-to-German direction, while for German-to-English the model achieved a lower COMET score but slightly higher chrF and BLEU scores. Further, to understand the effectiveness of the context encoder, we submitted a run after removing the context encoder during testing and obtained similar results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,140
inproceedings | liang-etal-2022-bjtu | {BJTU}-{W}e{C}hat`s Systems for the {WMT}22 Chat Translation Task | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.91/ | Liang, Yunlong and Meng, Fandong and Xu, Jinan and Chen, Yufeng and Zhou, Jie | Proceedings of the Seventh Conference on Machine Translation (WMT) | 955--961 | This paper introduces the joint submission of the Beijing Jiaotong University and WeChat AI to the WMT`22 chat translation task for English-German. Based on the Transformer, we apply several effective variants. In our experiments, we apply the pre-training-then-fine-tuning paradigm. In the first pre-training stage, we employ data filtering and synthetic data generation (i.e., back-translation, forward-translation, and knowledge distillation). In the second fine-tuning stage, we investigate speaker-aware in-domain data generation, speaker adaptation, prompt-based context modeling, target denoising fine-tuning, and boosted self-COMET-based model ensemble. Our systems achieve 81.0 and 94.6 COMET scores on English-German and German-English, respectively. The COMET scores of English-German and German-English are the highest among all submissions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,141 |
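One possible reading of the "self-COMET-based model ensemble" in the abstract above is candidate selection via pairwise COMET scoring: each candidate is scored against the other candidates as pseudo-references, and the highest-scoring one is kept. The sketch below follows that reading; `comet_score` is a placeholder for a real COMET scoring call, and the whole scheme is an assumption, not the paper`s confirmed procedure.

```python
# A minimal sketch of candidate selection via pairwise ("self") COMET scoring.
def select_by_self_comet(src, candidates, comet_score):
    """Return the candidate with the highest average COMET score when the
    remaining candidates are used as pseudo-references."""
    def avg_score(hyp):
        others = [c for c in candidates if c is not hyp]
        return sum(comet_score(src, hyp, ref) for ref in others) / len(others)
    return max(candidates, key=avg_score)

# Example with a trivial stand-in scorer (token overlap instead of COMET).
toy = lambda s, h, r: len(set(h.split()) & set(r.split()))
best = select_by_self_comet("src", ["a b c", "a b d", "x y z"], toy)
```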
inproceedings | yang-etal-2022-hw | {HW}-{TSC} Translation Systems for the {WMT}22 Chat Translation Task | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.92/ | Yang, Jinlong and Li, Zongyao and Wei, Daimeng and Shang, Hengchao and Chen, Xiaoyu and Yu, Zhengzhe and Rao, Zhiqiang and Li, Shaojun and Wu, Zhanglin and Xie, Yuhao and Luo, Yuanchang and Zhu, Ting and Zhao, Yanqing and Lei, Lizhi and Yang, Hao and Qin, Ying | Proceedings of the Seventh Conference on Machine Translation (WMT) | 962--968 | This paper describes the submissions of Huawei Translation Services Center (HW-TSC) to the WMT22 chat translation shared task on English-German (en-de) in both directions, with results for the zero-shot and few-shot tracks. We use the deep transformer architecture with a larger parameter size. Our submissions to the WMT21 News Translation task are used as the baselines. We adopt strategies such as back translation, forward translation, domain transfer, data selection, and noisy forward translation in this task, and achieve competitive results on the development set. We also test the effectiveness of document translation on chat tasks. Due to the lack of chat data, the results on the development set show that it is not as effective as sentence-level translation models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,142
inproceedings | dey-etal-2022-clean | Clean Text and Full-Body Transformer: {M}icrosoft`s Submission to the {WMT}22 Shared Task on Sign Language Translation | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.93/ | Dey, Subhadeep and Pal, Abhilash and Chaabani, Cyrine and Koller, Oscar | Proceedings of the Seventh Conference on Machine Translation (WMT) | 969--976 | This paper describes Microsoft`s submission to the first shared task on sign language translation at WMT 2022, a public competition tackling sign language to spoken language translation for Swiss German sign language. The task is very challenging due to data scarcity and an unprecedented vocabulary size of more than 20k words on the target side. Moreover, the data is taken from real broadcast news, includes native signing and covers scenarios of long videos. Motivated by recent advances in action recognition, we incorporate full body information by extracting features from a pre-trained I3D model and applying a standard transformer network. The accuracy of the system is further improved by applying careful data cleaning on the target text. We obtain BLEU scores of 0.6 and 0.78 on the test and dev set respectively, which is the best score among the participants of the shared task. The submission also reaches first place in the human evaluation. The BLEU score is further improved to 1.08 on the dev set by applying features extracted from a lip reading model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,143
inproceedings | hamidullah-etal-2022-spatio | Spatio-temporal Sign Language Representation and Translation | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.94/ | Hamidullah, Yasser and Van Genabith, Josef and Espa{\~n}a-bonet, Cristina | Proceedings of the Seventh Conference on Machine Translation (WMT) | 977--982 | This paper describes the DFKI-MLT submission to the WMT-SLT 2022 sign language translation (SLT) task from Swiss German Sign Language (video) into German (text). State-of-the-art techniques for SLT use a generic seq2seq architecture with customized input embeddings. Instead of word embeddings as used in textual machine translation, SLT systems use features extracted from video frames. Standard approaches often do not benefit from temporal features. In our participation, we present a system that learns spatio-temporal feature representations and translation in a single model, resulting in a real end-to-end architecture expected to better generalize to new data sets. Our best system achieved $5 \pm 1$ BLEU points on the development set, but the performance on the test set dropped to $0.11 \pm 0.06$ BLEU points. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,144
inproceedings | hufe-avramidis-2022-experimental | Experimental Machine Translation of the {S}wiss {G}erman Sign Language via 3{D} Augmentation of Body Keypoints | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.95/ | Hufe, Lorenz and Avramidis, Eleftherios | Proceedings of the Seventh Conference on Machine Translation (WMT) | 983--988 | This paper describes the participation of DFKI-SLT at the Sign Language Translation Task of the Seventh Conference on Machine Translation (WMT22). The system focuses on the translation direction from Swiss German Sign Language (DSGS) to written German. The original sign language videos were analyzed with computer vision models to provide 3D body keypoints. A deep-learning sequence-to-sequence model is trained on a parallel corpus of these body keypoints aligned to written German sentences. Geometric data augmentation is applied during training: the body keypoints are augmented by artificial rotation in three-dimensional space, and the 3D transformation is calculated with different angles for every batch of the training process. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,145
inproceedings | shi-etal-2022-ttics | {TTIC}`s {WMT}-{SLT} 22 Sign Language Translation System | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.96/ | Shi, Bowen and Brentari, Diane and Shakhnarovich, Gregory and Livescu, Karen | Proceedings of the Seventh Conference on Machine Translation (WMT) | 989--993 | We describe TTIC`s model submission to the WMT-SLT 2022 task on sign language translation (Swiss-German Sign Language (DSGS) - German). Our model consists of an I3D backbone for image encoding and a Transformer-based encoder-decoder model for sequence modeling. The I3D is pre-trained with isolated sign recognition using the WLASL dataset. The model is based on RGB images alone and does not rely on pre-extracted human pose. We explore a few different strategies for model training in this paper. Our system achieves a BLEU score of 0.3 and a chrF score of 0.195 on the official test set. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,146
inproceedings | tarres-etal-2022-tackling | Tackling Low-Resourced Sign Language Translation: {UPC} at {WMT}-{SLT} 22 | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.97/ | Tarres, Laia and G{\'a}llego, Gerard I. and Giro-i-nieto, Xavier and Torres, Jordi | Proceedings of the Seventh Conference on Machine Translation (WMT) | 994--1000 | This paper describes the system developed at the Universitat Polit{\`e}cnica de Catalunya for the Workshop on Machine Translation 2022 Sign Language Translation Task, in particular, for the sign-to-text direction. We use a Transformer model implemented with the Fairseq modeling toolkit. We experimented with the vocabulary size, data augmentation techniques, and pretraining the model on the PHOENIX-14T dataset. Our system obtains a 0.50 BLEU score on the test set, improving the organizers' baseline by 0.38 BLEU. We note the poor results of both the baseline and our system, and thus the unreliability of our findings. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,147
inproceedings | abdulmumin-etal-2022-separating | Separating Grains from the Chaff: Using Data Filtering to Improve Multilingual Translation for Low-Resourced {A}frican Languages | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.98/ | Abdulmumin, Idris and Beukman, Michael and Alabi, Jesujoba and Emezue, Chris Chinenye and Chimoto, Everlyn and Adewumi, Tosin and Muhammad, Shamsuddeen and Adeyemi, Mofetoluwa and Yousuf, Oreen and Singh, Sahib and Gwadabe, Tajuddeen | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1001--1014 | We participated in the WMT 2022 Large-Scale Machine Translation Evaluation for the African Languages Shared Task. This work describes our approach, which is based on filtering the given noisy data using a sentence-pair classifier that was built by fine-tuning a pre-trained language model. To train the classifier, we obtain positive samples (i.e. high-quality parallel sentences) from a gold-standard curated dataset and extract negative samples (i.e. low-quality parallel sentences) from automatically aligned parallel data by choosing sentences with low alignment scores. Our final machine translation model was then trained on filtered data, instead of the entire noisy dataset. We empirically validate our approach by evaluating on two common datasets and show that data filtering generally improves overall translation quality, in some cases even significantly. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,148 |
inproceedings | alam-anastasopoulos-2022-language | Language Adapters for Large-Scale {MT}: The {GMU} System for the {WMT} 2022 Large-Scale Machine Translation Evaluation for {A}frican Languages Shared Task | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.99/ | Alam, Md Mahfuz Ibn and Anastasopoulos, Antonios | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1015--1033 | This report describes GMU`s machine translation systems for the WMT22 shared task on large-scale machine translation evaluation for African languages. We participated in the constrained translation track where only the data listed on the shared task page were allowed, including submissions accepted to the Data track. Our approach uses models initialized with DeltaLM, a generic pre-trained multilingual encoder-decoder model, and fine-tuned correspondingly with the allowed data sources. Our best submission incorporates language-family and language-specific adapter units, ranking second under the constrained setting. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,149
inproceedings | cruz-sutawika-2022-samsung | {S}amsung Research {P}hilippines - Datasaur {AI}`s Submission for the {WMT}22 Large Scale Multilingual Translation Task | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.100/ | Cruz, Jan Christian Blaise and Sutawika, Lintang | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1034--1038 | This paper describes the submission of the joint Samsung Research Philippines - Datasaur AI team for the WMT22 Large Scale Multilingual African Translation shared task. We approach the contest as a way to explore task composition as a solution for low-resource multilingual translation, using adapter fusion to combine multiple task adapters that learn subsets of the total translation pairs. Our final model shows performance improvements in 32 out of the 44 translation directions that we participate in when compared to a single model system trained on multiple directions at once. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,150 |
inproceedings | elmadani-etal-2022-university | University of Cape Town`s {WMT}22 System: Multilingual Machine Translation for {S}outhern {A}frican Languages | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.101/ | Elmadani, Khalid and Meyer, Francois and Buys, Jan | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1039--1048 | The paper describes the University of Cape Town`s submission to the constrained track of the WMT22 Shared Task: Large-Scale Machine Translation Evaluation for African Languages. Our system is a single multilingual translation model that translates between English and 8 South / South East African Languages, as well as between specific pairs of the African languages. We used several techniques suited for low-resource machine translation (MT), including overlap BPE, back-translation, synthetic training data generation, and adding more translation directions during training. Our results show the value of these techniques, especially for directions where very little or no bilingual training data is available. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,151 |
inproceedings | jiao-etal-2022-tencents | Tencent`s Multilingual Machine Translation System for {WMT}22 Large-Scale {A}frican Languages | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.102/ | Jiao, Wenxiang and Tu, Zhaopeng and Li, Jiarui and Wang, Wenxuan and Huang, Jen-tse and Shi, Shuming | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1049--1056 | This paper describes Tencent`s multilingual machine translation systems for the WMT22 shared task on Large-Scale Machine Translation Evaluation for African Languages. We participated in the constrained translation track in which only the data and pretrained models provided by the organizer are allowed. The task is challenging due to three problems, including the absence of training data for some to-be-evaluated language pairs, the uneven optimization of language pairs caused by data imbalance, and the curse of multilinguality. To address these problems, we adopt data augmentation, distributionally robust optimization, and language family grouping, respectively, to develop our multilingual neural machine translation (MNMT) models. Our submissions won the 1st place on the blind test sets in terms of the automatic evaluation metrics. Codes, models, and detailed competition results are available at \url{https://github.com/wxjiao/WMT2022-Large-Scale-African}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,152 |
inproceedings | kamboj-etal-2022-dentra | {DENTRA}: Denoising and Translation Pre-training for Multilingual Machine Translation | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.103/ | Kamboj, Samta and Sahu, Sunil Kumar and Sengupta, Neha | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1057--1067 | In this paper, we describe our submission to the WMT-2022: Large-Scale Machine Translation Evaluation for African Languages under the Constrained Translation track. We introduce DENTRA, a novel pre-training strategy for a multilingual sequence-to-sequence transformer model. DENTRA pre-training combines denoising and translation objectives to incorporate both monolingual and bitext corpora in 24 African, English, and French languages. To evaluate the quality of DENTRA, we fine-tuned it with two multilingual machine translation configurations, one-to-many and many-to-one. In both pre-training and fine-tuning, we employ only the datasets provided by the organizers. We compare DENTRA against a strong baseline, M2M-100, in different African multilingual machine translation scenarios and show gains in 3 out of 4 subtasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,153 |
inproceedings | qian-etal-2022-volctrans | The {V}olc{T}rans System for {WMT}22 Multilingual Machine Translation Task | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.104/ | Qian, Xian and Hu, Kai and Wang, Jiaqiang and Liu, Yifeng and Pan, Xingyuan and Cao, Jun and Wang, Mingxuan | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1068--1075 | This report describes our VolcTrans system for the WMT22 shared task on large-scale multilingual machine translation. We participated in the unconstrained track which allows the use of external resources. Our system is a transformer-based multilingual model trained on data from multiple sources including the public training set from the data track, NLLB data provided by Meta AI, self-collected parallel corpora, and pseudo bitext from back-translation. Both bilingual and monolingual texts are cleaned by a series of heuristic rules. On the official test set, our system achieves 17.3 BLEU, 21.9 spBLEU, and 41.9 chrF2++ on average over all language pairs. Averaged inference speed is 11.5 sentences per second using a single Nvidia Tesla V100 GPU. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,154 |
inproceedings | vegi-etal-2022-webcrawl | {W}eb{C}rawl {A}frican : A Multilingual Parallel Corpora for {A}frican Languages | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.105/ | Vegi, Pavanpankaj and J, Sivabhavani and Paul, Biswajit and Mishra, Abhinav and Banjare, Prashant and K R, Prasanna and Viswanathan, Chitra | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1076--1089 | WebCrawl African is a mixed-domain multilingual parallel corpus for a pool of African languages, compiled by the ANVITA machine translation team of the Centre for Artificial Intelligence and Robotics Lab, primarily to accelerate research on low-resource and extremely low-resource machine translation; it is part of the submission to the WMT 2022 shared task on Large-Scale Machine Translation Evaluation for African Languages under the data track. The corpus is compiled through web data mining and comprises 695K parallel sentences spanning 74 different language pairs across English and 15 African languages, many of which fall under the low and extremely low resource categories. As a measure of corpus usefulness, an MNMT model for 24 African languages to English is trained by combining the WebCrawl African corpus with existing corpora; evaluation on FLORES200 shows that inclusion of the WebCrawl African corpus could improve the BLEU score by 0.01-1.66 for 12 out of 15 African-to-English translation directions, and even by 0.18-0.68 for 4 out of 9 African-to-English translation directions that are not part of the WebCrawl African corpus. WebCrawl African includes more parallel sentences for many language pairs in comparison to the public OPUS repository. This data description paper captures the creation of the corpus and the results obtained, along with datasheets. The WebCrawl African corpus is hosted on a GitHub repository. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,155
inproceedings | vegi-etal-2022-anvita | {ANVITA}-{A}frican: A Multilingual Neural Machine Translation System for {A}frican Languages | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.106/ | Vegi, Pavanpankaj and J, Sivabhavani and Paul, Biswajit and K R, Prasanna and Viswanathan, Chitra | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1090--1097 | This paper describes the ANVITA African NMT system submitted by team ANVITA to the WMT 2022 shared task on Large-Scale Machine Translation Evaluation for African Languages under the constrained translation track. The team participated in 24 African-to-English MT directions. For better handling of relatively low-resource language pairs and effective transfer learning, models are trained in a multilingual setting. Heuristic-based corpus filtering is applied; it improved performance by 0.04-2.06 BLEU across 22 out of 24 African-to-English directions and also sped up training by 5x. Use of a deep transformer with 24 encoder layers and 6 decoder layers significantly improved performance by 1.1-7.7 BLEU across all 24 African-to-English directions compared to the base transformer. For effective selection of the source vocabulary in the multilingual setting, joint and language-wise vocabulary selection strategies are explored on the source side. Use of language-wise vocabulary selection, however, did not consistently improve the performance of low-resource languages in comparison to joint vocabulary selection. Empirical results indicate that training a deep transformer on the filtered corpora is a better choice than training a base transformer on the whole corpora, both in terms of accuracy and training time. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,156
inproceedings | li-etal-2022-hw-tsc-systems | {HW}-{TSC} Systems for {WMT}22 Very Low Resource Supervised {MT} Task | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.107/ | Li, Shaojun and Luo, Yuanchang and Wei, Daimeng and Li, Zongyao and Shang, Hengchao and Chen, Xiaoyu and Wu, Zhanglin and Yang, Jinlong and Rao, Zhiqiang and Yu, Zhengzhe and Xie, Yuhao and Lei, Lizhi and Yang, Hao and Qin, Ying | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1098--1103 | This paper describes the submissions of Huawei Translation Services Center (HW-TSC) to the WMT22 Very Low Resource Supervised MT task. We participate in all 6 supervised tracks, including all combinations between Upper/Lower Sorbian (Hsb/Dsb) and German (De). Our systems are built on a deep Transformer with a large filter size. We use multilingual transfer with German-Czech (De-Cs) and German-Polish (De-Pl) parallel data. We also utilize regularized dropout (R-Drop), back-translation, fine-tuning and ensembling to improve system performance. According to the official evaluation results on OCELoT, our supervised systems in all 6 language directions get the highest BLEU scores among all submissions. Our pre-trained multilingual model for unsupervised De2Dsb and Dsb2De translation also gains the highest BLEU. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,157
inproceedings | tangsali-etal-2022-unsupervised | Unsupervised and Very-Low Resource Supervised Translation on {G}erman and Sorbian Variant Languages | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.108/ | Tangsali, Rahul and Vyawahare, Aditya and Mandke, Aditya and Litake, Onkar and Kadam, Dipali | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1104--1110 | This paper presents the work of team PICT-NLP for the shared task on unsupervised and very low-resource supervised machine translation, organized by the Workshop on Machine Translation, held in conjunction with the Conference on Empirical Methods in Natural Language Processing (EMNLP 2022). The paper delineates the approaches we implemented for supervised and unsupervised translation between the following 6 language pairs: German-Lower Sorbian (de-dsb), Lower Sorbian-German (dsb-de), Lower Sorbian-Upper Sorbian (dsb-hsb), Upper Sorbian-Lower Sorbian (hsb-dsb), German-Upper Sorbian (de-hsb), and Upper Sorbian-German (hsb-de). For supervised learning, we implemented the transformer architecture from scratch using the Fairseq library, whereas for unsupervised learning, we implemented Facebook`s XLM masked language modeling approach. We discuss the training details for the models we used and the results obtained from our approaches. We used the BLEU and chrF metrics to evaluate the accuracy of the translations generated by our systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,158
inproceedings | signoroni-rychly-2022-muni | {MUNI}-{NLP} Systems for {L}ower {S}orbian-{G}erman and {L}ower {S}orbian-{U}pper {S}orbian Machine Translation @ {WMT}22 | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.109/ | Signoroni, Edoardo and Rychl{\'y}, Pavel | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1111--1116 | We describe our neural machine translation systems for the WMT22 shared task on unsupervised MT and very low resource supervised MT. We submit supervised NMT systems for Lower Sorbian-German and Lower Sorbian-Upper Sorbian translation in both directions. By using a novel tokenization algorithm, data augmentation techniques, such as Data Diversification (DD), and parameter optimization we improve on our baselines by 10.5-10.77 BLEU for Lower Sorbian-German and by 1.52-1.88 BLEU for Lower Sorbian-Upper Sorbian. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,159 |
inproceedings | shapiro-etal-2022-aic | The {AIC} System for the {WMT} 2022 Unsupervised {MT} and Very Low Resource Supervised {MT} Task | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.110/ | Shapiro, Ahmad and Salama, Mahmoud and Abdelhakim, Omar and Fayed, Mohamed and Khalafallah, Ayman and Adly, Noha | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1117--1121 | This paper presents our submissions to the WMT 22 shared task on Unsupervised and Very Low Resource Supervised Machine Translation. The task revolves around translating between German {\ensuremath{\leftrightarrow}} Upper Sorbian (de {\ensuremath{\leftrightarrow}} hsb), German {\ensuremath{\leftrightarrow}} Lower Sorbian (de {\ensuremath{\leftrightarrow}} dsb) and Upper Sorbian {\ensuremath{\leftrightarrow}} Lower Sorbian (hsb {\ensuremath{\leftrightarrow}} dsb) in both an unsupervised and a supervised manner. For the unsupervised system, we trained an unsupervised phrase-based statistical machine translation (UPBSMT) system on each pair independently. We pretrained a De-Slavic mBART model on the following languages: Polish (pl), Czech (cs), German (de), Upper Sorbian (hsb), and Lower Sorbian (dsb). We then fine-tuned our mBART on the synthetic parallel data generated by the UPBSMT model along with authentic parallel data (de {\ensuremath{\leftrightarrow}} pl, de {\ensuremath{\leftrightarrow}} cs). We further fine-tuned our unsupervised system on authentic parallel data (hsb {\ensuremath{\leftrightarrow}} dsb, de {\ensuremath{\leftrightarrow}} dsb, de {\ensuremath{\leftrightarrow}} hsb) to submit our supervised low-resource system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,160
inproceedings | dabre-2022-nict | {NICT} at {M}ix{MT} 2022: Synthetic Code-Mixed Pre-training and Multi-way Fine-tuning for {H}inglish{--}{E}nglish Translation | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.111/ | Dabre, Raj | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1122--1125 | In this paper, we describe our submission to the Code-mixed Machine Translation (MixMT) shared task. In MixMT, the objective is to translate Hinglish to English and vice versa. For our submissions, we focused on code-mixed pre-training and multi-way fine-tuning. Our submissions achieved rank 4 in terms of automatic evaluation score. For Hinglish to English translation, our submission achieved rank 4 as well. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,161 |
inproceedings | gahoi-etal-2022-gui | Gui at {M}ix{MT} 2022 : {E}nglish-{H}inglish : An {MT} Approach for Translation of Code Mixed Data | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.112/ | Gahoi, Akshat and Duneja, Jayant and Padhi, Anshul and Mangale, Shivam and Rajput, Saransh and Kamble, Tanvi and Sharma, Dipti and Varma, Vasudev | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1126--1130 | Code-mixed machine translation has become an important task in multilingual communities and extending the task of machine translation to code mixed data has become a common task for these languages. In the shared tasks of EMNLP 2022, we try to tackle the same for both English + Hindi to Hinglish and Hinglish to English. The first task dealt with both Roman and Devanagari script as we had monolingual data in both English and Hindi whereas the second task only had data in Roman script. To our knowledge, we achieved one of the top ROUGE-L and WER scores for the first task of Monolingual to Code-Mixed machine translation. In this paper, we discuss the use of mBART with some special pre-processing and post-processing (transliteration from Devanagari to Roman) for the first task in detail and the experiments that we performed for the second task of translating code-mixed Hinglish to monolingual English. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,162 |
inproceedings | hegde-lakshmaiah-2022-mucs | {MUCS}@{M}ix{MT}: {I}ndic{T}rans-based Machine Translation for {H}inglish Text | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.113/ | Hegde, Asha and Lakshmaiah, Shashirekha | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1131--1135 | Code-mixing is the phenomenon of mixing various linguistic units such as paragraphs, sentences, phrases, words, etc., of one language with those of another language in a text. This code-mixing is predominantly used by social media users who know more than one language. Processing code-mixed text is challenging because of its characteristics and the lack of tools that support such data. Further, pretrained models can be used for formal text but not for informal text such as code-mixed text. Developing efficient Machine Translation (MT) systems for code-mixed text is challenging due to the lack of code-mixed training data. Further, existing MT systems developed to translate monolingual data are not portable to code-mixed text, mainly due to its informal nature. To address the MT challenges of code-mixed text, this paper describes the MT models submitted by our team, MUCS, to the Code-mixed Machine Translation (MixMT) shared task at the Workshop on Machine Translation (WMT) organized in conjunction with Empirical Methods in Natural Language Processing (EMNLP) 2022. This shared task has two subtasks: i) subtask 1 - to translate English sentences and their corresponding Hindi translations into Hinglish text and ii) subtask 2 - to translate Hinglish text into English text. The proposed models, which translate code-mixed English text to Hinglish (English-Hindi code-mixed text) and vice-versa, comprise i) transliterating Hinglish text from Latin to Devanagari script and vice-versa, ii) pseudo-translation generation using existing models, and iii) efficient target generation by combining the pseudo translations with the training data provided by the shared task organizers. The proposed models obtained $5^{th}$ and $3^{rd}$ rank with Recall-Oriented Understudy for Gisting Evaluation (ROUGE) scores of 0.35806 and 0.55453 for subtask 1 and subtask 2 respectively. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,163
inproceedings | khan-etal-2022-sit | {SIT} at {M}ix{MT} 2022: Fluent Translation Built on Giant Pre-trained Models | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.114/ | Khan, Abdul and Kanade, Hrishikesh and Budhrani, Girish and Jhanglani, Preet and Xu, Jia | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1136--1144 | This paper describes the Stevens Institute of Technology`s submission for the WMT 2022 Shared Task: Code-mixed Machine Translation (MixMT). The task consisted of two subtasks, subtask 1 Hindi/English to Hinglish and subtask 2 Hinglish to English translation. Our findings lie in the improvements made through the use of large pre-trained multilingual NMT models and in-domain datasets, as well as back-translation and ensemble techniques. The translation output is automatically evaluated against the reference translations using ROUGE-L and WER. Our system achieves the 1st position on subtask 2 according to ROUGE-L, WER, and human evaluation, 1st position on subtask 1 according to WER and human evaluation, and 3rd position on subtask 1 with respect to ROUGE-L metric. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,164 |
inproceedings | kirefu-etal-2022-university | The {U}niversity of {E}dinburgh`s Submission to the {WMT}22 Code-Mixing Shared Task ({M}ix{MT}) | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.115/ | Kirefu, Faheem and Iyer, Vivek and Chen, Pinzhen and Burchell, Laurie | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1145--1157 | The University of Edinburgh participated in the WMT22 shared task on code-mixed translation. This consists of two subtasks: i) generating code-mixed Hindi/English (Hinglish) text from parallel Hindi and English sentences and ii) machine translation from Hinglish to English. As both subtasks are considered low-resource, we focused our efforts on careful data generation and curation, especially the use of backtranslation from monolingual resources. For subtask 1 we explored the effects of constrained decoding on English and transliterated subwords in order to produce Hinglish. For subtask 2, we investigated different pretraining techniques, namely comparing simple initialisation from existing machine translation models and aligned augmentation. For both subtasks, we found that our baseline systems worked best. Our systems for both subtasks were among the overall top-performing submissions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,165
inproceedings | laskar-etal-2022-cnlp-nits | {CNLP}-{NITS}-{PP} at {M}ix{MT} 2022: {H}inglish-{E}nglish Code-Mixed Machine Translation | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.116/ | Laskar, Sahinur Rahman and Singh, Rahul and Pandey, Shyambabu and Manna, Riyanka and Pakray, Partha and Bandyopadhyay, Sivaji | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1158--1161 | The mixing of two or more languages in speech or text is known as code-mixing. In this form of communication, users mix words and phrases from multiple languages. Code-mixing is very common in the context of Indian languages due to the presence of multilingual societies. Code-mixed sentences are likely to occur in almost all Indian languages, since in India English is the dominant language on social media communication platforms. We participated in the WMT22 shared task on code-mixed machine translation with the team name CNLP-NITS-PP. In this task, we prepared a synthetic Hinglish{--}English parallel corpus by transliterating original Hindi sentences to tackle the limited size of the parallel corpus; we mainly considered sentences from the available English-Hindi parallel corpus that contain named entities (proper nouns). With the addition of synthetic bi-text data to the original parallel corpus (train set), our transformer-based neural machine translation models attained recall-oriented understudy for gisting evaluation (ROUGE-L) scores of 0.23815 and 0.33729, and word error rate (WER) scores of 0.95458 and 0.88451, at Sub-Task-1 (English-to-Hinglish) and Sub-Task-2 (Hinglish-to-English) on the test sets respectively. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,166
inproceedings | raheem-etal-2022-domain | Domain Curricula for Code-Switched {MT} at {M}ix{MT} 2022 | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.117/ | Raheem, Lekan and Elrashid, Maab and Johnson, Melvin and Kreutzer, Julia | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1162--1169 | In multilingual colloquial settings, it is a habitual occurrence to compose expressions of text or speech containing tokens or phrases of different languages, a phenomenon popularly known as code-switching or code-mixing (CMX). We present our approach and results for the Code-mixed Machine Translation (MixMT) shared task at WMT 2022: the task consists of two subtasks, monolingual to code-mixed machine translation (Subtask-1) and code-mixed to monolingual machine translation (Subtask-2). Most non-synthetic code-mixed data come from social media, but gathering a significant amount of this kind of data would be laborious, and this form of data has more writing variation than other domains; for both subtasks, we therefore experimented with data schedules for out-of-domain data. We jointly learn multiple domains of text by pretraining and fine-tuning, combined with a sentence alignment objective. We found that switching between domains improved performance in the domains seen earliest during training, but degraded performance on the remaining domains. A continuous training run with strategically dispensed data from different domains showed significantly improved performance over fine-tuning. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,167
inproceedings | ailem-etal-2022-lingua | Lingua Custodia`s Participation at the {WMT} 2022 Word-Level Auto-completion Shared Task | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.118/ | Ailem, Melissa and Liu, Jingshu and Barthelemy, Jean-gabriel and Qader, Raheel | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1170--1175 | This paper presents Lingua Custodia`s submission to the WMT22 shared task on Word Level Auto-completion (WLAC). We consider two directions, namely German-English and English-German. The WLAC task in Neural Machine Translation (NMT) consists of predicting a target word given a few human-typed characters, the source sentence to translate, and some translation context. Inspired by recent work in terminology control, we propose to treat the human-typed sequence as a constraint and predict the right word starting with it. To do so, the source side of the training data is augmented with both the constraints and the translation context. In addition, following new advances in WLAC, we use a joint optimization strategy taking into account several types of translation context. The automatic and human accuracy scores obtained by the submitted systems show the effectiveness of the proposed method. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,168
inproceedings | moslem-etal-2022-translation | Translation Word-Level Auto-Completion: What Can We Achieve Out of the Box? | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.119/ | Moslem, Yasmin and Haque, Rejwanul and Way, Andy | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1176--1181 | Research on Machine Translation (MT) has achieved important breakthroughs in several areas. While there is much more to be done in order to build on this success, we believe that the language industry needs better ways to take full advantage of current achievements. Due to a combination of factors, including time, resources, and skills, businesses tend to apply pragmatism into their AI workflows. Hence, they concentrate more on outcomes, e.g. delivery, shipping, releases, and features, and adopt high-level working production solutions, where possible. Among the features thought to be helpful for translators are sentence-level and word-level translation auto-suggestion and auto-completion. Suggesting alternatives can inspire translators and limit their need to refer to external resources, which hopefully boosts their productivity. This work describes our submissions to WMT`s shared task on word-level auto-completion, for the Chinese-to-English, English-to-Chinese, German-to-English, and English-to-German language directions. We investigate the possibility of using pre-trained models and out-of-the-box features from available libraries. We employ random sampling to generate diverse alternatives, which reveals good results. Furthermore, we introduce our open-source API, based on CTranslate2, to serve translations, auto-suggestions, and auto-completions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,169 |
inproceedings | navarro-etal-2022-prhlts | {PRHLT}`s Submission to {WLAC} 2022 | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.120/ | Navarro, Angel and Domingo, Miguel and Casacuberta, Francisco | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1182--1186 | This paper describes our submission to the Word-Level AutoCompletion shared task of WMT22. We participated in the English{--}German and German{--}English categories. We proposed a segment-based interactive machine translation approach whose central core is a machine translation (MT) model which generates a complete translation from the context provided by the task. From there, we obtain the word which corresponds to the autocompletion. With this approach, we aim to show that it is possible to use the MT models in the autocompletion task by simply performing minor changes at the decoding step, obtaining satisfactory results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,170 |
inproceedings | yang-etal-2022-iigroup | {IIGROUP} Submissions for {WMT}22 Word-Level {A}uto{C}ompletion Task | Koehn, Philipp and Barrault, Lo{\"ic and Bojar, Ond{\v{rej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'ev{\'eol, Aur{\'elie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.121/ | Yang, Cheng and Li, Siheng and Shi, Chufan and Yang, Yujiu | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1187--1191 | This paper presents IIGroup`s submission to the WMT22 Word-Level AutoCompletion(WLAC) Shared Task in four language directions. We propose to use a Generate-then-Rerank framework to solve this task. More specifically, the generator is used to generate candidate words and recall as many positive candidates as possible. To facilitate the training process of the generator, we propose a span-level mask prediction task. Once we get the candidate words, we take the top-K candidates and feed them into the reranker. The reranker is used to select the most confident candidate. The experimental results in four language directions demonstrate the effectiveness of our systems. Our systems achieve competitive performance ranking 1st in English to Chinese subtask and 2nd in Chinese to English subtask. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,171 |
inproceedings | yang-etal-2022-hw-tscs | {HW}-{TSC}`s Submissions to the {WMT}22 Word-Level Auto Completion Task | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.122/ | Yang, Hao and Shang, Hengchao and Li, Zongyao and Wei, Daimeng and He, Xianghui and Chen, Xiaoyu and Yu, Zhengzhe and Guo, Jiaxin and Yang, Jinlong and Li, Shaojun and Luo, Yuanchang and Xie, Yuhao and Lei, Lizhi and Qin, Ying | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1192--1197 | This paper presents the submissions of Huawei Translation Services Center (HW-TSC) to the WMT 2022 Word-Level AutoCompletion Task. We propose an end-to-end autoregressive model with bi-context based on the Transformer to solve the current task. The model uses a mixture of subword and character encoding units to realize the joint encoding of the human input, the target-side context, and the decoded sequence, which ensures full utilization of the information. We use one model to handle the four types of data structures in the task. During training, we try using a machine translation model as the pre-trained model and fine-tune it for the task. We also add BERT-style MLM data at the fine-tuning stage to improve model performance. We participate in the zh$\rightarrow$en, en$\rightarrow$de, and de$\rightarrow$en directions and win first place in all three tracks. In particular, we outperform the second place by more than 5{\%} in terms of accuracy on the zh$\rightarrow$en and en$\rightarrow$de tracks. The result is buttressed by human evaluations as well, demonstrating the effectiveness of our model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,172
inproceedings | ge-etal-2022-tsmind | {TSM}ind: {A}libaba and Soochow University`s Submission to the {WMT}22 Translation Suggestion Task | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.123/ | Ge, Xin and Wang, Ke and Wang, Jiayi and Xiao, Nini and Duan, Xiangyu and Zhao, Yu and Zhang, Yuqi | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1198--1204 | This paper describes the joint submission of Alibaba and Soochow University to the WMT 2022 Shared Task on Translation Suggestion (TS). We participate in the English to/from German and English to/from Chinese tasks. Basically, we follow the paradigm of fine-tuning large-scale pre-trained models on downstream tasks, which has recently achieved great success. We choose FAIR`s WMT19 English to/from German news translation system and MBART50 for English to/from Chinese as our pre-trained models. Considering the task`s restriction on the use of training data, we follow the data augmentation strategies provided by Yang to boost our TS model performance. We further involve the dual conditional cross-entropy model and the GPT-2 language model to filter the augmented data. The final leaderboard shows that our submissions rank first in three of the four language directions in the Naive TS track of the WMT22 Translation Suggestion task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,173
inproceedings | hongbao-etal-2022-transns | Transn`s Submissions to the {WMT}22 Translation Suggestion Task | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.124/ | Hongbao, Mao and Wenbo, Zhang and Jie, Cai and Jianwei, Cheng | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1205--1210 | This paper describes Transn`s submissions to the WMT2022 shared task on Translation Suggestion. Our team participated in two subtasks: Naive Translation Suggestion and Translation Suggestion with Hints, focusing on two language directions: Zh{\textrightarrow}En and En{\textrightarrow}Zh. Apart from the golden training data provided by the shared task, we utilized a synthetic corpus to fine-tune DeltaLM ({\ensuremath{\Delta}}LM), which is a pre-trained encoder-decoder language model. We applied a two-stage training strategy to {\ensuremath{\Delta}}LM and several effective methods to generate the synthetic corpus, which contributed substantially to the results. According to the official evaluation results in terms of BLEU scores, our submissions to Naive Translation Suggestion En{\textrightarrow}Zh and Translation Suggestion with Hints (both Zh{\textrightarrow}En and En{\textrightarrow}Zh) ranked 1st, and Naive Translation Suggestion Zh{\textrightarrow}En also achieved a result comparable to the best score. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,174
inproceedings | zhang-etal-2022-improved | Improved Data Augmentation for Translation Suggestion | Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.wmt-1.125/ | Zhang, Hongxiao and Lai, Siyu and Zhang, Songming and Huang, Hui and Chen, Yufeng and Xu, Jinan and Liu, Jian | Proceedings of the Seventh Conference on Machine Translation (WMT) | 1211--1216 | Translation suggestion (TS) models are used to automatically provide alternative suggestions for incorrect spans in sentences generated by machine translation. This paper introduces the system used in our submission to the WMT`22 Translation Suggestion shared task. Our system is based on the ensemble of different translation architectures, including Transformer, SA-Transformer, and DynamicConv. We use three strategies to construct synthetic data from parallel corpora to compensate for the lack of supervised data. In addition, we introduce a multi-phase pre-training strategy, adding an additional pre-training phase with in-domain data. We rank second and third on the English-German and English-Chinese bidirectional tasks, respectively. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,175
inproceedings | park-lee-2022-unsupervised | Unsupervised Abstractive Dialogue Summarization with Word Graphs and {POV} Conversion | Hruschka, Estevam and Mitchell, Tom and Mladenic, Dunja and Grobelnik, Marko and Bhutani, Nikita | may | 2022 | (Hybrid) Dublin, Ireland, and Virtual | Association for Computational Linguistics | https://aclanthology.org/2022.wit-1.1/ | Park, Seongmin and Lee, Jihwa | Proceedings of the 2nd Workshop on Deriving Insights from User-Generated Text | 1--9 | We advance the state-of-the-art in unsupervised abstractive dialogue summarization by utilizing multi-sentence compression graphs. Starting from well-founded assumptions about word graphs, we present simple but reliable path-reranking and topic segmentation schemes. Robustness of our method is demonstrated on datasets across multiple domains, including meetings, interviews, movie scripts, and day-to-day conversations. We also identify possible avenues to augment our heuristic-based system with deep learning. We open-source our code, to provide a strong, reproducible baseline for future research into unsupervised dialogue summarization. | null | null | 10.18653/v1/2022.wit-1.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,177 |
inproceedings | miao-etal-2022-interactive | An Interactive Analysis of User-reported Long {COVID} Symptoms using {T}witter Data | Hruschka, Estevam and Mitchell, Tom and Mladenic, Dunja and Grobelnik, Marko and Bhutani, Nikita | may | 2022 | (Hybrid) Dublin, Ireland, and Virtual | Association for Computational Linguistics | https://aclanthology.org/2022.wit-1.2/ | Miao, Lin and Last, Mark and Litvak, Marina | Proceedings of the 2nd Workshop on Deriving Insights from User-Generated Text | 10--19 | With millions of documented recoveries from COVID-19 worldwide, various long-term sequelae have been observed in a large group of survivors. This paper is aimed at systematically analyzing user-generated conversations on Twitter that are related to long-term COVID symptoms for a better understanding of the Long COVID health consequences. Using an interactive information extraction tool built especially for this purpose, we extracted key information from the relevant tweets and analyzed the user-reported Long COVID symptoms with respect to their demographic and geographical characteristics. The results of our analysis are expected to improve the public awareness on long-term COVID-19 sequelae and provide important insights to public health authorities. | null | null | 10.18653/v1/2022.wit-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,178 |
inproceedings | tamire-etal-2022-bi | Bi-Directional Recurrent Neural Ordinary Differential Equations for Social Media Text Classification | Hruschka, Estevam and Mitchell, Tom and Mladenic, Dunja and Grobelnik, Marko and Bhutani, Nikita | may | 2022 | (Hybrid) Dublin, Ireland, and Virtual | Association for Computational Linguistics | https://aclanthology.org/2022.wit-1.3/ | Tamire, Maunika and Anumasa, Srinivas and Srijith, P. K. | Proceedings of the 2nd Workshop on Deriving Insights from User-Generated Text | 20--24 | Classification of posts in social media such as Twitter is difficult due to the noisy and short nature of texts. Sequence classification models based on recurrent neural networks (RNN) are popular for classifying posts that are sequential in nature. RNNs assume the hidden representation dynamics to evolve in a discrete manner and do not consider the exact time of the posting. In this work, we propose to use recurrent neural ordinary differential equations (RNODE) for social media post classification which consider the time of posting and allow the computation of hidden representation to evolve in a time-sensitive continuous manner. In addition, we propose a novel model, Bi-directional RNODE (Bi-RNODE), which can consider the information flow in both the forward and backward directions of posting times to predict the post label. Our experiments demonstrate that RNODE and Bi-RNODE are effective for the problem of stance classification of rumours in social media. | null | null | 10.18653/v1/2022.wit-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,179 |
inproceedings | huidrom-lepage-2022-introducing | Introducing {EM}-{FT} for {M}anipuri-{E}nglish Neural Machine Translation | Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.wildre-1.1/ | Huidrom, Rudali and Lepage, Yves | Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference | 1--6 | This paper introduces a pretrained word embedding for Manipuri, a low-resourced Indian language. The pretrained word embedding, based on FastText, is capable of handling the highly agglutinating language Manipuri (mni). We then perform machine translation (MT) experiments using neural network (NN) models. In this paper, we confirm the following observations. Firstly, the Transformer architecture achieves better BLEU scores with the FastText word embedding model EM-FT than without it in all the NMT experiments. Secondly, we observe that adding more training data from a domain different from that of the test data negatively impacts translation accuracy. The resources reported in this paper are made available in the ELRA catalogue to help the low-resourced languages community with MT/NLP tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,182
inproceedings | nayak-joshi-2022-l3cube | {L}3{C}ube-{H}ing{C}orpus and {H}ing{BERT}: A Code Mixed {H}indi-{E}nglish Dataset and {BERT} Language Models | Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.wildre-1.2/ | Nayak, Ravindra and Joshi, Raviraj | Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference | 7--12 | Code-switching occurs when more than one language is mixed in a given sentence or a conversation. This phenomenon is more prominent on social media platforms and its adoption is increasing over time. Therefore, code-mixed NLP has been extensively studied in the literature. As pre-trained transformer-based architectures are gaining popularity, we observe that real code-mixed data for pre-training large language models are scarce. We present L3Cube-HingCorpus, the first large-scale real Hindi-English code-mixed dataset in Roman script. It consists of 52.93M sentences and 1.04B tokens, scraped from Twitter. We further present HingBERT, HingMBERT, HingRoBERTa, and HingGPT. The BERT models have been pre-trained on the code-mixed HingCorpus using masked language modelling objectives. We show the effectiveness of these BERT models on downstream tasks like code-mixed sentiment analysis, POS tagging, NER, and LID from the GLUECoS benchmark. HingGPT is a GPT2-based generative transformer model capable of generating full tweets. Our models show significant improvements over currently available models pre-trained on multiple languages and synthetic code-mixed datasets. We also release L3Cube-HingLID Corpus, the largest code-mixed Hindi-English language identification (LID) dataset, and HingBERT-LID, a production-quality LID model, to facilitate the capture of more code-mixed data using the process outlined in this work. The dataset and models are available at \url{https://github.com/l3cube-pune/code-mixed-nlp}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,183
inproceedings | gautam-2022-leveraging | Leveraging Sub Label Dependencies in Code Mixed {I}ndian Languages for Part-Of-Speech Tagging using Conditional Random Fields | Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.wildre-1.3/ | Gautam, Akash Kumar | Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference | 13--17 | Code-mixed text sequences often lead to challenges in the task of correct identification of Part-Of-Speech tags. However, lexical dependencies created while alternating between multiple languages can be leveraged to improve the performance of such tasks. Indian languages with rich morphological structure and highly inflected nature provide such an opportunity. In this work, we exploit these sub-label dependencies using conditional random fields (CRFs) by defining feature extraction functions on three distinct language pairs (Hindi-English, Bengali-English, and Telugu-English). Our results demonstrate a significant increase in the tagging performance if the feature extraction functions employ the rich inner structure of such languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,184
inproceedings | yusuf-etal-2022-hindiwsd | {H}indi{WSD}: A package for word sense disambiguation in {H}inglish {\&} {H}indi | Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.wildre-1.4/ | Yusuf, Mirza and Surana, Praatibh and Sharma, Chethan | Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference | 18--23 | A lot of commendable work has been done, especially for high-resource languages such as English, Spanish, and French. However, the work done for Indic languages such as Hindi, Tamil, and Telugu is relatively limited due to the difficulty of finding relevant datasets and the complexity of these languages. With the advent of IndoWordnet, we can explore important tasks such as word sense disambiguation, word similarity, and cross-lingual information retrieval, and carry out effective research on them. In this paper, we worked on improving word sense disambiguation for 20 of the most common ambiguous Hindi words by making use of knowledge-based methods. We also came up with {\textquotedblleft}hindiwsd{\textquotedblright}, an easy-to-use framework developed in Python that acts as a pipeline for transliteration of Hinglish code-mixed text followed by spell correction, POS tagging, and word sense disambiguation of Hindi text. We also curated a dataset of these 20 most used ambiguous Hindi words. This dataset was then used to enhance a modified Lesk`s algorithm and more accurately carry out word sense disambiguation. We achieved an accuracy of about 71{\%} using our customized Lesk`s algorithm, an improvement over the accuracy of about 34{\%} obtained using the original Lesk`s algorithm on the test set. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,185
inproceedings | -chandra-2022-paninian | {P}{\={a}}ṇinian Phonological Changes: Computation and Development of Online Access System | Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.wildre-1.5/ | Sanju and Chandra, Subhash | Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference | 24--28 | P{\={a}}ṇini used the term saṃhit{\={a}} for phonological changes. Any sound change that alters phonemes in a particular language is called a phonological change. It arises when two sounds are pronounced in uninterrupted succession, so that they affect each other due to articulatory, acoustic, and auditory principles of language. The pronunciation of two sounds in extreme proximity affects and changes both. In simple words, this phenomenon is known as sandhi. Sanskrit is considered one of the oldest languages in the world and has produced one of the largest literary text corpora in the world. The tradition of Sanskrit started in the Vedic period. P{\={a}}ṇini`s Aṣṭ{\={a}}dhy{\={a}}y{\={i}} (AD) is a complete grammar of Sanskrit, which also covers Sanskrit sounds and phonology. Phonological changes are a natural phenomenon in any language during speech, but they are especially prominent in Sanskrit. Sanskrit corpora contain numerous long words; a passage can look like a single sentence due to sandhi between multiple words. These phonological changes follow certain rules of pronunciation, which P{\={a}}ṇini codified in the AD. P{\={a}}ṇini codified these rules systematically, but their computation is a challenging task. Therefore, the objective of the paper is to compute these rules and demonstrate an online access system for Sanskrit sandhi. The system also generates the whole process of phonological changes based on P{\={a}}ṇinian rules. It can also play a very effective role in digital classroom teaching, boosting teaching skills and the learning process. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,186
inproceedings | litake-etal-2022-l3cube | {L}3{C}ube-{M}aha{NER}: A {M}arathi Named Entity Recognition Dataset and {BERT} models | Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.wildre-1.6/ | Litake, Onkar and Sabane, Maithili Ravindra and Patil, Parth Sachin and Ranade, Aparna Abhijeet and Joshi, Raviraj | Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference | 29--34 | Named Entity Recognition (NER) is a basic NLP task and finds major applications in conversational and search systems. It helps us identify key entities in a sentence used for the downstream application. NER or similar slot filling systems for popular languages have been heavily used in commercial applications. In this work, we focus on Marathi, an Indian language, spoken prominently by the people of Maharashtra state. Marathi is a low resource language and still lacks useful NER resources. We present L3Cube-MahaNER, the first major gold standard named entity recognition dataset in Marathi. We also describe the manual annotation guidelines followed during the process. In the end, we benchmark the dataset on different CNN, LSTM, and Transformer based models like mBERT, XLM-RoBERTa, IndicBERT, MahaBERT, etc. The MahaBERT provides the best performance among all the models. The data and models are available at \url{https://github.com/l3cube-pune/MarathiNLP} . | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,187 |
inproceedings | sonu-etal-2022-identifying | Identifying Emotions in Code Mixed {H}indi-{E}nglish Tweets | Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.wildre-1.7/ | Sonu, Sanket and Haque, Rejwanul and Hasanuzzaman, Mohammed and Stynes, Paul and Pathak, Pramod | Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference | 35--41 | Emotion detection (ED) in tweets is a text classification problem that is of interest to Natural Language Processing (NLP) researchers. Code-mixing (CM) is a process of mixing linguistic units, such as words, of two different languages. The CM languages are characteristically different from the languages whose linguistic units are used for mixing. Whilst NLP has been shown to be successful for low-resource languages, it becomes challenging to perform NLP tasks on CM languages. As for ED, it has rarely been investigated for CM languages such as Hindi{--}English due to the lack of training data that is required for today`s data-driven classification algorithms. This research proposes a gold standard dataset for detecting emotions in CM Hindi{--}English tweets. This paper also presents the results of our investigation into the usefulness of our gold-standard dataset when testing a number of state-of-the-art classification algorithms. We found that the ED classifier built using SVM provided the highest accuracy (75.17{\%}) on the hold-out test set. This research would benefit the NLP community in detecting emotions from social media platforms in multilingual societies. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,188
inproceedings | nigam-chandra-2022-digital | Digital Accessibility and Information Mining of Dharma{\'s}{\={a}}stric Knowledge Traditions | Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.wildre-1.8/ | Nigam, Arooshi and Chandra, Subhash | Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference | 42--47 | The heritage of Dharma{\'s}{\={a}}stra (DS) carries an extensive cultural history and encapsulates the treatises of Ancient Indian Social Institutions (SI). DS is reckoned as an epitome of the ancient Indian knowledge tradition, as it incorporates a variety of genres in the sciences and arts, such as family law and legislation, civilization, culture, ritualistic procedures, environment, economics, commerce and finance studies, management, and the mathematical and medical sciences. SI represents a distinct tradition of civilization formation, society development, and community living. The texts of the DS are primarily written in Sanskrit and, owing to their expansive subject range, were later translated into various other languages globally. With the advent of the internet, the development of advanced digital technologies, and the IT boom, information is accessed and exchanged via digital platforms. DS texts are studied not only by Sanskrit scholars but are also consulted by historians, sociologists, political scientists, economists, law enthusiasts, and linguists worldwide. Despite its eminence, the digitization of and online information mining for DS texts lag behind. The major objective of the paper is to digitize DS texts and develop an instant referencing system to amplify their digital accessibility. This will act as an effective and immediate learning tool for researchers who are keen on studying DS concepts intensively. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,189
inproceedings | khenglawt-etal-2022-language | Language Resource Building and {E}nglish-to-Mizo Neural Machine Translation Encountering Tonal Words | Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.wildre-1.9/ | Khenglawt, Vanlalmuansangi and Laskar, Sahinur Rahman and Pal, Santanu and Pakray, Partha and Khan, Ajoy Kumar | Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference | 48--54 | A multilingual country like India has enormous linguistic diversity and an increasing demand for language resources that can extend the reach of various natural language processing applications, such as machine translation. Low-resource language translation poses challenges in the field of machine translation, including the limited availability of corpora and differences in linguistic information. This paper investigates a low-resource language pair, English-to-Mizo, exploring neural machine translation and contributing an Indian language resource, i.e., an English-Mizo corpus. In this work, we address one of the main challenges: handling the tonal words of the Mizo language, as they add complexity on top of the low-resource challenges for any natural language processing task. Our approach improves translation accuracy by handling the tonal words of Mizo and achieves a state-of-the-art result in English-to-Mizo translation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,190
inproceedings | cyriac-lalitha-devi-2022-classification | Classification of Multiword Expressions in {M}alayalam | Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.wildre-1.10/ | Cyriac, Treesa and Lalitha Devi, Sobha | Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference | 55--59 | Multiword expression is an interesting concept in languages, and the MWEs of a language are not easy for a non-native speaker to understand. They include lexicalized phrases, idioms, collocations, etc. Data on multiwords are helpful in language processing. {\textquoteleft}Multiword expressions in Malayalam' is a less studied area. In this paper, we try to explore multiwords in Malayalam and to classify them as per the three idiosyncrasies: semantic idiosyncrasy, syntactic idiosyncrasy, and statistical idiosyncrasy. Though these idiosyncrasies have already been identified, they have not been studied in Malayalam. The classification and its features are presented and studied using Malayalam multiwords. Through this study, we identified how the linguistic features of Malayalam, such as agglutination, influence its multiword expressions in terms of pronunciation and spelling. Malayalam has a set of code-mixed multiword expressions, which are also addressed in this study. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,191
inproceedings | majumdar-etal-2022-bengali | {B}engali and {M}agahi {PUD} Treebank and Parser | Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.wildre-1.11/ | Majumdar, Pritha and Alok, Deepak and Bansal, Akanksha and Ojha, Atul Kr. and McCrae, John P. | Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference | 60--67 | This paper presents the development of the Parallel Universal Dependency (PUD) Treebank for two Indo-Aryan languages: Bengali and Magahi. A treebank of 1,000 sentences has been created using a parallel corpus of English and the UD framework. A preliminary set of sentences was annotated manually - 600 for Bengali and 200 for Magahi. The rest of the sentences were built using the Bengali and Magahi parsers. The sentences have been translated and annotated manually by the authors, some of whom are also native speakers of the languages. The objective behind this work is to build a syntactically-annotated linguistic repository for the aforementioned languages that can prove to be a useful resource for building further NLP tools. Additionally, Bengali and Magahi parsers were also created, which are built on a machine learning approach. The accuracy of the Bengali parser is 78.13{\%} for UPOS, 76.99{\%} for XPOS, 56.12{\%} for UAS, and 47.19{\%} for LAS. The accuracy of the Magahi parser is 71.53{\%} for UPOS, 66.44{\%} for XPOS, 58.05{\%} for UAS, and 33.07{\%} for LAS. This paper also includes an illustration of the annotation schema followed, the findings of the Parallel Universal Dependency (PUD) treebank, and its resulting linguistic analysis. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,192
inproceedings | vaibhav-srivastava-2022-makadi | Makadi: A Large-Scale Human-Labeled Dataset for {H}indi Semantic Parsing | Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.wildre-1.12/ | Vaibhav, Shashwat and Srivastava, Nisheeth | Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference | 68--73 | Parsing natural language queries into formal database calls is a very well-studied problem. Because of the rich diversity of semantic markers across the world`s languages, progress in solving this problem is irreducibly language-dependent. This has created an asymmetry in progress on NLIDB solutions, with most state-of-the-art efforts focused on the resource-rich English language and limited progress seen for low-resource languages. In this short paper, we present Makadi, a large-scale, complex, cross-lingual, cross-domain text-to-SQL dataset for semantic parsing in the Hindi language. Produced by translating the recently introduced English-language Spider NLIDB dataset, it consists of 9693 questions and SQL queries on 166 databases with multiple tables which cover multiple domains. This is the first large-scale dataset in the Hindi language for semantic parsing and related language understanding tasks. Our dataset is publicly available at: Link removed to preserve anonymization during peer review. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,193