Dataset schema (column: type, value range or number of classes):

entry_type: stringclasses (4 values)
citation_key: stringlengths (10 to 110)
title: stringlengths (6 to 276)
editor: stringclasses (723 values)
month: stringclasses (69 values)
year: stringdate (1963-01-01 to 2022-01-01)
address: stringclasses (202 values)
publisher: stringclasses (41 values)
url: stringlengths (34 to 62)
author: stringlengths (6 to 2.07k)
booktitle: stringclasses (861 values)
pages: stringlengths (1 to 12)
abstract: stringlengths (302 to 2.4k)
journal: stringclasses (5 values)
volume: stringclasses (24 values)
doi: stringlengths (20 to 39)
n: stringclasses (3 values)
wer: stringclasses (1 value)
uas: null
language: stringclasses (3 values)
isbn: stringclasses (34 values)
recall: null
number: stringclasses (8 values)
a: null
b: null
c: null
k: null
f1: stringclasses (4 values)
r: stringclasses (2 values)
mci: stringclasses (1 value)
p: stringclasses (2 values)
sd: stringclasses (1 value)
female: stringclasses (0 values)
m: stringclasses (0 values)
food: stringclasses (1 value)
f: stringclasses (1 value)
note: stringclasses (20 values)
__index_level_0__: int64 (22k to 106k)
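Each record below is one row of this schema; for these rows, most of the metric-like columns (n, wer, uas, f1, ...) are null. A minimal sketch, assuming rows arrive as Python dicts with None for null columns (the helper name and the toy record values are hypothetical, not taken from the dataset), of rendering a row as a BibTeX entry while omitting null fields:

```python
# Render a dataset row (a dict where unused columns are None) as a
# BibTeX entry, keeping only the fields that are actually set.
FIELD_ORDER = ["title", "author", "editor", "booktitle", "month", "year",
               "address", "publisher", "url", "doi", "pages", "abstract"]

def row_to_bibtex(row):
    lines = ["@%s{%s," % (row["entry_type"], row["citation_key"])]
    for field in FIELD_ORDER:
        value = row.get(field)
        if value is not None:        # null columns are simply omitted
            lines.append('    %s = "%s",' % (field, value))
    lines.append("}")
    return "\n".join(lines)

# Toy record (values abbreviated, for illustration only):
row = {"entry_type": "inproceedings",
       "citation_key": "sabeh-etal-2022-openbrand",
       "title": "OpenBrand", "year": "2022", "doi": None}
print(row_to_bibtex(row))
```

Fields absent from the dict behave the same as explicit None values, so the same helper covers both sparse and fully populated rows.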
@inproceedings{sabeh-etal-2022-openbrand,
    title = "{O}pen{B}rand: Open Brand Value Extraction from Product Descriptions",
    author = "Sabeh, Kassem and Kacimi, Mouna and Gamper, Johann",
    editor = "Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya",
    booktitle = "Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.ecnlp-1.19/",
    doi = "10.18653/v1/2022.ecnlp-1.19",
    pages = "161--170",
    abstract = "Extracting attribute-value information from unstructured product descriptions continues to be of vital importance in e-commerce applications. One of the most important product attributes is the brand, which strongly influences customers' purchasing behaviour. It is therefore crucial to accurately extract brand information, the main challenge being the discovery of new brand names. Under the open-world assumption, several approaches have adopted deep learning models to extract attribute values using a sequence tagging paradigm. However, they did not employ finer-grained data representations, such as character-level embeddings, which improve generalizability. In this paper, we introduce OpenBrand, a novel approach for discovering brand names. OpenBrand is a BiLSTM-CRF-Attention model with embeddings at different granularities. These embeddings are learned using CNN and LSTM architectures to provide more accurate representations. We further propose a new dataset for brand value extraction, with a very challenging zero-shot extraction task. Through extensive experiments, we show that our approach outperforms state-of-the-art models in brand name discovery."
}
% __index_level_0__: 28,094
@inproceedings{nguyen-khatwani-2022-robust,
    title = "Robust Product Classification with Instance-Dependent Noise",
    author = "Nguyen, Huy and Khatwani, Devashish",
    editor = "Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya",
    booktitle = "Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.ecnlp-1.20/",
    doi = "10.18653/v1/2022.ecnlp-1.20",
    pages = "171--180",
    abstract = "Noisy labels in large e-commerce product data (i.e., product items placed into incorrect categories) are a critical issue for the product categorization task because they are unavoidable, non-trivial to remove, and degrade prediction performance significantly. Training a product title classification model that is robust to noisy labels in the data is very important for making product classification applications more practical. In this paper, we study the impact of instance-dependent noise on the performance of product title classification by comparing our data denoising algorithm with different noise-resistance training algorithms designed to prevent a classifier model from over-fitting to noise. We develop a simple yet effective deep neural network for product title classification to use as a base classifier. Along with recent methods for simulating instance-dependent noise, we propose a novel noise simulation algorithm based on product title similarity. Our experiments cover multiple datasets, various noise methods and different training solutions. The results uncover the limits of the classification task when the noise rate is not negligible and the data distribution is highly skewed."
}
% __index_level_0__: 28,095
@inproceedings{schamel-etal-2022-structured,
    title = "Structured Extraction of Terms and Conditions from {G}erman and {E}nglish Online Shops",
    author = "Schamel, Tobias and Braun, Daniel and Matthes, Florian",
    editor = "Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya",
    booktitle = "Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.ecnlp-1.21/",
    doi = "10.18653/v1/2022.ecnlp-1.21",
    pages = "181--190",
    abstract = "The automated analysis of Terms and Conditions has gained attention in recent years, mainly due to its relevance to consumer protection. Well-structured data sets are the basis for every analysis. While content extraction in general is a well-researched field and many open source libraries are available, our evaluation shows that existing solutions cannot extract Terms and Conditions with sufficient quality, mainly because of their special structure. In this paper, we present an approach to extract the content and hierarchy of Terms and Conditions from German and English online shops. Our evaluation shows that the approach outperforms the current state of the art. A Python implementation of the approach is made available under an open license."
}
% __index_level_0__: 28,096
@inproceedings{chia-etal-2022-come,
    title = "{\textquotedblleft}Does it come in black?{\textquotedblright} {CLIP}-like models are zero-shot recommenders",
    author = "Chia, Patrick John and Tagliabue, Jacopo and Bianchi, Federico and Greco, Ciro and Goncalves, Diogo",
    editor = "Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya",
    booktitle = "Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.ecnlp-1.22/",
    doi = "10.18653/v1/2022.ecnlp-1.22",
    pages = "191--198",
    abstract = "Product discovery is a crucial component of online shopping. However, item-to-item recommendations today do not allow users to explore changes along selected dimensions: given a query item, can a model suggest something similar but in a different color? We consider item recommendations of a comparative nature (e.g. {\textquotedblleft}something darker{\textquotedblright}) and show how CLIP-based models can support this use case in a zero-shot manner. Leveraging a large model built for fashion, we introduce GradREC and its industry potential, and offer a first rounded assessment of its strengths and weaknesses."
}
% __index_level_0__: 28,097
@inproceedings{braun-matthes-2022-clause,
    title = "Clause Topic Classification in {G}erman and {E}nglish Standard Form Contracts",
    author = "Braun, Daniel and Matthes, Florian",
    editor = "Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya",
    booktitle = "Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.ecnlp-1.23/",
    doi = "10.18653/v1/2022.ecnlp-1.23",
    pages = "199--209",
    abstract = "So-called standard form contracts, i.e. contracts that are drafted unilaterally by one party, like the terms and conditions of online shops or the terms of service of social networks, are cornerstones of our modern economy. Their processing is, therefore, of significant practical value. Often, the sheer size of these contracts allows the drafting party to hide unfavourable terms from the other party. In this paper, we compare different approaches for automatically classifying the topics of clauses in standard form contracts, based on a dataset of more than 6,000 clauses from more than 170 contracts, which we collected from German and English online shops and annotated based on a taxonomy of clause topics that we developed together with legal experts. We show that, in our comparison of seven approaches, from simple keyword matching to transformer language models, BERT performed best, with an F1-score of up to 0.91; however, much simpler and computationally cheaper models like logistic regression also achieved similarly good results of up to 0.87."
}
% __index_level_0__: 28,098
@inproceedings{roy-etal-2022-investigating,
    title = "Investigating the Generative Approach for Question Answering in {E}-Commerce",
    author = "Roy, Kalyani and Balapanuru, Vineeth and Nayak, Tapas and Goyal, Pawan",
    editor = "Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya",
    booktitle = "Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.ecnlp-1.24/",
    doi = "10.18653/v1/2022.ecnlp-1.24",
    pages = "210--216",
    abstract = "Many e-commerce websites provide a Product-related Question Answering (PQA) platform where potential customers can ask questions related to a product, and other consumers can post an answer to that question based on their experience. Recently, there has been growing interest in providing automated responses to product questions. In this paper, we investigate the suitability of the generative approach for PQA. We use state-of-the-art generative models proposed by Deng et al. (2020) and Lu et al. (2020) for this purpose. On closer examination, we find several drawbacks in this approach: (1) input reviews are not always utilized significantly for answer generation, (2) the performance of the models is abysmal when answering numerical questions, and (3) many of the generated answers contain phrases like {\textquotedblleft}I do not know{\textquotedblright} which are taken from the reference answers in the training data, and these answers do not convey any information to the customer. Although these approaches achieve a high ROUGE score, it does not reflect these shortcomings of the generated answers. We hope that our analysis will lead to more rigorous PQA approaches, and that future research will focus on addressing these shortcomings in PQA."
}
% __index_level_0__: 28,099
@inproceedings{chen-chou-2022-utilizing,
    title = "Utilizing Cross-Modal Contrastive Learning to Improve Item Categorization {BERT} Model",
    author = "Chen, Lei and Chou, Hou Wei",
    editor = "Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya",
    booktitle = "Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.ecnlp-1.25/",
    doi = "10.18653/v1/2022.ecnlp-1.25",
    pages = "217--223",
    abstract = "Item categorization (IC) is a core natural language processing (NLP) task in e-commerce. As a special text classification task, fine-tuning pre-trained models, e.g., BERT, has become a mainstream solution. To further improve IC performance, other product metadata, e.g., product images, have been used. Although multimodal IC (MIC) systems show higher performance, expanding from processing text to more resource-demanding images brings large engineering impacts and hinders the deployment of such dual-input MIC systems. In this paper, we propose a new way of using product images to improve a text-only IC model: leveraging cross-modal signals between products' titles and associated images to adapt BERT models in a self-supervised learning (SSL) way. Our experiments on the three genres in the public Amazon product dataset show that the proposed method yields better prediction accuracy and macro-F1 values than simply using the original BERT. Moreover, the proposed method can keep using the existing text-only IC inference implementation and has a resource advantage over deploying a dual-input MIC system."
}
% __index_level_0__: 28,100
@inproceedings{liu-etal-2022-towards,
    title = "Towards Generalizeable Semantic Product Search by Text Similarity Pre-training on Search Click Logs",
    author = "Liu, Zheng and Zhang, Wei and Chen, Yan and Sun, Weiyi and Du, Tianchuan and Schroeder, Benjamin",
    editor = "Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya",
    booktitle = "Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.ecnlp-1.26/",
    doi = "10.18653/v1/2022.ecnlp-1.26",
    pages = "224--233",
    abstract = "Recently, semantic search has been successfully applied to e-commerce product search, and the learned semantic space for query and product encoding is expected to generalize well to unseen queries or products. Yet, whether generalization can conveniently emerge has not been thoroughly studied in this domain so far. In this paper, we examine several general-domain and domain-specific pre-trained RoBERTa variants and discover that general-domain fine-tuning does not really help generalization, which aligns with the findings of prior art, yet proper domain-specific fine-tuning with clickstream data can lead to better model generalization, based on a bucketed analysis of a manually annotated query-product relevance dataset."
}
% __index_level_0__: 28,101
@inproceedings{koto-etal-2022-pretrained,
    title = "Can Pretrained Language Models Generate Persuasive, Faithful, and Informative Ad Text for Product Descriptions?",
    author = "Koto, Fajri and Lau, Jey Han and Baldwin, Timothy",
    editor = "Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya",
    booktitle = "Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.ecnlp-1.27/",
    doi = "10.18653/v1/2022.ecnlp-1.27",
    pages = "234--243",
    abstract = "For any e-commerce service, persuasive, faithful, and informative product descriptions can attract shoppers and improve sales. While not all sellers are capable of providing such interesting descriptions, a language generation system can be a source of such descriptions at scale, and can potentially assist sellers in improving their product descriptions. Most previous work has addressed this task based on statistical approaches (Wang et al., 2017), limited attributes such as titles (Chen et al., 2019; Chan et al., 2020), and only one product type (Wang et al., 2017; Munigala et al., 2018; Hong et al., 2021). In this paper, we jointly train on image features and 10 text attributes across 23 diverse product types, with two target text types with different writing styles: bullet points and paragraph descriptions. Our findings suggest that multimodal training with modern pretrained language models can generate fluent and persuasive advertisements, but these are less faithful and informative, especially out of domain."
}
% __index_level_0__: 28,102
@inproceedings{joshi-singh-2022-simple,
    title = "A Simple Baseline for Domain Adaptation in End to End {ASR} Systems Using Synthetic Data",
    author = "Joshi, Raviraj and Singh, Anupam",
    editor = "Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya",
    booktitle = "Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.ecnlp-1.28/",
    doi = "10.18653/v1/2022.ecnlp-1.28",
    pages = "244--249",
    abstract = "Automatic Speech Recognition (ASR) has been dominated by deep learning-based end-to-end speech recognition models. These approaches require large amounts of labeled data in the form of audio-text pairs. Moreover, these models are more susceptible to domain shift than traditional models. It is common practice to train generic ASR models and then adapt them to target domains using comparatively smaller data sets. We consider a more extreme case of domain adaptation where only a text corpus is available. In this work, we propose a simple baseline technique for domain adaptation in end-to-end speech recognition models. We convert the text-only corpus to audio data using a single-speaker Text-to-Speech (TTS) engine. The parallel data in the target domain is then used to fine-tune the final dense layer of generic ASR models. We show that single-speaker synthetic TTS data coupled with fine-tuning of only the final dense layer provides reasonable improvements in word error rates. We use text data from the address and e-commerce search domains to show the effectiveness of our low-cost baseline approach on CTC- and attention-based models."
}
% __index_level_0__: 28,103
@inproceedings{lavee-guy-2022-lot,
    title = "Lot or Not: Identifying Multi-Quantity Offerings in {E}-Commerce",
    author = "Lavee, Gal and Guy, Ido",
    editor = "Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya",
    booktitle = "Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.ecnlp-1.29/",
    doi = "10.18653/v1/2022.ecnlp-1.29",
    pages = "250--262",
    abstract = "The term \textit{lot} is defined to mean an offering that contains a collection of multiple identical items for sale. In a large online marketplace, lot offerings play an important role, allowing buyers and sellers to set price levels that optimally balance supply and demand needs. In spite of their central role, platforms often struggle to identify lot offerings, since explicit lot status identification is frequently not provided by sellers. The ability to identify lot offerings plays a key role in many fundamental tasks, from matching offerings to catalog products, through ranking search results, to providing effective pricing guidance. In this work, we seek to determine the lot status (and lot size) of each offering, in order to facilitate an improved buyer experience, while reducing friction for sellers posting new offerings. We demonstrate experimentally the ability to accurately classify offerings as lots and predict their lot size using only the offer title, by adapting state-of-the-art natural language techniques to the lot identification problem."
}
% __index_level_0__: 28,104
@inproceedings{pham-etal-2022-multi,
    title = "Multi-Domain Adaptation in Neural Machine Translation with Dynamic Sampling Strategies",
    author = "Pham, Minh-Quang and Crego, Josep and Yvon, Fran{\c{c}}ois",
    editor = "Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot",
    booktitle = "Proceedings of the 23rd Annual Conference of the European Association for Machine Translation",
    month = jun,
    year = "2022",
    address = "Ghent, Belgium",
    publisher = "European Association for Machine Translation",
    url = "https://aclanthology.org/2022.eamt-1.4/",
    pages = "13--22",
    abstract = "Building effective Neural Machine Translation models often implies accommodating diverse sets of heterogeneous data so as to optimize performance for the domain(s) of interest. Such multi-source/multi-domain adaptation problems are typically approached through instance selection or reweighting strategies, based on a static assessment of the relevance of training instances with respect to the task at hand. In this paper, we study dynamic data selection strategies that are able to automatically re-evaluate the usefulness of data samples and to evolve a data selection policy in the course of training. Based on the results of multiple experiments, we show that such methods constitute a generic framework to automatically and effectively handle a variety of real-world situations, from multi-source domain adaptation to multi-domain learning and unsupervised domain adaptation."
}
% __index_level_0__: 28,109
@inproceedings{loock-etal-2022-use,
    title = "The use of online translators by students not enrolled in a professional translation program: beyond copying and pasting for a professional use",
    author = "Loock, Rudy and L{\'e}chauguette, Sophie and Holt, Benjamin",
    editor = "Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot",
    booktitle = "Proceedings of the 23rd Annual Conference of the European Association for Machine Translation",
    month = jun,
    year = "2022",
    address = "Ghent, Belgium",
    publisher = "European Association for Machine Translation",
    url = "https://aclanthology.org/2022.eamt-1.5/",
    pages = "23--29",
    abstract = "In this paper, we discuss a use of machine translation (MT) that has been quite overlooked up to now, namely by students not enrolled in a professional translation program. A number of studies have reported massive use of free online translators (OTs), and it seems important to uncover such users' abilities and difficulties when using MT output, whether to improve their understanding, writing, or translation skills. We report here a study on students enrolled in a French {\textquoteleft}applied languages{\textquoteright} program (where students study two languages, as well as law, economics, and management). The aim was to uncover how they use OTs, as well as their (in)ability to identify and correct MT errors. Obtained through two online surveys and several tests conducted with students from 2020 to 2022, our results show an unsurprisingly widespread use of OTs for many different tasks, but also some specific difficulties in identifying MT errors, in particular in relation to target-language fluency."
}
% __index_level_0__: 28,110
@inproceedings{soto-etal-2022-comparing,
    title = "Comparing and combining tagging with different decoding algorithms for back-translation in {NMT}: learnings from a low resource scenario",
    author = "Soto, Xabier and Perez-De-Vi{\~n}aspre, Olatz and Labaka, Gorka and Oronoz, Maite",
    editor = "Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot",
    booktitle = "Proceedings of the 23rd Annual Conference of the European Association for Machine Translation",
    month = jun,
    year = "2022",
    address = "Ghent, Belgium",
    publisher = "European Association for Machine Translation",
    url = "https://aclanthology.org/2022.eamt-1.6/",
    pages = "31--40",
    abstract = "Back-translation is a well-established approach to improving the performance of Neural Machine Translation (NMT) systems when large monolingual corpora of the target language and domain are available. Recently, diverse approaches have been proposed to obtain better automatic evaluation results for NMT models using back-translation, including the use of sampling instead of beam search as the decoding algorithm for creating the synthetic corpus. Alternatively, it has been proposed to append a tag to the back-translated corpus to help the NMT system distinguish the synthetic bilingual corpus from the authentic one. However, not all combinations of the previous approaches have been tested, and thus it is not clear which is the best approach for developing a given NMT system. In this work, we empirically compare and combine existing techniques for back-translation in a real low-resource setting: the translation of clinical notes from Basque into Spanish. Apart from automatically evaluating the MT systems, we ask bilingual healthcare workers to perform a human evaluation, and analyze the different synthetic corpora by measuring their lexical diversity (LD). For reproducibility and generalizability, we repeat our experiments for German-to-English translation using public data. The results suggest that in lower-resource scenarios tagging only helps when using sampling for decoding, contradicting the previous literature using bigger corpora from the news domain. When fine-tuning with a few thousand bilingual in-domain sentences, one of our proposed methods (tagged restricted sampling) obtains the best results in terms of both automatic and human evaluation. We will publish the code upon acceptance."
}
% __index_level_0__: 28,111
@inproceedings{pu-simaan-2022-passing,
    title = "Passing Parser Uncertainty to the Transformer: Labeled Dependency Distributions for Neural Machine Translation",
    author = "Liu, Dongqi and Sima{'}an, Khalil",
    editor = "Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot",
    booktitle = "Proceedings of the 23rd Annual Conference of the European Association for Machine Translation",
    month = jun,
    year = "2022",
    address = "Ghent, Belgium",
    publisher = "European Association for Machine Translation",
    url = "https://aclanthology.org/2022.eamt-1.7/",
    pages = "41--50",
    abstract = "Existing syntax-enriched neural machine translation (NMT) models work either with the single most-likely unlabeled parse or with the set of n-best unlabeled parses coming out of an external parser. Passing a single parse or n-best parses to the NMT model risks propagating parse errors. Furthermore, unlabeled parses represent only syntactic groupings without their linguistically relevant categories. In this paper we explore the question: does passing both parser uncertainty and labeled syntactic knowledge to the Transformer improve its translation performance? This paper contributes a novel method for infusing the whole labeled dependency distributions (LDD) of the source sentence's dependency forest into the self-attention mechanism of the encoder of the Transformer. A range of experimental results on three language pairs demonstrate that the proposed approach outperforms both the vanilla Transformer and the single best-parse Transformer model across several evaluation metrics."
}
% __index_level_0__: 28,112
@inproceedings{pluymaekers-2022-well,
    title = "How well do real-time machine translation apps perform in practice? Insights from a literature review",
    author = "Pluymaekers, Mark",
    editor = "Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot",
    booktitle = "Proceedings of the 23rd Annual Conference of the European Association for Machine Translation",
    month = jun,
    year = "2022",
    address = "Ghent, Belgium",
    publisher = "European Association for Machine Translation",
    url = "https://aclanthology.org/2022.eamt-1.8/",
    pages = "51--60",
    abstract = "Although more and more professionals are using real-time machine translation during dialogues with interlocutors who speak a different language, the performance of real-time MT apps has received only limited attention in the academic literature. This study summarizes the findings of prior studies (N = 34) reporting an evaluation of one or more real-time MT apps in a professional setting. Our findings show that real-time MT apps are often tested in realistic circumstances and that users are more frequently employed as judges of performance than professional translators. Furthermore, most studies report overall positive results with regard to performance, particularly when apps are tested in real-life situations."
}
% __index_level_0__: 28,113
@inproceedings{rei-etal-2022-searching,
    title = "Searching for {COMETINHO}: The Little Metric That Could",
    author = "Rei, Ricardo and Farinha, Ana C and de Souza, Jos{\'e} G.C. and Ramos, Pedro G. and Martins, Andr{\'e} F.T. and Coheur, Luisa and Lavie, Alon",
    editor = "Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot",
    booktitle = "Proceedings of the 23rd Annual Conference of the European Association for Machine Translation",
    month = jun,
    year = "2022",
    address = "Ghent, Belgium",
    publisher = "European Association for Machine Translation",
    url = "https://aclanthology.org/2022.eamt-1.9/",
    pages = "61--70",
    abstract = "In recent years, several neural fine-tuned machine translation evaluation metrics such as COMET and BLEURT have been proposed. These metrics achieve much higher correlations with human judgments than lexical overlap metrics, at the cost of computational efficiency and simplicity, limiting their application to scenarios in which one has to score thousands of translation hypotheses (e.g. scoring multiple systems or Minimum Bayes Risk decoding). In this paper, we explore optimization techniques, pruning, and knowledge distillation to create more compact and faster COMET versions. Our results show that just by optimizing the code through the use of caching and length batching we can reduce inference time between 39{\%} and 65{\%} when scoring multiple systems. Also, we show that pruning COMET can lead to a 21{\%} model reduction without affecting the model's accuracy beyond 0.01 Kendall tau correlation. Furthermore, we present DISTIL-COMET, a lightweight distilled version that is 80{\%} smaller and 2.128x faster while attaining performance close to the original model and above strong baselines such as BERTSCORE and PRISM."
}
% __index_level_0__: 28,114
@inproceedings{volkart-bouillon-2022-studying,
    title = "Studying Post-Editese in a Professional Context: A Pilot Study",
    author = "Volkart, Lise and Bouillon, Pierrette",
    editor = "Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot",
    booktitle = "Proceedings of the 23rd Annual Conference of the European Association for Machine Translation",
    month = jun,
    year = "2022",
    address = "Ghent, Belgium",
    publisher = "European Association for Machine Translation",
    url = "https://aclanthology.org/2022.eamt-1.10/",
    pages = "71--79",
    abstract = "The past few years have seen a multiplication of studies on post-editese, following the massive adoption of post-editing in professional translation workflows. These studies mainly rely on the comparison of post-edited machine translation and human translation on artificial parallel corpora. By contrast, we investigate here post-editese on comparable corpora of authentic translation jobs for the language direction English into French. We explore commonly used scores and also propose the use of a novel metric. Our analysis shows that post-edited machine translation is not only lexically poorer than human translation, but also less dense and less varied in terms of translation solutions. It also tends to be more prolific than human translation for our language direction. Finally, our study highlights some of the challenges of working with comparable corpora in post-editese research."
}
% __index_level_0__: 28,115
inproceedings
wang-etal-2022-diformer
Diformer: Directional Transformer for Neural Machine Translation
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.11/
Wang, Minghan and Guo, Jiaxin and Wang, Yuxia and Wei, Daimeng and Shang, Hengchao and Li, Yinglu and Su, Chang and Chen, Yimeng and Zhang, Min and Tao, Shimin and Yang, Hao
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
81--90
Autoregressive (AR) and non-autoregressive (NAR) models each have their own strengths in performance and latency, and combining them into one model may take advantage of both. Current combination frameworks focus more on the integration of multiple decoding paradigms with a unified generative model, e.g. a Masked Language Model. However, this generalization can harm performance due to the gap between training objective and inference. In this paper, we aim to close the gap by preserving the original objectives of AR and NAR under a unified framework. Specifically, we propose the Directional Transformer (Diformer) by jointly modelling AR and NAR into three generation directions (left-to-right, right-to-left and straight) with a newly introduced direction variable, which works by controlling the prediction of each token to have specific dependencies under that direction. The unification achieved by direction successfully preserves the original dependency assumptions used in AR and NAR, retaining both generalization and performance. Experiments on 4 WMT benchmarks demonstrate that Diformer outperforms current unified-modelling works by more than 1.5 BLEU points for both AR and NAR decoding, and is also competitive with the state-of-the-art independent AR and NAR models.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,116
inproceedings
purason-tattar-2022-multilingual
Multilingual Neural Machine Translation With the Right Amount of Sharing
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.12/
Purason, Taido and T{\"a}ttar, Andre
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
91--100
Large multilingual Transformer-based machine translation models have had a pivotal role in making translation systems available for hundreds of languages with good zero-shot translation performance. One such example is the universal model with shared encoder-decoder architecture. Additionally, jointly trained language-specific encoder-decoder systems have been proposed for multilingual neural machine translation (NMT) models. This work investigates various knowledge-sharing approaches on the encoder side while keeping the decoder language- or language-group-specific. We propose a novel approach, where we use universal, language-group-specific and language-specific modules to solve the shortcomings of both the universal models and models with language-specific encoders-decoders. Experiments on a multilingual dataset set up to model real-world scenarios, including zero-shot and low-resource translation, show that our proposed models achieve higher translation quality compared to purely universal and language-specific approaches.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,117
inproceedings
macken-etal-2022-literary
Literary translation as a three-stage process: machine translation, post-editing and revision
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.13/
Macken, Lieve and Vanroy, Bram and Desmet, Luca and Tezcan, Arda
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
101--110
This study focuses on English-Dutch literary translations that were created in a professional environment using an MT-enhanced workflow consisting of a three-stage process of automatic translation followed by post-editing and (mainly) monolingual revision. We compare the three successive versions of the target texts. We used different automatic metrics to measure the (dis)similarity between the consecutive versions and analyzed the linguistic characteristics of the three translation variants. Additionally, on a subset of 200 segments, we manually annotated all errors in the machine translation output and classified the different editing actions that were carried out. The results show that more editing occurred during revision than during post-editing and that the types of editing actions were different.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,118
inproceedings
alex-r-atrio-popescu-belis-2022-interaction
On the Interaction of Regularization Factors in Low-resource Neural Machine Translation
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.14/
Atrio, {\`A}lex R. and Popescu-Belis, Andrei
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
111--120
We explore the roles and interactions of the hyper-parameters governing regularization, and propose a range of values applicable to low-resource neural machine translation. We demonstrate that default or recommended values for high-resource settings are not optimal for low-resource ones, and that more aggressive regularization is needed when resources are scarce, in proportion to their scarcity. We explain our observations by the generalization abilities of sharp vs. flat basins in the loss landscape of a neural network. Results for four regularization factors corroborate our claim: batch size, learning rate, dropout rate, and gradient clipping. Moreover, we show that optimal results are obtained when using several of these factors, and that our findings generalize across datasets of different sizes and languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,119
inproceedings
vincent-etal-2022-controlling-extra
Controlling Extra-Textual Attributes about Dialogue Participants: A Case Study of {E}nglish-to-{P}olish Neural Machine Translation
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.15/
Vincent, Sebastian T. and Barrault, Lo{\"i}c and Scarton, Carolina
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
121--130
Unlike English, morphologically rich languages can reveal characteristics of speakers or their conversational partners, such as gender and number, via pronouns, morphological endings of words and syntax. When translating from English to such languages, a machine translation model needs to opt for a certain interpretation of textual context, which may lead to serious translation errors if extra-textual information is unavailable. We investigate this challenge in the English-to-Polish language direction. We focus on the underresearched problem of utilising external metadata in automatic translation of TV dialogue, proposing a case study where a wide range of approaches for controlling attributes in translation is employed in a multi-attribute scenario. The best model achieves an improvement of +5.81 chrF++/+6.03 BLEU, with other models achieving competitive performance. We additionally contribute a novel attribute-annotated dataset of Polish TV dialogue and a morphological analysis script used to evaluate attribute control in models.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,120
inproceedings
kambhatla-etal-2022-auxiliary
Auxiliary Subword Segmentations as Related Languages for Low Resource Multilingual Translation
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.16/
Kambhatla, Nishant and Born, Logan and Sarkar, Anoop
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
131--140
We propose a novel technique that combines alternative subword tokenizations of a single source-target language pair that allows us to leverage multilingual neural translation training methods. These alternate segmentations function like related languages in multilingual translation. Overall this improves translation accuracy for low-resource languages and produces translations that are lexically diverse and morphologically rich. We also introduce a cross-teaching technique which yields further improvements in translation accuracy and cross-lingual transfer between high- and low-resource language pairs. Compared to other strong multilingual baselines, our approach yields average gains of +1.7 BLEU across the four low-resource datasets from the multilingual TED-talks dataset. Our technique does not require additional training data and is a drop-in improvement for any existing neural translation system.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,121
inproceedings
mota-etal-2022-fast
Fast-Paced Improvements to Named Entity Handling for Neural Machine Translation
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.17/
Mota, Pedro and Cabarr{\~a}o, Vera and Farah, Eduardo
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
141--149
In this work, we propose a Named Entity handling approach to improve translation quality within an existing Natural Language Processing (NLP) pipeline without modifying the Neural Machine Translation (NMT) component. Our approach seeks to enable fast delivery of such improvements and alleviate user experience problems related to NE distortion. We implement separate NE recognition and translation steps. Then, a combination of standard entity masking technique and a novel semantic equivalent placeholder guarantees that both NE translation is respected and the best overall quality is obtained from NMT. The experiments show that translation quality improves in 38.6{\%} of the test cases when compared to a version of the NLP pipeline with less-developed NE handling capability.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,122
inproceedings
kramchaninova-defauw-2022-synthetic
Synthetic Data Generation for Multilingual Domain-Adaptable Question Answering Systems
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.18/
Kramchaninova, Alina and Defauw, Arne
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
151--160
Deep learning models have significantly advanced the state of the art of question answering systems. However, the majority of datasets available for training such models have been annotated by humans, are open-domain, and are composed primarily in English. To deal with these limitations, we introduce a pipeline that creates synthetic data from natural text. To illustrate the domain-adaptability of our approach, as well as its multilingual potential, we use our pipeline to obtain synthetic data in English and Dutch. We combine the synthetic data with non-synthetic data (SQuAD 2.0) and evaluate multilingual BERT models on the question answering task. Models trained with synthetically augmented data demonstrate a clear improvement in performance when evaluated on the domain-specific test set, compared to the models trained exclusively on SQuAD 2.0. We expect our work to be beneficial for training domain-specific question-answering systems when the amount of available data is limited.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,123
inproceedings
van-der-werff-etal-2022-automatic
Automatic Discrimination of Human and Neural Machine Translation: A Study with Multiple Pre-Trained Models and Longer Context
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.19/
van der Werff, Tobias and van Noord, Rik and Toral, Antonio
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
161--170
We address the task of automatically distinguishing between human-translated (HT) and machine translated (MT) texts. Following recent work, we fine-tune pre-trained language models (LMs) to perform this task. Our work differs in that we use state-of-the-art pre-trained LMs, as well as the test sets of the WMT news shared tasks as training data, to ensure the sentences were not seen during training of the MT system itself. Moreover, we analyse performance for a number of different experimental setups, such as adding translationese data, going beyond the sentence-level and normalizing punctuation. We show that (i) choosing a state-of-the-art LM can make quite a difference: our best baseline system (DeBERTa) outperforms both BERT and RoBERTa by over 3{\%} accuracy, (ii) adding translationese data is only beneficial if there is not much data available, (iii) considerable improvements can be obtained by classifying at the document-level and (iv) normalizing punctuation and thus avoiding (some) shortcuts has no impact on model performance.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,124
inproceedings
sharou-specia-2022-taxonomy
A Taxonomy and Study of Critical Errors in Machine Translation
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.20/
Sharou, Khetam Al and Specia, Lucia
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
171--180
Not all machine mistranslations are equal. For example, mistranslating a date or time in an appointment, mistranslating the number or currency in a contract, or hallucinating profanity may lead to consequences for the users even when MT is just used for gisting. The severity of the errors is an important, but overlooked, aspect of MT quality evaluation. In this paper, we present the result of our effort to bring awareness to the problem of critical translation errors. We study, validate and improve an initial taxonomy of critical errors with a view to providing guidance for critical error analysis, annotation and mitigation. We test the taxonomy on three different languages to examine to what extent it generalises across languages. We provide an account of factors that affect annotation tasks along with recommendations on how to improve the practice in future work. We also study the impact of the source text on generating critical errors in the translation and, based on this, propose a set of recommendations on aspects of the MT that need further scrutiny, especially for user-generated content, to avoid generating such errors, and hence improve online communication.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,125
inproceedings
nowakowski-etal-2022-neyron
n{EY}ron: Implementation and Deployment of an {MT} System for a Large Audit {\&} Consulting Corporation
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.21/
Nowakowski, Artur and Jassem, Krzysztof and Lison, Maciej and Jaworski, Rafa{\l} and Dwojak, Tomasz and Wiater, Karolina and Posesor, Olga
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
183--189
This paper reports on the implementation and deployment of an MT system in the Polish branch of EY Global Limited. The system supports standard CAT and MT functionalities such as translation memory fuzzy search, document translation and post-editing, and meets less common, customer-specific expectations. The deployment began in August 2018 with a Proof of Concept, and ended with the signing of the Final Version acceptance certificate in October 2021. We present the challenges that were faced during the deployment, particularly in relation to the security check and installation processes in the production environment.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,126
inproceedings
buschbeck-etal-2022-hi
{\textquotedblleft}Hi, how can {I} help you?{\textquotedblright} Improving Machine Translation of Conversational Content in a Business Context
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.22/
Buschbeck, Bianka and Mell, Jennifer and Exel, Miriam and Huck, Matthias
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
191--200
This paper addresses the automatic translation of conversational content in a business context, for example support chat dialogues. While such use cases share characteristics with other informal machine translation scenarios, translation requirements with respect to technical and business-related expressions are high. To succeed in such scenarios, we experimented with curating dedicated training and test data, injecting noise to improve robustness, and applying sentence weighting schemes to carefully manage the influence of the different corpora. We show that our approach improves the performance of our models on conversational content for all 18 investigated language pairs while preserving translation quality on other domains - an indispensable requirement to integrate these developments into our MT engines at SAP.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,127
inproceedings
goncalves-etal-2022-agent
Agent and User-Generated Content and its Impact on Customer Support {MT}
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.23/
Gon{\c{c}}alves, Madalena and Buchicchio, Marianna and Stewart, Craig and Moniz, Helena and Lavie, Alon
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
201--210
This paper illustrates a new evaluation framework developed at Unbabel for measuring the quality of source language text and its effect on both Machine Translation (MT) and Human Post-Edition (PE) performed by non-professional post-editors. We examine both agent and user-generated content from the Customer Support domain and propose that differentiating the two is crucial to obtaining high quality translation output. Furthermore, we present results of initial experimentation with a new evaluation typology based on the Multidimensional Quality Metrics (MQM) Framework (Lommel et al., 2014), specifically tailored toward the evaluation of source language text. We show how the MQM Framework (Lommel et al., 2014) can be adapted to assess errors of monolingual source texts and demonstrate how very specific source errors propagate to the MT and PE targets. Finally, we illustrate how MT systems are not robust enough to handle very specific source noise in the context of Customer Support data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,128
inproceedings
menezes-etal-2022-case
A Case Study on the Importance of Named Entities in a Machine Translation Pipeline for Customer Support Content
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.24/
Menezes, Miguel and Cabarr{\~a}o, Vera and Mota, Pedro and Moniz, Helena and Lavie, Alon
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
211--219
This paper describes the research developed at Unbabel, a Portuguese machine-translation start-up that combines MT with human post-edition and focuses strictly on customer service content. We aim to contribute to furthering MT quality and good practices by exposing the importance of having a continuously-in-development robust Named Entity Recognition system compliant with the General Data Protection Regulation (GDPR). Moreover, we have tested semiautomatic strategies that support and enhance the creation of Named Entities gold standards to allow a more seamless implementation of Multilingual Named Entities Recognition Systems. The project described in this paper is the result of shared work between Unbabel's linguists and Unbabel's AI engineering team, matured over a year. The project should also be taken as a statement of multidisciplinarity, proving and validating the much-needed articulation between the different scientific fields that compose and characterize the area of Natural Language Processing (NLP).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,129
inproceedings
afara-etal-2022-investigating
Investigating automatic and manual filtering methods to produce {MT}-ready glossaries from existing ones
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.25/
Afara, Maria and Scansani, Randy and Dugast, Lo{\"i}c
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
221--230
Commercial Machine Translation (MT) providers offer functionalities that allow users to leverage bilingual glossaries. This poses the question of how to turn glossaries that were intended to be used by a human translator into MT-ready ones, removing entries that could harm the MT output. We present two automatic filtering approaches - one based on rules and the second one relying on a translation memory - and a manual filtering procedure carried out by a linguist. The resulting glossaries are added to the MT model. The outputs are compared against a baseline where no glossary is used and an output produced using the original glossary. The present work aims at investigating if any of these filtering methods can bring a higher terminology accuracy without negative effects on the overall quality. Results are measured with terminology accuracy and Translation Edit Rate. We test our filters on two language pairs, En-Fr and De-En. Results show that some of the automatically filtered glossaries improve the output when compared to the baseline, and they may help reach a better balance between accuracy and overall quality, replacing the costly manual process without quality loss.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,130
inproceedings
uguet-etal-2022-comparing
Comparing Multilingual {NMT} Models and Pivoting
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.26/
Uguet, Celia Soler and Bane, Fred and Zaretskaya, Anna and Mir{\'o}, T{\`a}nia Blanch
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
231--239
Following recent advancements in multilingual machine translation at scale, our team carried out tests to compare the performance of multilingual models (M2M from Facebook and multilingual models from Helsinki-NLP) with a two-step translation process using English as a pivot language. Direct assessment by linguists rated translations produced by pivoting as consistently better than those obtained from multilingual models of similar size, while automated evaluation with COMET suggested relative performance was strongly impacted by domain and language family.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,131
inproceedings
gupta-etal-2022-pre
Pre-training Synthetic Cross-lingual Decoder for Multilingual Samples Adaptation in {E}-Commerce Neural Machine Translation
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.27/
Gupta, Kamal Kumar and Chennabasavraj, Soumya and Garera, Nikesh and Ekbal, Asif
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
241--248
Availability of the user reviews in vernacular languages is helpful for the users to get information regarding the products. Since most of the e-commerce websites allow the reviews in English language only, it is important to provide the translated versions of the reviews to the non-English speaking users. Translation of the user reviews from English to vernacular languages is a challenging task, predominantly due to the lack of sufficient in-domain datasets. In this paper, we present a pre-training based efficient technique which is used to adapt and improve the single multilingual neural machine translation (NMT) model for the low-resource language pairs. The pre-trained model contains a special synthetic cross-lingual decoder. The decoder for the pre-training is trained over the cross-lingual target samples where the phrases are replaced with their translated counterparts. After pre-training, the model is adapted to multiple samples of the low-resource language pairs using incremental learning that does not require full training from scratch. We perform the experiments over eight low-resource and three high-resource language pairs from the generic domain, and two language pairs from the product review domains. Through our synthetic multilingual decoder based pre-training, we achieve improvements of up to 4.35 BLEU points compared to the baseline and 2.13 BLEU points compared to the previous code-switched pre-trained models. The review domain outputs from the proposed model are evaluated in real time by human evaluators in the e-commerce company Flipkart.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,132
inproceedings
brockmann-etal-2022-error
Error Annotation in Post-Editing Machine Translation: Investigating the Impact of Text-to-Speech Technology
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.28/
Brockmann, Justus and Wiesinger, Claudia and Ciobanu, Dragoș
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
251--259
As post-editing of machine translation (PEMT) is becoming one of the most dominant services offered by the language services industry (LSI), efforts are being made to support provision of this service with additional technology. We present text-to-speech (T2S) as a potential attention-raising technology for post-editors. Our study was conducted with university students and included both PEMT and error annotation of a creative text with and without T2S. Focusing on the error annotation data, our analysis finds that participants under-annotated fewer MT errors in the T2S condition compared to the silent condition. At the same time, more over-annotation was recorded. Finally, annotation performance corresponds to participants' attitudes towards using T2S.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,133
inproceedings
karakanta-etal-2022-post
Post-editing in Automatic Subtitling: A Subtitlers' perspective
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.29/
Karakanta, Alina and Bentivogli, Luisa and Cettolo, Mauro and Negri, Matteo and Turchi, Marco
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
261--270
Recent developments in machine translation and speech translation are opening up opportunities for computer-assisted translation tools with extended automation functions. Subtitling tools are recently being adapted for post-editing by providing automatically generated subtitles, and featuring not only machine translation, but also automatic segmentation and synchronisation. But what do professional subtitlers think of post-editing automatically generated subtitles? In this work, we conduct a survey to collect subtitlers' impressions and feedback on the use of automatic subtitling in their workflows. Our findings show that, despite current limitations stemming mainly from speech processing errors, automatic subtitling is seen rather positively and has potential for the future.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,134
inproceedings
girletti-2022-working
Working with Pre-translated Texts: Preliminary Findings from a Survey on Post-editing and Revision Practices in {S}wiss Corporate In-house Language Services
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.30/
Girletti, Sabrina
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
271--280
With the arrival of neural machine translation, the boundaries between revision and post-editing (PE) have started to blur (Koponen et al., 2020). To shed light on current professional practices and provide new pedagogical perspectives, we set up a survey-based study to investigate how PE and revision are carried out in professional settings. We received 86 responses from corporate translators working at 26 different corporate in-house language services in Switzerland. Although the differences between the two activities seem to be clear for in-house linguists, our findings show that they tend to use the same reading strategies when working with human-translated and machine-translated texts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,135
inproceedings
alva-manchego-shardlow-2022-towards
Towards Readability-Controlled Machine Translation of {COVID}-19 Texts
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.33/
Alva-Manchego, Fernando and Shardlow, Matthew
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
287--288
This project investigates the capabilities of Machine Translation models for generating translations at varying levels of readability, focusing on texts related to COVID-19. Whilst it is possible to automatically translate this information, the resulting text may contain specialised terminology, or may be written in a style that is difficult for lay readers to understand. So far, we have collected a new dataset with manual simplifications for English and Spanish sentences in the TICO-19 dataset, as well as implemented baseline pipelines combining Machine Translation and Text Simplification models.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,138
inproceedings
daems-hackenbuchner-2022-debiasbyus
{D}e{B}ias{B}y{U}s: Raising Awareness and Creating a Database of {MT} Bias
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.34/
Daems, Joke and Hackenbuchner, Jani{\c{c}}a
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
289--290
This paper presents the project initiated by the BiasByUs team resulting from the 2021 Artificially Correct Hackathon. We briefly explain our winning participation in the hackathon, tackling the challenge on {\textquoteleft}Database and detection of gender bias in A.I. translations{\textquoteright}, highlight the importance of gender bias in Machine Translation (MT), and describe our proposed solution to the challenge, the current status of the project, and our envisioned future collaborations and research.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,139
inproceedings
forcada-etal-2022-multitrainmt
{M}ultitrai{NMT} Erasmus+ project: Machine Translation Training for multilingual citizens (multitrainmt.eu)
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.35/
Forcada, Mikel L. and S{\'a}nchez-Gij{\'o}n, Pilar and Kenny, Dorothy and S{\'a}nchez-Mart{\'i}nez, Felipe and Ortiz, Juan Antonio P{\'e}rez and Superbo, Riccardo and S{\'a}nchez, Gema Ram{\'i}rez and Torres-Hostench, Olga and Rossi, Caroline
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
291--292
The MultitraiNMT Erasmus+ project has developed an open innovative syllabus in machine translation, focusing on neural machine translation (NMT) and targeting both language learners and translators. The training materials include an open access coursebook with more than 250 activities and a pedagogical NMT interface called MutNMT that allows users to learn how neural machine translation works. These materials will allow students to develop the technical and ethical skills and competences required to become informed, critical users of machine translation in their own language learning and translation practice. The project started in July 2019 and it will end in July 2022.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,140
inproceedings
yamada-etal-2022-trados
Trados-to-Translog-{II}: Adding Gaze and Qualitivity data to the {CRITT} {TPR}-{DB}
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.37/
Yamada, Masaru and Mizowaki, Takanori and Zou, Longhui and Carl, Michael
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
295--296
The CRITT (Center for Research and Innovation in Translation and Translation Technology) provides a Translation Process Research Database (TPR-DB) and a rich set of summary tables and tools that help to investigate translator behavior. In this paper, we describe a new tool in the TPR-DB that converts Trados Studio keylogging data (Qualitivity) into Translog-II format and adds the converted data to the CRITT TPR-DB. The tool is also able to synchronize with the output of various eye-trackers. We describe the components of the new TPR-DB tool and highlight some of the features that it produces in the TPR-DB tables.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,142
inproceedings
bouillon-etal-2022-passage
The {PASSAGE} project : {S}tandard {G}erman Subtitling of {S}wiss {G}erman {TV} content
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.40/
Bouillon, Pierrette and Gerlach, Johanna and Mutal, Jonathan and Starlander, Marianne
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
301--302
We present the PASSAGE project, which aims at automatic Standard German subtitling of Swiss German TV content. This is achieved in a two step process, beginning with ASR to produce a normalised transcription, followed by translation into Standard German. We focus on the second step, for which we explore different approaches and contribute aligned corpora for future research.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,145
inproceedings
non-etal-2022-macocu
{M}a{C}o{C}u: Massive collection and curation of monolingual and bilingual data: focus on under-resourced languages
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.41/
Ba{\~n}{\'o}n, Marta and Espl{\`a}-Gomis, Miquel and Forcada, Mikel L. and Garc{\'i}a-Romero, Cristian and Kuzman, Taja and Ljube{\v{s}}i{\'c}, Nikola and van Noord, Rik and Sempere, Leopoldo Pla and Ram{\'i}rez-S{\'a}nchez, Gema and Rupnik, Peter and Suchomel, V{\'i}t and Toral, Antonio and van der Werff, Tobias and Zaragoza, Jaume
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
303--304
We introduce the project {\textquotedblleft}MaCoCu: Massive collection and curation of monolingual and bilingual data: focus on under-resourced languages{\textquotedblright}, funded by the Connecting Europe Facility, which is aimed at building monolingual and parallel corpora for under-resourced European languages. The approach followed consists of crawling large amounts of textual data from carefully selected top-level domains of the Internet, and then applying a curation and enrichment pipeline. In addition to corpora, the project will release successive versions of the free/open-source web crawling and curation software used.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,146
inproceedings
murgolo-etal-2022-quality
A Quality Estimation and Quality Evaluation Tool for the Translation Industry
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.43/
Murgolo, Elena and Sharami, Javad Pourmostafa Roshan and Shterionov, Dimitar
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
307--308
With the increase in machine translation (MT) quality over the latest years, it has now become a common practice to integrate MT in the workflow of language service providers (LSPs) and other actors in the translation industry. With MT having a direct impact on the translation workflow, it is important not only to use high-quality MT systems, but also to understand the quality dimension so that the humans involved in the translation workflow can make informed decisions. The evaluation and monitoring of MT output quality has become one of the essential aspects of language technology management in LSPs' workflows. First, a general practice is to carry out human tests to evaluate MT output quality before deployment. Second, a quality estimate of the translated text, thus after deployment, can inform post-editors or even represent post-editing effort. In the former case, based on the quality assessment of a candidate engine, an informed decision can be made whether the engine would be deployed for production or not. In the latter, a quality estimate of the translation output can guide the human post-editor or even make rough approximations of the post-editing effort. Quality of an MT engine can be assessed on document- or on sentence-level. A tool to jointly provide all these functionalities does not exist yet. The overall objective of the project presented in this paper is to develop an MT quality assessment (MTQA) tool that simplifies the quality assessment of MT engines, combining quality evaluation and quality estimation on document- and sentence-level.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,148
inproceedings
bergmanis-etal-2022-mtee
{MT}ee: Open Machine Translation Platform for {E}stonian Government
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.44/
Bergmanis, Toms and Pinnis, Marcis and Rozis, Roberts and {\v{S}}lapi{\c{n}}{\v{s}}, J{\={a}}nis and {\v{S}}ics, Valters and Bern{\={a}}ne, Berta and Pu{\v{z}}ulis, Guntars and Titomers, Endijs and T{\"a}ttar, Andre and Purason, Taido and Kuulmets, Hele-Andra and Luhtaru, Agnes and R{\"a}tsep, Liisa and Tars, Maali and Laumets-T{\"a}ttar, Annika and Fishel, Mark
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
309--310
We present the MTee project - a research initiative funded via an Estonian public procurement to develop machine translation technology that is open-source and free of charge. The MTee project delivered an open-source platform serving state-of-the-art machine translation systems supporting four domains for six language pairs translating from Estonian into English, German, and Russian and vice-versa. The platform also features grammatical error correction and speech translation for Estonian and allows for formatted document translation and automatic domain detection. The software, data and training workflows for machine translation engines are all made publicly available for further use and research.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,149
inproceedings
vazquez-etal-2022-latest
Latest Development in the {F}o{T}ran Project {--} Scaling Up Language Coverage in Neural Machine Translation Using Distributed Training with Language-Specific Components
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.45/
V{\'a}zquez, Ra{\'u}l and Boggia, Michele and Raganato, Alessandro and Loppi, Niki A. and Gr{\"o}nroos, Stig-Arne and Tiedemann, J{\"o}rg
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
311--312
We describe the enhancement of a multilingual NMT toolkit developed as part of the FoTran project. We devise our modular attention-bridge model, which connects language-specific components through a shared network layer. The system now supports distributed training over many nodes and GPUs in order to substantially scale up the number of languages that can be included in a modern neural translation architecture. The model enables the study of emerging language-agnostic representations and also provides a modular toolkit for efficient machine translation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,150
inproceedings
sarti-bisazza-2022-indeep
{I}n{D}eep {\texttimes} {NMT}: Empowering Human Translators via Interpretable Neural Machine Translation
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.46/
Sarti, Gabriele and Bisazza, Arianna
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
313--314
Neural machine translation (NMT) systems are nowadays essential components of professional translation workflows. Consequently, human translators are increasingly working as post-editors for machine-translated content. The NWO-funded InDeep project aims to empower users of Deep Learning models of text, speech, and music by improving their ability to interact with such models and interpret their behaviors. In the specific context of translation, we aim at developing new tools and methodologies to improve prediction attribution, error analysis, and controllable generation for NMT systems. These advances will be evaluated through field studies involving professional translators to assess gains in efficiency and overall enjoyability of the post-editing process.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,151
inproceedings
de-souza-etal-2022-quartz
{QUARTZ}: Quality-Aware Machine Translation
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.47/
de Souza, Jos{\'e} G.C. and Rei, Ricardo and Farinha, Ana C. and Moniz, Helena and Martins, Andr{\'e} F. T.
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
315--316
This paper presents QUARTZ, QUality-AwaRe machine Translation, a project led by Unbabel which aims at developing machine translation systems that are more robust and produce fewer critical errors. With QUARTZ we want to enable machine translation for user-generated conversational content types that do not tolerate critical errors in automatic translations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,152
inproceedings
nowakowski-etal-2022-poleng
{POLENG} {MT}: An Adaptive {MT} Platform
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.48/
Nowakowski, Artur and Jassem, Krzysztof and Lison, Maciej and Guttmann, Kamil and Pokrywka, Miko{\l}aj
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
317--318
We introduce POLENG MT, an MT platform that may be used as a cloud web application or as an on-site solution. The platform is capable of providing accurate document translation, including the transfer of document formatting between the input document and the output document. The main feature of the on-site version is dedicated customer adaptation, which consists of training on specialized texts and applying forced terminology translation according to the user's needs.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,153
inproceedings
amaral-van-der-kreeft-2022-plain
plain {X} - {AI} Supported Multilingual Video Workflow Platform
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.49/
Amaral, Carlos and van der Kreeft, Peggy
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
319--320
The plain X platform is a toolbox for multilingual adaptation, for video, audio, and text content. The software is a 4-in-1 tool, combining several steps in the adaptation process, i.e., transcription, translation, subtitling, and voice-over, all automatically generated, but with a high level of editorial control. Users can choose which translation engine is used (e.g., MS Azure, Google, DeepL) depending on best performance. As a result, plain X enables a smooth semi-automated production of subtitles or voice-over, much faster than with older, manual workflows. The software was developed out of EU research projects and has recently been rolled out for professional use. It brings Artificial Intelligence (AI) into the multilingual media production process, while keeping the human in the loop.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,154
inproceedings
bernardinello-klein-2022-background
Background Search for Terminology in {STAR} {MT} Translate
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.51/
Bernardinello, Giorgio and Klein, Judith
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
323--324
When interested in an internal web application for MT, corporate customers always ask how reliable terminology will be in their translations. Coherent vocabulary is crucial in many aspects of corporate translations, such as documentation or marketing. The main goal every MT provider would like to achieve is to fully integrate the customer's terminology into the model, so that the result does not need to be edited, but this is still not always guaranteed. Besides, a web application like STAR MT Translate allows our customers to use {--} integrated within the same page {--} different generic MT providers which were not trained with customer-specific data. So, as a pragmatic approach, we decided to increase the level of integration between WebTerm and STAR MT Translate, adding to the latter more terminological information, with which the user can post-edit the translation if needed.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,156
inproceedings
shterionov-etal-2022-sign
Sign Language Translation: Ongoing Development, Challenges and Innovations in the {S}ign{ON} Project
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.52/
Shterionov, Dimitar and De Sisto, Mirella and Vandeghinste, Vincent and Brady, Aoife and De Coster, Mathieu and Leeson, Lorraine and Blat, Josep and Picron, Frankie and Scipioni, Marcello Paolo and Parikh, Aditya and ten Bosch, Louis and O{'}Flaherty, John and Dambre, Joni and Rijckaert, Jorn
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
325--326
The SignON project (www.signon-project.eu) focuses on the research and development of a Sign Language (SL) translation mobile application and an open communications framework. SignON rectifies the lack of technology and services for the automatic translation between signed and spoken languages, through an inclusive, human-centric solution which facilitates communication between deaf, hard of hearing (DHH) and hearing individuals. We present an overview of the current status of the project, describing the milestones reached to date and the approaches that are being developed to address the challenges and peculiarities of Sign Language Machine Translation (SLMT).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,157
inproceedings
martins-etal-2022-deepspin
{D}eep{SPIN}: Deep Structured Prediction for Natural Language Processing
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.53/
Martins, Andr{\'e} F. T. and Peters, Ben and Zerva, Chrysoula and Lyu, Chunchuan and Correia, Gon{\c{c}}alo and Treviso, Marcos and Martins, Pedro and Mihaylova, Tsvetomila
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
327--328
DeepSPIN is a research project funded by the European Research Council (ERC) whose goal is to develop new neural structured prediction methods, models, and algorithms for improving the quality, interpretability, and data-efficiency of natural language processing (NLP) systems, with special emphasis on machine translation and quality estimation. We describe in this paper the latest findings from this project.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,158
inproceedings
karakanta-etal-2022-towards
Towards a methodology for evaluating automatic subtitling
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.57/
Karakanta, Alina and Bentivogli, Luisa and Cettolo, Mauro and Negri, Matteo and Turchi, Marco
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
335--336
In response to the increasing interest towards automatic subtitling, this EAMT-funded project aimed at collecting subtitle post-editing data in a real use case scenario where professional subtitlers edit automatically generated subtitles. The post-editing setting includes, for the first time, automatic generation of timestamps and segmentation, and focuses on the effect of timing and segmentation edits on the post-editing process. The collected data will serve as the basis for investigating how subtitlers interact with automatic subtitling and for devising evaluation methods geared to the multimodal nature and formal requirements of subtitling.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,162
inproceedings
lapshinova-koltunski-etal-2022-dihutra
{D}i{H}u{T}ra: a Parallel Corpus to Analyse Differences between Human Translations
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.58/
Lapshinova-Koltunski, Ekaterina and Popovi{\'c}, Maja and Koponen, Maarit
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
337--338
This project aimed to design a corpus of parallel human translations (HTs) of the same source texts by professionals and students. The resulting corpus consists of English news and reviews source texts, their translations into Russian and Croatian, and translations of the reviews into Finnish. The corpus will be valuable for both studying variation in translation and evaluating machine translation (MT) systems.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,163
inproceedings
den-bogaert-etal-2022-automatically
Automatically extracting the semantic network out of public services to support cities becoming Smart Cities
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.61/
den Bogaert, Joachim Van and Meeus, Laurens and Kramchaninova, Alina and Defauw, Arne and Szoc, Sara and Everaert, Frederic and Winckel, Koen Van and Bardadym, Anna and Vanallemeersch, Tom
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
343--344
The CEFAT4Cities project aims at creating a multilingual semantic interoperability layer for Smart Cities that allows users from all EU member States to interact with public services in their own language. The CEFAT4Cities processing pipeline transforms natural-language administrative procedures into machine-readable data using various multilingual Natural Language Processing techniques, such as semantic networks and machine translation, thus allowing for the development of more sophisticated and more user-friendly public services applications.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,166
inproceedings
barreiro-etal-2022-multi3generation
{M}ulti3{G}eneration: Multitask, Multilingual, Multimodal Language Generation
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.63/
Barreiro, Anabela and de Souza, Jos{\'e} GC and Gatt, Albert and Bhatt, Mehul and Lloret, Elena and Erdem, Aykut and Gkatzia, Dimitra and Moniz, Helena and Russo, Irene and Kepler, Fabio and Calixto, Iacer and Paprzycki, Marcin and Portet, Fran{\c{c}}ois and Augenstein, Isabelle and Alhasani, Mirela
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
347--348
This paper presents the Multitask, Multilingual, Multimodal Language Generation COST Action {--} Multi3Generation (CA18231), an interdisciplinary network of research groups working on different aspects of language generation. This {\textquotedblleft}meta-paper{\textquotedblright} will serve as a reference for citations of the Action in future publications. It presents the objectives, challenges and the links to the achieved outcomes.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,168
inproceedings
bago-etal-2022-achievements
Achievements of the {PRINCIPLE} Project: Promoting {MT} for {C}roatian, {I}celandic, {I}rish and {N}orwegian
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.64/
Bago, Petra and Castilho, Sheila and Dunne, Jane and Gaspari, Federico and K, Andre and Kristmannsson, Gauti and Olsen, Jon Arild and Resende, Natalia and G{\'i}slason, N{\'i}els R{\'u}nar and Sheridan, Dana D. and Sheridan, P{\'a}raic and Tinsley, John and Way, Andy
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
349--350
This paper provides an overview of the main achievements of the completed PRINCIPLE project, a 2-year action funded by the European Commission under the Connecting Europe Facility (CEF) programme. PRINCIPLE focused on collecting high-quality language resources for Croatian, Icelandic, Irish and Norwegian, which are severely low-resource languages, especially for building effective machine translation (MT) systems. We report the achievements of the project, primarily, in terms of the large amounts of data collected for all four low-resource languages and of promoting the uptake of neural MT (NMT) for these languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,169
inproceedings
gangi-etal-2022-automatic
Automatic Video Dubbing at {A}pp{T}ek
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.65/
Di Gangi, Mattia and Rossenbach, Nick and P{\'e}rez, Alejandro and Bahar, Parnia and Beck, Eugen and Wilken, Patrick and Matusov, Evgeny
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
351--352
Video dubbing is the activity of revoicing a video while offering a viewing experience equivalent to the original video. The revoicing usually comes with a changed script, mostly in a different language, and it should reproduce the original emotions, be coherent with the body language, and be lip-synchronized. In this project, we aim to build an automatic dubbing (AD) system in three phases: (1) voice-over; (2) emotional voice-over; (3) full dubbing, while enhancing the system with human-in-the-loop capabilities for higher quality.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,170
inproceedings
aldabe-etal-2022-overview
Overview of the {ELE} Project
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.66/
Aldabe, Itziar and Dunne, Jane and Farwell, Aritz and Gallagher, Owen and Gaspari, Federico and Giagkou, Maria and Hajic, Jan and K{\"u}ckens, Jens Peter and Lynn, Teresa and Rehm, Georg and Rigau, German and Marheinecke, Katrin and Piperidis, Stelios and Resende, Natalia and Vojt{\v{e}}chov{\'a}, Tea and Way, Andy
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
353--354
This paper provides an overview of the ongoing European Language Equality (ELE) project, an 18-month action funded by the European Commission which involves 52 partners. The primary goal of ELE is to prepare the European Language Equality Programme, in the form of a strategic research, innovation and implementation agenda and a roadmap for achieving full digital language equality (DLE) in Europe by 2030.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,171
inproceedings
koponen-etal-2022-lithme
{LITHME}: Language in the Human-Machine Era
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.67/
Koponen, Maarit and Allkivi-Metsoja, Kais and Pareja-Lora, Antonio and Sayers, Dave and Seresi, M{\'a}rta
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
355--356
The LITHME COST Action brings together researchers from various fields of study focusing on language and technology. We present the overall goals of LITHME and the network's working groups focusing on diverse questions related to language and technology. As an example of the work of the LITHME network, we discuss the working group on language work and language professionals.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,172
inproceedings
arenas-toral-2022-creamt
{CREAMT}: Creativity and narrative engagement of literary texts translated by translators and {NMT}
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.68/
Arenas, Ana Guerberof and Toral, Antonio
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
357--358
We present here the EU-funded project CREAMT that seeks to understand what is meant by creativity in different translation modalities, e.g. machine translation, post-editing or professional translation. Focusing on the textual elements that determine creativity in translated literary texts and the reader experience, CREAMT uses a novel, interdisciplinary approach to assess how effective MT is in literary translation considering creativity in translation and the ultimate user: the reader.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,173
inproceedings
lohar-etal-2022-developing
Developing Machine Translation Engines for Multilingual Participatory Spaces
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.69/
Lohar, Pintu and Xie, Guodong and Way, Andy
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
359--360
It is often a challenging task to build Machine Translation (MT) engines for a specific domain due to the lack of parallel data in that area. In this project, we develop a range of MT systems for 6 European languages (English, German, Italian, French, Polish and Irish) in all directions and in two domains (environment and economics).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,174
inproceedings
bentivogli-etal-2022-extending
Extending the {M}u{ST}-{C} Corpus for a Comparative Evaluation of Speech Translation Technology
Moniz, Helena and Macken, Lieve and Rufener, Andrew and Barrault, Lo{\"i}c and Costa-juss{\`a}, Marta R. and Declercq, Christophe and Koponen, Maarit and Kemp, Ellie and Pilos, Spyridon and Forcada, Mikel L. and Scarton, Carolina and Van den Bogaert, Joachim and Daems, Joke and Tezcan, Arda and Vanroy, Bram and Fonteyne, Margot
jun
2022
Ghent, Belgium
European Association for Machine Translation
https://aclanthology.org/2022.eamt-1.70/
Bentivogli, Luisa and Cettolo, Mauro and Gaido, Marco and Karakanta, Alina and Negri, Matteo and Turchi, Marco
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
361--362
This project aimed at extending the test sets of the MuST-C speech translation (ST) corpus with new reference translations. The new references were collected from professional post-editors working on the output of different ST systems for three language pairs: English-German/Italian/Spanish. In this paper, we shortly describe how the data were collected and how they are distributed. As an evidence of their usefulness, we also summarise the findings of the first comparative evaluation of cascade and direct ST approaches, which was carried out relying on the collected data. The project was partially funded by the European Association for Machine Translation (EAMT) through its 2020 Sponsorship of Activities programme.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,175
inproceedings
kumar-etal-2022-bert
{BERT}-Based Sequence Labelling Approach for Dependency Parsing in {T}amil
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.1/
Kumar, C S Ayush and Maharana, Advaith and Murali, Srinath and B, Premjith and Kp, Soman
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
1--8
Dependency parsing is a method for doing surface-level syntactic analysis on natural language texts. The scarcity of any viable tools for doing these tasks in Dravidian languages has introduced a new line of research into these topics. This paper focuses on a novel approach that uses word-to-word dependency tagging using BERT models to improve the malt parser performance. We used Tamil, a morphologically rich and free word-order language. The individual words are tokenized using BERT models and the dependency relations are recognized using machine learning algorithms. Oversampling algorithms such as SMOTE (Chawla et al., 2002) and ADASYN (He et al., 2008) are used to tackle data imbalance and consequently improve parsing results. The results obtained are used in the malt parser, and this can be used to further highlight that feature-based approaches can be used for such tasks.
null
null
10.18653/v1/2022.dravidianlangtech-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,178
inproceedings
bellamkonda-etal-2022-dataset
A Dataset for Detecting Humor in {T}elugu Social Media Text
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.2/
Bellamkonda, Sriphani and Lohakare, Maithili and Patel, Shaswat
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
9--14
Increased use of online social media sites has given rise to tremendous amounts of user-generated data. Social media sites have become a platform where users express and voice their opinions in a real-time environment. Social media sites such as Twitter limit the number of characters used to express a thought in a tweet, leading to increased use of creative, humorous and confusing language in order to convey the message. Due to this, automatic humor detection has become a difficult task, especially for low-resource languages such as the Dravidian languages. Humor detection has been a well-studied area for resource-rich languages due to the availability of rich and accurate data. In this paper, we have attempted to solve this issue by working on a low-resource language, Telugu, a Dravidian language, by collecting and annotating Telugu tweets and performing automatic humor detection on the collected data. We experimented on the corpus using various transformer models such as Multilingual BERT, Multilingual DistilBERT and XLM-RoBERTa to establish a baseline classification system. We concluded that XLM-RoBERTa was the best-performing model, achieving an F1-score of 0.82 with 81.5{\%} accuracy.
null
null
10.18653/v1/2022.dravidianlangtech-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,179
inproceedings
kumar-etal-2022-mucot
{M}u{C}o{T}: Multilingual Contrastive Training for Question-Answering in Low-resource Languages
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.3/
Kumar, Gokul Karthik and Gehlot, Abhishek and Mullappilly, Sahal Shaji and Nandakumar, Karthik
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
15--24
Accuracy of English-language Question Answering (QA) systems has improved significantly in recent years with the advent of Transformer-based models (e.g., BERT). These models are pre-trained in a self-supervised fashion with a large English text corpus and further fine-tuned with a massive English QA dataset (e.g., SQuAD). However, QA datasets on such a scale are not available for most of the other languages. Multi-lingual BERT-based models (mBERT) are often used to transfer knowledge from high-resource languages to low-resource languages. Since these models are pre-trained with huge text corpora containing multiple languages, they typically learn language-agnostic embeddings for tokens from different languages. However, directly training an mBERT-based QA system for low-resource languages is challenging due to the paucity of training data. In this work, we augment the QA samples of the target language using translation and transliteration into other languages and use the augmented data to fine-tune an mBERT-based QA model, which is already pre-trained in English. Experiments on the Google ChAII dataset show that fine-tuning the mBERT model with translations from the same language family boosts the question-answering performance, whereas the performance degrades in the case of cross-language families. We further show that introducing a contrastive loss between the translated question-context feature pairs during the fine-tuning process, prevents such degradation with cross-lingual family translations and leads to marginal improvement. The code for this work is available at \url{https://github.com/gokulkarthik/mucot}.
null
null
10.18653/v1/2022.dravidianlangtech-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,180
inproceedings
s-etal-2022-tamilatis
{T}amil{ATIS}: Dataset for Task-Oriented Dialog in {T}amil
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.4/
S, Ramaneswaran and Vijay, Sanchit and Srinivasan, Kathiravan
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
25--32
Task-Oriented Dialogue (TOD) systems allow users to accomplish tasks by giving directions to the system using natural language utterances. With the widespread adoption of conversational agents and chat platforms, TOD has become mainstream in NLP research today. However, developing TOD systems require massive amounts of data, and there has been limited work done for TOD in low-resource languages like Tamil. Towards this objective, we introduce TamilATIS - a TOD dataset for Tamil which contains 4874 utterances. We present a detailed account of the entire data collection and data annotation process. We train state-of-the-art NLU models and report their performances. The joint BERT model with XLM-Roberta as utterance encoder achieved the highest score with an intent accuracy of 96.26{\%} and slot F1 of 94.01{\%}.
null
null
10.18653/v1/2022.dravidianlangtech-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,181
inproceedings
palanikumar-etal-2022-de
{DE}-{ABUSE}@{T}amil{NLP}-{ACL} 2022: Transliteration as Data Augmentation for Abuse Detection in {T}amil
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.5/
Palanikumar, Vasanth and Benhur, Sean and Hande, Adeep and Chakravarthi, Bharathi Raja
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
33--38
With the rise of social media and the internet, there is a necessity to provide an inclusive space and prevent abusive topics against any gender, race or community. This paper describes the system submitted to the ACL-2022 shared task on fine-grained abuse detection in Tamil. In our approach we transliterated the code-mixed dataset as an augmentation technique to increase the size of the data. Using this method we were able to rank 3rd on the task with a 0.290 macro average F1 score and a 0.590 weighted F1 score.
null
null
10.18653/v1/2022.dravidianlangtech-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,182
inproceedings
garcia-diaz-etal-2022-umuteam
{UMUT}eam@{T}amil{NLP}-{ACL}2022: Emotional Analysis in {T}amil
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.6/
Garc{\'i}a-D{\'i}az, Jos{\'e} and Rodr{\'i}guez Garc{\'i}a, Miguel {\'A}ngel and Valencia-Garc{\'i}a, Rafael
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
39--44
These working notes summarise the participation of the UMUTeam in the TamilNLP (ACL 2022) shared task concerning emotion analysis in Tamil. We participated in the two multi-classification challenges proposed with a neural network that combines linguistic features with different feature sets based on contextual and non-contextual sentence embeddings. Our proposal achieved the 1st result for the second subtask, with an f1-score of 15.1{\%} discerning among 30 different emotions. However, our results for the first subtask were not recorded in the official leaderboard. Accordingly, we report our results for this subtask with the validation split, reaching a macro f1-score of 32.360{\%}.
null
null
10.18653/v1/2022.dravidianlangtech-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,183
inproceedings
garcia-diaz-etal-2022-umuteam-tamilnlp
{UMUT}eam@{T}amil{NLP}-{ACL}2022: Abusive Detection in {T}amil using Linguistic Features and Transformers
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.7/
Garc{\'i}a-D{\'i}az, Jos{\'e} and Valencia-Garcia, Manuel and Valencia-Garc{\'i}a, Rafael
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
45--50
Social media has become a dangerous place as bullies take advantage of the anonymity the Internet provides to target and intimidate vulnerable individuals and groups. In the past few years, the research community has focused on developing automatic classification tools for detecting hate-speech, its variants, and other types of abusive behaviour. However, these methods are still at an early stage in low-resource languages. With the aim of reducing this barrier, the TamilNLP shared task has proposed a multi-classification challenge for Tamil written in Tamil script and code-mixed to detect abusive comments and hope-speech. Our participation consists of a knowledge integration strategy that combines sentence embeddings from BERT, RoBERTa, FastText and a subset of language-independent linguistic features. We achieved our best result in code-mixed, reaching 3rd position with a macro-average f1-score of 35{\%}.
null
null
10.18653/v1/2022.dravidianlangtech-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,184
inproceedings
das-etal-2022-hate
hate-alert@{D}ravidian{L}ang{T}ech-{ACL}2022: Ensembling Multi-Modalities for {T}amil {T}roll{M}eme Classification
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.8/
Das, Mithun and Banerjee, Somnath and Mukherjee, Animesh
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
51--57
Social media platforms often act as breeding grounds for various forms of trolling or malicious content targeting users or communities. One way of trolling users is by creating memes, which in most cases unite an image with a short piece of text embedded on top of it. The situation is more complex for multilingual (e.g., Tamil) memes due to the lack of benchmark datasets and models. We explore several models to detect Troll memes in Tamil based on the shared task, {\textquotedblleft}Troll Meme Classification in DravidianLangTech2022{\textquotedblright} at ACL-2022. We observe that while the text-based model MURIL performs better for Non-troll meme classification, the image-based model VGG16 performs better for Troll-meme classification. Further, fusing these two modalities helps us achieve stable outcomes in both classes. Our fusion model achieved a 0.561 weighted average F1 score and ranked second in this task.
null
null
10.18653/v1/2022.dravidianlangtech-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,185
inproceedings
andrew-2022-judithjeyafreedaandrew
{J}udith{J}eyafreeda{A}ndrew@{T}amil{NLP}-{ACL}2022:{CNN} for Emotion Analysis in {T}amil
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.9/
Andrew, Judith Jeyafreeda
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
58--63
Using technology for the analysis of human emotion is a relatively nascent research area. There are several types of data where emotion recognition can be employed, such as text, images, audio and video. In this paper, the focus is on emotion recognition in text data. Emotion recognition in text can be performed on both written comments and conversations. In this paper, the dataset used for emotion recognition is a list of comments. While extensive research is being performed in this area, the language of the text plays a very important role. In this work, the focus is on the Dravidian language of Tamil. The language and its script demand extensive pre-processing. The paper contributes to this by adapting various pre-processing methods to the Dravidian language of Tamil. A CNN method has been adopted for the task at hand. The proposed method has achieved a comparable result.
null
null
10.18653/v1/2022.dravidianlangtech-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,186
inproceedings
balouchzahi-etal-2022-mucic
{MUCIC}@{T}amil{NLP}-{ACL}2022: Abusive Comment Detection in {T}amil Language using 1{D} Conv-{LSTM}
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.10/
Balouchzahi, Fazlourrahman and Gowda, Anusha and Shashirekha, Hosahalli and Sidorov, Grigori
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
64--69
Abusive language content such as hate speech, profanity, and cyberbullying, which is common on online platforms, is creating a lot of problems for users as well as policy makers. Hence, detection of such abusive language in user-generated online content has become increasingly important over the past few years. Online platforms strive hard to moderate the abusive content to reduce societal harm, comply with laws, and create a more inclusive environment for their users. In spite of various methods to automatically detect abusive language on online platforms, the problem still persists. To address the automatic detection of abusive language on online platforms, this paper describes the models submitted by our team - MUCIC - to the shared task on {\textquotedblleft}Abusive Comment Detection in Tamil-ACL 2022{\textquotedblright}. This shared task addresses abusive comment detection in native Tamil script texts and code-mixed Tamil texts. To address this challenge, two models were submitted: i) an n-gram-Multilayer Perceptron (n-gram-MLP) model utilizing an MLP classifier fed with char n-gram features and ii) a 1D Convolutional Long Short-Term Memory (1D Conv-LSTM) model. The n-gram-MLP model fared well among these two models with weighted F1-scores of 0.560 and 0.430 for code-mixed Tamil and native Tamil script texts, respectively. This work may be reproduced using the code available in \url{https://github.com/anushamdgowda/abusive-detection}.
null
null
10.18653/v1/2022.dravidianlangtech-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,187
inproceedings
s-n-etal-2022-cen
{CEN}-{T}amil@{D}ravidian{L}ang{T}ech-{ACL}2022: Abusive Comment detection in {T}amil using {TF}-{IDF} and Random Kitchen Sink Algorithm
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.11/
S N, Prasanth and Aswin Raj, R and P, Adhithan and B, Premjith and Kp, Soman
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
70--74
This paper describes the approach of team CEN-Tamil used for abusive comment detection in Tamil. This task aims to identify whether a given comment contains abusive content. We used TF-IDF with char-wb analyzers and the Random Kitchen Sink (RKS) algorithm to create feature vectors, and the Support Vector Machine (SVM) classifier with a polynomial kernel for classification. We used this method for both the Tamil and Tamil-English datasets and secured first place with an f1-score of 0.32 and seventh place with an f1-score of 0.25, respectively. The code for our approach is shared in the GitHub repository.
null
null
10.18653/v1/2022.dravidianlangtech-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,188
inproceedings
lekshmiammal-etal-2022-nitk
{NITK}-{IT}{\_}{NLP}@{T}amil{NLP}-{ACL}2022: Transformer based model for Toxic Span Identification in {T}amil
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.12/
LekshmiAmmal, Hariharan and Ravikiran, Manikandan and Madasamy, Anand Kumar
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
75--78
Toxic span identification in Tamil is a shared task that focuses on identifying harmful content, contributing to offensiveness. In this work, we have built a model that can efficiently identify the span of text contributing to offensive content. We have used various transformer-based models to develop the system, out of which the fine-tuned MuRIL model was able to achieve the best overall character F1-score of 0.4489.
null
null
10.18653/v1/2022.dravidianlangtech-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,189
inproceedings
nandi-etal-2022-teamx
{T}eam{X}@{D}ravidian{L}ang{T}ech-{ACL}2022: A Comparative Analysis for Troll-Based Meme Classification
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.13/
Nandi, Rabindra Nath and Alam, Firoj and Nakov, Preslav
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
79--85
The spread of fake news, propaganda, misinformation, disinformation, and harmful content online has raised concerns among social media platforms, government agencies, policymakers, and society as a whole. This is because such harmful or abusive content leads to several consequences for people, such as physical, emotional, relational, and financial harm. Trolling-based online content is one such type of harmful content, where the idea is to post a message that is provocative, offensive, or menacing with an intent to mislead the audience. The content can be textual, visual, a combination of both, or a meme. In this study, we provide a comparative analysis of troll-based meme classification using textual, visual, and multimodal content. We report several interesting findings in terms of code-mixed text, the multimodal setting, and combining an additional dataset, which shows improvements over the majority baseline.
null
null
10.18653/v1/2022.dravidianlangtech-1.13
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,190
inproceedings
prasad-etal-2022-gjg
{GJG}@{T}amil{NLP}-{ACL}2022: Emotion Analysis and Classification in {T}amil using Transformers
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.14/
Prasad, Janvi and Prasad, Gaurang and C, Gunavathi
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
86--92
This paper describes the systems built by our team for the {\textquotedblleft}Emotion Analysis in Tamil{\textquotedblright} shared task at the Second Workshop on Speech and Language Technologies for Dravidian Languages at ACL 2022. There were two multi-class classification sub-tasks as a part of this shared task. The dataset for sub-task A contained 11 types of emotions, while sub-task B was more fine-grained with 31 emotions. We fine-tuned an XLM-RoBERTa and a DeBERTa base model for each sub-task. For sub-task A, the XLM-RoBERTa model achieved an accuracy of 0.46 and the DeBERTa model achieved an accuracy of 0.45. We had the best classification performance out of 11 teams for sub-task A. For sub-task B, the XLM-RoBERTa model's accuracy was 0.33 and the DeBERTa model had an accuracy of 0.26. We ranked 2nd out of 7 teams for sub-task B.
null
null
10.18653/v1/2022.dravidianlangtech-1.14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,191
inproceedings
prasad-etal-2022-gjg-tamilnlp
{GJG}@{T}amil{NLP}-{ACL}2022: Using Transformers for Abusive Comment Classification in {T}amil
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.15/
Prasad, Gaurang and Prasad, Janvi and C, Gunavathi
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
93--99
This paper presents transformer-based models for the {\textquotedblleft}Abusive Comment Detection{\textquotedblright} shared task at the Second Workshop on Speech and Language Technologies for Dravidian Languages at ACL 2022. Our team participated in both multi-class classification sub-tasks as a part of this shared task. The dataset for sub-task A was Tamil text, while that for sub-task B was code-mixed Tamil-English text. Both datasets contained 8 classes of abusive comments. We trained an XLM-RoBERTa and a DeBERTa base model on the training splits for each sub-task. For sub-task A, the XLM-RoBERTa model achieved an accuracy of 0.66 and the DeBERTa model achieved an accuracy of 0.62. For sub-task B, both models achieved a classification accuracy of 0.72; however, the DeBERTa model performed better on other classification metrics. Our team ranked 2nd in the code-mixed classification sub-task and 8th in the Tamil-text sub-task.
null
null
10.18653/v1/2022.dravidianlangtech-1.15
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,192
inproceedings
biradar-saumya-2022-iiitdwd
{IIITDWD}@{T}amil{NLP}-{ACL}2022: Transformer-based approach to classify abusive content in {D}ravidian Code-mixed text
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.16/
Biradar, Shankar and Saumya, Sunil
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
100--104
Identifying abusive content or hate speech in social media text has raised the research community's interest in recent times. The major driving force behind this is the widespread use of social media websites. Further, it also leads to identifying abusive content in low-resource regional languages, which is an important research problem in computational linguistics. As part of ACL-2022, organizers of DravidianLangTech@ACL 2022 have released a shared task on abusive category identification in Tamil and Tamil-English code-mixed text to encourage further research on offensive content identification in low-resource Indic languages. This paper presents the working notes for the model submitted by IIITDWD at DravidianLangTech@ACL 2022. Our team competed in Sub-Task B and finished in 9th place among the participating teams. In our proposed approach, we used a pre-trained transformer model such as Indic-bert for feature extraction, and on top of that, an SVM classifier is used for stance detection. Further, our model achieved 62{\%} accuracy on code-mixed Tamil-English text.
null
null
10.18653/v1/2022.dravidianlangtech-1.16
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,193
inproceedings
k-etal-2022-pandas
{PANDAS}@{T}amil{NLP}-{ACL}2022: Emotion Analysis in {T}amil Text using Language Agnostic Embeddings
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.17/
K, Divyasri and G L, Gayathri and Swaminathan, Krithika and Durairaj, Thenmozhi and B, Bharathi and B, Senthil Kumar
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
105--111
As the world around us continues to become increasingly digital, it has been acknowledged that there is a growing need for emotion analysis of social media content. The task of identifying the emotion in a given text has many practical applications ranging from screening public health to business and management. In this paper, we propose a language-agnostic model that focuses on emotion analysis in Tamil text. Our experiments yielded an F1-score of 0.010.
null
null
10.18653/v1/2022.dravidianlangtech-1.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,194
inproceedings
swaminathan-etal-2022-pandas
{PANDAS}@Abusive Comment Detection in {T}amil Code-Mixed Data Using Custom Embeddings with {L}a{BSE}
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.18/
G L, Gayathri and Swaminathan, Krithika and K, Divyasri and Durairaj, Thenmozhi and B, Bharathi
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
112--119
Abusive language has lately been prevalent in comments on various social media platforms. The increasing hostility observed on the internet calls for the creation of a system that can identify and flag such acerbic content, to prevent conflict and mental distress. This task becomes more challenging when low-resource languages like Tamil, as well as the often-observed Tamil-English code-mixed text, are involved. The approach used in this paper for the classification model includes different methods of feature extraction and the use of traditional classifiers. We propose a novel method of combining language-agnostic sentence embeddings with the TF-IDF vector representation that uses a curated corpus of words as vocabulary, to create a custom embedding, which is then passed to an SVM classifier. Our experimentation yielded an accuracy of 52{\%} and an F1-score of 0.54.
null
null
10.18653/v1/2022.dravidianlangtech-1.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,195
inproceedings
goyal-etal-2022-translation
Translation Techies @{D}ravidian{L}ang{T}ech-{ACL}2022-Machine Translation in {D}ravidian Languages
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.19/
Goyal, Piyushi and Supriya, Musica and U, Dinesh and Nayak, Ashalatha
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
120--124
This paper discusses the details of the submission made by team Translation Techies to the Shared Task on Machine Translation in Dravidian languages - ACL 2022. In connection to the task, five language pairs were provided to test the accuracy of the submitted model. A baseline transformer model with the Neural Machine Translation (NMT) technique is used, taken directly from the OpenNMT framework. On this baseline model, tokenization is applied using the IndicNLP library. Finally, the evaluation is performed using the BLEU scoring mechanism.
null
null
10.18653/v1/2022.dravidianlangtech-1.19
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,196
inproceedings
b-varsha-2022-ssncse
{SSNCSE}{\_}{NLP}@{T}amil{NLP}-{ACL}2022: Transformer based approach for Emotion analysis in {T}amil language
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.20/
B, Bharathi and Varsha, Josephine
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
125--131
Emotion analysis is the process of identifying and analyzing the underlying emotions expressed in textual data. Identifying emotions from a textual conversation is a challenging task due to the absence of gestures, vocal intonation, and facial expressions. Once chatbots and messengers detect and report the emotions of the user, a comfortable conversation can be carried out with no misunderstandings. Our task is to categorize text into a predefined notion of emotion. In this work, text must be classified into several emotional labels depending on the task. We have adopted the transformer model approach to identify the emotions present in the text sequence. Our task is to identify whether a given comment contains emotion, and the emotion it stands for. The datasets were provided to us by the LT-EDI organizers (CITATION) for two tasks, in the Tamil language. We have evaluated the datasets using the pretrained transformer models and obtained micro-averaged F1-scores of 0.19 and 0.12 for Task 1 and Task 2, respectively.
null
null
10.18653/v1/2022.dravidianlangtech-1.20
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,197
inproceedings
hariprasad-etal-2022-ssn
{SSN}{\_}{MLRG}1@{D}ravidian{L}ang{T}ech-{ACL}2022: Troll Meme Classification in {T}amil using Transformer Models
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.21/
Hariprasad, Shruthi and Esackimuthu, Sarika and Madhavan, Saritha and Sivanaiah, Rajalakshmi and S, Angel
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
132--137
The ACL shared task of DravidianLangTech-2022 for Troll Meme classification is a binary classification task that involves identifying Tamil memes as troll or not-troll. Classification of memes is a challenging task since memes express humour and sarcasm in an implicit way. Team SSN{\_}MLRG1 tested and compared results obtained by using three models, namely BERT, ALBERT and XLNet. The XLNet model outperformed the other two models in terms of various performance metrics. The proposed XLNet model obtained the 3rd rank in the shared task with a weighted F1-score of 0.558.
null
null
10.18653/v1/2022.dravidianlangtech-1.21
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,198
inproceedings
pahwa-2022-bphigh
{B}p{H}igh@{T}amil{NLP}-{ACL}2022: Effects of Data Augmentation on Indic-Transformer based classifier for Abusive Comments Detection in {T}amil
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.22/
Pahwa, Bhavish
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
138--144
Social Media platforms have grown their reach worldwide. As an effect of this growth, many vernacular social media platforms have also emerged, focusing more on the diverse languages in the specific regions. Tamil has also emerged as a popular language for use on social media platforms due to the increasing penetration of vernacular media like Sharechat and Moj, which focus more on local Indian languages than English and encourage their users to converse in Indic languages. Abusive language remains a significant challenge in the social media framework and more so when we consider languages like Tamil, which are low-resource languages and have poor performance on multilingual models and lack language-specific models. Based on this shared task, {\textquotedblleft}Abusive Comment detection in Tamil@DravidianLangTech-ACL 2022{\textquotedblright}, we present an exploration of different techniques used to tackle and increase the accuracy of our models using data augmentation in NLP. We also show the results of these techniques.
null
null
10.18653/v1/2022.dravidianlangtech-1.22
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,199
inproceedings
hegde-etal-2022-mucs
{MUCS}@{D}ravidian{L}ang{T}ech@{ACL}2022: Ensemble of Logistic Regression Penalties to Identify Emotions in {T}amil Text
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.23/
Hegde, Asha and Coelho, Sharal and Shashirekha, Hosahalli
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
145--150
Emotion Analysis (EA) is the process of automatically analyzing and categorizing input text into one of a predefined set of emotions. In recent years, people have turned to social media to express their emotions, opinions or feelings about news, movies, products, services, and so on. These users' emotions may help the public, governments, business organizations, film producers, and others in devising strategies, making decisions, and so on. The increasing number of social media users and the increasing amount of user-generated text containing emotions on social media demand automated tools for the analysis of such data, as handling this data manually is labor intensive and error prone. Further, the characteristics of social media data make EA challenging. Most EA research works have focused on the English language, leaving several Indian languages including Tamil unexplored for this task. To address the challenges of EA in Tamil texts, in this paper, we - team MUCS - describe the model submitted to the shared task on Emotion Analysis in Tamil at DravidianLangTech@ACL 2022. Out of the two subtasks in this shared task, our team submitted the model only for Task a. The proposed model comprises an Ensemble of Logistic Regression (LR) classifiers with three penalties, namely: L1, L2, and Elasticnet. This Ensemble model, trained with Term Frequency - Inverse Document Frequency (TF-IDF) of character bigrams and trigrams, secured 4th rank in Task a with a macro averaged F1-score of 0.04. The code to reproduce the proposed models is available on GitHub.
null
null
10.18653/v1/2022.dravidianlangtech-1.23
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,200
inproceedings
v-etal-2022-bphc
{BPHC}@{D}ravidian{L}ang{T}ech-{ACL}2022-A comparative analysis of classical and pre-trained models for troll meme classification in {T}amil
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.24/
V, Achyuta and S R, Mithun Kumar and Malapati, Aruna and Kumar, Lov
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
151--157
Trolling refers to any user behaviour on the internet intended to provoke or instigate conflict, predominantly on social media. This paper aims to classify troll meme captions in Tamil-English code-mixed form. Embeddings are obtained for the raw code-mixed text and for the translated and transliterated versions of the text, and their relative performances are compared. Furthermore, this paper compares the performances of 11 different classification algorithms using Accuracy and F1-score. We conclude that we were able to achieve a weighted F1-score of 0.74 using the MuRIL pre-trained model.
null
null
10.18653/v1/2022.dravidianlangtech-1.24
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,201
inproceedings
b-varsha-2022-ssncse-nlp
{SSNCSE} {NLP}@{T}amil{NLP}-{ACL}2022: Transformer based approach for detection of abusive comment for {T}amil language
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.25/
B, Bharathi and Varsha, Josephine
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
158--164
Social media platforms, along with many other public forums on the Internet, have shown a significant rise in cases of abusive behavior such as Misogyny, Misandry, Homophobia, and Cyberbullying. To tackle these concerns, technologies are being developed and applied, as it is a tedious and time-consuming task to identify, report and block these offenders. Our task was to automate the process of identifying abusive comments and classify them into appropriate categories. The datasets provided by the DravidianLangTech@ACL2022 organizers were a code-mixed form of Tamil text. We trained the datasets using pre-trained transformer models such as BERT, m-BERT, and XLNet and achieved weighted average F1-scores of 0.96 for Tamil-English code-mixed text and 0.59 for Tamil text.
null
null
10.18653/v1/2022.dravidianlangtech-1.25
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,202
inproceedings
s-etal-2022-varsini
{V}arsini{\_}and{\_}{K}irthanna@{D}ravidian{L}ang{T}ech-{ACL}2022-Emotional Analysis in {T}amil
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.26/
S, Varsini and Rajan, Kirthanna and S, Angel and Sivanaiah, Rajalakshmi and Rajendram, Sakaya Milton and T T, Mirnalinee
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
165--169
In this paper, we present our system for the task of Emotion analysis in Tamil. Over 3.96 million people use these platforms to send messages formed using texts, images, videos, audio or combinations of these to express their thoughts and feelings. Text communication on social media platforms is quite overwhelming due to its enormous quantity and simplicity. The data must be processed to understand the general feeling felt by the author. We present a lexicon-based approach for the extraction of emotion in Tamil texts. We use dictionaries of words labelled with their respective emotions. We assign an emotional label to each text and then capture the main emotion expressed in it. Finally, the F1-score on the official test set is 0.0300 and our method ranks 5th.
null
null
10.18653/v1/2022.dravidianlangtech-1.26
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,203
inproceedings
hasan-etal-2022-cuet
{CUET}-{NLP}@{D}ravidian{L}ang{T}ech-{ACL}2022: Investigating Deep Learning Techniques to Detect Multimodal Troll Memes
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.27/
Hasan, Md and Jannat, Nusratul and Hossain, Eftekhar and Sharif, Omar and Hoque, Mohammed Moshiul
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
170--176
With the substantial rise of internet usage, social media has become a powerful communication medium to convey information, opinions, and feelings on various issues. Recently, memes have become a popular way of sharing information on social media. Usually, memes are visuals with text incorporated into them and quickly disseminate hatred and offensive content. Detecting or classifying memes is challenging due to their region-specific interpretation and multimodal nature. This work presents a meme classification technique in Tamil developed by the CUET NLP team under the shared task (DravidianLangTech-ACL2022). Several computational models have been investigated to perform the classification task. This work also explored visual and textual features using VGG16, ResNet50, VGG19, CNN and CNN+LSTM models. Multimodal features are extracted by combining image (VGG16) and text (CNN, LSTM+CNN) characteristics. Results demonstrate that the textual strategy with CNN+LSTM achieved the highest weighted $f_1$-score (0.52) and recall (0.57). Moreover, the CNN-Text+VGG16 outperformed the other models concerning the multimodal memes detection by achieving the highest $f_1$-score of 0.49, but the LSTM+CNN model allowed the team to achieve $4^{th}$ place in the shared task.
null
null
10.18653/v1/2022.dravidianlangtech-1.27
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,204
inproceedings
vyawahare-etal-2022-pict
{PICT}@{D}ravidian{L}ang{T}ech-{ACL}2022: Neural Machine Translation On {D}ravidian Languages
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.28/
Vyawahare, Aditya and Tangsali, Rahul and Mandke, Aditya and Litake, Onkar and Kadam, Dipali
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
177--183
This paper presents a summary of the findings that we obtained based on the shared task on machine translation of Dravidian languages. As a part of this shared task, we carried out neural machine translations for the following five language pairs: Kannada to Tamil, Kannada to Telugu, Kannada to Malayalam, Kannada to Sanskrit, and Kannada to Tulu. The datasets for each of the five language pairs were used to train various translation models, including Seq2Seq models such as LSTM, bidirectional LSTM, and Conv Seq2Seq, training state-of-the-art transformers from scratch, and fine-tuning already pre-trained models. For some models involving monolingual corpora, we implemented backtranslation as well. These models' accuracy was later tested with a part of the same dataset using BLEU score as an evaluation metric.
null
null
10.18653/v1/2022.dravidianlangtech-1.28
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,205
inproceedings
s-r-etal-2022-sentiment
Sentiment Analysis on Code-Switched {D}ravidian Languages with Kernel Based Extreme Learning Machines
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.29/
S R, Mithun Kumar and Kumar, Lov and Malapati, Aruna
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
184--190
Code-switching refers to textual or spoken data containing multiple languages. Applying natural language processing (NLP) tasks like sentiment analysis is a harder problem on code-switched languages due to the irregularities in sentence structuring and ordering. This paper shows the experimental results of building Kernel-based Extreme Learning Machines (ELM) for sentiment analysis of Dravidian languages code-switched with English. Our results show that ELM performs better than traditional machine learning classifiers on various metrics and trains faster than deep learning models. We also show that polynomial kernels perform better than others in the ELM architecture. We were able to achieve a median AUC of 0.79 with a polynomial kernel.
null
null
10.18653/v1/2022.dravidianlangtech-1.29
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,206
inproceedings
mustakim-etal-2022-cuet
{CUET}-{NLP}@{D}ravidian{L}ang{T}ech-{ACL}2022: Exploiting Textual Features to Classify Sentiment of Multimodal Movie Reviews
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.30/
Mustakim, Nasehatul and Jannat, Nusratul and Hasan, Md and Hossain, Eftekhar and Sharif, Omar and Hoque, Mohammed Moshiul
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
191--198
With the proliferation of internet usage, a massive growth of consumer-generated content on social media has been witnessed in recent years that provides people's opinions on diverse issues. Through social media, users can convey their emotions and thoughts in distinctive forms such as text, image, audio, video, and emoji, which leads to the advancement of the multimodality of the content users post on social networking sites. This paper presents a technique for classifying multimodal sentiment using the text modality into five categories: highly positive, positive, neutral, negative, and highly negative. A shared task was organized to develop models that can identify the sentiments expressed by the videos of movie reviewers in both Malayalam and Tamil languages. This work applied several machine learning techniques (LR, DT, MNB, SVM) and deep learning (BiLSTM, CNN+BiLSTM) to accomplish the task. Results demonstrate that the proposed model with the decision tree (DT) outperformed the other methods and won the competition by acquiring the highest macro $f_1$-score of 0.24.
null
null
10.18653/v1/2022.dravidianlangtech-1.30
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,207
inproceedings
mustakim-etal-2022-cuet-nlp
{CUET}-{NLP}@{T}amil{NLP}-{ACL}2022: Multi-Class Textual Emotion Detection from Social Media using Transformer
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.31/
Mustakim, Nasehatul and Rabu, Rabeya and Md. Mursalin, Golam and Hossain, Eftekhar and Sharif, Omar and Hoque, Mohammed Moshiul
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
199--206
Recently, emotion analysis has gained increased attention by NLP researchers due to its various applications in opinion mining, e-commerce, comprehensive search, healthcare, personalized recommendations and online education. Developing an intelligent emotion analysis model is challenging in resource-constrained languages like Tamil. Therefore a shared task is organized to identify the underlying emotion of a given comment expressed in the Tamil language. The paper presents our approach to classifying the textual emotion in Tamil into 11 classes: ambiguous, anger, anticipation, disgust, fear, joy, love, neutral, sadness, surprise and trust. We investigated various machine learning (LR, DT, MNB, SVM), deep learning (CNN, LSTM, BiLSTM) and transformer-based models (Multilingual-BERT, XLM-R). Results reveal that the XLM-R model outdoes all other models by acquiring the highest macro $f_1$-score (0.33).
null
null
10.18653/v1/2022.dravidianlangtech-1.31
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,208
inproceedings
rajalakshmi-etal-2022-dlrg
{DLRG}@{D}ravidian{L}ang{T}ech-{ACL}2022: Abusive Comment Detection in {T}amil using Multilingual Transformer Models
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.32/
Rajalakshmi, Ratnavel and Duraphe, Ankita and Shibani, Antonette
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
207--213
Online social networks have let people connect and interact with each other. They do, however, also provide a platform for online abusers to propagate abusive content. The vast majority of abusive remarks are written in a multilingual style, which allows them to easily slip past internet inspection. This paper presents a system developed for the Shared Task on Abusive Comment Detection (Misogyny, Misandry, Homophobia, Transphobic, Xenophobia, CounterSpeech, Hope Speech) in Tamil at DravidianLangTech@ACL 2022 to detect the abusive category of each comment. We approach the task with three methodologies - Machine Learning, Deep Learning, and Transformer-based modeling - for two sets of data: a Tamil and a Tamil+English language dataset. The dataset used in our system can be accessed from the competition on CodaLab. For Machine Learning, eight algorithms were implemented, among which Random Forest gave the best result on the Tamil+English dataset, with a weighted average F1-score of 0.78. For Deep Learning, Bi-Directional LSTM gave the best result with pre-trained word embeddings. In Transformer-based modeling, we used IndicBERT and mBERT with fine-tuning, among which mBERT gave the best result for the Tamil dataset with a weighted average F1-score of 0.7.
null
null
10.18653/v1/2022.dravidianlangtech-1.32
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,209
inproceedings
bhattacharyya-2022-aanisha
Aanisha@{T}amil{NLP}-{ACL}2022:Abusive Detection in {T}amil
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.33/
Bhattacharyya, Aanisha
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
214--220
On social media, there are instances where people present their opinions in strong language, resorting to abusive/toxic comments. There are instances of communal hatred, hate speech, toxicity, and bullying. In this age of social media, it is very important to find means to keep these toxic comments in check, so as to preserve the mental peace of people on social media. While there are tools and models to detect and potentially filter this kind of content, developing such models for the low-resource language space is an open research issue. In this paper, the task of abusive comment identification in the Tamil language is treated as a multi-class classification problem. Different pre-processing as well as modelling approaches are discussed in this paper, and the different approaches are compared on the basis of weighted average accuracy.
null
null
10.18653/v1/2022.dravidianlangtech-1.33
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,210
inproceedings
hossain-etal-2022-combatant
{COMBATANT}@{T}amil{NLP}-{ACL}2022: Fine-grained Categorization of Abusive Comments using Logistic Regression
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.34/
Hossain, Alamgir and Bishal, Mahathir and Hossain, Eftekhar and Sharif, Omar and Hoque, Mohammed Moshiul
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
221--228
With the widespread usage of social media and effortless internet access, millions of posts and comments are generated every minute. Unfortunately, with this substantial rise, the usage of abusive language has increased significantly in these mediums. This proliferation leads to many hazards such as cyber-bullying, vulgarity, online harassment, and abuse. Therefore, detecting and mitigating the usage of abusive language becomes a crucial issue. This work presents our system developed as part of the shared task to detect abusive language in Tamil. We employed three machine learning (LR, DT, SVM), two deep learning (CNN+BiLSTM, CNN+BiLSTM with FastText), and a transformer-based model (Indic-BERT). The experimental results show that the Logistic Regression (LR) and CNN+BiLSTM models outperformed the others. Both Logistic Regression (LR) and CNN+BiLSTM with FastText achieved a weighted $F_1$-score of 0.39. However, LR obtained a higher recall value (0.44) than CNN+BiLSTM (0.36). This led us to secure the $2^{nd}$ rank in the shared task competition.
null
null
10.18653/v1/2022.dravidianlangtech-1.34
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,211
inproceedings
gokhale-etal-2022-optimize
{O}ptimize{\_}{P}rime@{D}ravidian{L}ang{T}ech-{ACL}2022: Emotion Analysis in {T}amil
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.dravidianlangtech-1.35/
Gokhale, Omkar and Patankar, Shantanu and Litake, Onkar and Mandke, Aditya and Kadam, Dipali
Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages
229--234
This paper performs an emotion analysis of social media comments in Tamil. Emotion analysis is the process of identifying the emotional context of a text. In this paper, we present the findings obtained by Team Optimize{\_}Prime in the ACL 2022 shared task {\textquotedblleft}Emotion Analysis in Tamil.{\textquotedblright} The task aimed to classify social media comments into categories of emotion like Joy, Anger, Trust, Disgust, etc. The task was further divided into two subtasks, one with 11 broad categories of emotion and the other with 31 specific categories of emotion. We implemented three different approaches to tackle this problem: transformer-based models, Recurrent Neural Networks (RNNs), and Ensemble models. XLM-RoBERTa performed the best on the first task with a macro-averaged f1 score of 0.27, while MuRIL provided the best results on the second task with a macro-averaged f1 score of 0.13.
null
null
10.18653/v1/2022.dravidianlangtech-1.35
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,212