Dataset schema (column, type, observed cardinality or length/value range):

    entry_type         stringclasses  4 values
    citation_key       stringlengths  10-110
    title              stringlengths  6-276
    editor             stringclasses  723 values
    month              stringclasses  69 values
    year               stringdate     1963-01-01 to 2022-01-01
    address            stringclasses  202 values
    publisher          stringclasses  41 values
    url                stringlengths  34-62
    author             stringlengths  6-2.07k
    booktitle          stringclasses  861 values
    pages              stringlengths  1-12
    abstract           stringlengths  302-2.4k
    journal            stringclasses  5 values
    volume             stringclasses  24 values
    doi                stringlengths  20-39
    n                  stringclasses  3 values
    wer                stringclasses  1 value
    uas                null
    language           stringclasses  3 values
    isbn               stringclasses  34 values
    recall             null
    number             stringclasses  8 values
    a                  null
    b                  null
    c                  null
    k                  null
    f1                 stringclasses  4 values
    r                  stringclasses  2 values
    mci                stringclasses  1 value
    p                  stringclasses  2 values
    sd                 stringclasses  1 value
    female             stringclasses  0 values
    m                  stringclasses  0 values
    food               stringclasses  1 value
    f                  stringclasses  1 value
    note               stringclasses  20 values
    __index_level_0__  int64          22k-106k
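Each record below is one row of this schema flattened into column order, with absent fields printed as null. A minimal sketch of how such a row maps back onto a BibTeX entry (the rows are assumed to arrive as plain dicts keyed by the schema's column names; `row_to_bibtex` is an illustrative helper, not part of the dataset):

```python
def row_to_bibtex(row: dict) -> str:
    """Render one flattened dataset row as a BibTeX entry,
    skipping null fields and bookkeeping columns."""
    entry_type = row["entry_type"]
    key = row["citation_key"]
    # Columns that are bookkeeping rather than BibTeX fields.
    skip = {"entry_type", "citation_key", "__index_level_0__"}
    lines = [f"@{entry_type}{{{key},"]
    for field, value in row.items():
        if field in skip or value is None:
            continue
        # `month` is conventionally an unquoted three-letter macro (jul, ...).
        if field == "month":
            lines.append(f"    month = {value},")
        else:
            lines.append(f'    {field} = "{value}",')
    lines.append("}")
    return "\n".join(lines)
```

Fields such as `wer`, `uas`, `f1`, or `mci` are non-standard BibTeX keys specific to this dataset; the helper passes them through unchanged whenever they are non-null.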
@inproceedings{jansen-2022-systematic,
    title = "A Systematic Survey of Text Worlds as Embodied Natural Language Environments",
    author = "Jansen, Peter",
    editor = "C{\^o}t{\'e}, Marc-Alexandre and Yuan, Xingdi and Ammanabrolu, Prithviraj",
    booktitle = "Proceedings of the 3rd Wordplay: When Language Meets Games Workshop (Wordplay 2022)",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wordplay-1.1/",
    doi = "10.18653/v1/2022.wordplay-1.1",
    pages = "1--15",
    abstract = "Text Worlds are virtual environments for embodied agents that, unlike 2D or 3D environments, are rendered exclusively using textual descriptions. These environments offer an alternative to higher-fidelity 3D environments due to their low barrier to entry, providing the ability to study semantics, compositional inference, and other high-level tasks with rich action spaces while controlling for perceptual input. This systematic survey outlines recent developments in tooling, environments, and agent modeling for Text Worlds, while examining recent trends in knowledge graphs, common sense reasoning, transfer learning of Text World performance to higher-fidelity environments, as well as near-term development targets that, once achieved, make Text Worlds an attractive general research paradigm for natural language processing.",
}
% __index_level_0__: 21985
@inproceedings{montfort-bartlett-fernandez-2022-minimal,
    title = "A Minimal Computational Improviser Based on Oral Thought",
    author = "Montfort, Nick and Bartlett Fernandez, Sebastian",
    editor = "C{\^o}t{\'e}, Marc-Alexandre and Yuan, Xingdi and Ammanabrolu, Prithviraj",
    booktitle = "Proceedings of the 3rd Wordplay: When Language Meets Games Workshop (Wordplay 2022)",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wordplay-1.2/",
    doi = "10.18653/v1/2022.wordplay-1.2",
    pages = "16--24",
    abstract = "A prototype system for playing a minimal improvisational game with one or more human or computer players is discussed. The game, Chain Reaction, has players collectively build a chain of word pairs or solid compounds. With a basis in oral culture, it emphasizes memory and rapid improvisation. Chains are only locally coherent, so absurdity and humor increase during play. While it is trivial to develop a computer player using textual corpora and literate-culture concepts, our approach is unique in that we have grounded our work in the principles of oral culture according to Walter Ong, an early scholar of orature. We show how a simple computer model can be designed to embody many aspects of oral poetics as theorized by Ong, suggesting design directions for other work in oral improvisation and poetics. The opportunities for our own system's further development include creating culturally specific automated players and situating play in different temporal, physical, and social contexts.",
}
% __index_level_0__: 21986
@inproceedings{volum-etal-2022-craft,
    title = "Craft an Iron Sword: Dynamically Generating Interactive Game Characters by Prompting Large Language Models Tuned on Code",
    author = "Volum, Ryan and Rao, Sudha and Xu, Michael and DesGarennes, Gabriel and Brockett, Chris and Van Durme, Benjamin and Deng, Olivia and Malhotra, Akanksha and Dolan, Bill",
    editor = "C{\^o}t{\'e}, Marc-Alexandre and Yuan, Xingdi and Ammanabrolu, Prithviraj",
    booktitle = "Proceedings of the 3rd Wordplay: When Language Meets Games Workshop (Wordplay 2022)",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wordplay-1.3/",
    doi = "10.18653/v1/2022.wordplay-1.3",
    pages = "25--43",
    abstract = "Non-Player Characters (NPCs) significantly enhance the player experience in many games. Historically, players' interactions with NPCs have tended to be highly scripted, limited to natural language responses selected by the player, and devoid of dynamic change in game state. In this work, we demonstrate that a few example conversational prompts can power a conversational agent to generate both natural language and novel code. This approach can permit development of NPCs with which players can have grounded conversations that are free-form and less repetitive. We demonstrate our approach using OpenAI Codex (GPT-3 fine-tuned on GitHub), with Minecraft game development as our test bed. We show that with a few example prompts, a Codex-based agent can generate novel code, hold multi-turn conversations, and answer questions about structured data. We evaluate this application with experienced gamers in a Minecraft realm, provide an analysis of failure cases, and suggest possible directions for solutions.",
}
% __index_level_0__: 21987
@inproceedings{furman-etal-2022-sequence,
    title = "A Sequence Modelling Approach to Question Answering in Text-Based Games",
    author = "Furman, Gregory and Toledo, Edan and Shock, Jonathan and Buys, Jan",
    editor = "C{\^o}t{\'e}, Marc-Alexandre and Yuan, Xingdi and Ammanabrolu, Prithviraj",
    booktitle = "Proceedings of the 3rd Wordplay: When Language Meets Games Workshop (Wordplay 2022)",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wordplay-1.4/",
    doi = "10.18653/v1/2022.wordplay-1.4",
    pages = "44--58",
    abstract = "Interactive Question Answering (IQA) requires an intelligent agent to interact with a dynamic environment in order to gather information necessary to answer a question. IQA tasks have been proposed as means of training systems to develop language or visual comprehension abilities. To this end, the Question Answering with Interactive Text (QAit) task was created to produce and benchmark interactive agents capable of seeking information and answering questions in unseen environments. While prior work has exclusively focused on IQA as a reinforcement learning problem, such methods suffer from low sample efficiency and poor accuracy in zero-shot evaluation. In this paper, we propose the use of the recently proposed Decision Transformer architecture to provide improvements upon prior baselines. By utilising a causally masked GPT-2 Transformer for command generation and a BERT model for question answer prediction, we show that the Decision Transformer achieves performance greater than or equal to current state-of-the-art RL baselines on the QAit task in a sample efficient manner. In addition, these results are achievable by training on sub-optimal random trajectories, therefore not requiring the use of online agents to gather data.",
}
% __index_level_0__: 21988
@inproceedings{teodorescu-etal-2022-automatic,
    title = "Automatic Exploration of Textual Environments with Language-Conditioned Autotelic Agents",
    author = "Teodorescu, Laetitia and Yuan, Xingdi and C{\^o}t{\'e}, Marc-Alexandre and Oudeyer, Pierre-Yves",
    editor = "C{\^o}t{\'e}, Marc-Alexandre and Yuan, Xingdi and Ammanabrolu, Prithviraj",
    booktitle = "Proceedings of the 3rd Wordplay: When Language Meets Games Workshop (Wordplay 2022)",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wordplay-1.5/",
    doi = "10.18653/v1/2022.wordplay-1.5",
    pages = "59--62",
    abstract = "The purpose of this extended abstract is to discuss the possible fruitful interactions between intrinsically-motivated language-conditioned agents and textual environments. We define autotelic agents as agents able to set their own goals. We identify desirable properties of textual environments that make them a good testbed for autotelic agents. We then list drivers of exploration for such agents that would allow them to achieve large repertoires of skills in these environments, enabling such agents to be repurposed for solving the benchmarks implemented in textual environments. We then discuss challenges and further perspectives brought about by this interaction.",
}
% __index_level_0__: 21989
@inproceedings{yuan-etal-2022-separating,
    title = "Separating Hate Speech and Offensive Language Classes via Adversarial Debiasing",
    author = "Yuan, Shuzhou and Maronikolakis, Antonis and Sch{\"u}tze, Hinrich",
    editor = "Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak",
    booktitle = "Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)",
    month = jul,
    year = "2022",
    address = "Seattle, Washington (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.woah-1.1/",
    doi = "10.18653/v1/2022.woah-1.1",
    pages = "1--10",
    abstract = "Research to tackle hate speech plaguing online media has made strides in providing solutions, analyzing bias and curating data. A challenging problem is ambiguity between hate speech and offensive language, causing low performance both overall and specifically for the hate speech class. It can be argued that misclassifying actual hate speech content as merely offensive can lead to further harm against targeted groups. In our work, we mitigate this potentially harmful phenomenon by proposing an adversarial debiasing method to separate the two classes. We show that our method works for English, Arabic, German and Hindi, as well as in a multilingual setting, improving performance over baselines.",
}
% __index_level_0__: 21991
@inproceedings{ashida-komachi-2022-towards,
    title = "Towards Automatic Generation of Messages Countering Online Hate Speech and Microaggressions",
    author = "Ashida, Mana and Komachi, Mamoru",
    editor = "Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak",
    booktitle = "Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)",
    month = jul,
    year = "2022",
    address = "Seattle, Washington (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.woah-1.2/",
    doi = "10.18653/v1/2022.woah-1.2",
    pages = "11--23",
    abstract = "With the widespread use of social media, online hate is increasing, and microaggressions are receiving attention. We explore the potential for using pretrained language models to automatically generate messages that combat the associated offensive texts. Specifically, we focus on using prompting to steer model generation as it requires less data and computation than fine-tuning. We also propose three human evaluation criteria: offensiveness, stance, and informativeness. After obtaining 306 counterspeech and 42 microintervention messages generated by GPT-2, GPT-3, and GPT-Neo, we conducted a human evaluation using Amazon Mechanical Turk. The results indicate the potential of using prompting in the proposed generation task. All the generated texts along with the annotations are published to encourage future research on countering hate and microaggressions online.",
}
% __index_level_0__: 21992
@inproceedings{datta-etal-2022-greasevision-rewriting,
    title = "{G}rease{V}ision: Rewriting the Rules of the Interface",
    author = "Datta, Siddhartha and Kollnig, Konrad and Shadbolt, Nigel",
    editor = "Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak",
    booktitle = "Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)",
    month = jul,
    year = "2022",
    address = "Seattle, Washington (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.woah-1.3/",
    doi = "10.18653/v1/2022.woah-1.3",
    pages = "24--28",
    abstract = "Digital harms can manifest across any interface. Key problems in addressing these harms include the high individuality of harms and the fast-changing nature of digital systems. We put forth GreaseVision, a collaborative human-in-the-loop (HITL) learning framework that enables end-users to analyze their screenomes to annotate harms as well as render overlay interventions. We evaluate HITL intervention development with a set of completed tasks in a cognitive walkthrough, and test scalability with one-shot element removal and fine-tuning of hate speech classification models. The framework and tool allow individual end-users to study their usage history and create personalized interventions, and enable researchers to study the distribution of multi-modal harms and interventions at scale.",
}
% __index_level_0__: 21993
@inproceedings{ludwig-etal-2022-improving,
    title = "Improving Generalization of Hate Speech Detection Systems to Novel Target Groups via Domain Adaptation",
    author = "Ludwig, Florian and Dolos, Klara and Zesch, Torsten and Hobley, Eleanor",
    editor = "Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak",
    booktitle = "Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)",
    month = jul,
    year = "2022",
    address = "Seattle, Washington (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.woah-1.4/",
    doi = "10.18653/v1/2022.woah-1.4",
    pages = "29--39",
    abstract = "Despite recent advances in machine learning based hate speech detection, classifiers still struggle with generalizing knowledge to out-of-domain data samples. In this paper, we investigate the generalization capabilities of deep learning models to different target groups of hate speech under clean experimental settings. Furthermore, we assess the efficacy of three different strategies of unsupervised domain adaptation to improve these capabilities. Given the diversity of hate and its rapid dynamics in the online world (e.g. the evolution of new target groups like virologists during the COVID-19 pandemic), robustly detecting hate aimed at newly identified target groups is a highly relevant research question. We show that naively trained models suffer from a target group specific bias, which can be reduced via domain adaptation. We were able to achieve a relative improvement of the F1-score between 5.8{\%} and 10.7{\%} for out-of-domain target groups of hate speech compared to baseline approaches by utilizing domain adaptation.",
}
% __index_level_0__: 21994
@inproceedings{ruitenbeek-etal-2022-zo,
    title = "{\textquotedblleft}Zo Grof !{\textquotedblright}: A Comprehensive Corpus for Offensive and Abusive Language in {D}utch",
    author = "Ruitenbeek, Ward and Zwart, Victor and Van Der Noord, Robin and Gnezdilov, Zhenja and Caselli, Tommaso",
    editor = "Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak",
    booktitle = "Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)",
    month = jul,
    year = "2022",
    address = "Seattle, Washington (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.woah-1.5/",
    doi = "10.18653/v1/2022.woah-1.5",
    pages = "40--56",
    abstract = "This paper presents a comprehensive corpus for the study of socially unacceptable language in Dutch. The corpus extends and revises an existing resource with more data and introduces a new annotation dimension for offensive language, making it a unique resource in the Dutch language panorama. Each language phenomenon (abusive and offensive language) in the corpus has been annotated with a multi-layer annotation scheme modelling the explicitness and the target(s) of the message. We have conducted a new set of experiments with different classification algorithms on all annotation dimensions. Monolingual pre-trained language models prove to be the best systems, obtaining a macro-average F1 of 0.828 for binary classification of offensive language, and 0.579 for the targets of offensive messages. Furthermore, the best system obtains a macro-average F1 of 0.667 for distinguishing between abusive and offensive messages.",
}
% __index_level_0__: 21995
@inproceedings{goffredo-etal-2022-counter,
    title = "Counter-{TWIT}: An {I}talian Corpus for Online Counterspeech in Ecological Contexts",
    author = "Goffredo, Pierpaolo and Basile, Valerio and Cepollaro, Bianca and Patti, Viviana",
    editor = "Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak",
    booktitle = "Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)",
    month = jul,
    year = "2022",
    address = "Seattle, Washington (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.woah-1.6/",
    doi = "10.18653/v1/2022.woah-1.6",
    pages = "57--66",
    abstract = "This work describes the process of creating a corpus of Twitter conversations annotated for the presence of counterspeech in response to toxic speech related to axes of discrimination linked to sexism, racism and homophobia. The main novelty is an annotated dataset comprising relevant tweets in their context of occurrence. The corpus is made up of tweets and responses captured by different profiles replying to discriminatory content or objectionably couched news. An annotation scheme was created to make explicit the knowledge on the dimensions of toxic speech and counterspeech. An analysis of the collected and annotated data and of the inter-annotator agreement (IAA) that emerged during the annotation process is included. Moreover, we report on preliminary experiments on automatic counterspeech detection, based on supervised learning models trained on the new dataset. The results highlight the fundamental role played by context in this detection task, confirming our intuitions about the importance of collecting tweets in their context of occurrence.",
}
% __index_level_0__: 21996
@inproceedings{deshpande-etal-2022-stereokg,
    title = "{S}tereo{KG}: Data-Driven Knowledge Graph Construction For Cultural Knowledge and Stereotypes",
    author = "Deshpande, Awantee and Ruiter, Dana and Mosbach, Marius and Klakow, Dietrich",
    editor = "Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak",
    booktitle = "Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)",
    month = jul,
    year = "2022",
    address = "Seattle, Washington (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.woah-1.7/",
    doi = "10.18653/v1/2022.woah-1.7",
    pages = "67--78",
    abstract = "Analyzing ethnic or religious bias is important for improving fairness, accountability, and transparency of natural language processing models. However, many techniques rely on human-compiled lists of bias terms, which are expensive to create and are limited in coverage. In this study, we present a fully data-driven pipeline for generating a knowledge graph (KG) of cultural knowledge and stereotypes. Our resulting KG covers 5 religious groups and 5 nationalities and can easily be extended to more entities. Our human evaluation shows that the majority (59.2{\%}) of non-singleton entries are coherent and complete stereotypes. We further show that performing intermediate masked language model training on the verbalized KG leads to a higher level of cultural awareness in the model and has the potential to increase classification performance on knowledge-crucial samples on a related task, i.e., hate speech detection.",
}
% __index_level_0__: 21997
@inproceedings{lu-jurgens-2022-subtle,
    title = "The subtle language of exclusion: Identifying the Toxic Speech of Trans-exclusionary Radical Feminists",
    author = "Lu, Christina and Jurgens, David",
    editor = "Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak",
    booktitle = "Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)",
    month = jul,
    year = "2022",
    address = "Seattle, Washington (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.woah-1.8/",
    doi = "10.18653/v1/2022.woah-1.8",
    pages = "79--91",
    abstract = "Toxic language can take many forms, from explicit hate speech to more subtle microaggressions. Within this space, models identifying transphobic language have largely focused on overt forms. However, a more pernicious and subtle source of transphobic comments comes in the form of statements made by Trans-exclusionary Radical Feminists (TERFs); these statements often appear seemingly-positive and promote women's causes and issues, while simultaneously denying the inclusion of transgender women as women. Here, we introduce two models to mitigate this antisocial behavior. The first model identifies TERF users in social media, recognizing that these users are a main source of transphobic material that enters mainstream discussion and whom other users may not desire to engage with in good faith. The second model tackles the harder task of recognizing the masked rhetoric of TERF messages and introduces a new dataset to support this task. Finally, we discuss the ethics of deploying these models to mitigate the harm of this language, arguing for a balanced approach that allows for restorative interactions.",
}
% __index_level_0__: 21998
@inproceedings{chvasta-etal-2022-lost,
    title = "Lost in Distillation: A Case Study in Toxicity Modeling",
    author = "Chvasta, Alyssa and Lees, Alyssa and Sorensen, Jeffrey and Vasserman, Lucy and Goyal, Nitesh",
    editor = "Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak",
    booktitle = "Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)",
    month = jul,
    year = "2022",
    address = "Seattle, Washington (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.woah-1.9/",
    doi = "10.18653/v1/2022.woah-1.9",
    pages = "92--101",
    abstract = "In an era of increasingly large pre-trained language models, knowledge distillation is a powerful tool for transferring information from a large model to a smaller one. In particular, distillation is of tremendous benefit when it comes to real-world constraints such as serving latency or serving at scale. However, a loss of robustness in language understanding may be hidden in the process and not immediately revealed when looking at high-level evaluation metrics. In this work, we investigate the hidden costs: what is {\textquotedblleft}lost in distillation{\textquotedblright}, especially with regard to identity-based model bias, using the case study of toxicity modeling. With reproducible models using open-source training sets, we investigate models distilled from a BERT teacher baseline. Using both open-source and proprietary big-data models, we investigate these hidden performance costs.",
}
% __index_level_0__: 21999
@inproceedings{stamou-etal-2022-cleansing,
    title = "Cleansing {\&} expanding the {HURTLEX}(el) with a multidimensional categorization of offensive words",
    author = "Stamou, Vivian and Alexiou, Iakovi and Klimi, Antigone and Molou, Eleftheria and Saivanidou, Alexandra and Markantonatou, Stella",
    editor = "Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak",
    booktitle = "Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)",
    month = jul,
    year = "2022",
    address = "Seattle, Washington (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.woah-1.10/",
    doi = "10.18653/v1/2022.woah-1.10",
    pages = "102--108",
    abstract = "We present a cleansed version of the multilingual lexicon HURTLEX-(EL) comprising 737 offensive words of Modern Greek. We worked bottom-up in two annotation rounds and developed detailed guidelines by cross-classifying words on three dimensions: context, reference, and thematic domain. Our classification reveals a wider spectrum of thematic domains concerning the study of offensive language than previously thought (Efthymiou et al., 2014) and reveals social and cultural aspects that are not included in the HURTLEX categories.",
}
% __index_level_0__: 22000
@inproceedings{israeli-tsur-2022-free,
    title = "Free speech or Free Hate Speech? Analyzing the Proliferation of Hate Speech in Parler",
    author = "Israeli, Abraham and Tsur, Oren",
    editor = "Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak",
    booktitle = "Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)",
    month = jul,
    year = "2022",
    address = "Seattle, Washington (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.woah-1.11/",
    doi = "10.18653/v1/2022.woah-1.11",
    pages = "109--121",
    abstract = "Social platforms such as Gab and Parler, branded as {\textquoteleft}free-speech{\textquoteright} networks, have seen a significant growth of their user base in recent years. This popularity is mainly attributed to the stricter moderation enforced by mainstream platforms such as Twitter, Facebook, and Reddit. In this work we provide the first large-scale analysis of hate speech on Parler. We experiment with an array of algorithms for hate speech detection, demonstrating limitations of transfer learning in that domain, given the elusive and ever-changing nature of the ways hate speech is delivered. In order to improve classification accuracy we annotated 10K Parler posts, which we use to fine-tune a BERT classifier. Classification of individual posts is then leveraged for the classification of millions of users via label propagation over the social network. Classifying users by their propensity to disseminate hate, we find that hate mongers make up 16.1{\%} of Parler's active users, and that they have distinct characteristics compared to other user groups. We further complement our analysis by comparing the trends observed in Parler to those found in Gab. To the best of our knowledge, this is among the first works to analyze hate speech in Parler in a quantitative manner and on the user level.",
}
% __index_level_0__: 22001
@inproceedings{arango-monnar-etal-2022-resources,
    title = "Resources for Multilingual Hate Speech Detection",
    author = "Arango Monnar, Ayme and Perez, Jorge and Poblete, Barbara and Salda{\~n}a, Magdalena and Proust, Valentina",
    editor = "Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak",
    booktitle = "Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)",
    month = jul,
    year = "2022",
    address = "Seattle, Washington (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.woah-1.12/",
    doi = "10.18653/v1/2022.woah-1.12",
    pages = "122--130",
    abstract = "Most of the published approaches and resources for hate speech detection are tailored for the English language. In consequence, cross-lingual and cross-cultural perspectives lack some essential resources. The lack of diversity of the datasets in Spanish is notable. Variations throughout Spanish-speaking countries make existing datasets insufficient to encompass the task in the different Spanish variants. We annotated 9834 tweets from Chile to enrich the existing Spanish resources with different words and new targets of hate that have not been considered in previous studies. We conducted several cross-dataset evaluation experiments of the models published in the literature using our Chilean dataset and two others in English and Spanish. We propose a comparative framework for quickly conducting comparative experiments using different previously published models. In addition, we set up a Codalab competition for further comparison of new models in a standard scenario, that is, with fixed data partitions and evaluation metrics. All resources can be accessed through a centralized repository for researchers to get a complete picture of the progress on the multilingual hate speech and offensive language detection task.",
}
% __index_level_0__: 22002
@inproceedings{saleem-etal-2022-enriching,
    title = "Enriching Abusive Language Detection with Community Context",
    author = "Saleem, Haji Mohammad and Kurrek, Jana and Ruths, Derek",
    editor = "Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak",
    booktitle = "Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)",
    month = jul,
    year = "2022",
    address = "Seattle, Washington (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.woah-1.13/",
    doi = "10.18653/v1/2022.woah-1.13",
    pages = "131--142",
    abstract = "Uses of pejorative expressions can be benign or actively empowering. When models for abuse detection misclassify these expressions as derogatory, they inadvertently censor productive conversations held by marginalized groups. One way to engage with non-dominant perspectives is to add context around conversations. Previous research has leveraged user- and thread-level features, but it often neglects the spaces within which productive conversations take place. Our paper highlights how community context can improve classification outcomes in abusive language detection. We make two main contributions to this end. First, we demonstrate that online communities cluster by the nature of their support towards victims of abuse. Second, we establish how community context improves accuracy and reduces the false positive rates of state-of-the-art abusive language classifiers. These findings suggest a promising direction for context-aware models in abusive language research.",
}
% __index_level_0__: 22003
@inproceedings{demus-etal-2022-comprehensive,
    title = "DeTox: A Comprehensive Dataset for {G}erman Offensive Language and Conversation Analysis",
    author = "Demus, Christoph and Pitz, Jonas and Sch{\"u}tz, Mina and Probol, Nadine and Siegel, Melanie and Labudde, Dirk",
    editor = "Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak",
    booktitle = "Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)",
    month = jul,
    year = "2022",
    address = "Seattle, Washington (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.woah-1.14/",
    doi = "10.18653/v1/2022.woah-1.14",
    pages = "143--153",
    abstract = "In this work, we present a new publicly available offensive language dataset of 10,278 German social media comments collected in the first half of 2021 and annotated by six annotators in total. With twelve different annotation categories, it is far more comprehensive than other datasets, and goes beyond mere hate speech detection. The labels target in particular the toxicity, criminal relevance and discrimination type of comments. Furthermore, about half of the comments are from coherent parts of conversations, which makes it possible to consider the comments' context and perform conversation analyses in order to research the contagion of offensive language in conversations.",
}
% __index_level_0__: 22004
@inproceedings{rottger-etal-2022-multilingual,
    title = "Multilingual {H}ate{C}heck: Functional Tests for Multilingual Hate Speech Detection Models",
    author = "R{\"o}ttger, Paul and Seelawi, Haitham and Nozza, Debora and Talat, Zeerak and Vidgen, Bertie",
    editor = "Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak",
    booktitle = "Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)",
    month = jul,
    year = "2022",
    address = "Seattle, Washington (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.woah-1.15/",
    doi = "10.18653/v1/2022.woah-1.15",
    pages = "154--169",
    abstract = "Hate speech detection models are typically evaluated on held-out test sets. However, this risks painting an incomplete and potentially misleading picture of model performance because of increasingly well-documented systematic gaps and biases in hate speech datasets. To enable more targeted diagnostic insights, recent research has thus introduced functional tests for hate speech detection models. However, these tests currently only exist for English-language content, which means that they cannot support the development of more effective models in other languages spoken by billions across the world. To help address this issue, we introduce Multilingual HateCheck (MHC), a suite of functional tests for multilingual hate speech detection models. MHC covers 34 functionalities across ten languages, which is more languages than any other hate speech dataset. To illustrate MHC's utility, we train and test a high-performing multilingual hate speech detection model, and reveal critical model weaknesses for monolingual and cross-lingual applications.",
}
% __index_level_0__: 22005
inproceedings
hertzberg-etal-2022-distributional
Distributional properties of political dogwhistle representations in {S}wedish {BERT}
Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak
jul
2022
Seattle, Washington (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.woah-1.16/
Hertzberg, Niclas and Cooper, Robin and Lindgren, Elina and R{\"o}nnerstrand, Bj{\"o}rn and Rettenegger, Gregor and Breitholtz, Ellen and Sayeed, Asad
Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)
170--175
{\textquotedblleft}Dogwhistles{\textquotedblright} are expressions intended by the speaker to have two messages: a socially-unacceptable {\textquotedblleft}in-group{\textquotedblright} message understood by a subset of listeners, and a benign message intended for the out-group. We take the result of a word-replacement survey of the Swedish population intended to reveal how dogwhistles are understood, and we show that the difficulty of annotating dogwhistles is reflected in the separability in the space of a sentence-transformer Swedish BERT trained on general data.
null
null
10.18653/v1/2022.woah-1.16
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,006
inproceedings
khurana-etal-2022-hate
Hate Speech Criteria: A Modular Approach to Task-Specific Hate Speech Definitions
Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak
jul
2022
Seattle, Washington (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.woah-1.17/
Khurana, Urja and Vermeulen, Ivar and Nalisnick, Eric and Van Noorloos, Marloes and Fokkens, Antske
Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)
176--191
The subjectivity of automatic hate speech detection makes it a complex task, reflected in different and incomplete definitions in NLP. We present hate speech criteria, developed with insights from a law and social science expert, that help researchers create more explicit definitions and annotation guidelines on five aspects: (1) target groups and (2) dominance, (3) perpetrator characteristics, (4) explicit presence of negative interactions, and (5) the type of consequences/effects. Definitions can be structured so that they cover a broader or narrower phenomenon, and conscious choices can be made on specifying criteria or leaving them open. We argue that the goal and exact task developers have in mind should determine how the scope of hate speech is defined. We provide an overview of the properties of datasets from hatespeechdata.com that may help select the most suitable dataset for a specific scenario.
null
null
10.18653/v1/2022.woah-1.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,007
inproceedings
diaz-etal-2022-accounting
Accounting for Offensive Speech as a Practice of Resistance
Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak
jul
2022
Seattle, Washington (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.woah-1.18/
Diaz, Mark and Amironesei, Razvan and Weidinger, Laura and Gabriel, Iason
Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)
192--202
Tasks such as toxicity detection, hate speech detection, and online harassment detection have been developed for identifying interactions involving offensive speech. In this work we articulate the need for a relational understanding of offensiveness to help distinguish denotative offensive speech from offensive speech serving as a mechanism through which marginalized communities resist oppressive social norms. Using examples from the queer community, we argue that evaluations of offensive speech must focus on the impacts of language use. We call this the cynic perspective{--} or a characteristic of language with roots in Cynic philosophy that pertains to employing offensive speech as a practice of resistance. We also explore the degree to which NLP systems may encounter limits to modeling relational context.
null
null
10.18653/v1/2022.woah-1.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,008
inproceedings
zheng-etal-2022-towards
Towards a Multi-Entity Aspect-Based Sentiment Analysis for Characterizing Directed Social Regard in Online Messaging
Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak
jul
2022
Seattle, Washington (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.woah-1.19/
Zheng, Joan and Friedman, Scott and Schmer-galunder, Sonja and Magnusson, Ian and Wheelock, Ruta and Gottlieb, Jeremy and Gomez, Diana and Miller, Christopher
Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)
203--208
Online messaging is dynamic, influential, and highly contextual, and a single post may contain contrasting sentiments towards multiple entities, such as dehumanizing one actor while empathizing with another in the same message. These complexities are important to capture for understanding the systematic abuse voiced within an online community, or for determining whether individuals are advocating for abuse, opposing abuse, or simply reporting abuse. In this work, we describe a formulation of directed social regard (DSR) as a problem of multi-entity aspect-based sentiment analysis (ME-ABSA), which models the degree of intensity of multiple sentiments that are associated with entities described by a text document. Our DSR schema is informed by Bandura's psychosocial theory of moral disengagement and by recent work in ABSA. We present a dataset of over 2,900 posts and sentences, comprising over 24,000 entities annotated for DSR over nine psychosocial dimensions by three annotators. We present a novel transformer-based ME-ABSA model for DSR, achieving favorable preliminary results on this dataset.
null
null
10.18653/v1/2022.woah-1.19
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,009
inproceedings
fryer-etal-2022-flexible
Flexible text generation for counterfactual fairness probing
Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak
jul
2022
Seattle, Washington (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.woah-1.20/
Fryer, Zee and Axelrod, Vera and Packer, Ben and Beutel, Alex and Chen, Jilin and Webster, Kellie
Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)
209--229
A common approach for testing fairness issues in text-based classifiers is through the use of counterfactuals: does the classifier output change if a sensitive attribute in the input is changed? Existing counterfactual generation methods typically rely on wordlists or templates, producing simple counterfactuals that fail to take into account grammar, context, or subtle sensitive attribute references, and could miss issues that the wordlist creators had not considered. In this paper, we introduce a task for generating counterfactuals that overcomes these shortcomings, and demonstrate how large language models (LLMs) can be leveraged to accomplish this task. We show that this LLM-based method can produce complex counterfactuals that existing methods cannot, comparing the performance of various counterfactual generation methods on the Civil Comments dataset and showing their value in evaluating a toxicity classifier.
null
null
10.18653/v1/2022.woah-1.20
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,010
inproceedings
moldovan-etal-2022-users
Users Hate Blondes: Detecting Sexism in User Comments on Online {R}omanian News
Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak
jul
2022
Seattle, Washington (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.woah-1.21/
Moldovan, Andreea and Cs{\"u}r{\"o}s, Karla and Bucur, Ana-Maria and Bercuci, Loredana
Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)
230--230
Romania ranks almost last in Europe when it comes to gender equality in political representation, with about 10{\%} fewer women in politics than the E.U. average. We proceed from the assumption that this underrepresentation is also influenced by the sexism and verbal abuse female politicians face in the public sphere, especially in online media. We collect a novel dataset with sexist comments in Romanian language from newspaper articles about Romanian female politicians and propose baseline models using classical machine learning models and fine-tuned pretrained transformer models for the classification of sexist language in the online medium.
null
null
10.18653/v1/2022.woah-1.21
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,011
inproceedings
sachdeva-etal-2022-targeted
Targeted Identity Group Prediction in Hate Speech Corpora
Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak
jul
2022
Seattle, Washington (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.woah-1.22/
Sachdeva, Pratik and Barreto, Renata and Von Vacano, Claudia and Kennedy, Chris
Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)
231--244
The past decade has seen an abundance of work seeking to detect, characterize, and measure online hate speech. A related, but less studied problem, is the detection of identity groups targeted by that hate speech. Predictive accuracy on this task can supplement additional analyses beyond hate speech detection, motivating its study. Using the Measuring Hate Speech corpus, which provided annotations for targeted identity groups, we created neural network models to perform multi-label binary prediction of identity groups targeted by a comment. Specifically, we studied 8 broad identity groups and 12 identity sub-groups within race and gender identity. We found that these networks exhibited good predictive performance, achieving ROC AUCs of greater than 0.9 and PR AUCs of greater than 0.7 on several identity groups. We validated their performance on HateCheck and Gab Hate Corpora, finding that predictive performance generalized in most settings. We additionally examined the performance of the model on comments targeting multiple identity groups. Our results demonstrate the feasibility of simultaneously identifying targeted groups in social media comments.
null
null
10.18653/v1/2022.woah-1.22
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,012
inproceedings
ramesh-etal-2022-revisiting
Revisiting Queer Minorities in Lexicons
Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak
jul
2022
Seattle, Washington (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.woah-1.23/
Ramesh, Krithika and Kumar, Sumeet and Khudabukhsh, Ashiqur
Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)
245--251
Lexicons play an important role in content moderation often being the first line of defense. However, little or no literature exists in analyzing the representation of queer-related words in them. In this paper, we consider twelve well-known lexicons containing inappropriate words and analyze how gender and sexual minorities are represented in these lexicons. Our analyses reveal that several of these lexicons barely make any distinction between pejorative and non-pejorative queer-related words. We express concern that such unfettered usage of non-pejorative queer-related words may impact queer presence in mainstream discourse. Our analyses further reveal that the lexicons have poor overlap in queer-related words. We finally present a quantifiable measure of consistency and show that several of these lexicons are not consistent in how they include (or omit) queer-related words.
null
null
10.18653/v1/2022.woah-1.23
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,013
inproceedings
nozza-etal-2022-hate
{HATE}-{ITA}: Hate Speech Detection in {I}talian Social Media Text
Narang, Kanika and Mostafazadeh Davani, Aida and Mathias, Lambert and Vidgen, Bertie and Talat, Zeerak
jul
2022
Seattle, Washington (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.woah-1.24/
Nozza, Debora and Bianchi, Federico and Attanasio, Giuseppe
Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)
252--260
Online hate speech is a dangerous phenomenon that can (and should) be promptly counteracted properly. While Natural Language Processing supplies appropriate algorithms for trying to reach this objective, all research efforts are directed toward the English language. This strongly limits the classification power on non-English languages. In this paper, we test several learning frameworks for identifying hate speech in Italian text. We release HATE-ITA, a multi-language model trained on a large set of English data and available Italian datasets. HATE-ITA performs better than mono-lingual models and seems to adapt well also on language-specific slurs. We hope our findings will encourage the research in other mid-to-low resource communities and provide a valuable benchmarking tool for the Italian community.
null
null
10.18653/v1/2022.woah-1.24
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,014
inproceedings
zhang-etal-2022-changes
Changes in Tweet Geolocation over Time: A Study with Carmen 2.0
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.1/
Zhang, Jingyu and DeLucia, Alexandra and Dredze, Mark
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
1--14
Researchers across disciplines use Twitter geolocation tools to filter data for desired locations. These tools have largely been trained and tested on English tweets, often originating in the United States from almost a decade ago. Despite the importance of these tools for data curation, the impact of tweet language, country of origin, and creation date on tool performance remains largely unknown. We explore these issues with Carmen, a popular tool for Twitter geolocation. To support this study we introduce Carmen 2.0, a major update which includes the incorporation of GeoNames, a gazetteer that provides much broader coverage of locations. We evaluate using two new Twitter datasets, one for multilingual, multiyear geolocation evaluation, and another for usage trends over time. We found that language, country of origin, and time do impact geolocation tool performance.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,016
inproceedings
collard-etal-2022-extracting
Extracting Mathematical Concepts from Text
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.2/
Collard, Jacob and de Paiva, Valeria and Fong, Brendan and Subrahmanian, Eswaran
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
15--23
We investigate different systems for extracting mathematical entities from English texts in the mathematical field of category theory as a first step for constructing a mathematical knowledge graph. We consider four different term extractors and compare their results. This small experiment showcases some of the issues with the construction and evaluation of terms extracted from noisy domain text. We also make available two open corpora in research mathematics, in particular in category theory: a small corpus of 755 abstracts from the journal TAC (3,188 sentences), and a larger corpus from the nLab community wiki (15,000 sentences).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,017
inproceedings
ehghaghi-etal-2022-data
Data-driven Approach to Differentiating between Depression and Dementia from Noisy Speech and Language Data
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.3/
Ehghaghi, Malikeh and Rudzicz, Frank and Novikova, Jekaterina
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
24--37
A significant number of studies apply acoustic and linguistic characteristics of human speech as prominent markers of dementia and depression. However, studies on discriminating depression from dementia are rare. Co-morbid depression is frequent in dementia and these clinical conditions share many overlapping symptoms, but the ability to distinguish between depression and dementia is essential as depression is often curable. In this work, we investigate the ability of clustering approaches in distinguishing between depression and dementia from human speech. We introduce a novel aggregated dataset, which combines narrative speech data from multiple conditions, i.e., Alzheimer's disease, mild cognitive impairment, healthy control, and depression. We compare linear and non-linear clustering approaches and show that non-linear clustering techniques distinguish better between distinct disease clusters. Our interpretability analysis shows that the main differentiating symptoms between dementia and depression are acoustic abnormality, repetitiveness (or circularity) of speech, word finding difficulty, coherence impairment, and differences in lexical complexity and richness.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,018
inproceedings
eggleston-oconnor-2022-cross
Cross-Dialect Social Media Dependency Parsing for Social Scientific Entity Attribute Analysis
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.4/
Eggleston, Chloe and O{'}Connor, Brendan
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
38--50
In this paper, we utilize recent advancements in social media natural language processing to obtain state-of-the-art syntactic dependency parsing results for social media English. We observe performance gains of 3.4 UAS and 4.0 LAS against the previous state-of-the-art as well as less disparity between African-American and Mainstream American English dialects. We demonstrate the computational social scientific utility of this parser for the task of socially embedded entity attribute analysis: for a specified entity, derive its semantic relationships from parses' rich syntax, and accumulate and compare them across social variables. We conduct a case study on politicized views of U.S. official Anthony Fauci during the COVID-19 pandemic.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,019
inproceedings
novikova-2022-impact
Impact of Environmental Noise on {A}lzheimer's Disease Detection from Speech: Should You Let a Baby Cry?
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.5/
Novikova, Jekaterina
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
51--61
Research related to automatically detecting Alzheimer's disease (AD) is important, given the high prevalence of AD and the high cost of traditional methods. Since AD significantly affects the acoustics of spontaneous speech, speech processing and machine learning (ML) provide promising techniques for reliably detecting AD. However, speech audio may be affected by different types of background noise and it is important to understand how the noise influences the accuracy of ML models detecting AD from speech. In this paper, we study the effect of fifteen types of environmental noise from five different categories on the performance of four ML models trained with three types of acoustic representations. We perform a thorough analysis showing how ML models and acoustic features are affected by different types of acoustic noise. We show that acoustic noise is not necessarily harmful - certain types of noise are beneficial for AD detection models and help increase accuracy by up to 4.8{\%}. We provide recommendations on how to utilize acoustic noise in order to achieve the best performance results with the ML models deployed in the real world.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,020
inproceedings
pranesh-2022-exploring
Exploring Multimodal Features and Fusion Strategies for Analyzing Disaster Tweets
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.6/
Pranesh, Raj
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
62--68
Social media platforms, such as Twitter, often provide firsthand news during the outbreak of a crisis. It is essential to process these facts quickly to plan the response efforts for minimal loss. Therefore, in this paper, we present an analysis of various multimodal feature fusion techniques to analyze and classify disaster tweets into multiple crisis events via transfer learning. In our study, we utilized three image models pre-trained on the ImageNet dataset and three fine-tuned language models to learn the visual and textual features of the data and combine them to make predictions. We have presented a systematic analysis of multiple intra-modal and cross-modal fusion strategies and their effect on the performance of the multimodal disaster classification system. In our experiment, we used 8,242 disaster tweets, each comprising image and text data, with five disaster event classes. The results show that the multimodal approach with a transformer-attention mechanism and factorized bilinear pooling (FBP) for intra-modal and cross-modal feature fusion respectively achieved the best performance.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,021
inproceedings
li-etal-2022-ntulm
{NTULM}: Enriching Social Media Text Representations with Non-Textual Units
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.7/
Li, Jinning and Mishra, Shubhanshu and El-Kishky, Ahmed and Mehta, Sneha and Kulkarni, Vivek
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
69--82
On social media, additional context is often present in the form of annotations and meta-data such as the post's author, mentions, Hashtags, and hyperlinks. We refer to these annotations as Non-Textual Units (NTUs). We posit that NTUs provide social context beyond their textual semantics and leveraging these units can enrich social media text representations. In this work we construct an NTU-centric social heterogeneous network to co-embed NTUs. We then principally integrate these NTU embeddings into a large pretrained language model by fine-tuning with these additional units. This adds context to noisy short-text social media. Experiments show that utilizing NTU-augmented text representations significantly outperforms existing text-only baselines by 2-5{\%} relative points on many downstream tasks, highlighting the importance of context to social media NLP. We also highlight that including NTU context in the initial layers of the language model alongside text is better than using it after the text embedding is generated. Our work leads to the generation of holistic general-purpose social media content embeddings.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,022
inproceedings
hebert-etal-2022-robust
Robust Candidate Generation for Entity Linking on Short Social Media Texts
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.8/
Hebert, Liam and Makki, Raheleh and Mishra, Shubhanshu and Saghir, Hamidreza and Kamath, Anusha and Merhav, Yuval
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
83--89
Entity Linking (EL) is the gateway into Knowledge Bases. Recent advances in EL utilize dense retrieval approaches for Candidate Generation, which addresses some of the shortcomings of the Lookup based approach of matching NER mentions against pre-computed dictionaries. In this work, we show that in the domain of Tweets, such methods suffer as users often include informal spelling, limited context, and lack of specificity, among other issues. We investigate these challenges on a large and recent Tweets benchmark for EL, empirically evaluate lookup and dense retrieval approaches, and demonstrate a hybrid solution using long contextual representation from Wikipedia is necessary to achieve considerable gains over previous work, achieving 0.93 recall.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,023
inproceedings
li-etal-2022-transpos
{T}rans{POS}: Transformers for Consolidating Different {POS} Tagset Datasets
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.9/
Li, Alex and Bankole-Hameed, Ilyas and Singh, Ranadeep and Ng, Gabriel and Gupta, Akshat
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
90--95
In hope of expanding training data, researchers often want to merge two or more datasets that are created using different labeling schemes. This paper considers two datasets that label part-of-speech (POS) tags under different tagging schemes and leverage the supervised labels of one dataset to help generate labels for the other dataset. This paper further discusses the theoretical difficulties of this approach and proposes a novel supervised architecture employing Transformers to tackle the problem of consolidating two completely disjoint datasets. The results diverge from initial expectations and discourage exploration into the use of disjoint labels to consolidate datasets with different labels.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,024
inproceedings
fu-etal-2022-effective-performant
An Effective, Performant Named Entity Recognition System for Noisy Business Telephone Conversation Transcripts
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.10/
Fu, Xue-Yong and Chen, Cheng and Laskar, Md Tahmid Rahman and Tn, Shashi Bhushan and Corston-Oliver, Simon
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
96--100
We present a simple yet effective method to train a named entity recognition (NER) model that operates on business telephone conversation transcripts that contain noise due to the nature of spoken conversation and artifacts of automatic speech recognition. We first fine-tune LUKE, a state-of-the-art NER model, on a limited amount of transcripts, then use it as the teacher model to teach a smaller DistilBERT-based student model using a large amount of weakly labeled data and a small amount of human-annotated data. The model achieves high accuracy while also satisfying the practical constraints for inclusion in a commercial telephony product: real-time performance when deployed on cost-effective CPUs rather than GPUs. In this paper, we introduce the fine-tune-then-distill method for entity recognition on real-world noisy data to deploy our NER model in a limited-budget production environment. By generating pseudo-labels using a large teacher model pre-trained on typed text while fine-tuned on noisy speech text to train a smaller student model, we make the student model 75x faster while preserving 99.09{\%} of its accuracy. These findings demonstrate that our proposed approach is very effective in limited-budget scenarios to alleviate the need for human labeling of a large amount of noisy data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,025
inproceedings
khan-etal-2022-leveraging
Leveraging Semantic and Sentiment Knowledge for User-Generated Text Sentiment Classification
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.11/
Khan, Jawad and Ahmad, Niaz and Alam, Aftab and Lee, Youngmoon
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
101--105
Sentiment analysis is essential to process and understand unstructured user-generated content for better data analytics and decision-making. State-of-the-art techniques suffer from a high dimensional feature space because of noisy and irrelevant features from the noisy user-generated text. Our goal is to mitigate such problems using DNN-based text classification and popular word embeddings (Glove, fastText, and BERT) in conjunction with statistical filter feature selection (mRMR and PCA) to select relevant sentiment features and pick out unessential/irrelevant ones. We propose an effective way of integrating the traditional feature construction methods with the DNN-based methods to improve the performance of sentiment classification. We evaluate our model on three real-world benchmark datasets demonstrating that our proposed method improves the classification performance of several existing methods.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,026
inproceedings
labat-etal-2022-emotional
An Emotional Journey: Detecting Emotion Trajectories in {D}utch Customer Service Dialogues
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.12/
Labat, Sofie and Hadifar, Amir and Demeester, Thomas and Hoste, Veronique
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
106--112
The ability to track fine-grained emotions in customer service dialogues has many real-world applications, but has not been studied extensively. This paper measures the potential of prediction models on that task, based on a real-world dataset of Dutch Twitter conversations in the domain of customer service. We find that modeling emotion trajectories has a small, but measurable benefit compared to predictions based on isolated turns. The models used in our study are shown to generalize well to different companies and economic sectors.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,027
inproceedings
orlov-artemova-2022-supervised
Supervised and Unsupervised Evaluation of Synthetic Code-Switching
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.13/
Orlov, Evgeny and Artemova, Ekaterina
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
113--123
Code-switching (CS) is a phenomenon of mixing words and phrases from multiple languages within a single sentence or conversation. The ever-growing amount of CS communication among multilingual speakers in social media has highlighted the need to adapt existing NLP products for CS speakers and led to a rising interest in solving CS NLP tasks. A large number of contemporary approaches use synthetic CS data for training. As previous work has shown the positive effect of pretraining on high-quality CS data, the task of evaluating synthetic CS becomes crucial. In this paper, we address the task of evaluating synthetic CS in two settings. In the supervised setting, we apply Hinglish-finetuned models to solve the quality rating prediction task of the HinglishEval competition and establish a new SOTA. In the unsupervised setting, we employ the method of acceptability measures with the same models. We find that in both settings, models finetuned on CS data consistently outperform their original counterparts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,028
inproceedings
mubarak-etal-2022-arabgend
{A}rab{G}end: Gender Analysis and Inference on {A}rabic {T}witter
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.14/
Mubarak, Hamdy and Chowdhury, Shammur Absar and Alam, Firoj
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
124--135
Gender analysis of Twitter can reveal important socio-cultural differences between male and female users. There has been a significant effort to analyze and automatically infer gender in the past for most widely spoken languages' content; however, to our knowledge, very limited work has been done for Arabic. In this paper, we perform an extensive analysis of differences between male and female users on the Arabic Twitter-sphere. We study differences in user engagement, topics of interest, and the gender gap in professions. Along with gender analysis, we also propose a method to infer gender by utilizing usernames, profile pictures, tweets, and networks of friends. In order to do so, we manually annotated gender and locations for {\textasciitilde}166K Twitter accounts associated with {\textasciitilde}92K user locations, which we plan to make publicly available. Our proposed gender inference method achieves an F1 score of 82.1{\%} (47.3{\%} higher than the majority baseline). We also developed a demo and made it publicly available.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,029
inproceedings
sampath-kumar-etal-2022-automatic
Automatic Identification of 5{C} Vaccine Behaviour on Social Media
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.15/
Sampath Kumar, Ajay Hemanth and Shausan, Aminath and Demartini, Gianluca and Rahimi, Afshin
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
136--146
Monitoring vaccine behaviour through social media can guide health policy. We present a new dataset of 9471 tweets posted in Australia from 2020 to 2022, annotated with sentiment toward vaccines and also 5C, the five types of behaviour toward vaccines, a scheme commonly used in health psychology literature. We benchmark our dataset using BERT and Gradient Boosting Machine and show that jointly training both sentiment and 5C tasks (F1=48) outperforms individual training (F1=39) in this highly imbalanced data. Our sentiment analysis indicates close correlation between the sentiments and prominent events during the pandemic. We hope that our dataset and benchmark models will inform further work in online monitoring of vaccine behaviour. The dataset and benchmark methods are accessible online.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,030
inproceedings
dimeski-rahimi-2022-automatic
Automatic Extraction of Structured Mineral Drillhole Results from Unstructured Mining Company Reports
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.16/
Dimeski, Adam and Rahimi, Afshin
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
147--153
Aggregate mining exploration results can help companies and governments to optimise and police mining permits and operations, a necessity for the transition to a renewable energy future; however, these results are buried in unstructured text. We present a novel dataset from 23 Australian mining company reports, framing the extraction of structured drillhole information as a sequence labelling task. Our two benchmark models, based on Bi-LSTM-CRF and BERT, show their effectiveness in this task with F1 scores of 77{\%} and 87{\%}, respectively. Our dataset and benchmarks are accessible online.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,031
inproceedings
s-shrivastava-2022-kanglish
{\textquotedblleft}Kanglish alli names!{\textquotedblright} Named Entity Recognition for {K}annada-{E}nglish Code-Mixed Social Media Data
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.17/
S, Sumukh and Shrivastava, Manish
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
154--161
Code-mixing (CM) is a frequently observed phenomenon on social media platforms in multilingual societies such as India. While the increase in code-mixed content on these platforms provides a good amount of data for studying various aspects of code-mixing, the lack of automated text analysis tools makes such studies difficult. To overcome this, tools such as language identifiers and parts-of-speech (POS) taggers for analysing code-mixed data have been developed. One such tool is Named Entity Recognition (NER), an important Natural Language Processing (NLP) task, which is not only a subtask of Information Extraction, but is also needed for downstream NLP tasks such as semantic role labeling. While entity extraction from social media data is generally difficult due to its informal nature, code-mixed data further complicates the problem due to its informal, unstructured and incomplete information. In this work, we present the first ever corpus for Kannada-English code-mixed social media data with the corresponding named entity tags for NER. We provide strong baselines with machine learning classification models such as CRF, Bi-LSTM, and Bi-LSTM-CRF on our corpus with word, character, and lexical features.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,032
inproceedings
s-etal-2022-span
Span Extraction Aided Improved Code-mixed Sentiment Classification
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.18/
S, Ramaneswaran and Benhur, Sean and Ghosh, Sreyan
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
162--170
Sentiment classification is a fundamental NLP task of detecting the sentiment polarity of a given text. In this paper we show how solving sentiment span extraction as an auxiliary task can help improve final sentiment classification performance in a low-resource code-mixed setup. To be precise, we don't solve a simple multi-task learning objective, but rather design a unified transformer framework that exploits the bidirectional connection between the two tasks simultaneously. To facilitate research in this direction we release a gold-standard human-annotated sentiment span extraction dataset for Tamil-English code-switched texts. Extensive experiments and strong baselines show that our proposed approach outperforms the best performing MTL baseline on sentiment and span prediction by 1.27{\%} and 2.78{\%}, respectively. We also establish the generalizability of our approach on the Twitter Sentiment Extraction dataset. We make our code and data publicly available on GitHub.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,033
inproceedings
das-etal-2022-adbert
{A}d{BERT}: An Effective Few Shot Learning Framework for Aligning Tweets to Superbowl Advertisements
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.19/
Das, Debarati and Chenchu, Roopana and Abdollahi, Maral and Huh, Jisu and Srivastava, Jaideep
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
171--179
The tremendous increase in social media usage for sharing Television (TV) experiences has provided a unique opportunity in the Public Health and Marketing sectors to understand viewer engagement and attitudes through viewer-generated content on social media. However, this opportunity also comes with associated technical challenges. Specifically, given a televised event and related tweets about this event, we need methods to effectively align these tweets and the corresponding event. In this paper, we consider the specific ecosystem of the Superbowl 2020 and map viewer tweets to advertisements they are referring to. Our proposed model, AdBERT, is an effective few-shot learning framework that is able to handle the technical challenges of establishing ad-relatedness, class imbalance as well as the scarcity of labeled data. As part of this study, we have curated and developed two datasets that can prove to be useful for Social TV research: 1) dataset of ad-related tweets and 2) dataset of ad descriptions of Superbowl advertisements. Explaining connections to SentenceBERT, we describe the advantages of AdBERT that allow us to make the most out of a challenging and interesting dataset which we will open-source along with the models developed in this paper.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,034
inproceedings
vielsted-etal-2022-increasing
Increasing Robustness for Cross-domain Dialogue Act Classification on Social Media Data
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.20/
Vielsted, Marcus and Wallenius, Nikolaj and van der Goot, Rob
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
180--193
Automatically detecting the intent of an utterance is important for various downstream natural language processing tasks. This task is also called Dialogue Act Classification (DAC) and was primarily researched on spoken one-to-one conversations. The rise of social media has made this an interesting data source to explore within DAC, although it comes with some difficulties: non-standard form, variety of language types (across and within platforms), and quickly evolving norms. We therefore investigate the robustness of DAC on social media data in this paper. More concretely, we provide a benchmark that includes cross-domain data splits, as well as a variety of improvements on our transformer-based baseline. Our experiments show that lexical normalization is not beneficial in this setup, balancing the labels through resampling is beneficial in some cases, and incorporating context is crucial for this task and leads to the highest performance improvements (7 F1 percentage points in-domain and 20 cross-domain).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,035
inproceedings
dao-etal-2022-disfluency
Disfluency Detection for {V}ietnamese
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.21/
Dao, Mai Hoang and Truong, Thinh Hung and Nguyen, Dat Quoc
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
194--200
In this paper, we present the first empirical study for Vietnamese disfluency detection. To conduct this study, we first create a disfluency detection dataset for Vietnamese, with manual annotations over two disfluency types. We then empirically perform experiments using strong baseline models, and find that: automatic Vietnamese word segmentation improves the disfluency detection performances of the baselines, and the highest performance results are obtained by fine-tuning pre-trained language models in which the monolingual model PhoBERT for Vietnamese does better than the multilingual model XLM-R.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,036
inproceedings
marcuzzo-etal-2022-multi
A multi-level approach for hierarchical Ticket Classification
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.22/
Marcuzzo, Matteo and Zangari, Alessandro and Schiavinato, Michele and Giudice, Lorenzo and Gasparetto, Andrea and Albarelli, Andrea
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
201--214
The automatic categorization of support tickets is a fundamental tool for modern businesses. Such requests are most commonly composed of concise textual descriptions that are noisy and filled with technical jargon. In this paper, we test the effectiveness of pre-trained LMs for the classification of issues related to software bugs. First, we test several strategies to produce single, ticket-wise representations starting from their BERT-generated word embeddings. Then, we showcase a simple yet effective way to build a multi-level classifier for the categorization of documents with two hierarchically dependent labels. We experiment on a public bugs dataset and compare our results with standard BERT-based and traditional SVM classifiers. Our findings suggest that both embedding strategies and hierarchical label dependencies considerably impact classification accuracy.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,037
inproceedings
laippala-etal-2022-towards
Towards better structured and less noisy Web data: Oscar with Register annotations
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.23/
Laippala, Veronika and Salmela, Anna and R{\"o}nnqvist, Samuel and Aji, Alham Fikri and Chang, Li-Hsin and Dhifallah, Asma and Goulart, Larissa and Kortelainen, Henna and P{\`a}mies, Marc and Prina Dutra, Deise and Skantsi, Valtteri and Sutawika, Lintang and Pyysalo, Sampo
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
215--221
Web-crawled datasets are known to be noisy, as they feature a wide range of language use covering both user-generated and professionally edited content as well as noise originating from the crawling process. This article presents one solution to reduce this noise by using automatic register (genre) identification - whether the texts are, e.g., forum discussions, lyrical or how-to pages. We apply the multilingual register identification model by R{\"o}nnqvist et al. (2021) and label the widely used Oscar dataset. Additionally, we evaluate the model against eight new languages, showing that the performance is comparable to previous findings on a restricted set of languages. Finally, we present and apply a machine learning method for further cleaning text files originating from Web crawls from remains of boilerplate and other elements not belonging to the main text of the Web page. The register-labeled and cleaned dataset covers 351 million documents in 14 languages and is available at \url{https://huggingface.co/datasets/TurkuNLP/register_oscar}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,038
inproceedings
rode-hasinger-etal-2022-true
True or False? Detecting False Information on Social Media Using Graph Neural Networks
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.24/
Rode-Hasinger, Samyo and Kruspe, Anna and Zhu, Xiao Xiang
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
222--229
In recent years, false information such as fake news, rumors and conspiracy theories on many relevant issues in society have proliferated. This phenomenon has been significantly amplified by the fast and inexorable spread of misinformation on social media and instant messaging platforms. With this work, we contribute to containing the negative impact on society caused by fake news. We propose a graph neural network approach for detecting false information on Twitter. We leverage the inherent structure of graph-based social media data aggregating information from short text messages (tweets), user profiles and social interactions. We use knowledge from pre-trained language models efficiently, and show that user-defined descriptions of profiles provide useful information for improved prediction performance. The empirical results indicate that our proposed framework significantly outperforms text- and user-based methods on misinformation datasets from two different domains, even in a difficult multilingual setting.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,039
inproceedings
aggarwal-zesch-2022-analyzing
Analyzing the Real Vulnerability of Hate Speech Detection Systems against Targeted Intentional Noise
null
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.wnut-1.25/
Aggarwal, Piush and Zesch, Torsten
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)
230--242
Hate speech detection systems have been shown to be vulnerable against obfuscation attacks, where a potential hater tries to circumvent detection by deliberately introducing noise in their posts. In previous work, noise is often introduced for all words (which is likely overestimating the impact) or single untargeted words (likely underestimating the vulnerability). We perform a user study asking people to select words they would obfuscate in a post. Using this realistic setting, we find that the real vulnerability of hate speech detection systems against deliberately introduced noise is almost as high as when using a whitebox attack and much more severe than when using a non-targeted dictionary. Our results are based on 4 different datasets, 12 different obfuscation strategies, and hate speech detection systems using different paradigms.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,040
inproceedings
wang-etal-2022-uncovering
Uncovering Surprising Event Boundaries in Narratives
Clark, Elizabeth and Brahman, Faeze and Iyyer, Mohit
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.wnu-1.1/
Wang, Zhilin and Jafarpour, Anna and Sap, Maarten
Proceedings of the 4th Workshop of Narrative Understanding (WNU2022)
1--12
When reading stories, people can naturally identify sentences in which a new event starts, i.e., event boundaries, using their knowledge of how events typically unfold, but a computational model to detect event boundaries is not yet available. We characterize and detect sentences with expected or surprising event boundaries in an annotated corpus of short diary-like stories, using a model that combines commonsense knowledge and narrative flow features with a RoBERTa classifier. Our results show that, while commonsense and narrative features can help improve performance overall, detecting event boundaries that are more subjective remains challenging for our model. We also find that sentences marking surprising event boundaries are less likely to be causally related to the preceding sentence, but are more likely to express emotional reactions of story characters, compared to sentences with no event boundary.
null
null
10.18653/v1/2022.wnu-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,042
inproceedings
wei-etal-2022-compositional
Compositional Generalization for Kinship Prediction through Data Augmentation
Clark, Elizabeth and Brahman, Faeze and Iyyer, Mohit
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.wnu-1.2/
Wei, Kangda and Ghosh, Sayan and Srivastava, Shashank
Proceedings of the 4th Workshop of Narrative Understanding (WNU2022)
13--19
Transformer-based models have shown promising performance in numerous NLP tasks. However, recent work has shown the limitation of such models in showing compositional generalization, which requires models to generalize to novel compositions of known concepts. In this work, we explore two strategies for compositional generalization on the task of kinship prediction from stories, (1) data augmentation and (2) predicting and using intermediate structured representation (in form of kinship graphs). Our experiments show that data augmentation boosts generalization performance by around 20{\%} on average relative to a baseline model from prior work not using these strategies. However, predicting and using intermediate kinship graphs leads to a deterioration in the generalization of kinship prediction by around 50{\%} on average relative to models that only leverage data augmentation.
null
null
10.18653/v1/2022.wnu-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,043
inproceedings
wang-torres-2022-helpful
How to be Helpful on Online Support Forums?
Clark, Elizabeth and Brahman, Faeze and Iyyer, Mohit
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.wnu-1.3/
Wang, Zhilin and Torres, Pablo E.
Proceedings of the 4th Workshop of Narrative Understanding (WNU2022)
20--28
Internet forums such as Reddit offer people a platform to ask for advice when they encounter various issues at work, school or in relationships. Telling helpful comments apart from unhelpful comments to these advice-seeking posts can help people and dialogue agents to become more helpful in offering advice. We propose a dataset that contains both helpful and unhelpful comments in response to such requests. We then relate helpfulness to the closely related construct of empathy. Finally, we analyze the language features that are associated with helpful and unhelpful comments.
null
null
10.18653/v1/2022.wnu-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,044
inproceedings
rosa-etal-2022-gpt
{GPT}-2-based Human-in-the-loop Theatre Play Script Generation
Clark, Elizabeth and Brahman, Faeze and Iyyer, Mohit
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.wnu-1.4/
Rosa, Rudolf and Schmidtov{\'a}, Patr{\'i}cia and Du{\v{s}}ek, Ond{\v{r}}ej and Musil, Tom{\'a}{\v{s}} and Mare{\v{c}}ek, David and Obaid, Saad and Nov{\'a}kov{\'a}, Marie and Voseck{\'a}, Kl{\'a}ra and Dole{\v{z}}al, Josef
Proceedings of the 4th Workshop of Narrative Understanding (WNU2022)
29--37
We experiment with adapting generative language models for the generation of long coherent narratives in the form of theatre plays. Since fully automatic generation of whole plays is not currently feasible, we created an interactive tool that allows a human user to steer the generation somewhat while minimizing intervention. We pursue two approaches to long-text generation: a flat generation with summarization of context, and a hierarchical text-to-text two-stage approach, where a synopsis is generated first and then used to condition generation of the final script. Our preliminary results and discussions with theatre professionals show improvements over vanilla language model generation, but also identify important limitations of our approach.
null
null
10.18653/v1/2022.wnu-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,045
inproceedings
hosseini-etal-2022-gispy
{G}is{P}y: A Tool for Measuring Gist Inference Score in Text
Clark, Elizabeth and Brahman, Faeze and Iyyer, Mohit
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.wnu-1.5/
Hosseini, Pedram and Wolfe, Christopher and Diab, Mona and Broniatowski, David
Proceedings of the 4th Workshop of Narrative Understanding (WNU2022)
38--46
Decision making theories such as Fuzzy-Trace Theory (FTT) suggest that individuals tend to rely on gist, or bottom-line meaning, in the text when making decisions. In this work, we delineate the process of developing GisPy, an open-source tool in Python for measuring the Gist Inference Score (GIS) in text. Evaluation of GisPy on documents in three benchmarks from the news and scientific text domains demonstrates that scores generated by our tool significantly distinguish low vs. high gist documents. Our tool is publicly available to use at: https://github.com/phosseini/GisPy.
null
null
10.18653/v1/2022.wnu-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,046
inproceedings
ganti-etal-2022-narrative
Narrative Detection and Feature Analysis in Online Health Communities
Clark, Elizabeth and Brahman, Faeze and Iyyer, Mohit
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.wnu-1.7/
Ganti, Achyutarama and Wilson, Steven and Ma, Zexin and Zhao, Xinyan and Ma, Rong
Proceedings of the 4th Workshop of Narrative Understanding (WNU2022)
57--65
Narratives have been shown to be an effective way to communicate health risks and promote health behavior change, and given the growing amount of health information being shared on social media, it is crucial to study health-related narratives in social media. However, expert identification of a large number of narrative texts is a time-consuming process, and larger scale studies on the use of narratives may be enabled through automatic text classification approaches. Prior work has demonstrated that automatic narrative detection is possible, but modern deep learning approaches have not been used for this task in the domain of online health communities. Therefore, in this paper, we explore the use of deep learning methods to automatically classify the presence of narratives in social media posts, finding that they outperform previously proposed approaches. We also find that in many cases, these models generalize well across posts from different health organizations. Finally, in order to better understand the increase in performance achieved by deep learning models, we use feature analysis techniques to explore the features that most contribute to narrative detection for posts in online health communities.
null
null
10.18653/v1/2022.wnu-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,048
inproceedings
van-duijn-etal-2022-looking
Looking from the Inside: How Children Render Character's Perspectives in Freely Told Fantasy Stories
Clark, Elizabeth and Brahman, Faeze and Iyyer, Mohit
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.wnu-1.8/
van Duijn, Max and van Dijk, Bram and Spruit, Marco
Proceedings of the 4th Workshop of Narrative Understanding (WNU2022)
66--76
Story characters not only perform actions, they typically also perceive, feel, think, and communicate. Here we are interested in how children render characters' perspectives when freely telling a fantasy story. Drawing on a sample of 150 narratives elicited from Dutch children aged 4-12, we provide an inventory of 750 instances of character-perspective representation (CPR), distinguishing fourteen different types. Firstly, we observe that character perspectives are ubiquitous in freely told children's stories and take more varied forms than traditional frameworks can accommodate. Secondly, we discuss variation in the use of different types of CPR across age groups, finding that character perspectives are being fleshed out in more advanced and diverse ways as children grow older. Thirdly, we explore whether such variation can be meaningfully linked to automatically extracted linguistic features, thereby probing the potential for using automated tools from NLP to extract and classify character perspectives in children's stories.
null
null
10.18653/v1/2022.wnu-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,049
inproceedings
kocmi-etal-2022-findings
Findings of the 2022 Conference on Machine Translation ({WMT}22)
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.1/
Kocmi, Tom and Bawden, Rachel and Bojar, Ond{\v{r}}ej and Dvorkovich, Anton and Federmann, Christian and Fishel, Mark and Gowda, Thamme and Graham, Yvette and Grundkiewicz, Roman and Haddow, Barry and Knowles, Rebecca and Koehn, Philipp and Monz, Christof and Morishita, Makoto and Nagata, Masaaki and Nakazawa, Toshiaki and Nov{\'a}k, Michal and Popel, Martin and Popovi{\'c}, Maja
Proceedings of the Seventh Conference on Machine Translation (WMT)
1--45
This paper presents the results of the General Machine Translation Task organised as part of the Conference on Machine Translation (WMT) 2022. In the general MT task, participants were asked to build machine translation systems for any of 11 language pairs, to be evaluated on test sets consisting of four different domains. We evaluate system outputs with human annotators using two different techniques: reference-based direct assessment (DA) and a combination of DA and scalar quality metric (DA+SQM).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,051
inproceedings
freitag-etal-2022-results
Results of {WMT}22 Metrics Shared Task: Stop Using {BLEU} {--} Neural Metrics Are Better and More Robust
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.2/
Freitag, Markus and Rei, Ricardo and Mathur, Nitika and Lo, Chi-kiu and Stewart, Craig and Avramidis, Eleftherios and Kocmi, Tom and Foster, George and Lavie, Alon and Martins, Andr{\'e} F. T.
Proceedings of the Seventh Conference on Machine Translation (WMT)
46--68
This paper presents the results of the WMT22 Metrics Shared Task. Participants submitting automatic MT evaluation metrics were asked to score the outputs of the translation systems competing in the WMT22 News Translation Task on four different domains: news, social, ecommerce, and chat. All metrics were evaluated on how well they correlate with human ratings at the system and segment level. Similar to last year, we acquired our own human ratings based on expert-based human evaluation via Multidimensional Quality Metrics (MQM). This setup had several advantages, among other things: (i) expert-based evaluation is more reliable, (ii) we extended the pool of translations by 5 additional translations based on MBR decoding or rescoring which are challenging for current metrics. In addition, we initiated a challenge set subtask, where participants had to create contrastive test suites for evaluating metrics' ability to capture and penalise specific types of translation errors. Finally, we present an extensive analysis on how well metrics perform on three language pairs: English to German, English to Russian and Chinese to English. The results demonstrate the superiority of neural-based learned metrics and demonstrate again that overlap metrics like BLEU, spBLEU or chrF correlate poorly with human ratings. The results also reveal that neural-based metrics are remarkably robust across different domains and challenges.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,052
inproceedings
zerva-etal-2022-findings
Findings of the {WMT} 2022 Shared Task on Quality Estimation
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.3/
Zerva, Chrysoula and Blain, Fr{\'e}d{\'e}ric and Rei, Ricardo and Lertvittayakumjorn, Piyawat and C. de Souza, Jos{\'e} G. and Eger, Steffen and Kanojia, Diptesh and Alves, Duarte and Or{\u{a}}san, Constantin and Fomicheva, Marina and Martins, Andr{\'e} F. T. and Specia, Lucia
Proceedings of the Seventh Conference on Machine Translation (WMT)
69--99
We report the results of the WMT 2022 shared task on Quality Estimation, in which the challenge is to predict the quality of the output of neural machine translation systems at the word and sentence levels, without access to reference translations. This edition introduces a few novel aspects and extensions that aim to enable more fine-grained, and explainable quality estimation approaches. We introduce an updated quality annotation scheme using Multidimensional Quality Metrics to obtain sentence- and word-level quality scores for three language pairs. We also extend the Direct Assessments and post-edit data (MLQE-PE) to new language pairs: we present a novel and large dataset on English-Marathi, as well as a zero-shot test set on English-Yoruba. Further, we include an explainability sub-task for all language pairs and present a new format of a critical error detection task for two new language pairs. Participants from 11 different teams submitted altogether 991 systems to different task variants and language pairs.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,053
inproceedings
heafield-etal-2022-findings
Findings of the {WMT} 2022 Shared Task on Efficient Translation
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.4/
Heafield, Kenneth and Zhang, Biao and Nail, Graeme and Van Der Linde, Jelmer and Bogoychev, Nikolay
Proceedings of the Seventh Conference on Machine Translation (WMT)
100--108
The machine translation efficiency task challenges participants to make their systems faster and smaller with minimal impact on translation quality. How much quality to sacrifice for efficiency depends upon the application, so participants were encouraged to make multiple submissions covering the space of trade-offs. In total, there were 76 submissions from 5 teams. The task covers GPU, single-core CPU, and multi-core CPU hardware tracks as well as batched throughput or single-sentence latency conditions. Submissions showed hundreds of millions of words can be translated for a dollar, average latency is 3.5{--}25 ms, and models fit in 7.5{--}900 MB.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,054
inproceedings
bhattacharyya-etal-2022-findings
Findings of the {WMT} 2022 Shared Task on Automatic Post-Editing
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.5/
Bhattacharyya, Pushpak and Chatterjee, Rajen and Freitag, Markus and Kanojia, Diptesh and Negri, Matteo and Turchi, Marco
Proceedings of the Seventh Conference on Machine Translation (WMT)
109--117
We present the results from the 8th round of the WMT shared task on MT Automatic PostEditing, which consists in automatically correcting the output of a {\textquotedblleft}black-box{\textquotedblright} machine translation system by learning from human corrections. This year, the task focused on a new language pair (English{\textrightarrow}Marathi) and on data coming from multiple domains (healthcare, tourism, and general/news). Although according to several indicators this round was of medium-high difficulty compared to the past, the best submission from the three participating teams managed to significantly improve (with an error reduction of 3.49 TER points) the original translations produced by a generic neural MT system.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,055
inproceedings
vernikos-etal-2022-embarrassingly
Embarrassingly Easy Document-Level {MT} Metrics: How to Convert Any Pretrained Metric into a Document-Level Metric
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.6/
Vernikos, Giorgos and Thompson, Brian and Mathur, Prashant and Federico, Marcello
Proceedings of the Seventh Conference on Machine Translation (WMT)
118--128
We present a very simple method for extending pretrained machine translation metrics to incorporate document-level context. We apply our method to four popular metrics: BERTScore, Prism, COMET, and the reference-free metric COMET-QE. We evaluate our document-level metrics on the MQM annotations from the WMT 2021 metrics shared task and find that the document-level metrics outperform their sentence-level counterparts in about 85{\%} of the tested conditions, when excluding results on low-quality human references. Additionally, we show that our document-level extension of COMET-QE dramatically improves accuracy on discourse phenomena tasks, supporting our hypothesis that our document-level metrics are resolving ambiguities in the reference sentence by using additional context.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,056
inproceedings
wei-etal-2022-searching
Searching for a Higher Power in the Human Evaluation of {MT}
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.7/
Wei, Johnny and Kocmi, Tom and Federmann, Christian
Proceedings of the Seventh Conference on Machine Translation (WMT)
129--139
In MT evaluation, pairwise comparisons are conducted to identify the better system. In conducting the comparison, the experimenter must allocate a budget to collect Direct Assessment (DA) judgments. We provide a cost effective way to spend the budget, but show that typical budget sizes often do not allow for solid comparison. Taking the perspective that the basis of solid comparison is in achieving statistical significance, we study the power (rate of achieving significance) on a large collection of pairwise DA comparisons. Due to the nature of statistical estimation, power is low for differentiating less than 1-2 DA points, and to achieve a notable increase in power requires at least 2-3x more samples. Applying variance reduction alone will not yield these gains, so we must face the reality of undetectable differences and spending increases. In this context, we propose interim testing, an {\textquotedblleft}early stopping{\textquotedblright} collection procedure that yields more power per judgment collected, which adaptively focuses the budget on pairs that are borderline significant. Interim testing can achieve up to a 27{\%} efficiency gain when spending 3x the current budget, or 18{\%} savings at the current evaluation power.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,057
inproceedings
knowles-lo-2022-test
Test Set Sampling Affects System Rankings: Expanded Human Evaluation of {WMT}20 {E}nglish-{I}nuktitut Systems
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.8/
Knowles, Rebecca and Lo, Chi-kiu
Proceedings of the Seventh Conference on Machine Translation (WMT)
140--153
We present a collection of expanded human annotations of the WMT20 English-Inuktitut machine translation shared task, covering the Nunavut Hansard portion of the dataset. Additionally, we recompute News rankings to take into account the completed set of human annotations and certain irregularities in the annotation task construction. We show the effect of these changes on the downstream task of the evaluation of automatic metrics. Finally, we demonstrate that character-level metrics correlate well with human judgments for the task of automatically evaluating translation into this polysynthetic language.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,058
inproceedings
javorsky-etal-2022-continuous
Continuous Rating as Reliable Human Evaluation of Simultaneous Speech Translation
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.9/
Javorsk{\'y}, D{\'a}vid and Mach{\'a}{\v{c}}ek, Dominik and Bojar, Ond{\v{r}}ej
Proceedings of the Seventh Conference on Machine Translation (WMT)
154--164
Simultaneous speech translation (SST) can be evaluated on simulated online events where human evaluators watch subtitled videos and continuously express their satisfaction by pressing buttons (so called Continuous Rating). Continuous Rating is easy to collect, but little is known about its reliability, or its relation to comprehension of a foreign language document by SST users. In this paper, we contrast Continuous Rating with factual questionnaires on judges with different levels of source language knowledge. Our results show that Continuous Rating is an easy and reliable SST quality assessment if the judges have at least limited knowledge of the source language. Our study indicates users' preferences on subtitle layout and presentation style and, most importantly, provides significant evidence that users with advanced source language knowledge prefer low latency over fewer re-translations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,059
inproceedings
corral-saralegi-2022-gender
Gender Bias Mitigation for {NMT} Involving Genderless Languages
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.10/
Corral, Ander and Saralegi, Xabier
Proceedings of the Seventh Conference on Machine Translation (WMT)
165--176
It has been found that NMT systems have a strong preference towards social defaults and biases when translating certain occupations, which due to their widespread use, can unintentionally contribute to amplifying and perpetuating these patterns. In that sense, this work focuses on sentence-level gender agreement between gendered entities and occupations when translating from genderless languages to languages with grammatical gender. Specifically, we address the Basque to Spanish translation direction for which bias mitigation has not been addressed. Gender information in Basque is explicit in neither the grammar nor the morphology. It is only present in a limited number of gender specific common nouns and person proper names. We propose a template-based fine-tuning strategy with explicit gender tags to provide a stronger gender signal for the proper inflection of occupations. This strategy is compared against systems fine-tuned on real data extracted from Wikipedia biographies. We provide a detailed gender bias assessment analysis and perform a template ablation study to determine the optimal set of templates. We report a substantial gender bias mitigation (up to 50{\%} on gender bias scores) while keeping the original translation quality.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,060
inproceedings
agrawal-etal-2022-exploring
Exploring the Benefits and Limitations of Multilinguality for Non-autoregressive Machine Translation
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.11/
Agrawal, Sweta and Kreutzer, Julia and Cherry, Colin
Proceedings of the Seventh Conference on Machine Translation (WMT)
177--187
Non-autoregressive (NAR) machine translation has recently received significant developments and now achieves comparable quality with autoregressive (AR) models on some benchmarks while providing an efficient alternative to AR inference. However, while AR translation is often used to implement multilingual models that benefit from transfer between languages and from improved serving efficiency, multilingual NAR models remain relatively unexplored. Taking Connectionist Temporal Classification as an example NAR model and IMPUTER as a semi-NAR model, we present a comprehensive empirical study of multilingual NAR. We test its capabilities with respect to positive transfer between related languages and negative transfer under capacity constraints. As NAR models require distilled training sets, we carefully study the impact of bilingual versus multilingual teachers. Finally, we fit a scaling law for multilingual NAR to determine capacity bottlenecks, which quantifies its performance relative to the AR model as the model scale increases.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,061
inproceedings
liu-niehues-2022-learning
Learning an Artificial Language for Knowledge-Sharing in Multilingual Translation
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.12/
Liu, Danni and Niehues, Jan
Proceedings of the Seventh Conference on Machine Translation (WMT)
188--202
The cornerstone of multilingual neural translation is shared representations across languages. Given the theoretically infinite representation power of neural networks, semantically identical sentences are likely represented differently. While representing sentences in the continuous latent space ensures expressiveness, it introduces the risk of capturing irrelevant features which hinders the learning of a common representation. In this work, we discretize the encoder output latent space of multilingual models by assigning encoder states to entries in a codebook, which in effect represents source sentences in a new artificial language. This discretization process not only offers a new way to interpret the otherwise black-box model representations, but, more importantly, gives potential for increasing robustness in unseen testing conditions. We validate our approach on large-scale experiments with realistic data volumes and domains. When tested in zero-shot conditions, our approach is competitive with two strong alternatives from the literature. We also use the learned artificial language to analyze model behavior, and discover that using a similar bridge language increases knowledge-sharing among the remaining languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,062
inproceedings
amrhein-haddow-2022-dont
Don`t Discard Fixed-Window Audio Segmentation in Speech-to-Text Translation
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.13/
Amrhein, Chantal and Haddow, Barry
Proceedings of the Seventh Conference on Machine Translation (WMT)
203--219
For real-life applications, it is crucial that end-to-end spoken language translation models perform well on continuous audio, without relying on human-supplied segmentation. For online spoken language translation, where models need to start translating before the full utterance is spoken, most previous work has ignored the segmentation problem. In this paper, we compare various methods for improving models' robustness towards segmentation errors and different segmentation strategies in both offline and online settings and report results on translation quality, flicker and delay. Our findings on five different language pairs show that a simple fixed-window audio segmentation can perform surprisingly well given the right conditions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,063
inproceedings
rippeth-post-2022-additive
Additive Interventions Yield Robust Multi-Domain Machine Translation Models
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.14/
Rippeth, Elijah and Post, Matt
Proceedings of the Seventh Conference on Machine Translation (WMT)
220--232
Additive interventions are a recently-proposed mechanism for controlling target-side attributes in neural machine translation by modulating the encoder`s representation of a source sequence as opposed to manipulating the raw source sequence as seen in most previous tag-based approaches. In this work we examine the role of additive interventions in a large-scale multi-domain machine translation setting and compare its performance in various inference scenarios. We find that while the performance difference is small between intervention-based systems and tag-based systems when the domain label matches the test domain, intervention-based systems are robust to label error, making them an attractive choice under label uncertainty. Further, we find that the superiority of single-domain fine-tuning comes under question when training data is scaled, contradicting previous findings.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,064
inproceedings
alabi-etal-2022-inria
Inria-{ALMA}na{CH} at {WMT} 2022: Does Transcription Help Cross-Script Machine Translation?
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.15/
Alabi, Jesujoba and Nishimwe, Lydia and Muller, Benjamin and Rey, Camille and Sagot, Beno{\^i}t and Bawden, Rachel
Proceedings of the Seventh Conference on Machine Translation (WMT)
233--243
This paper describes the Inria ALMAnaCH team submission to the WMT 2022 general translation shared task. Participating in the language directions cs,ru,uk{\textrightarrow}en and cs{\ensuremath{\leftrightarrow}}uk, we experiment with the use of a dedicated Latin-script transcription convention aimed at representing all Slavic languages involved in a way that maximises character- and word-level correspondences between them as well as with the English language. Our hypothesis was that bringing the source and target language closer could have a positive impact on machine translation results. We provide multiple comparisons, including bilingual and multilingual baselines, with and without transcription. Initial results indicate that the transcription strategy was not successful, resulting in lower results than baselines. We nevertheless submitted our multilingual, transcribed models as our primary systems, and in this paper provide some indications as to why we got these negative results.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,065
inproceedings
deguchi-etal-2022-naist
{NAIST}-{NICT}-{TIT} {WMT}22 General {MT} Task Submission
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.16/
Deguchi, Hiroyuki and Imamura, Kenji and Kaneko, Masahiro and Nishida, Yuto and Sakai, Yusuke and Vasselli, Justin and Vu, Huy Hien and Watanabe, Taro
Proceedings of the Seventh Conference on Machine Translation (WMT)
244--250
In this paper, we describe our NAIST-NICT-TIT submission to the WMT22 general machine translation task. We participated in this task for the English {\ensuremath{\leftrightarrow}} Japanese language pair. Our system is characterized as an ensemble of Transformer big models, k-nearest-neighbor machine translation (kNN-MT) (Khandelwal et al., 2021), and reranking. In our translation system, we construct the datastore for kNN-MT from back-translated monolingual data and integrate kNN-MT into the ensemble model. We designed a reranking system to select a translation from the n-best translation candidates generated by the translation system. We also use a context-aware model to improve the document-level consistency of the translation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,066
inproceedings
dobrowolski-etal-2022-samsung
{S}amsung {R}{\&}{D} Institute {P}oland Participation in {WMT} 2022
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.17/
Dobrowolski, Adam and Klimaszewski, Mateusz and My{\'s}liwy, Adam and Szyma{\'n}ski, Marcin and Kowalski, Jakub and Szypu{\l}a, Kornelia and Przew{\l}ocki, Pawe{\l} and Przybysz, Pawe{\l}
Proceedings of the Seventh Conference on Machine Translation (WMT)
251--259
This paper presents the system description of Samsung R{\&}D Institute Poland participation in WMT 2022 for the General MT solution for medium- and low-resource languages: Russian and Croatian. Our approach combines iterative noised/tagged back-translation and iterative distillation. We investigated different monolingual resources and compared their influence on final translations. We used available BERT-like models for text classification and for extracting domains of texts. Then we prepared an ensemble of NMT models adapted to multiple domains. Finally we attempted to predict ensemble weight vectors from the BERT-based domain classifications for individual sentences. Our final trained models reached quality comparable to the best online translators using only limited constrained resources during training.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,067
inproceedings
he-etal-2022-tencent
Tencent {AI} Lab - Shanghai Jiao Tong University Low-Resource Translation System for the {WMT}22 Translation Task
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.18/
He, Zhiwei and Wang, Xing and Tu, Zhaopeng and Shi, Shuming and Wang, Rui
Proceedings of the Seventh Conference on Machine Translation (WMT)
260--267
This paper describes Tencent AI Lab - Shanghai Jiao Tong University (TAL-SJTU) Low-Resource Translation systems for the WMT22 shared task. We participate in the general translation task on English-Livonian. Our system is based on M2M100 with novel techniques that adapt it to the target language pair. (1) Cross-model word embedding alignment: inspired by cross-lingual word embedding alignment, we successfully transfer a pre-trained word embedding to M2M100, enabling it to support Livonian. (2) Gradual adaptation strategy: we exploit Estonian and Latvian as auxiliary languages for many-to-many translation training and then adapt to English-Livonian. (3) Data augmentation: to enlarge the parallel data for English-Livonian, we construct pseudo-parallel data with Estonian and Latvian as pivot languages. (4) Fine-tuning: to make the most of all available data, we fine-tune the model with the validation set and online back-translation, further boosting the performance. In model evaluation: (1) We find that previous work underestimated the translation performance of Livonian due to inconsistent Unicode normalization, which may cause a discrepancy of up to 14.9 BLEU score. (2) In addition to the standard validation set, we also employ round-trip BLEU to evaluate the models, which we find more appropriate for this task. Finally, our unconstrained system achieves BLEU scores of 17.0 and 30.4 for English to/from Livonian.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,068
inproceedings
han-etal-2022-lan
Lan-Bridge {MT}`s Participation in the {WMT} 2022 General Translation Shared Task
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.19/
Han, Bing and Wu, Yangjian and Hu, Gang and Chen, Qiulin
Proceedings of the Seventh Conference on Machine Translation (WMT)
268--274
This paper describes Lan-Bridge Translation systems for the WMT 2022 General Translation shared task. We participate in 18 language directions: English to and from Czech, German, Ukrainian, Japanese, Russian, Chinese, English to Croatian, French to German, Yakut to and from Russian and Ukrainian to and from Czech. To develop systems covering all these directions, we mainly focus on multilingual models. In general, we apply data corpus filtering, scaling model size, sparse expert model (in particular, Transformer with adapters), large scale backtranslation and language model reranking techniques. Our system ranks first in 6 directions based on automatic evaluation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,069
inproceedings
jin-etal-2022-manifolds
Manifold`s {E}nglish-{C}hinese System at {WMT}22 General {MT} Task
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.20/
Jin, Chang and Shi, Tingxun and Xue, Zhengshan and Lin, Xiaodong
Proceedings of the Seventh Conference on Machine Translation (WMT)
275--279
Manifold`s English-Chinese System at WMT22 is an ensemble of 4 models trained by different configurations with scheduled sampling-based fine-tuning. The four configurations are DeepBig (XenC), DeepLarger (XenC), DeepBig-TalkingHeads (XenC) and DeepBig (LaBSE). Concretely, DeepBig extends Transformer-Big to 24 encoder layers. DeepLarger has 20 encoder layers and its feed-forward network (FFN) dimension is 8192. TalkingHeads applies the talking-heads trick. For XenC configs, we selected monolingual and parallel data that is similar to the past newstest datasets using XenC, and for LaBSE, we cleaned the officially provided parallel data using LaBSE pretrained model. According to the officially released automatic metrics leaderboard, our final constrained system ranked 1st among all others when evaluated by bleu-all, chrf-all and COMET-B, 2nd by COMET-A.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,070
inproceedings
jon-etal-2022-cuni
{CUNI}-Bergamot Submission at {WMT}22 General Translation Task
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.21/
Jon, Josef and Popel, Martin and Bojar, Ond{\v{r}}ej
Proceedings of the Seventh Conference on Machine Translation (WMT)
280--289
We present the CUNI-Bergamot submission for the WMT22 General translation task. We compete in the English-Czech direction. Our submission further explores block backtranslation techniques. Compared to the previous work, we measure performance in terms of COMET score and named entity translation accuracy. We evaluate the performance of MBR decoding compared to traditional mixed backtranslation training and we show a possible synergy when using both of the techniques simultaneously. The results show that both approaches are effective means of improving translation quality and they yield even better results when combined.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,071
inproceedings
kalkar-etal-2022-kyb
{KYB} General Machine Translation Systems for {WMT}22
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.22/
Kalkar, Shivam and Matsuzaki, Yoko and Li, Ben
Proceedings of the Seventh Conference on Machine Translation (WMT)
290--294
We here describe our neural machine translation system for the general machine translation shared task in WMT 2022. Our systems are based on the Transformer (Vaswani et al., 2017) with base settings. We explore high-efficiency model training strategies, aiming to train a high-accuracy model using a small model and a reasonable amount of data. We performed fine-tuning and ensembling with N-best ranking in English to/from Japanese directions. We found that fine-tuning on the filtered JParaCrawl data set leads to better translations for both directions in English to/from Japanese models. In the English to Japanese direction, ensembling and N-best ranking of 10 different checkpoints improved translations. By comparing with other online translation services, we found that our model achieved great translation quality.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,072
inproceedings
lam-etal-2022-analyzing
Analyzing the Use of Influence Functions for Instance-Specific Data Filtering in Neural Machine Translation
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.23/
Lam, Tsz Kin and Hasler, Eva and Hieber, Felix
Proceedings of the Seventh Conference on Machine Translation (WMT)
295--309
Customer feedback can be an important signal for improving commercial machine translation systems. One solution for fixing specific translation errors is to remove the related erroneous training instances followed by re-training of the machine translation system, which we refer to as instance-specific data filtering. Influence functions (IF) have been shown to be effective in finding such relevant training examples for classification tasks such as image classification, toxic speech detection and entailment. Given a probing instance, IF find influential training examples by measuring the similarity of the probing instance with a set of training examples in gradient space. In this work, we examine the use of influence functions for Neural Machine Translation (NMT). We propose two effective extensions to a state-of-the-art influence function and demonstrate on the sub-problem of copied training examples that IF can be applied more generally than hand-crafted regular expressions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,073
inproceedings
liu-etal-2022-aisp
The {AISP}-{SJTU} Translation System for {WMT} 2022
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.24/
Liu, Guangfeng and Zhu, Qinpei and Chen, Xingyu and Feng, Renjie and Ren, Jianxin and Wu, Renshou and Miao, Qingliang and Wang, Rui and Yu, Kai
Proceedings of the Seventh Conference on Machine Translation (WMT)
310--317
This paper describes AISP-SJTU`s participation in WMT 2022 shared general MT task. In this shared task, we participated in four translation directions: English-Chinese, Chinese-English, English-Japanese and Japanese-English. Our systems are based on the Transformer architecture with several novel and effective variants, including network depth and internal structure. In our experiments, we employ data filtering, large-scale back-translation, knowledge distillation, forward-translation, iterative in-domain knowledge finetune and model ensemble. The constrained systems achieve 48.8, 29.7, 39.3 and 22.0 case-sensitive BLEU scores on EN-ZH, ZH-EN, EN-JA and JA-EN, respectively.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,074
inproceedings
morishita-etal-2022-nt5
{NT}5 at {WMT} 2022 General Translation Task
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.25/
Morishita, Makoto and Kudo, Keito and Oka, Yui and Chousa, Katsuki and Kiyono, Shun and Takase, Sho and Suzuki, Jun
Proceedings of the Seventh Conference on Machine Translation (WMT)
318--325
This paper describes the NTT-Tohoku-TokyoTech-RIKEN (NT5) team`s submission system for the WMT`22 general translation task. This year, we focused on the English-to-Japanese and Japanese-to-English translation tracks. Our submission system consists of an ensemble of Transformer models with several extensions. We also applied data augmentation and selection techniques to obtain potentially effective training data for training individual Transformer models in the pre-training and fine-tuning scheme. Additionally, we report our trial of incorporating a reranking module and the reevaluated results of several techniques that have been recently developed and published.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,075
inproceedings
nowakowski-etal-2022-adam
{A}dam {M}ickiewicz {U}niversity at {WMT} 2022: {NER}-Assisted and Quality-Aware Neural Machine Translation
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.26/
Nowakowski, Artur and Pa{\l}ka, Gabriela and Guttmann, Kamil and Pokrywka, Miko{\l}aj
Proceedings of the Seventh Conference on Machine Translation (WMT)
326--334
This paper presents Adam Mickiewicz University`s (AMU) submissions to the constrained track of the WMT 2022 General MT Task. We participated in the Ukrainian {\ensuremath{\leftrightarrow}} Czech translation directions. The systems are a weighted ensemble of four models based on the Transformer (big) architecture. The models use source factors to utilize the information about named entities present in the input. Each of the models in the ensemble was trained using only the data provided by the shared task organizers. A noisy back-translation technique was used to augment the training corpora. One of the models in the ensemble is a document-level model, trained on parallel and synthetic longer sequences. During the sentence-level decoding process, the ensemble generated the n-best list. The n-best list was merged with the n-best list generated by a single document-level model which translated multiple sentences at a time. Finally, existing quality estimation models and minimum Bayes risk decoding were used to rerank the n-best list so that the best hypothesis was chosen according to the COMET evaluation metric. According to the automatic evaluation results, our systems rank first in both translation directions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,076
inproceedings
molchanov-etal-2022-promt
{PROMT} Systems for {WMT}22 General Translation Task
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.28/
Molchanov, Alexander and Kovalenko, Vladislav and Makhamalkina, Natalia
Proceedings of the Seventh Conference on Machine Translation (WMT)
342--345
The PROMT systems are trained with the MarianNMT toolkit. All systems use the transformer-big configuration. We use BPE for text encoding; the vocabulary sizes vary from 24k to 32k for different language pairs. All systems are unconstrained. We use all data provided by the WMT organizers, all publicly available data and some private data. We participate in four directions: English-Russian, English-German and German-English, Ukrainian-English.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,078
inproceedings
oravecz-etal-2022-etranslations
e{T}ranslation`s Submissions to the {WMT}22 General Machine Translation Task
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.29/
Oravecz, Csaba and Bontcheva, Katina and Kolovratn{\`i}k, David and Kovachev, Bogomil and Scott, Christopher
Proceedings of the Seventh Conference on Machine Translation (WMT)
346--351
The paper describes the NMT models for French-German, English-Ukrainian and English-Russian, submitted by the eTranslation team to the WMT22 general machine translation shared task. In the WMT news task last year, multilingual systems with deep and complex architectures utilizing immense amounts of data and resources were dominant. This year, with the task extended to cover less domain-specific text, we expected even more dominance of such systems. In the hope of producing competitive (constrained) systems despite our limited resources, this time we selected only medium-resource language pairs, which are serviced in the European Commission`s eTranslation system. We took the approach of exploring less resource-intensive strategies focusing on data selection and filtering to improve the performance of baseline systems. With our submitted systems our approach scored competitively according to the automatic rankings, except for the English{--}Russian model where our submission was only a baseline reference model developed as a by-product of the multilingual setup we built focusing primarily on the English-Ukrainian language pair.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,079
inproceedings
popel-etal-2022-cuni
{CUNI} Systems for the {WMT} 22 {C}zech-{U}krainian Translation Task
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.30/
Popel, Martin and Libovick{\'y}, Jind{\v{r}}ich and Helcl, Jind{\v{r}}ich
Proceedings of the Seventh Conference on Machine Translation (WMT)
352--357
We present Charles University submissions to the WMT 22 General Translation Shared Task on Czech-Ukrainian and Ukrainian-Czech machine translation. We present two constrained submissions based on block back-translation and tagged back-translation and experiment with rule-based romanization of Ukrainian. Our results show that the romanization only has a minor effect on the translation quality. Further, we describe Charles Translator, a system that was developed in March 2022 as a response to the migration from Ukraine to the Czech Republic. Compared to our constrained systems, it did not use the romanization and used some proprietary data sources.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,080
inproceedings
roussis-papavassiliou-2022-arc
The {ARC}-{NKUA} Submission for the {E}nglish-{U}krainian General Machine Translation Shared Task at {WMT}22
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.31/
Roussis, Dimitrios and Papavassiliou, Vassilis
Proceedings of the Seventh Conference on Machine Translation (WMT)
358--365
The ARC-NKUA ({\textquotedblleft}Athena{\textquotedblright} Research Center - National and Kapodistrian University of Athens) submission to the WMT22 General Machine Translation shared task concerns the unconstrained tracks of the English-Ukrainian and Ukrainian-English translation directions. The two Neural Machine Translation systems are based on Transformer models and our primary submissions were determined through experimentation with (a) ensemble decoding, (b) selected fine-tuning with a subset of the training data, (c) data augmentation with back-translated monolingual data, and (d) post-processing of the translation outputs. Furthermore, we discuss filtering techniques and the acquisition of additional data used for training the systems.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,081
inproceedings
shan-etal-2022-niutrans
The {N}iu{T}rans Machine Translation Systems for {WMT}22
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.32/
Shan, Weiqiao and Cao, Zhiquan and Han, Yuchen and Wu, Siming and Hu, Yimin and Wang, Jie and Zhang, Yi and Baoyu, Hou and Cao, Hang and Gao, Chenghao and Liu, Xiaowen and Xiao, Tong and Ma, Anxiang and Zhu, Jingbo
Proceedings of the Seventh Conference on Machine Translation (WMT)
366--374
This paper describes the NiuTrans neural machine translation systems for the WMT22 General MT constrained task. We participate in four directions, including Chinese{\textrightarrow}English, English{\textrightarrow}Croatian, and Livonian{\ensuremath{\leftrightarrow}}English. Our models are based on several advanced Transformer variants, e.g., Transformer-ODE, Universal Multiscale Transformer (UMST). The main workflow consists of data filtering, large-scale data augmentation (i.e., iterative back-translation, iterative knowledge distillation), and specific-domain fine-tuning. Moreover, we try several multi-domain methods, such as a multi-domain model structure and a multi-domain data clustering method, to rise to this year`s newly proposed multi-domain test set challenge. For low-resource scenarios, we build a multi-language translation model to enhance performance, and try to use a pre-trained language model (mBERT) to initialize the translation model.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,082
inproceedings
tars-etal-2022-teaching
Teaching Unseen Low-resource Languages to Large Translation Models
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.33/
Tars, Maali and Purason, Taido and T{\"a}ttar, Andre
Proceedings of the Seventh Conference on Machine Translation (WMT)
375--380
In recent years, large multilingual pre-trained neural machine translation model research has grown and it is common for these models to be publicly available for usage and fine-tuning. Low-resource languages benefit from the pre-trained models, because of knowledge transfer from high- to medium-resource languages. The recently available M2M-100 model is our starting point for cross-lingual transfer learning to Finno-Ugric languages, like Livonian. We participate in the WMT22 General Machine Translation task, where we focus on the English-Livonian language pair. We leverage data from other Finno-Ugric languages and through that, we achieve high scores for English-Livonian translation directions. Overall, instead of training a model from scratch, we use transfer learning and back-translation as the main methods and fine-tune a publicly available pre-trained model. This in turn reduces the cost and duration of training high-quality multilingual neural machine translation models.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,083
inproceedings
vu-etal-2022-domains
Can Domains Be Transferred across Languages in Multi-Domain Multilingual Neural Machine Translation?
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.34/
Vu, Thuy-trang and Khadivi, Shahram and He, Xuanli and Phung, Dinh and Haffari, Gholamreza
Proceedings of the Seventh Conference on Machine Translation (WMT)
381--396
Previous works mostly focus on either multilingual or multi-domain aspects of neural machine translation (NMT). This paper investigates whether the domain information can be transferred across languages on the composition of multi-domain and multilingual NMT, particularly for the incomplete data condition where in-domain bitext is missing for some language pairs. Our results in the curated leave-one-domain-out experiments show that multi-domain multilingual (MDML) NMT can boost zero-shot translation performance up to +10 gains on BLEU, as well as aid the generalisation of multi-domain NMT to the missing domain. We also explore strategies for effective integration of multilingual and multi-domain NMT, including language and domain tag combination and auxiliary task training. We find that learning domain-aware representations and adding target-language tags to the encoder leads to effective MDML-NMT.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,084
inproceedings
wang-etal-2022-dutnlp
{DUTNLP} Machine Translation System for {WMT}22 General {MT} Task
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.35/
Wang, Ting and Liu, Huan and Liu, Junpeng and Huang, Degen
Proceedings of the Seventh Conference on Machine Translation (WMT)
397--402
This paper describes DUTNLP Lab`s submission to the WMT22 General MT Task on four translation directions: English to/from Chinese and English to/from Japanese under the constrained condition. Our primary systems are built on several Transformer variants which employ a wider FFN layer or a deeper encoder. The bilingual data are filtered by detailed data pre-processing strategies and four data augmentation methods are combined to enlarge the training data with the provided monolingual data. Several common methods are also employed to further improve model performance, such as fine-tuning, model ensemble and post-editing. As a result, our constrained systems achieve 29.01, 63.87, 41.84, and 24.82 BLEU scores on Chinese-to-English, English-to-Chinese, English-to-Japanese, and Japanese-to-English, respectively.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,085
inproceedings
wei-etal-2022-hw
{HW}-{TSC}`s Submissions to the {WMT} 2022 General Machine Translation Shared Task
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.36/
Wei, Daimeng and Rao, Zhiqiang and Wu, Zhanglin and Li, Shaojun and Luo, Yuanchang and Xie, Yuhao and Chen, Xiaoyu and Shang, Hengchao and Li, Zongyao and Yu, Zhengzhe and Yang, Jinlong and Ma, Miaomiao and Lei, Lizhi and Yang, Hao and Qin, Ying
Proceedings of the Seventh Conference on Machine Translation (WMT)
403--410
This paper presents the submissions of Huawei Translate Services Center (HW-TSC) to the WMT 2022 General Machine Translation Shared Task. We participate in 6 language pairs, including Zh{\ensuremath{\leftrightarrow}}En, Ru{\ensuremath{\leftrightarrow}}En, Uk{\ensuremath{\leftrightarrow}}En, Hr{\ensuremath{\leftrightarrow}}En, Uk{\ensuremath{\leftrightarrow}}Cs and Liv{\ensuremath{\leftrightarrow}}En. We use the Transformer architecture and obtain the best performance via multiple variants with larger parameter sizes. We perform fine-grained pre-processing and filtering on the provided large-scale bilingual and monolingual datasets. For medium- and high-resource languages, we mainly use data augmentation strategies, including Back Translation, Self Training, Ensemble Knowledge Distillation, Multilingual, etc. For low-resource languages such as Liv, we use pre-trained machine translation models, and then continue training with Regularization Dropout (R-Drop). The previously mentioned data augmentation methods are also used. Our submissions obtain competitive results in the final evaluation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,086
inproceedings
zan-etal-2022-vega
Vega-{MT}: The {JD} Explore Academy Machine Translation System for {WMT}22
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.37/
Zan, Changtong and Peng, Keqin and Ding, Liang and Qiu, Baopu and Liu, Boan and He, Shwai and Lu, Qingyu and Zhang, Zheng and Liu, Chuang and Liu, Weifeng and Zhan, Yibing and Tao, Dacheng
Proceedings of the Seventh Conference on Machine Translation (WMT)
411--422
We describe the JD Explore Academy's submission to the WMT 2022 shared general translation task. We participated in all high-resource tracks and one medium-resource track, including Chinese-English, German-English, Czech-English, Russian-English, and Japanese-English. We push the limit of our previous work {--} bidirectional training for translation {--} by scaling up two main factors, i.e. language pairs and model sizes, namely the \textbf{Vega-MT} system. As for language pairs, we scale the {\textquotedblleft}bidirectional{\textquotedblright} up to the {\textquotedblleft}multidirectional{\textquotedblright} settings, covering all participating languages, to exploit the common knowledge across languages and transfer it to the downstream bilingual tasks. As for model sizes, we scale the Transformer-Big up to an extremely large model with nearly 4.7 billion parameters, to fully enhance the model capacity for our Vega-MT. Also, we adopt data augmentation strategies, e.g. cycle translation for monolingual data, and bidirectional self-training for bilingual and monolingual data, to comprehensively exploit the bilingual and monolingual data. To adapt our Vega-MT to the general domain test set, generalization tuning is designed. Based on the official automatic scores of constrained systems, in terms of the sacreBLEU shown in Figure-1, we got 1st place on Zh-En (33.5), En-Zh (49.7), De-En (33.7), En-De (37.8), Cs-En (54.9), En-Cs (41.4) and En-Ru (32.7), 2nd place on Ru-En (45.1) and Ja-En (25.6), and 3rd place on En-Ja (41.5), respectively; with respect to COMET, we got 1st place on Zh-En (45.1), En-Zh (61.7), De-En (58.0), En-De (63.2), Cs-En (74.7), Ru-En (64.9), En-Ru (69.6) and En-Ja (65.1), and 2nd place on En-Cs (95.3) and Ja-En (40.6), respectively. Models will be released to facilitate the MT community through GitHub and the OmniForce Platform.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,087
inproceedings
zeng-2022-domain
No Domain Left behind
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.38/
Zeng, Hui
Proceedings of the Seventh Conference on Machine Translation (WMT)
423--427
We participated in the WMT General MT task and focused on four high-resource language pairs: English to Chinese, Chinese to English, English to Japanese and Japanese to English. The submitted systems (LanguageX) focus on data cleaning, data selection, data mixing and TM-augmented NMT. Rules and a multilingual language model are used for data filtering and data selection. In the automatic evaluation, our best submitted English-to-Chinese system achieved a 54.3 BLEU score and a 63.8 COMET score, which is the highest among all the submissions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,088
inproceedings
zong-bei-2022-gtcom
{GTCOM} Neural Machine Translation Systems for {WMT}22
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.39/
Zong, Hao and Bei, Chao
Proceedings of the Seventh Conference on Machine Translation (WMT)
428--431
GTCOM participates in five directions: English to/from Ukrainian, Ukrainian to/from Czech, English to Chinese and English to Croatian. Our submitted systems are unconstrained and focus on back-translation, multilingual translation models and fine-tuning. The multilingual translation models focus on X-to-one and one-to-X settings. We also apply rules and a language model to filter monolingual sentences, parallel sentences and synthetic sentences.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,089
inproceedings
macketanz-etal-2022-linguistically-motivated
Linguistically Motivated Evaluation of the 2022 State-of-the-art Machine Translation Systems for Three Language Directions
Koehn, Philipp and Barrault, Lo{\"i}c and Bojar, Ond{\v{r}ej and Bougares, Fethi and Chatterjee, Rajen and Costa-juss{\`a}, Marta R. and Federmann, Christian and Fishel, Mark and Fraser, Alexander and Freitag, Markus and Graham, Yvette and Grundkiewicz, Roman and Guzman, Paco and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Kocmi, Tom and Martins, Andr{\'e} and Morishita, Makoto and Monz, Christof and Nagata, Masaaki and Nakazawa, Toshiaki and Negri, Matteo and N{\'e}v{\'e}ol, Aur{\'e}lie and Neves, Mariana and Popel, Martin and Turchi, Marco and Zampieri, Marcos
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wmt-1.40/
Macketanz, Vivien and Manakhimova, Shushen and Avramidis, Eleftherios and Lapshinova-Koltunski, Ekaterina and Bagdasarov, Sergei and M{\"o}ller, Sebastian
Proceedings of the Seventh Conference on Machine Translation (WMT)
432--449
This document describes a fine-grained linguistically motivated analysis of 29 machine translation systems submitted to the Shared Task of the Seventh Conference on Machine Translation (WMT22). This submission expands the test suite work of previous years by adding the language direction of English{--}Russian. As a result, evaluation takes place for the language directions of German{--}English, English{--}German, and English{--}Russian. We find that the German{--}English systems suffer in translating idioms, some tenses of modal verbs, and resultative predicates, the English{--}German ones in idioms, transitive past progressive, and middle voice, whereas the English{--}Russian ones in pseudogapping and idioms.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,090