Dataset schema (one flattened BibTeX record per row; per-column type and statistics):

  entry_type           stringclasses   4 values
  citation_key         stringlengths   10 to 110
  title                stringlengths   6 to 276
  editor               stringclasses   723 values
  month                stringclasses   69 values
  year                 stringdate      1963-01-01 00:00:00 to 2022-01-01 00:00:00
  address              stringclasses   202 values
  publisher            stringclasses   41 values
  url                  stringlengths   34 to 62
  author               stringlengths   6 to 2.07k
  booktitle            stringclasses   861 values
  pages                stringlengths   1 to 12
  abstract             stringlengths   302 to 2.4k
  journal              stringclasses   5 values
  volume               stringclasses   24 values
  doi                  stringlengths   20 to 39
  n                    stringclasses   3 values
  wer                  stringclasses   1 value
  uas                  null
  language             stringclasses   3 values
  isbn                 stringclasses   34 values
  recall               null
  number               stringclasses   8 values
  a                    null
  b                    null
  c                    null
  k                    null
  f1                   stringclasses   4 values
  r                    stringclasses   2 values
  mci                  stringclasses   1 value
  p                    stringclasses   2 values
  sd                   stringclasses   1 value
  female               stringclasses   0 values
  m                    stringclasses   0 values
  food                 stringclasses   1 value
  f                    stringclasses   1 value
  note                 stringclasses   20 values
  __index_level_0__    int64           22k to 106k
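
Each row flattens one BibTeX record into the columns above, with columns that do not occur in a given record left null. As a minimal sketch of that mapping (plain Python; the field list and the abridged sample row are illustrative assumptions drawn from the schema, not part of the dataset), a row can be reassembled into a BibTeX entry like this:

    # Reassemble a flattened row into a BibTeX entry, skipping null columns.
    BIBTEX_FIELDS = [
        "title", "author", "editor", "booktitle", "journal", "volume",
        "number", "month", "year", "address", "publisher", "url", "doi",
        "pages", "abstract", "isbn", "note",
    ]

    def row_to_bibtex(row):
        lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
        for field in BIBTEX_FIELDS:
            value = row.get(field)
            if value is not None:
                lines.append(f'    {field} = "{value}",')
        lines.append("}")
        return "\n".join(lines)

    # Abridged example row, using values from the first record below.
    row = {
        "entry_type": "inproceedings",
        "citation_key": "jansen-boyd-graber-2022-picard",
        "author": "Jansen, Peter A. and Boyd-Graber, Jordan",
        "year": "2022",
        "journal": None,  # null columns are simply skipped
    }
    print(row_to_bibtex(row))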
@inproceedings{jansen-boyd-graber-2022-picard,
    title = "{P}icard understanding Darmok: A Dataset and Model for Metaphor-Rich Translation in a Constructed Language",
    author = "Jansen, Peter A. and Boyd-Graber, Jordan",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.5/",
    doi = "10.18653/v1/2022.flp-1.5",
    pages = "34--38",
    abstract = "Tamarian, a fictional language introduced in the Star Trek episode Darmok, communicates meaning through utterances of metaphorical references, such as {\textquotedblleft}Darmok and Jalad at Tanagra{\textquotedblright} instead of {\textquotedblleft}We should work together.{\textquotedblright} This work assembles a Tamarian-English dictionary of utterances from the original episode and several follow-on novels, and uses this to construct a parallel corpus of 456 English-Tamarian utterances. A machine translation system based on a large language model (T5) is trained using this parallel corpus, and is shown to produce an accuracy of 76{\%} when translating from English to Tamarian on known utterances."
}
% __index_level_0__: 25,862
@inproceedings{li-etal-2022-secret,
    title = "The Secret of Metaphor on Expressing Stronger Emotion",
    author = "Li, Yucheng and Guerin, Frank and Lin, Chenghua",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.6/",
    doi = "10.18653/v1/2022.flp-1.6",
    pages = "39--43",
    abstract = "Metaphors are proven to have stronger emotional impact than literal expressions. Although this conclusion is shown to be promising in benefiting various NLP applications, the reasons behind this phenomenon are not well studied. This paper conducts the first study in exploring how metaphors convey stronger emotion than their literal counterparts. We find that metaphors are generally more specific than literal expressions. This greater specificity may be one of the reasons for metaphors' superiority in emotion expression. When we compare metaphors with literal expressions of the same specificity level, the gap in emotion-expressing ability between the two narrows significantly. In addition, we observe specificity is crucial in literal language as well, as literal language can express stronger emotion by being made more specific."
}
% __index_level_0__: 25,863
@inproceedings{wachowiak-etal-2022-drum,
    title = "Drum Up {SUPPORT}: Systematic Analysis of Image-Schematic Conceptual Metaphors",
    author = "Wachowiak, Lennart and Gromann, Dagmar and Xu, Chao",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.7/",
    doi = "10.18653/v1/2022.flp-1.7",
    pages = "44--53",
    abstract = "Conceptual metaphors represent a cognitive mechanism to transfer knowledge structures from one domain onto another. Image-schematic conceptual metaphors (ISCMs) specialize in transferring sensorimotor experiences to abstract domains. Natural language is believed to provide evidence of such metaphors. However, approaches to verify this hypothesis largely rely on top-down methods, gathering examples by way of introspection, or on manual corpus analyses. In order to contribute towards a method that is systematic and can be replicated, we propose to bring together existing processing steps in a pipeline to detect ISCMs, exemplified for the image schema SUPPORT in the COVID-19 domain. This pipeline consists of neural metaphor detection, dependency parsing to uncover construction patterns, clustering, and BERT-based frame annotation of dependent constructions to analyse ISCMs."
}
% __index_level_0__: 25,864
@inproceedings{bigoulaeva-etal-2022-effective,
    title = "Effective Cross-Task Transfer Learning for Explainable Natural Language Inference with T5",
    author = "Bigoulaeva, Irina and Singh Sachdeva, Rachneet and Tayyar Madabushi, Harish and Villavicencio, Aline and Gurevych, Iryna",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.8/",
    doi = "10.18653/v1/2022.flp-1.8",
    pages = "54--60",
    abstract = "We compare sequential fine-tuning with a model for multi-task learning in the context where we are interested in boosting performance on two of the tasks, one of which depends on the other. We test these models on the FigLang2022 shared task which requires participants to predict language inference labels on figurative language along with corresponding textual explanations of the inference predictions. Our results show that while sequential multi-task learning can be tuned to be good at the first of two target tasks, it performs less well on the second and additionally struggles with overfitting. Our findings show that simple sequential fine-tuning of text-to-text models is an extraordinarily powerful method of achieving cross-task knowledge transfer while simultaneously predicting multiple interdependent targets. So much so that our best model achieved the (tied) highest score on the task."
}
% __index_level_0__: 25,865
@inproceedings{kesen-etal-2022-detecting,
    title = "Detecting Euphemisms with Literal Descriptions and Visual Imagery",
    author = "Kesen, Ilker and Erdem, Aykut and Erdem, Erkut and Calixto, Iacer",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.9/",
    doi = "10.18653/v1/2022.flp-1.9",
    pages = "61--67",
    abstract = "This paper describes our two-stage system for the Euphemism Detection shared task hosted by the 3rd Workshop on Figurative Language Processing in conjunction with EMNLP 2022. Euphemisms tone down expressions about sensitive or unpleasant issues like addiction and death. The ambiguous nature of euphemistic words or expressions makes it challenging to detect their actual meaning within a context. In the first stage, we seek to mitigate this ambiguity by incorporating literal descriptions into input text prompts to our baseline model. It turns out that this kind of direct supervision yields remarkable performance improvement. In the second stage, we integrate visual supervision into our system using visual imagery: two sets of images generated by a text-to-image model from the terms and their descriptions. Our experiments demonstrate that visual supervision also gives a statistically significant performance boost. Our system achieved second place with an F1 score of 87.2{\%}, only about 0.9{\%} worse than the best submission."
}
% __index_level_0__: 25,866
@inproceedings{bunescu-uduehi-2022-distribution,
    title = "Distribution-Based Measures of Surprise for Creative Language: Experiments with Humor and Metaphor",
    author = "Bunescu, Razvan C. and Uduehi, Oseremen O.",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.10/",
    doi = "10.18653/v1/2022.flp-1.10",
    pages = "68--78",
    abstract = "Novelty or surprise is a fundamental attribute of creative output. As such, we postulate that a writer's creative use of language leads to word choices and, more importantly, corresponding semantic structures that are unexpected for the reader. In this paper we investigate measures of surprise that rely solely on word distributions computed by language models and show empirically that creative language such as humor and metaphor is strongly correlated with surprise. Surprisingly at first, information content is observed to be at least as good a predictor of creative language as any of the surprise measures investigated. However, the best prediction performance is obtained when information and surprise measures are combined, showing that surprise measures capture an aspect of creative language that goes beyond information content."
}
% __index_level_0__: 25,867
@inproceedings{wang-etal-2022-euphemism,
    title = "Euphemism Detection by Transformers and Relational Graph Attention Network",
    author = "Wang, Yuting and Liu, Yiyi and Zhang, Ruqing and Fan, Yixing and Guo, Jiafeng",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.11/",
    doi = "10.18653/v1/2022.flp-1.11",
    pages = "79--83",
    abstract = "Euphemism is a type of figurative language broadly adopted in social media and daily conversations. People use euphemisms for politeness or to conceal what they are discussing. Euphemism detection is a challenging task because of its obscure and figurative nature. Even humans may not agree on whether a word expresses a euphemism. In this paper, we propose to employ Bidirectional Encoder Representations from Transformers (BERT) and a relational graph attention network in order to model the semantic and syntactic relations between the target words and the input sentence. Our best-performing method reaches a Macro-F1 score of 84.0 on the euphemism detection dataset of the third workshop on figurative language processing shared task 2022."
}
% __index_level_0__: 25,868
@inproceedings{gu-etal-2022-just,
    title = "Just-{DREAM}-about-it: Figurative Language Understanding with {DREAM}-{FLUTE}",
    author = "Gu, Yuling and Fu, Yao and Pyatkin, Valentina and Magnusson, Ian and Dalvi Mishra, Bhavana and Clark, Peter",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.12/",
    doi = "10.18653/v1/2022.flp-1.12",
    pages = "84--93",
    abstract = "Figurative language (e.g., {\textquotedblleft}he flew like the wind{\textquotedblright}) is challenging to understand, as it is hard to tell what implicit information is being conveyed from the surface form alone. We hypothesize that to perform this task well, the reader needs to mentally elaborate the scene being described to identify a sensible meaning of the language. We present DREAM-FLUTE, a figurative language understanding system that does this, first forming a {\textquotedblleft}mental model{\textquotedblright} of situations described in a premise and hypothesis before making an entailment/contradiction decision and generating an explanation. DREAM-FLUTE uses an existing scene elaboration model, DREAM, for constructing its {\textquotedblleft}mental model.{\textquotedblright} In the FigLang2022 Shared Task evaluation, DREAM-FLUTE achieved (joint) first place (Acc@60=63.3{\%}), and can perform even better with ensemble techniques, demonstrating the effectiveness of this approach. More generally, this work suggests that adding a reflective component to pretrained language models can improve their performance beyond standard fine-tuning (3.3{\%} improvement in Acc@60)."
}
% __index_level_0__: 25,869
@inproceedings{trust-etal-2022-bayes,
    title = "{B}ayes at {F}ig{L}ang 2022 Euphemism Detection shared task: Cost-Sensitive {B}ayesian Fine-tuning and {V}enn-Abers Predictors for Robust Training under Class Skewed Distributions",
    author = "Trust, Paul and Provia, Kadusabe and Omala, Kizito",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.13/",
    doi = "10.18653/v1/2022.flp-1.13",
    pages = "94--99",
    abstract = "Transformers have achieved state-of-the-art performance across most natural language processing tasks. However, the performance of these models degrades when they are trained on skewed class distributions (class imbalance) because training tends to be biased towards the head classes with most of the data points. Classical methods that have been proposed to handle this problem (re-sampling and re-weighting) often suffer from unstable performance, poor applicability and poor calibration. In this paper, we propose to use Bayesian methods and Venn-Abers predictors for well-calibrated and robust training against class imbalance. Our proposed approach improves the F1 score of the baseline RoBERTa (A Robustly Optimized Bidirectional Embedding from Transformers Pretraining Approach) model by about 6 points (79.0{\%} against 72.6{\%}) when training with class-imbalanced data."
}
% __index_level_0__: 25,870
@inproceedings{santing-etal-2022-food,
    title = "Food for Thought: How can we exploit contextual embeddings in the translation of idiomatic expressions?",
    author = "Santing, Lukas and Sijstermans, Ryan and Anerdi, Giacomo and Jeuris, Pedro and ten Thij, Marijn and Batista-Navarro, Riza",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.14/",
    doi = "10.18653/v1/2022.flp-1.14",
    pages = "100--110",
    abstract = "Idiomatic expressions (or idioms) are phrases where the meaning of the phrase cannot be determined from the meaning of the individual words in the expression. Translating idioms between languages is therefore a challenging task. Transformer models based on contextual embeddings have advanced the state-of-the-art across many domains in the field of natural language processing. While research using transformers has advanced both idiom detection as well as idiom disambiguation, idiom translation has not seen a similar advancement. In this work, we investigate two approaches to fine-tuning a pretrained Text-to-Text Transfer Transformer (T5) model to perform idiom translation from English to German. The first approach directly translates English idiom-containing sentences to German, while the second is underpinned by idiom paraphrasing, firstly paraphrasing English idiomatic expressions to their simplified English versions before translating them to German. Results of our evaluation show that each of the approaches is able to generate adequate translations."
}
% __index_level_0__: 25,871
@inproceedings{keh-etal-2022-eureka,
    title = "{EUREKA}: {EU}phemism Recognition Enhanced through Knn-based methods and Augmentation",
    author = "Keh, Sedrick Scott and Bharadwaj, Rohit and Liu, Emmy and Tedeschi, Simone and Gangal, Varun and Navigli, Roberto",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.15/",
    doi = "10.18653/v1/2022.flp-1.15",
    pages = "111--117",
    abstract = "We introduce EUREKA, an ensemble-based approach for performing automatic euphemism detection. We (1) identify and correct potentially mislabelled rows in the dataset, (2) curate an expanded corpus called EuphAug, (3) leverage model representations of Potentially Euphemistic Terms (PETs), and (4) explore using representations of semantically close sentences to aid in classification. Using our augmented dataset and kNN-based methods, EUREKA was able to achieve state-of-the-art results on the public leaderboard of the Euphemism Detection Shared Task, ranking first with a macro F1 score of 0.881."
}
% __index_level_0__: 25,872
@inproceedings{reyes-saldivar-2022-insulin,
    title = "An insulin pump? Identifying figurative links in the construction of the drug lexicon",
    author = "Reyes, Antonio and Saldivar, Rafael",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.16/",
    doi = "10.18653/v1/2022.flp-1.16",
    pages = "118--124",
    abstract = "One of the remarkable characteristics of the drug lexicon is its elusive nature. In order to communicate information related to drugs or drug trafficking, the community uses several terms that are mostly unknown to regular people, or even to the authorities. For instance, the terms jolly green, joystick, or jive are used to refer to marijuana. The selection of such terms is not necessarily a random or senseless process, but a communicative strategy in which figurative language plays a relevant role. In this study, we describe ongoing research to identify drug-related terms by applying machine learning techniques. To this end, a data set regarding drug trafficking in Spanish was built. This data set was used to train a word embedding model to identify terms used by the community to creatively refer to drugs and related matters. The initial findings show an interesting repository of terms created to consciously veil drug-related contents by using figurative language devices, such as metaphor or metonymy. These findings can provide preliminary evidence to be applied by law agencies in order to take action against crime, drug transactions on the internet, illicit activities, or human trafficking."
}
% __index_level_0__: 25,873
@inproceedings{dankin-etal-2022-yes,
    title = "Can Yes-No Question-Answering Models be Useful for Few-Shot Metaphor Detection?",
    author = "Dankin, Lena and Bar, Kfir and Dershowitz, Nachum",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.17/",
    doi = "10.18653/v1/2022.flp-1.17",
    pages = "125--130",
    abstract = "Metaphor detection has been a challenging task in the NLP domain both before and after the emergence of transformer-based language models. The difficulty lies in the subtle semantic nuances that are required to detect metaphor and in the scarcity of labeled data. We explore few-shot setups for metaphor detection, and also introduce new question answering data that can enhance classifiers that are trained on a small amount of data. We formulate the classification task as a question-answering one, and train a question-answering model. We perform extensive few-shot experiments on several architectures and report the results of several strong baselines. Thus, the answer to the question posed in the title is a definite {\textquotedblleft}Yes!{\textquotedblright}"
}
% __index_level_0__: 25,874
@inproceedings{tiwari-parde-2022-exploration,
    title = "An Exploration of Linguistically-Driven and Transfer Learning Methods for Euphemism Detection",
    author = "Tiwari, Devika and Parde, Natalie",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.18/",
    doi = "10.18653/v1/2022.flp-1.18",
    pages = "131--136",
    abstract = "Euphemisms are often used to drive rhetoric, but their automated recognition and interpretation are under-explored. We investigate four methods for detecting euphemisms in sentences containing potentially euphemistic terms. The first three linguistically-motivated methods rest on an understanding of (1) euphemism's role to attenuate the harsh connotations of a taboo topic and (2) euphemism's metaphorical underpinnings. In contrast, the fourth method follows recent innovations in other tasks and employs transfer learning from a general-domain pre-trained language model. While the latter method ultimately (and perhaps surprisingly) performed best (F1 = 0.74), we comprehensively evaluate all four methods to derive additional useful insights from the negative results."
}
% __index_level_0__: 25,875
@inproceedings{sengupta-etal-2022-back,
    title = "Back to the Roots: Predicting the Source Domain of Metaphors using Contrastive Learning",
    author = "Sengupta, Meghdut and Alshomary, Milad and Wachsmuth, Henning",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.19/",
    doi = "10.18653/v1/2022.flp-1.19",
    pages = "137--142",
    abstract = "Metaphors frame a given target domain using concepts from another, usually more concrete, source domain. Previous research in NLP has focused on the identification of metaphors and the interpretation of their meaning. In contrast, this paper studies to what extent the source domain can be predicted computationally from a metaphorical text. Given a dataset with metaphorical texts from a finite set of source domains, we propose a contrastive learning approach that ranks source domains by their likelihood of being referred to in a metaphorical text. In experiments, it achieves reasonable performance even for rare source domains, clearly outperforming a classification baseline."
}
% __index_level_0__: 25,876
@inproceedings{lal-bastan-2022-sbu,
    title = "{SBU} Figures It Out: Models Explain Figurative Language",
    author = "Lal, Yash Kumar and Bastan, Mohaddeseh",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.20/",
    doi = "10.18653/v1/2022.flp-1.20",
    pages = "143--149",
    abstract = "Figurative language is ubiquitous in human communication. However, current NLP models are unable to demonstrate a significant understanding of instances of this phenomenon. The EMNLP 2022 shared task on figurative language understanding posed the problem of predicting and explaining the relation between a premise and a hypothesis containing an instance of the use of figurative language. We experiment with different variations of using T5-large for this task and build a model that significantly outperforms the task baseline. Treating it as a new task for T5 and simply finetuning on the data achieves the best score on the defined evaluation. Furthermore, we find that hypothesis-only models are able to achieve most of the performance."
}
% __index_level_0__: 25,877
@inproceedings{phan-etal-2022-nlp,
    title = "{NLP}@{UIT} at {F}ig{L}ang-{EMNLP} 2022: A Divide-and-Conquer System For Shared Task On Understanding Figurative Language",
    author = "Phan, Khoa Thi-Kim and Nguyen, Duc-Vu and Nguyen, Ngan Luu-Thuy",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.21/",
    doi = "10.18653/v1/2022.flp-1.21",
    pages = "150--153",
    abstract = "This paper describes our submissions to the EMNLP 2022 shared task on Understanding Figurative Language as part of the Figurative Language Workshop (FigLang 2022). Our systems, based on the pre-trained language model T5, are divide-and-conquer models that can address both requirements of the task: 1) classification, and 2) generation. In this paper, we introduce different approaches, in each of which we employ a processing strategy on the model input. We also emphasize the influence of the types of figurative language on our systems."
}
% __index_level_0__: 25,878
@inproceedings{kohli-etal-2022-adversarial,
    title = "Adversarial Perturbations Augmented Language Models for Euphemism Identification",
    author = "Kohli, Guneet and Kaur, Prabsimran and Bedi, Jatin",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.22/",
    doi = "10.18653/v1/2022.flp-1.22",
    pages = "154--159",
    abstract = "Euphemisms are mild words or expressions used instead of harsh or direct words while talking to someone to avoid discussing something unpleasant, embarrassing, or offensive. However, they are often ambiguous, making their detection a challenging task. The Third Workshop on Figurative Language Processing, colocated with EMNLP 2022, organized a shared task on Euphemism Detection to better understand euphemisms. We used the adversarial augmentation technique to construct new data, and then trained two language models, BERT and Longformer, on the augmented data. To further enhance the overall performance, various combinations of the results obtained using Longformer and BERT were passed through a voting ensembler. We achieved an F1 score of 71.5 using the combination of two adversarial Longformers, two adversarial BERTs, and one non-adversarial BERT."
}
% __index_level_0__: 25,879
@inproceedings{rakshit-flanigan-2022-figurativeqa,
    title = "{F}igurative{QA}: A Test Benchmark for Figurativeness Comprehension for Question Answering",
    author = "Rakshit, Geetanjali and Flanigan, Jeffrey",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.23/",
    doi = "10.18653/v1/2022.flp-1.23",
    pages = "160--166",
    abstract = "Figurative language is widespread in human language (Lakoff and Johnson, 2008), posing potential challenges in NLP applications. In this paper, we investigate the effect of figurative language on the task of question answering (QA). We construct FigQA, a test set of 400 yes-no questions with figurative and non-figurative contexts, extracted from product reviews and restaurant reviews. We demonstrate that a state-of-the-art RoBERTa QA model has considerably lower performance in question answering when the contexts are figurative rather than literal, indicating a gap in current models. We propose a general method for improving the performance of QA models by converting the figurative contexts into non-figurative ones by prompting GPT-3, and demonstrate its effectiveness. Our results indicate a need for building QA models infused with figurative language understanding capabilities."
}
% __index_level_0__: 25,880
@inproceedings{keh-2022-exploring,
    title = "Exploring Euphemism Detection in Few-Shot and Zero-Shot Settings",
    author = "Keh, Sedrick Scott",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.24/",
    doi = "10.18653/v1/2022.flp-1.24",
    pages = "167--172",
    abstract = "This work builds upon the Euphemism Detection Shared Task proposed in the EMNLP 2022 FigLang Workshop, and extends it to few-shot and zero-shot settings. We demonstrate a few-shot and zero-shot formulation using the dataset from the shared task, and we conduct experiments in these settings using RoBERTa and GPT-3. Our results show that language models are able to classify euphemistic terms relatively well even on new terms unseen during training, indicating that they are able to capture higher-level concepts related to euphemisms."
}
% __index_level_0__: 25,881
@inproceedings{griciute-etal-2022-cusp,
    title = "On the Cusp of Comprehensibility: Can Language Models Distinguish Between Metaphors and Nonsense?",
    author = "Grici{\={u}}t{\.{e}}, Bernadeta and Tanti, Marc and Donatelli, Lucia",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.25/",
    doi = "10.18653/v1/2022.flp-1.25",
    pages = "173--177",
    abstract = "Utterly creative texts can sometimes be difficult to understand, balancing on the edge of comprehensibility. However, good language skills and common sense allow advanced language users both to interpret creative texts and to reject some linguistic input as nonsense. The goal of this paper is to evaluate whether current language models are also able to make the distinction between creative language use and nonsense. To test this, we have computed the mean rank and pseudo-log-likelihood score (PLL) of metaphorical and nonsensical sentences, and fine-tuned several pretrained models (BERT, RoBERTa) for binary classification between the two categories. There was a significant difference in the mean ranks and PLL scores of the categories, and the classifier reached around 85.5{\%} accuracy. The results raise further questions on what could have led to such satisfactory performance."
}
% __index_level_0__: 25,882
@inproceedings{saakyan-etal-2022-report,
    title = "A Report on the {F}ig{L}ang 2022 Shared Task on Understanding Figurative Language",
    author = "Saakyan, Arkadiy and Chakrabarty, Tuhin and Ghosh, Debanjan and Muresan, Smaranda",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.26/",
    doi = "10.18653/v1/2022.flp-1.26",
    pages = "178--183",
    abstract = "We present the results of the Shared Task on Understanding Figurative Language that we conducted as a part of the 3rd Workshop on Figurative Language Processing (FigLang 2022) at EMNLP 2022. The shared task is based on the FLUTE dataset (Chakrabarty et al., 2022), which consists of NLI pairs containing figurative language along with free text explanations for each NLI instance. The task challenged participants to build models that are able to not only predict the right label for a figurative NLI instance, but also generate a convincing free-text explanation. The participants were able to significantly improve upon provided baselines in both automatic and human evaluation settings. We further summarize the submitted systems and discuss the evaluation results."
}
% __index_level_0__: 25,883
@inproceedings{lee-etal-2022-report,
    title = "A Report on the Euphemisms Detection Shared Task",
    author = "Lee, Patrick and Feldman, Anna and Peng, Jing",
    editor = "Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.27/",
    doi = "10.18653/v1/2022.flp-1.27",
    pages = "184--190",
    abstract = "This paper presents the Shared Task on Euphemism Detection for the Third Workshop on Figurative Language Processing (FigLang 2022), held in conjunction with EMNLP 2022. Participants were invited to investigate the euphemism detection task: given input text, identify whether it contains a euphemism. The input data is a corpus of sentences containing potentially euphemistic terms (PETs) collected from the GloWbE corpus, each human-annotated as containing either a euphemistic or literal usage of a PET. In this paper, we present the results and analyze the common themes, methods and findings of the participating teams."
}
% __index_level_0__: 25,884
@inproceedings{chen-etal-2022-actperfl,
    title = "{A}ct{P}er{FL}: Active Personalized Federated Learning",
    author = "Chen, Huili and Ding, Jie and Tramel, Eric and Wu, Shuang and Sahu, Anit Kumar and Avestimehr, Salman and Zhang, Tao",
    editor = "Lin, Bill Yuchen and He, Chaoyang and Xie, Chulin and Mireshghallah, Fatemehsadat and Mehrabi, Ninareh and Li, Tian and Soltanolkotabi, Mahdi and Ren, Xiang",
    booktitle = "Proceedings of the First Workshop on Federated Learning for Natural Language Processing (FL4NLP 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.fl4nlp-1.1/",
    doi = "10.18653/v1/2022.fl4nlp-1.1",
    pages = "1--5",
    abstract = "In the context of personalized federated learning (FL), the critical challenge is to balance local model improvement and global model tuning when the personal and global objectives may not be exactly aligned. Inspired by Bayesian hierarchical models, we develop ActPerFL, a self-aware personalized FL method where each client can automatically balance the training of its local personal model and the global model that implicitly contributes to other clients' training. Such a balance is derived from the inter-client and intra-client uncertainty quantification. Consequently, ActPerFL can adapt to the underlying clients' heterogeneity with uncertainty-driven local training and model aggregation. With experimental studies on Sent140 and Amazon Alexa audio data, we show that ActPerFL can achieve superior personalization performance compared with the existing counterparts."
}
% __index_level_0__: 25,886
@inproceedings{ro-etal-2022-scaling,
    title = "Scaling Language Model Size in Cross-Device Federated Learning",
    author = "Ro, Jae and Breiner, Theresa and McConnaughey, Lara and Chen, Mingqing and Suresh, Ananda and Kumar, Shankar and Mathews, Rajiv",
    editor = "Lin, Bill Yuchen and He, Chaoyang and Xie, Chulin and Mireshghallah, Fatemehsadat and Mehrabi, Ninareh and Li, Tian and Soltanolkotabi, Mahdi and Ren, Xiang",
    booktitle = "Proceedings of the First Workshop on Federated Learning for Natural Language Processing (FL4NLP 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.fl4nlp-1.2/",
    doi = "10.18653/v1/2022.fl4nlp-1.2",
    pages = "6--20",
    abstract = "Most studies in cross-device federated learning focus on small models, due to the server-client communication and on-device computation bottlenecks. In this work, we leverage various techniques for mitigating these bottlenecks to train larger language models in cross-device federated learning. With systematic applications of partial model training, quantization, efficient transfer learning, and communication-efficient optimizers, we are able to train a 21M parameter Transformer that achieves the same perplexity as that of a similarly sized LSTM with $\sim10\times$ smaller client-to-server communication cost and 11{\%} lower perplexity than smaller LSTMs commonly studied in literature."
}
% __index_level_0__: 25,887
@inproceedings{wu-etal-2022-adaptive,
    title = "Adaptive Differential Privacy for Language Model Training",
    author = "Wu, Xinwei and Gong, Li and Xiong, Deyi",
    editor = "Lin, Bill Yuchen and He, Chaoyang and Xie, Chulin and Mireshghallah, Fatemehsadat and Mehrabi, Ninareh and Li, Tian and Soltanolkotabi, Mahdi and Ren, Xiang",
    booktitle = "Proceedings of the First Workshop on Federated Learning for Natural Language Processing (FL4NLP 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.fl4nlp-1.3/",
    doi = "10.18653/v1/2022.fl4nlp-1.3",
    pages = "21--26",
    abstract = "Although differential privacy (DP) can protect language models from leaking privacy, its indiscriminate protection of all data points reduces its practical utility. Previous works improve DP training by discriminating between privacy and non-privacy data. But these works rely on datasets with prior privacy information, which is not available in real-world scenarios. In this paper, we propose an Adaptive Differential Privacy (ADP) framework for language modeling without resorting to prior privacy information. We estimate the probability that a linguistic item contains privacy based on a language model. We further propose a new Adam algorithm that adjusts the degree of differential privacy noise injected into the language model according to the estimated privacy probabilities. Experiments demonstrate that our ADP improves differentially private language modeling to achieve good protection from canary attackers."
}
% __index_level_0__: 25,888
@inproceedings{melas-kyriazi-wang-2022-intrinsic,
    title = "Intrinsic Gradient Compression for Scalable and Efficient Federated Learning",
    author = "Melas-Kyriazi, Luke and Wang, Franklyn",
    editor = "Lin, Bill Yuchen and He, Chaoyang and Xie, Chulin and Mireshghallah, Fatemehsadat and Mehrabi, Ninareh and Li, Tian and Soltanolkotabi, Mahdi and Ren, Xiang",
    booktitle = "Proceedings of the First Workshop on Federated Learning for Natural Language Processing (FL4NLP 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.fl4nlp-1.4/",
    doi = "10.18653/v1/2022.fl4nlp-1.4",
    pages = "27--41",
    abstract = "Federated learning is a rapidly growing area of research, holding the promise of privacy-preserving distributed training on edge devices. The largest barrier to wider adoption of federated learning is the communication cost of model updates, which is accentuated by the fact that many edge devices are bandwidth-constrained. At the same time, within the machine learning theory community, a separate line of research has emerged around optimizing networks within a subspace of the full space of all parameters. The dimension of the smallest subspace for which these methods still yield strong results is called the intrinsic dimension. In this work, we prove a general correspondence between the notions of intrinsic dimension and gradient compressibility, and we show that a family of low-bandwidth federated learning algorithms, which we call intrinsic gradient compression algorithms, naturally emerges from this correspondence. Finally, we conduct large-scale NLP experiments using transformer models with over 100M parameters (GPT-2 and BERT), and show that our method significantly outperforms the state-of-the-art in gradient compression."
}
% __index_level_0__: 25,889
@inproceedings{nguyen-etal-2022-contextualizing,
    title = "Contextualizing Emerging Trends in Financial News Articles",
    author = "Nguyen, Nhu Khoa and Delahaut, Thierry and Boros, Emanuela and Doucet, Antoine and Lejeune, Ga{\"e}l",
    editor = "Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi",
    booktitle = "Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.finnlp-1.1/",
    doi = "10.18653/v1/2022.finnlp-1.1",
    pages = "1--9",
    abstract = "Identifying and exploring emerging trends in news is becoming more essential than ever with many changes occurring around the world due to the global health crises. However, most of the recent research has focused mainly on detecting trends in social media, thus benefiting from social features (e.g. likes and retweets on Twitter) which helped the task, as they can be used to measure the engagement and diffusion rate of content. Yet formal text data, unlike short social media posts, comes with a longer, less restricted writing format, and is thus more challenging. In this paper, we focus our study on emerging trends detection in financial news articles about Microsoft, collected before and during the start of the COVID-19 pandemic (July 2019 to July 2020). We make the dataset freely available and we also propose a strong baseline (Contextual Leap2Trend) for exploring the dynamics of similarities between pairs of keywords based on topic modeling and term frequency. Finally, we evaluate against a gold standard (Google Trends) and present noteworthy real-world scenarios regarding the influence of the pandemic on Microsoft."
}
% __index_level_0__: 25,891
@inproceedings{liang-etal-2022-astbert,
    title = "{A}st{BERT}: Enabling Language Model for Financial Code Understanding with Abstract Syntax Trees",
    author = "Liang, Rong and Zhang, Tiehua and Lu, Yujie and Liu, Yuze and Huang, Zhen and Chen, Xin",
    editor = "Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi",
    booktitle = "Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.finnlp-1.2/",
    doi = "10.18653/v1/2022.finnlp-1.2",
    pages = "10--17",
    abstract = "Using pre-trained language models to understand source code has attracted increasing attention from financial institutions owing to the great potential to uncover financial risks. However, there are several challenges in applying these language models to solve programming language related problems directly. For instance, the shift of domain knowledge between natural language (NL) and programming language (PL) requires understanding the semantic and syntactic information from the data from different perspectives. To this end, we propose the AstBERT model, a pre-trained PL model aiming to better understand financial code using the abstract syntax tree (AST). Specifically, we collect a large number of source code files (both Java and Python) from the Alipay code repository and incorporate both syntactic and semantic code knowledge into our model with the help of code parsers, through which AST information of the source code can be interpreted and integrated. We evaluate the performance of the proposed model on three tasks, including code question answering, code clone detection and code refinement. Experiment results show that our AstBERT achieves promising performance on these three downstream tasks."
}
% __index_level_0__: 25,892
@inproceedings{yan-2022-disentangled,
    title = "Disentangled Variational Topic Inference for Topic-Accurate Financial Report Generation",
    author = "Yan, Sixing",
    editor = "Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi",
    booktitle = "Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.finnlp-1.3/",
    doi = "10.18653/v1/2022.finnlp-1.3",
    pages = "18--24",
    abstract = "Automatically generating a financial report from a set of news articles is important but challenging. Financial reports are composed of key points of the news and the corresponding inferences and reasoning from specialists in the financial domain with professional knowledge. The challenges lie in effectively learning the extra knowledge that is not well presented in the news, and in the misalignment between the topic of the input news and the output knowledge in the target reports. In this work, we introduce a disentangled variational topic inference approach to learn two latent variables for news and report, respectively. We use a publicly available dataset to evaluate the proposed approach. The results demonstrate its effectiveness in enhancing the language informativeness and the topic accuracy of the generated financial reports."
}
% __index_level_0__: 25,893
@inproceedings{kim-etal-2022-toward,
    title = "Toward Privacy-preserving Text Embedding Similarity with Homomorphic Encryption",
    author = "Kim, Donggyu and Lee, Garam and Oh, Sungwoo",
    editor = "Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi",
    booktitle = "Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.finnlp-1.4/",
    doi = "10.18653/v1/2022.finnlp-1.4",
    pages = "25--36",
    abstract = "Text embedding is an essential component in building efficient natural language applications based on text similarities, such as search engines and chatbots. Certain industries like finance and healthcare demand strict privacy-preserving conditions in which users' data should not be exposed to any potential malicious users, even including service providers. From a privacy standpoint, text embeddings seem impossible to interpret, but there is still a privacy risk that they can be recovered into the original texts through inversion attacks. To satisfy such privacy requirements, in this paper, we study a Homomorphic Encryption (HE) based text similarity inference. To validate our method, we perform extensive experiments on two vital text similarity tasks. Through text embedding inversion tests, we prove that the benchmark datasets are vulnerable to inversion attacks and that another privacy-preserving approach, d{\ensuremath{\chi}}-privacy, a relaxed version of the Local Differential Privacy method, fails to prevent them. We show that our approach preserves the performance of models, whereas the baseline suffers score degradation of up to 10{\%} at the minimum security level."
}
% __index_level_0__: 25,894
@inproceedings{pei-etal-2022-tweetfinsent,
    title = "{T}weet{F}in{S}ent: A Dataset of Stock Sentiments on {T}witter",
    author = "Pei, Yulong and Mbakwe, Amarachi and Gupta, Akshat and Alamir, Salwa and Lin, Hanxuan and Liu, Xiaomo and Shah, Sameena",
    editor = "Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi",
    booktitle = "Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.finnlp-1.5/",
    doi = "10.18653/v1/2022.finnlp-1.5",
    pages = "37--47",
    abstract = "Stock sentiment has strong correlations with the stock market, but the traditional sentiment analysis task classifies sentiment according to whether it expresses good or bad feelings and emotions. This definition of sentiment is not an accurate indicator of public opinion about specific stocks. To bridge this gap, we introduce a new task of stock sentiment analysis and present a new dataset for this task named TweetFinSent. In TweetFinSent, tweets are annotated based on whether one gained or expected to gain positive or negative return from a stock. Experiments on TweetFinSent with several sentiment analysis models, from lexicon-based to transformer-based, have been conducted. Experimental results show that the TweetFinSent dataset constitutes a challenging problem and that there is ample room for improvement on the stock sentiment analysis task. TweetFinSent is available at \url{https://github.com/jpmcair/tweetfinsent}."
}
% __index_level_0__: 25,895
@inproceedings{pataci-etal-2022-stock,
    title = "Stock Price Volatility Prediction: A Case Study with {A}uto{ML}",
    author = "Pataci, Hilal and Li, Yunyao and Katsis, Yannis and Zhu, Yada and Popa, Lucian",
    editor = "Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi",
    booktitle = "Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.finnlp-1.6/",
    doi = "10.18653/v1/2022.finnlp-1.6",
    pages = "48--57",
    abstract = "Accurate prediction of stock price volatility, the rate at which the price of a stock increases or decreases over a particular period, is an important problem in finance. Inaccurate prediction of stock price volatility might lead to investment risk and financial loss, while accurate prediction might generate significant returns for investors. Several studies investigated stock price volatility prediction in a regression task by using the transcripts of earning calls (quarterly conference calls held by public companies) with Natural Language Processing (NLP) techniques. Existing studies use the entire transcript, and this degrades the performance due to noise caused by irrelevant information that might not have a significant impact on stock price volatility. In order to overcome these limitations, by considering stock price volatility prediction as a classification task, we explore several denoising approaches, ranging from general-purpose approaches to techniques specific to finance, to remove the noise, and leverage AutoML systems that enable auto-exploration of a wide variety of models. Our preliminary findings indicate that domain-specific denoising approaches provide better results than general-purpose approaches; moreover, AutoML systems provide promising results."
}
% __index_level_0__: 25,896
@inproceedings{pataci-etal-2022-digicall,
    title = "{D}igi{C}all: A Benchmark for Measuring the Maturity of Digital Strategy through Company Earning Calls",
    author = "Pataci, Hilal and Sun, Kexuan and Ravichandran, T.",
    editor = "Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi",
    booktitle = "Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.finnlp-1.7/",
    doi = "10.18653/v1/2022.finnlp-1.7",
    pages = "58--67",
    abstract = "Digital transformation reinvents companies, their vision and strategy, organizational structure, processes, capabilities, and culture, and enables the development of new or enhanced products and services delivered to customers more efficiently. Organizations, by formalizing their digital strategy, attempt to plan for their digital transformations and accelerate their company growth. Understanding how successful a company is in its digital transformation starts with accurate measurement of its digital maturity levels. However, existing approaches to measuring organizations' digital strategy have low accuracy levels, which leads to inconsistent results and provides no resources (data) for future research to improve on. In order to measure the digital strategy maturity of companies, we leverage state-of-the-art NLP models on unstructured data (earning call transcripts) and reach state-of-the-art levels (94{\%}) for this task. We release 3,691 earning call transcripts and an annotated data set, labeled specifically for digital strategy maturity by linguists. Our work provides an empirical baseline for research in industry and management science."
}
% __index_level_0__: 25,897
@inproceedings{li-etal-2022-learning-better,
    title = "Learning Better Intent Representations for Financial Open Intent Classification",
    author = "Li, Xianzhi and Aitken, Will and Zhu, Xiaodan and Thomas, Stephen W.",
    editor = "Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi",
    booktitle = "Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.finnlp-1.8/",
    doi = "10.18653/v1/2022.finnlp-1.8",
    pages = "68--77",
    abstract = "With the recent surge of NLP technologies in the financial domain, banks and other financial entities have adopted virtual agents (VA) to assist customers. A challenging problem for VAs in this domain is determining a user's reason or intent for contacting the VA, especially when the intent was unseen or open during the VA's training. One method for handling open intents is adaptive decision boundary (ADB) post-processing, which learns tight decision boundaries from intent representations to separate known and open intents. We propose incorporating two methods for supervised pre-training of intent representations: prefix tuning and fine-tuning just the last layer of a large language model (LLM). With this proposal, our accuracy is 1.63{\%} - 2.07{\%} higher than the prior state-of-the-art ADB method for open intent classification on the banking77 benchmark amongst others. Notably, we only supplement the original ADB model with 0.1{\%} additional trainable parameters. Ablation studies also determine that our method yields better results than full fine-tuning of the entire model. We hypothesize that our findings could stimulate a new optimal method of downstream tuning that combines parameter efficient tuning modules with fine-tuning a subset of the base model's layers."
}
% __index_level_0__: 25,898
@inproceedings{balakrishnan-etal-2022-exploring,
    title = "Exploring Robustness of Prefix Tuning in Noisy Data: A Case Study in Financial Sentiment Analysis",
    author = "Balakrishnan, Sudhandar and Fang, Yihao and Zhu, Xiaodan",
    editor = "Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi",
    booktitle = "Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.finnlp-1.9/",
    doi = "10.18653/v1/2022.finnlp-1.9",
    pages = "78--88",
    abstract = "The invention of transformer-based models such as BERT, GPT, and RoBERTa has enabled researchers and financial companies to fine-tune these powerful models and use them in different downstream tasks to achieve state-of-the-art performance. Recently, a lightweight alternative (approximately 0.1{\%} - 3{\%} of the original model parameters) to fine-tuning, known as prefix tuning, has been introduced. This method freezes the model parameters and only updates the prefix to achieve performance comparable to full fine-tuning. Prefix tuning enables researchers and financial practitioners to achieve similar results with much fewer parameters. In this paper, we explore the robustness of prefix tuning when facing noisy data. Our experiments demonstrate that fine-tuning is more robust to noise than prefix tuning{---}the latter method faces a significant decrease in performance on most corrupted data sets with increasing noise levels. Furthermore, prefix tuning has high variance in F1 scores compared to fine-tuning under many corruption methods. We strongly advocate that caution should be taken when applying the state-of-the-art prefix tuning method to noisy data."
}
% __index_level_0__: 25,899
inproceedings
kazemian-etal-2022-taxonomical
A Taxonomical {NLP} Blueprint to Support Financial Decision Making through Information-Centred Interactions
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.10/
Kazemian, Siavash and Munteanu, Cosmin and Penn, Gerald
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
89--98
Investment management professionals (IMPs) often make decisions after manual analysis of text transcripts of central banks' conferences or companies' earning calls. Their current software tools, while interactive, largely leave users unassisted in using these transcripts. A key component to designing speech and NLP techniques for this community is to qualitatively characterize their perceptions of AI as well as their legitimate needs so as to (1) better apply existing NLP methods, (2) direct future research and (3) correct IMPs' perceptions of what AI is capable of. This paper presents such a study, through a contextual inquiry with eleven IMPs, uncovering their information practices when using such transcripts. We then propose a taxonomy of user requirements and usability criteria to support IMP decision making, and validate the taxonomy through participatory design workshops with four IMPs. Our investigation suggests that: (1) IMPs view visualization methods and natural language processing algorithms primarily as time-saving tools that are incapable of enhancing either discovery or interpretation and (2) their existing software falls well short of the state of the art in both visualization and NLP.
null
null
10.18653/v1/2022.finnlp-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,900
inproceedings
chen-etal-2022-overview
Overview of the {F}in{NLP}-2022 {ERAI} Task: Evaluating the Rationales of Amateur Investors
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.11/
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
99--103
This paper provides an overview of the shared task, Evaluating the Rationales of Amateur Investors (ERAI), in FinNLP-2022 at EMNLP-2022. This shared task aims to identify, from social platforms, investment opinions that would lead to higher profit. Nineteen teams registered; 9 teams submitted their results for final evaluation, and 8 teams submitted papers to share their methods. The discussed directions are varied: prompting, fine-tuning, translation system comparison, and tailor-made neural network architectures. We provide details of the task settings, data statistics, participants' results, and fine-grained analysis.
null
null
10.18653/v1/2022.finnlp-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,901
inproceedings
wiriyathammabhum-2022-promptshots
{P}rompt{S}hots at the {F}in{NLP}-2022 {ERAI} Task: Pairwise Comparison and Unsupervised Ranking
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.12/
Wiriyathammabhum, Peratham
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
104--110
This report describes our PromptShots submissions to a shared task on Evaluating the Rationales of Amateur Investors (ERAI). We participated in both the pairwise comparison and unsupervised ranking tasks. For pairwise comparison, we employed instruction-based models based on the T5-small and OpenAI InstructGPT language models. Surprisingly, we observed that the OpenAI InstructGPT language model few-shot trained on Chinese data works best among our submissions, ranking 3rd on the maximal loss (ML) pairwise accuracy. This model works better than training on the Google-translated English data by a large margin, where the English few-shot trained InstructGPT model even performs worse than an instruction-based T5-small model fine-tuned on the English data. However, none of the instruction-based submissions perform well on the maximal potential profit (MPP) pairwise accuracy, where there are more data and learning signals. The Chinese few-shot trained InstructGPT model still performs best in our setting. For unsupervised ranking, we utilized many language models, including many financial-specific ones, and Bayesian lexicons learned without supervision on both Chinese and English words using a method-of-moments estimator. All our submissions rank best in the MPP ranking, from 1st to 3rd. However, none of them perform well for ML scoring. Therefore, MPP and ML scores need different treatments, since we treated them using the same formula. Our only difference is the treatment of market sentiment lexicons.
null
null
10.18653/v1/2022.finnlp-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,902
inproceedings
ghosh-naskar-2022-lipi-finnlp
{LIPI} at the {F}in{NLP}-2022 {ERAI} Task: Ensembling Sentence Transformers for Assessing Maximum Possible Profit and Loss from Online Financial Posts
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.13/
Ghosh, Sohom and Naskar, Sudip Kumar
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
111--115
Using insights from social media for making investment decisions has become mainstream. However, in the current era of information explosion, it is essential to mine high-quality social media posts. The FinNLP-2022 ERAI task deals with assessing Maximum Possible Profit (MPP) and Maximum Loss (ML) from social media posts relating to finance. In this paper, we present our team LIPI`s approach. We ensembled a range of Sentence Transformers to quantify these posts. Unlike other teams with varying performances across different metrics, our system performs consistently well. Our code is available here \url{https://github.com/sohomghosh/LIPI_ERAI_FinNLP_EMNLP-2022/}
null
null
10.18653/v1/2022.finnlp-1.13
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,903
inproceedings
lyu-etal-2022-dcu-ml
{DCU}-{ML} at the {F}in{NLP}-2022 {ERAI} Task: Investigating the Transferability of Sentiment Analysis Data for Evaluating Rationales of Investors
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.14/
Lyu, Chenyang and Ji, Tianbo and Zhou, Liting
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
116--121
In this paper, we describe our system for the FinNLP-2022 shared task: Evaluating the Rationales of Amateur Investors (ERAI). The ERAI shared task focuses on mining profitable information from financial texts by predicting the possible Maximal Potential Profit (MPP) and Maximal Loss (ML) based on posts from amateur investors. There are two sub-tasks in ERAI, Pairwise Comparison and Unsupervised Ranking, both targeting the prediction of MPP and ML. To tackle the two tasks, we frame them as a text-pair classification task where the input consists of two documents and the output is the label of whether the first document will lead to higher MPP or lower ML. Specifically, we propose to take advantage of the transferability of Sentiment Analysis data, under the assumption that a more positive text will lead to higher MPP or higher ML, to facilitate the prediction of MPP and ML. In experiments on the ERAI blind test set, our systems trained on Sentiment Analysis data and ERAI training data ranked 1st and 8th in ML and MPP pairwise comparison, respectively. Code is available at this link.
null
null
10.18653/v1/2022.finnlp-1.14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,904
inproceedings
zou-etal-2022-uoa
{UOA} at the {F}in{NLP}-2022 {ERAI} Task: Leveraging the Class Label Description for Financial Opinion Mining
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.15/
Zou, Jinan and Cao, Haiyao and Liu, Yanxi and Liu, Lingqiao and Abbasnejad, Ehsan and Shi, Javen Qinfeng
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
122--126
Evaluating the Rationales of Amateur Investors (ERAI) is a task about mining expert-like viewpoints from social media. This paper summarizes our solutions to the ERAI shared task, which is co-located with the FinNLP workshop at EMNLP 2022. There are 2 sub-tasks in ERAI. Sub-task 1 is a pairwise comparison task, for which we propose a BERT-based pre-trained model projecting opinion pairs into a common space for classification. Sub-task 2 is an unsupervised learning task ranking the opinions' maximal potential profit (MPP) and maximal loss (ML), for which our model leverages a regression method and a multi-layer perceptron to rank the MPP and ML values. The proposed approaches achieve a competitive accuracy of 54.02{\%} for ML and 51.72{\%} for MPP on the pairwise task, and 12.35{\%} and -9.39{\%} on the unsupervised regression-based ranking task for MPP and ML.
null
null
10.18653/v1/2022.finnlp-1.15
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,905
inproceedings
qin-etal-2022-aiml
ai{ML} at the {F}in{NLP}-2022 {ERAI} Task: Combining Classification and Regression Tasks for Financial Opinion Mining
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.16/
Qin, Zhaoxuan and Zou, Jinan and Luo, Qiaoyang and Cao, Haiyao and Jiao, Yang
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
127--131
Identifying posts of high financial quality from opinions is of extraordinary significance for investors. Hence, this paper focuses on evaluating the rationales of amateur investors (ERAI) in a shared task, and we present our solutions. The pairwise comparison task aims at extracting the post that will trigger higher MPP and ML values from pairs of posts. The goal of the unsupervised ranking task is to find the top 10{\%} of posts with higher MPP and ML values. We initially model the shared task as text classification and regression problems. We then propose a multi-learning approach, applying financial-domain pre-trained models and multiple linear classifiers over factor combinations to better integrate the relationships and information in the training data. The official results show that our method achieves 48.28{\%} and 52.87{\%} for MPP and ML accuracy on the pairwise task, and 14.02{\%} and -4.17{\%} on the unsupervised ranking task for MPP and ML. Our source code is available.
null
null
10.18653/v1/2022.finnlp-1.16
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,906
inproceedings
zhuang-ren-2022-yet
Yet at the {F}in{NLP}-2022 {ERAI} Task: Modified models for evaluating the Rationales of Amateur Investors
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.17/
Zhuang, Yan and Ren, Fuji
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
132--135
Financial reports usually reveal the recent development of a company and often cause volatility in the company`s share price. Opinions leading to higher maximal potential profit and lower maximal loss can help amateur investors choose rational strategies. The FinNLP-2022 ERAI task aims to quantify opinions' potential for leading to higher maximal potential profit and lower maximal loss. In this paper, different strategies were applied to solve the ERAI tasks. A vanilla {\textquoteleft}RoBERTa-wwm' showed excellent performance and helped us rank second in the {\textquoteleft}MPP' label prediction task. After integrating some tricks, the modified {\textquoteleft}RoBERTa-wwm' outperformed all other models in the {\textquoteleft}ML' ranking task.
null
null
10.18653/v1/2022.finnlp-1.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,907
inproceedings
trust-minghim-2022-ldpp
{LDPP} at the {F}in{NLP}-2022 {ERAI} Task: Determinantal Point Processes and Variational Auto-encoders for Identifying High-Quality Opinions from a pool of Social Media Posts
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.18/
Trust, Paul and Minghim, Rosane
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
136--140
Social media and online forums have made it easier for people to share their views and opinions on various topics in society. In this paper, we focus on posts discussing investment-related topics. When it comes to investment, people can now easily share their opinions about online traded items and also provide rationales to support their arguments on social media. However, there are millions of posts to read, some potentially from amateur investors or completely unrelated. Identifying the most important posts that could lead to higher maximal potential profit (MPP) and lower maximal loss for investment is not a trivial task. In this paper, we propose to use determinantal point processes and variational autoencoders to identify high-quality posts from the given rationales. Experimental results suggest that our method mines quality posts compared to random selection, and that latent variable modeling improves the quality of the selected posts.
null
null
10.18653/v1/2022.finnlp-1.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,908
inproceedings
gon-etal-2022-jetsons
Jetsons at the {F}in{NLP}-2022 {ERAI} Task: {BERT}-{C}hinese for mining high {MPP} posts
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.19/
Gon, Alolika and Zha, Sihan and Rallabandi, Sai Krishna and Dakle, Parag Pravin and Raghavan, Preethi
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
141--146
In this paper, we discuss the various approaches by the \textit{Jetsons} team for the {\textquotedblleft}Pairwise Comparison{\textquotedblright} sub-task of the ERAI shared task to compare financial opinions for profitability and loss. Our BERT-Chinese model considers a pair of opinions and predicts the one with a higher maximum potential profit (MPP) with 62.07{\%} accuracy. We analyze the performance of our approaches on both the MPP and maximal loss (ML) problems and dive deeply into why BERT-Chinese outperforms the other models.
null
null
10.18653/v1/2022.finnlp-1.19
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,909
inproceedings
li-etal-2022-stock
No Stock is an Island: Learning Internal and Relational Attributes of Stocks with Contrastive Learning
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.20/
Li, Shicheng and Li, Wei and Zhang, Zhiyuan and Bao, Ruihan and Harimoto, Keiko and Sun, Xu
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
147--153
Previous work has demonstrated the viability of applying deep learning techniques in the financial area. Recently, the task of stock embedding learning has been drawing attention from the research community, which aims to represent the characteristics of stocks with distributed vectors that can be used in various financial analysis scenarios. Existing approaches for learning stock embeddings either require expert knowledge, or mainly focus on the textual part of information corresponding to individual temporal movements. In this paper, we propose to model stock properties as the combination of internal attributes and relational attributes, which takes into consideration both the time-invariant properties of individual stocks and their movement patterns in relation to the market. To learn the two types of attributes from financial news and transaction data, we design several training objectives based on contrastive learning to extract and separate the long-term and temporary information in the data that are able to counter the inherent randomness of the stock market. Experiments and further analyses on portfolio optimization reveal the effectiveness of our method in extracting comprehensive stock information from various data sources.
null
null
10.18653/v1/2022.finnlp-1.20
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,910
inproceedings
sharpe-decker-2022-prospectus
Prospectus Language and {IPO} Performance
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.21/
Sharpe, Jared and Decker, Keith
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
154--162
Pricing a firm`s Initial Public Offering (IPO) has historically been very difficult, with high average returns on the first day of trading. Furthermore, IPO withdrawal, the event in which companies that file to go public ultimately rescind the application before the offering, is an equally challenging prediction problem. This research utilizes word embedding techniques to evaluate existing theories concerning firm sentiment on first-day trading performance and the probability of withdrawal, which has not yet been explored empirically. The results suggest that firms attempting to go public experience a decreased probability of withdrawal with the increased presence of positive, litigious, and uncertain language in their initial prospectus, while the increased presence of strong modular language leads to an increased probability of withdrawal. The results also suggest that frequent or large adjustments in the strong modular language of subsequent filings lead to smaller first-day returns.
null
null
10.18653/v1/2022.finnlp-1.21
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,911
inproceedings
alhamzeh-etal-2022-time
It`s Time to Reason: Annotating Argumentation Structures in Financial Earnings Calls: The {F}in{A}rg Dataset
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.22/
Alhamzeh, Alaa and Fonck, Romain and Versm{\'e}e, Erwan and Egyed-Zsigmond, El{\H{o}}d and Kosch, Harald and Brunie, Lionel
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
163--169
With the goal of reasoning over financial textual data, we present in this paper a novel approach for annotating arguments, their components and relations in the transcripts of earnings conference calls (ECCs). The proposed scheme is derived from argumentation theory at the micro-structure level of discourse. We further conduct a manual annotation study with four annotators on 136 documents. We obtained inter-annotator agreement of $\alpha_{U}$ = 0.70 for argument components and $\alpha$ = 0.81 for argument relations. The final created corpus, with the size of 804 documents, as well as the annotation guidelines, are publicly available for researchers in the domains of computational argumentation, finance and FinNLP.
null
null
10.18653/v1/2022.finnlp-1.22
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,912
inproceedings
khaldi-etal-2022-teacher
How Can a Teacher Make Learning From Sparse Data Softer? Application to Business Relation Extraction
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.23/
Khaldi, Hadjer and Benamara, Farah and Pradel, Camille and Aussenac-Gilles, Nathalie
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
170--177
Business Relation Extraction between market entities is a challenging information extraction task that suffers from data imbalance due to the over-representation of negative relations (also known as No-relation or Others) compared to positive relations that correspond to the taxonomy of relations of interest. This paper proposes a novel solution to tackle this problem, relying on binary soft-label supervision generated by an approach based on knowledge distillation. When evaluated on a business relation extraction dataset, the results suggest that the proposed approach improves the overall performance, beating state-of-the-art solutions for data imbalance. In particular, it improves the extraction of under-represented relations as well as the detection of false negatives.
null
null
10.18653/v1/2022.finnlp-1.23
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,913
inproceedings
zou-etal-2022-astock
Astock: A New Dataset and Automated Stock Trading based on Stock-specific News Analyzing Model
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.24/
Zou, Jinan and Cao, Haiyao and Liu, Lingqiao and Lin, Yuhao and Abbasnejad, Ehsan and Shi, Javen Qinfeng
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
178--186
Natural Language Processing (NLP) demonstrates great potential to support financial decision-making by analyzing text from social media or news outlets. In this work, we build a platform to study NLP-aided stock auto-trading algorithms systematically. In contrast to previous work, our platform is characterized by three features: (1) We provide financial news for each specific stock. (2) We provide various stock factors for each stock. (3) We evaluate performance with more finance-relevant metrics. Such a design allows us to develop and evaluate NLP-aided stock auto-trading algorithms in a more realistic setting. In addition to designing an evaluation platform and dataset collection, we also made a technical contribution by proposing a system to automatically learn a good feature representation from various input information. The key to our algorithm is a method called Semantic Role Labeling Pooling (SRLP), which leverages Semantic Role Labeling (SRL) to create a compact representation of each news paragraph. Based on SRLP, we further incorporate other stock factors to make the final prediction. In addition, we propose a self-supervised learning strategy based on SRLP to enhance the out-of-distribution generalization performance of our system. Through our experimental study, we show that the proposed method achieves better performance, outperforming all the baselines' annualized rates of return as well as the maximum drawdown of the CSI300 index and XIN9 index on real trading. Our Astock dataset and code are available at \url{https://github.com/JinanZou/Astock}.
null
null
10.18653/v1/2022.finnlp-1.24
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,914
inproceedings
arno-etal-2022-next
Next-Year Bankruptcy Prediction from Textual Data: Benchmark and Baselines
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.25/
Arno, Henri and Mulier, Klaas and Baeck, Joke and Demeester, Thomas
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
187--195
Models for bankruptcy prediction are useful in several real-world scenarios, and multiple research contributions have been devoted to the task, based on structured (numerical) as well as unstructured (textual) data. However, the lack of a common benchmark dataset and evaluation strategy impedes the objective comparison between models. This paper introduces such a benchmark for the unstructured data scenario, based on novel and established datasets, in order to stimulate further research into the task. We describe and evaluate several classical and neural baseline models, and discuss benefits and flaws of different strategies. In particular, we find that a lightweight bag-of-words model based on static in-domain word representations obtains surprisingly good results, especially when taking textual data from several years into account. These results are critically assessed, and discussed in light of particular aspects of the data and the task. All code to replicate the data and experimental results will be released.
null
null
10.18653/v1/2022.finnlp-1.25
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,915
inproceedings
ruan-etal-2022-adak
{A}da{K}-{NER}: An Adaptive Top-K Approach for Named Entity Recognition with Incomplete Annotations
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.26/
Ruan, Hongtao and Zheng, Liying and Hu, Peixian
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
196--202
State-of-the-art Named Entity Recognition (NER) models rely heavily on large amounts of fully annotated training data. However, accessible data are often incompletely annotated since the annotators usually lack comprehensive knowledge in the target domain. Normally the unannotated tokens are regarded as non-entities by default, while we underline that these tokens could either be non-entities or part of any entity. Here, we study NER modeling with incompletely annotated data where only a fraction of the named entities are labeled, and the unlabeled tokens are equivalently multi-labeled by every possible label. Taking multi-labeled tokens into account, the numerous possible paths can distract the training model from the gold path (ground truth label sequence), and thus hinder the learning ability. In this paper, we propose AdaK-NER, named the adaptive top-K approach, to help the model focus on a smaller feasible region where the gold path is more likely to be located. We demonstrate the superiority of our approach through extensive experiments on both English and Chinese datasets, improving the F-score by 2{\%} on average on CoNLL-2003 and by over 10{\%} on two Chinese datasets compared with prior state-of-the-art works.
null
null
10.18653/v1/2022.finnlp-1.26
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,916
inproceedings
seroyizhko-etal-2022-sentiment
A Sentiment and Emotion Annotated Dataset for Bitcoin Price Forecasting Based on {R}eddit Posts
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.27/
Seroyizhko, Pavlo and Zhexenova, Zhanel and Shafiq, Muhammad Zohaib and Merizzi, Fabio and Galassi, Andrea and Ruggeri, Federico
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
203--210
Cryptocurrencies have gained enormous momentum in finance and are nowadays commonly adopted as a medium of exchange for online payments. After recent events during which GameStop`s stocks were believed to be influenced by the WallStreetBets subReddit, Reddit has become a very hot topic on the cryptocurrency market. The influence of public opinions on cryptocurrency price trends has inspired researchers to explore solutions that integrate such information into crypto price change forecasting. A popular integration technique regards representing social media opinions via sentiment features. However, this research direction is still in its infancy, where a limited number of publicly available datasets with sentiment annotations exists. We propose a novel Bitcoin Reddit Sentiment Dataset, a ready-to-use dataset annotated with state-of-the-art sentiment and emotion recognition. The dataset contains pre-processed Reddit posts and comments about Bitcoin from several domain-related subReddits along with Bitcoin`s financial data. We evaluate several widely adopted neural architectures for crypto price change forecasting. Our results show controversial benefits of sentiment and emotion features, advocating for more sophisticated social media integration techniques. We make our dataset publicly available for research.
null
null
10.18653/v1/2022.finnlp-1.27
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,917
inproceedings
kang-el-maarouf-2022-finsim4
{F}in{S}im4-{ESG} Shared Task: Learning Semantic Similarities for the Financial Domain. Extended edition to {ESG} insights
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.28/
Kang, Juyeon and El Maarouf, Ismail
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
211--217
This paper describes the FinSim4-ESG shared task organized in the 4th FinNLP workshop, which is held in conjunction with the IJCAI-ECAI-2022 conference. This year, FinSim4 is extended to Environment, Social and Governance (ESG) insights and proposes two subtasks, one for ESG Taxonomy Enrichment and the other for Sustainable Sentence Prediction. Among the 28 teams registered for the shared task, a total of 8 teams submitted their system results and 6 teams also submitted a paper describing their method. The winners of the two subtasks achieve good performance results of 0.85 and 0.95 in terms of accuracy, respectively.
null
null
10.18653/v1/2022.finnlp-1.28
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,918
inproceedings
linhares-pontes-etal-2022-using
Using Contextual Sentence Analysis Models to Recognize {ESG} Concepts
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.29/
Linhares Pontes, Elvys and Ben Jannet, Mohamed and Moreno, Jose G. and Doucet, Antoine
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
218--223
This paper summarizes the joint participation of the Trading Central Labs and the L3i laboratory of the University of La Rochelle on both sub-tasks of the \textit{Shared Task FinSim-4} evaluation campaign. The first sub-task aims to enrich the {\textquoteleft}Fortia ESG taxonomy' with new lexicon entries while the second one aims to classify sentences to either {\textquoteleft}sustainable' or {\textquoteleft}unsustainable' with respect to ESG (Environment, Social and Governance) related factors. For the first sub-task, we proposed a model based on pre-trained Sentence-BERT models to project sentences and concepts in a common space in order to better represent ESG concepts. The official task results show that our system yields a significant performance improvement compared to the baseline and outperforms all other submissions on the first sub-task. For the second sub-task, we combine the RoBERTa model with a feed-forward multi-layer perceptron in order to extract the context of sentences and classify them. Our model achieved high accuracy scores (over 92{\%}) and was ranked among the top 5 systems.
null
null
10.18653/v1/2022.finnlp-1.29
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,919
inproceedings
tian-etal-2022-automatic
Automatic Term and Sentence Classification Via Augmented Term and Pre-trained language model in {ESG} Taxonomy texts
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.30/
Tian, Ke and Zhang, Zepeng and Chen, Hua
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
224--227
In this paper, we present our solutions to the FinSim4 Shared Task, which is co-located with the FinNLP workshop at IJCAI-2022. This new edition, FinSim4-ESG, is extended to {\textquotedblleft}Environment, Social and Governance (ESG){\textquotedblright}-related issues in the financial domain. There are two sub-tasks in the FinSim4 shared task. The goal of sub-task1 is to develop a model that correctly maps a list of given terms from the ESG taxonomy domain to the most relevant concepts. The aim of sub-task2 is to design a system that can automatically classify ESG taxonomy sentences into the sustainable or unsustainable class. We developed different classifiers to automatically classify the terms and sentences with augmented terms and pre-trained language models: tf-idf vectors, word2vec, BERT, DistilBERT, ALBERT, and RoBERTa. The results dashboard shows that our proposed methods yield a significant performance improvement compared to the baseline, ranking 1st in sub-task2 and 2nd by mean rank in sub-task1.
null
null
10.18653/v1/2022.finnlp-1.30
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,920
inproceedings
koloski-etal-2022-knowledge
Knowledge informed sustainability detection from short financial texts
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.31/
Koloski, Boshko and Montariol, Syrielle and Purver, Matthew and Pollak, Senja
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
228--234
There is a global trend toward responsible investing, and the need for developing automated methods for analyzing Environmental, Social and Governance (ESG) related elements in financial texts is rising. In this work, we propose a solution to the FinSim4-ESG task, consisting of binary classification of sentences into sustainable or unsustainable. We propose a novel knowledge-based latent heterogeneous representation that is based on knowledge from taxonomies and knowledge graphs along with multiple contemporary document representations. We hypothesize that an approach based on a combination of knowledge and document representations can introduce significant improvement over conventional document representation approaches. We consider ensembles at the classifier level as well as late fusion and early fusion at the representation level. The proposed approaches achieve a competitive accuracy of 89 and are 5.85 points behind the best achieved score.
null
null
10.18653/v1/2022.finnlp-1.31
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,921
inproceedings
goel-etal-2022-tcs
{TCS} {WITM} 2022@{F}in{S}im4-{ESG}: Augmenting {BERT} with Linguistic and Semantic features for {ESG} data classification
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.32/
Goel, Tushar and Chauhan, Vipul and Sangwan, Suyash and Verma, Ishan and Dasgupta, Tirthankar and Dey, Lipika
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
235--242
Advanced neural network architectures have provided several opportunities to develop systems to automatically capture information from domain-specific unstructured text sources. The FinSim4-ESG shared task, collocated with the FinNLP workshop, proposed two sub-tasks. In sub-task1, the challenge was to design systems that could utilize contextual word embeddings along with sustainability resources to elaborate an ESG taxonomy. In the second sub-task, participants were asked to design a system that could classify sentences into sustainable or unsustainable sentences. In this paper, we utilize semantic similarity features along with BERT embeddings to segregate domain terms into a fixed number of class labels. The proposed model not only considers the contextual BERT embeddings but also incorporates Word2Vec, cosine, and Jaccard similarity which gives word-level importance to the model. For sentence classification, several linguistic elements along with BERT embeddings were used as classification features. We have shown a detailed ablation study for the proposed models.
null
null
10.18653/v1/2022.finnlp-1.32
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,922
inproceedings
ghosh-naskar-2022-ranking
Ranking Environment, Social And Governance Related Concepts And Assessing Sustainability Aspect of Financial Texts
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.33/
Ghosh, Sohom and Naskar, Sudip Kumar
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
243--249
Understanding Environmental, Social, and Governance (ESG) factors related to financial products has become extremely important for investors. However, manually screening through the corporate policies and reports to understand their sustainability aspect is extremely tedious. In this paper, we propose solutions to two such problems which were released as shared tasks of the FinNLP workshop of the IJCAI-2022 conference. Firstly, we train a Sentence Transformers based model which automatically ranks ESG related concepts for a given unknown term. Secondly, we fine-tune a RoBERTa model to classify financial texts as sustainable or not. Out of 26 registered teams, our team ranked 4th in sub-task 1 and 3rd in sub-task 2. The source code can be accessed from \url{https://github.com/sohomghosh/Finsim4_ESG}
null
null
10.18653/v1/2022.finnlp-1.33
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,923
inproceedings
dakle-etal-2022-using
Using Transformer-based Models for Taxonomy Enrichment and Sentence Classification
Chen, Chung-Chi and Huang, Hen-Hsen and Takamura, Hiroya and Chen, Hsin-Hsi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.finnlp-1.34/
Dakle, Parag Pravin and Patil, Shrikumar and Rallabandi, Sai Krishna and Hegde, Chaitra and Raghavan, Preethi
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
250--258
In this paper, we present a system that addresses the taxonomy enrichment problem for Environment, Social and Governance issues in the financial domain, as well as classifying sentences as sustainable or unsustainable, for FinSim4-ESG, a shared task for the FinNLP workshop at IJCAI-2022. We first created a derived dataset for taxonomy enrichment by using a sentence-BERT-based paraphrase detector (Reimers and Gurevych, 2019) (on the train set) to create positive and negative term-concept pairs. We then model the problem by fine-tuning the sentence-BERT-based paraphrase detector on this derived dataset, and use it as the encoder, and use a Logistic Regression classifier as the decoder, resulting in test Accuracy: 0.6 and Avg. Rank: 1.97. In case of the sentence classification task, the best-performing classifier (Accuracy: 0.92) consists of a pre-trained RoBERTa model (Liu et al., 2019a) as the encoder and a Feed Forward Neural Network classifier as the decoder.
null
null
10.18653/v1/2022.finnlp-1.34
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,924
inproceedings
dai-etal-2022-whole
{\textquotedblleft}Is Whole Word Masking Always Better for {C}hinese {BERT}?{\textquotedblright}: Probing on {C}hinese Grammatical Error Correction
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.1/
Dai, Yong and Li, Linyang and Zhou, Cong and Feng, Zhangyin and Zhao, Enbo and Qiu, Xipeng and Li, Piji and Tang, Duyu
Findings of the Association for Computational Linguistics: ACL 2022
1--8
Whole word masking (WWM), which masks all subwords corresponding to a word at once, makes a better English BERT model. For the Chinese language, however, there is no subword because each token is an atomic character. The meaning of a word in Chinese is different in that a word is a compositional unit consisting of multiple characters. Such difference motivates us to investigate whether WWM leads to better context understanding ability for Chinese BERT. To achieve this, we introduce two probing tasks related to grammatical error correction and ask pretrained models to revise or insert tokens in a masked language modeling manner. We construct a dataset including labels for 19,075 tokens in 10,448 sentences. We train three Chinese BERT models with standard character-level masking (CLM), WWM, and a combination of CLM and WWM, respectively. Our major findings are as follows: First, when one character needs to be inserted or replaced, the model trained with CLM performs the best. Second, when more than one character needs to be handled, WWM is the key to better performance. Finally, when being fine-tuned on sentence-level downstream tasks, models trained with different masking strategies perform comparably.
null
null
10.18653/v1/2022.findings-acl.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,926
inproceedings
wang-etal-2022-compilable
Compilable Neural Code Generation with Compiler Feedback
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.2/
Wang, Xin and Wang, Yasheng and Wan, Yao and Mi, Fei and Li, Yitong and Zhou, Pingyi and Liu, Jin and Wu, Hao and Jiang, Xin and Liu, Qun
Findings of the Association for Computational Linguistics: ACL 2022
9--19
Automatically generating compilable programs with (or without) natural language descriptions has always been a touchstone problem for computational linguistics and automated software engineering. Existing deep-learning approaches model code generation as text generation, either constrained by grammar structures in decoder, or driven by pre-trained language models on large-scale code corpus (e.g., CodeGPT, PLBART, and CodeT5). However, few of them account for compilability of the generated programs. To improve compilability of the generated programs, this paper proposes COMPCODER, a three-stage pipeline utilizing compiler feedback for compilable code generation, including language model fine-tuning, compilability reinforcement, and compilability discrimination. Comprehensive experiments on two code generation tasks demonstrate the effectiveness of our proposed approach, improving the success rate of compilation from 44.18 to 89.18 in code completion on average and from 70.3 to 96.2 in text-to-code generation, respectively, when comparing with the state-of-the-art CodeGPT.
null
null
10.18653/v1/2022.findings-acl.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,927
inproceedings
zhang-etal-2022-towards
Towards Unifying the Label Space for Aspect- and Sentence-based Sentiment Analysis
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.3/
Zhang, Yiming and Zhang, Min and Wu, Sai and Zhao, Junbo
Findings of the Association for Computational Linguistics: ACL 2022
20--30
The aspect-based sentiment analysis (ABSA) is a fine-grained task that aims to determine the sentiment polarity towards targeted aspect terms occurring in the sentence. The development of the ABSA task is very much hindered by the lack of annotated data. To tackle this, the prior works have studied the possibility of utilizing the sentiment analysis (SA) datasets to assist in training the ABSA model, primarily via pretraining or multi-task learning. In this article, we follow this line, and for the first time, we manage to apply the Pseudo-Label (PL) method to merge the two homogeneous tasks. While it seems straightforward to use generated pseudo labels to handle this case of label granularity unification for two highly related tasks, we identify its major challenge in this paper and propose a novel framework, dubbed as Dual-granularity Pseudo Labeling (DPL). Further, similar to PL, we regard the DPL as a general framework capable of combining other prior methods in the literature. Through extensive experiments, DPL has achieved state-of-the-art performance on standard benchmarks surpassing the prior work significantly.
null
null
10.18653/v1/2022.findings-acl.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,928
inproceedings
biju-etal-2022-input
Input-specific Attention Subnetworks for Adversarial Detection
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.4/
Biju, Emil and Sriram, Anirudh and Kumar, Pratyush and Khapra, Mitesh
Findings of the Association for Computational Linguistics: ACL 2022
31--44
Self-attention heads are characteristic of Transformer models and have been well studied for interpretability and pruning. In this work, we demonstrate an altogether different utility of attention heads, namely for adversarial detection. Specifically, we propose a method to construct input-specific attention subnetworks (IAS) from which we extract three features to discriminate between authentic and adversarial inputs. The resultant detector significantly improves (by over 7.5{\%}) the state-of-the-art adversarial detection accuracy for the BERT encoder on 10 NLU datasets with 11 different adversarial attack types. We also demonstrate that our method (a) is more accurate for larger models which are likely to have more spurious correlations and thus vulnerable to adversarial attack, and (b) performs well even with modest training sets of adversarial examples.
null
null
10.18653/v1/2022.findings-acl.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,929
inproceedings
chia-etal-2022-relationprompt
{R}elation{P}rompt: Leveraging Prompts to Generate Synthetic Data for Zero-Shot Relation Triplet Extraction
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.5/
Chia, Yew Ken and Bing, Lidong and Poria, Soujanya and Si, Luo
Findings of the Association for Computational Linguistics: ACL 2022
45--57
Despite the importance of relation extraction in building and representing knowledge, less research is focused on generalizing to unseen relations types. We introduce the task setting of Zero-Shot Relation Triplet Extraction (ZeroRTE) to encourage further research in low-resource relation extraction methods. Given an input sentence, each extracted triplet consists of the head entity, relation label, and tail entity where the relation label is not seen at the training stage. To solve ZeroRTE, we propose to synthesize relation examples by prompting language models to generate structured texts. Concretely, we unify language model prompts and structured text approaches to design a structured prompt template for generating synthetic relation samples when conditioning on relation label prompts (RelationPrompt). To overcome the limitation for extracting multiple relation triplets in a sentence, we design a novel Triplet Search Decoding method. Experiments on FewRel and Wiki-ZSL datasets show the efficacy of RelationPrompt for the ZeroRTE task and zero-shot relation classification. Our code and data are available at github.com/declare-lab/RelationPrompt.
null
null
10.18653/v1/2022.findings-acl.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,930
inproceedings
lee-etal-2022-pre
Pre-Trained Multilingual Sequence-to-Sequence Models: A Hope for Low-Resource Language Translation?
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.6/
Lee, En-Shiun Annie and Thillainathan, Sarubi and Nayak, Shravan and Ranathunga, Surangika and Adelani, David Ifeoluwa and Su, Ruisi and McCarthy, Arya D.
Findings of the Association for Computational Linguistics: ACL 2022
58--67
What can pre-trained multilingual sequence-to-sequence models like mBART contribute to translating low-resource languages? We conduct a thorough empirical experiment in 10 languages to ascertain this, considering five factors: (1) the amount of fine-tuning data, (2) the noise in the fine-tuning data, (3) the amount of pre-training data in the model, (4) the impact of domain mismatch, and (5) language typology. In addition to yielding several heuristics, the experiments form a framework for evaluating the data sensitivities of machine translation systems. While mBART is robust to domain differences, its translations for unseen and typologically distant languages remain below 3.0 BLEU. In answer to our title`s question, mBART is not a low-resource panacea; we therefore encourage shifting the emphasis from new models to new data.
null
null
10.18653/v1/2022.findings-acl.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,931
inproceedings
cai-etal-2022-multi
Multi-Scale Distribution Deep Variational Autoencoder for Explanation Generation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.7/
Cai, ZeFeng and Wang, Linlin and de Melo, Gerard and Sun, Fei and He, Liang
Findings of the Association for Computational Linguistics: ACL 2022
68--78
Generating explanations for recommender systems is essential for improving their transparency, as users often wish to understand the reason for receiving a specified recommendation. Previous methods mainly focus on improving the generation quality, but often produce generic explanations that fail to incorporate user- and item-specific details. To resolve this problem, we present Multi-Scale Distribution Deep Variational Autoencoders (MVAE). These are deep hierarchical VAEs with a prior network that eliminates noise while retaining meaningful signals in the input, coupled with a recognition network serving as the source of information to guide the learning of the prior network. Further, the Multi-scale distribution Learning Framework (MLF) along with a Target Tracking Kullback-Leibler divergence (TKL) mechanism are proposed to employ multiple KL divergences at different scales for more effective learning. Extensive empirical experiments demonstrate that our methods can generate explanations with concrete input-specific contents.
null
null
10.18653/v1/2022.findings-acl.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,932
inproceedings
zhou-etal-2022-dual
Dual Context-Guided Continuous Prompt Tuning for Few-Shot Learning
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.8/
Zhou, Jie and Tian, Le and Yu, Houjin and Xiao, Zhou and Su, Hui and Zhou, Jie
Findings of the Association for Computational Linguistics: ACL 2022
79--84
The prompt-based paradigm has shown competitive performance in many NLP tasks. However, its success heavily depends on prompt design, and the effectiveness varies with the model and training data. In this paper, we propose a novel dual context-guided continuous prompt (DCCP) tuning method. To explore the rich contextual information in language structure and close the gap between discrete prompt tuning and continuous prompt tuning, DCCP introduces two auxiliary training objectives and constructs input in a pair-wise fashion. Experimental results demonstrate that our method is applicable to many NLP tasks, and can often outperform existing prompt tuning methods by a large margin in the few-shot setting.
null
null
10.18653/v1/2022.findings-acl.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,933
inproceedings
huang-etal-2022-extract
Extract-Select: A Span Selection Framework for Nested Named Entity Recognition with Generative Adversarial Training
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.9/
Huang, Peixin and Zhao, Xiang and Hu, Minghao and Fang, Yang and Li, Xinyi and Xiao, Weidong
Findings of the Association for Computational Linguistics: ACL 2022
85--96
Nested named entity recognition (NER) is a task in which named entities may overlap with each other. Span-based approaches regard nested NER as a two-stage span enumeration and classification task, thus having the innate ability to handle this task. However, they face the problems of error propagation, ignorance of span boundary, difficulty in long entity recognition and requirement on large-scale annotated data. In this paper, we propose Extract-Select, a span selection framework for nested NER, to tackle these problems. Firstly, we introduce a span selection framework in which nested entities with different input categories would be separately extracted by the extractor, thus naturally avoiding error propagation in two-stage span-based approaches. In the inference phase, the trained extractor selects final results specific to the given entity category. Secondly, we propose a hybrid selection strategy in the extractor, which not only makes full use of span boundary but also improves the ability of long entity recognition. Thirdly, we design a discriminator to evaluate the extraction result, and train both extractor and discriminator with generative adversarial training (GAT). The use of GAT greatly alleviates the stress on the dataset size. Experimental results on four benchmark datasets demonstrate that Extract-Select outperforms competitive nested NER models, obtaining state-of-the-art results. The proposed model also performs well when less labeled data are given, proving the effectiveness of GAT.
null
null
10.18653/v1/2022.findings-acl.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,934
inproceedings
fang-etal-2022-controlled
Controlled Text Generation Using Dictionary Prior in Variational Autoencoders
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.10/
Fang, Xianghong and Li, Jian and Shang, Lifeng and Jiang, Xin and Liu, Qun and Yeung, Dit-Yan
Findings of the Association for Computational Linguistics: ACL 2022
97--111
While variational autoencoders (VAEs) have been widely applied in text generation tasks, they face two challenges: insufficient representation capacity and poor controllability. The former results from posterior collapse and restrictive assumptions, which impede better representation learning. The latter arises because the continuous latent variables in traditional formulations limit the interpretability and controllability of VAEs. In this paper, we propose Dictionary Prior (DPrior), a new data-driven prior that enjoys the merits of expressivity and controllability. To facilitate controlled text generation with DPrior, we propose to employ contrastive learning to separate the latent space into several parts. Extensive experiments on both language modeling and controlled text generation demonstrate the effectiveness of the proposed approach.
null
null
10.18653/v1/2022.findings-acl.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,935
inproceedings
yang-etal-2022-challenges
Challenges to Open-Domain Constituency Parsing
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.11/
Yang, Sen and Cui, Leyang and Ning, Ruoxi and Wu, Di and Zhang, Yue
Findings of the Association for Computational Linguistics: ACL 2022
112--127
Neural constituency parsers have reached practical performance on news-domain benchmarks. However, their ability to generalize to other domains remains weak. Existing findings on cross-domain constituency parsing are based on only a limited number of domains. To address this, we manually annotate a high-quality constituency treebank covering five domains. We analyze challenges to open-domain constituency parsing using a set of linguistic features on various strong constituency parsers. Primarily, we find that 1) BERT significantly increases parsers' cross-domain performance by reducing their sensitivity to domain-variant features; 2) compared with single metrics such as unigram distribution and OOV rate, challenges to open-domain constituency parsing arise from complex features, including cross-domain lexical and constituent structure variations.
null
null
10.18653/v1/2022.findings-acl.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,936
inproceedings
ye-etal-2022-going
Going {\textquotedblleft}Deeper{\textquotedblright}: Structured Sememe Prediction via Transformer with Tree Attention
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.12/
Ye, Yining and Qi, Fanchao and Liu, Zhiyuan and Sun, Maosong
Findings of the Association for Computational Linguistics: ACL 2022
128--138
Sememe knowledge bases (SKBs), which annotate words with the smallest semantic units (i.e., sememes), have proven beneficial to many NLP tasks. Building an SKB is very time-consuming and labor-intensive. Therefore, some studies have tried to automate the building process by predicting sememes for unannotated words. However, all existing sememe prediction studies ignore the hierarchical structures of sememes, which are important in the sememe-based semantic description system. In this work, we tackle the structured sememe prediction problem for the first time, which aims to predict a sememe tree with hierarchical structure rather than a flat set of sememes. We design a sememe tree generation model based on a Transformer with an adjusted attention mechanism, which shows superiority over the baselines in experiments. We also conduct a series of quantitative and qualitative analyses of the effectiveness of our model. All the code and data of this paper are available at \url{https://github.com/thunlp/STG}.
null
null
10.18653/v1/2022.findings-acl.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,937
inproceedings
zhou-etal-2022-table
Table-based Fact Verification with Self-adaptive Mixture of Experts
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.13/
Zhou, Yuxuan and Liu, Xien and Zhou, Kaiyin and Wu, Ji
Findings of the Association for Computational Linguistics: ACL 2022
139--149
The table-based fact verification task has recently gained widespread attention, yet remains very challenging. It inherently requires informative reasoning over natural language together with different numerical and logical reasoning on tables (e.g., count, superlative, comparative). To this end, we exploit mixture-of-experts and present in this paper a new method: the Self-adaptive Mixture-of-Experts Network (SaMoE). Specifically, we develop a mixture-of-experts neural network to recognize and execute different types of reasoning: the network is composed of multiple experts, each handling a specific part of the semantics for reasoning, while a management module decides the contribution of each expert network to the verification result. A self-adaptive method is developed to teach the management module to combine the results of different experts more efficiently without external knowledge. The experimental results illustrate that our framework achieves 85.1{\%} accuracy on the benchmark dataset TabFact, comparable with the previous state-of-the-art models. We hope our framework can serve as a new baseline for table-based verification. Our code is available at \url{https://github.com/THUMLP/SaMoE}.
null
null
10.18653/v1/2022.findings-acl.13
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,938
inproceedings
xiang-etal-2022-investigating
Investigating Data Variance in Evaluations of Automatic Machine Translation Metrics
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.14/
Xiang, Jiannan and Li, Huayang and Liu, Yahui and Liu, Lemao and Huang, Guoping and Lian, Defu and Shi, Shuming
Findings of the Association for Computational Linguistics: ACL 2022
150--157
Current practices in metric evaluation focus on a single dataset, e.g., the Newstest dataset in each year's WMT Metrics Shared Task. However, in this paper, we qualitatively and quantitatively show that the performance of metrics is sensitive to data: the ranking of metrics varies when the evaluation is conducted on different datasets. This paper then further investigates two potential hypotheses, i.e., insignificant data points and deviation from the i.i.d. assumption, which may account for the issue of data variance. In conclusion, our findings suggest that when evaluating automatic translation metrics, researchers should take data variance into account and be cautious about reporting results on unreliable datasets, because doing so may lead to results inconsistent with most of the other datasets.
null
null
10.18653/v1/2022.findings-acl.14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,939
inproceedings
qi-etal-2022-sememe
Sememe Prediction for {B}abel{N}et Synsets using Multilingual and Multimodal Information
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.15/
Qi, Fanchao and Lv, Chuancheng and Liu, Zhiyuan and Meng, Xiaojun and Sun, Maosong and Zheng, Hai-Tao
Findings of the Association for Computational Linguistics: ACL 2022
158--168
In linguistics, a sememe is defined as the minimum semantic unit of languages. Sememe knowledge bases (KBs), which are built by manually annotating words with sememes, have been successfully applied to various NLP tasks. However, existing sememe KBs only cover a few languages, which hinders the wide utilization of sememes. To address this issue, the task of sememe prediction for BabelNet synsets (SPBS) is presented, aiming to build a multilingual sememe KB based on BabelNet, a multilingual encyclopedic dictionary. By automatically predicting sememes for a BabelNet synset, the words in many languages in the synset would obtain sememe annotations simultaneously. However, previous SPBS methods have not taken full advantage of the abundant information in BabelNet. In this paper, we utilize the multilingual synonyms, multilingual glosses and images in BabelNet for SPBS. We design a multimodal information fusion model to encode and combine this information for sememe prediction. Experimental results show that our model substantially outperforms previous methods (by about 10 points in MAP and F1). All the code and data of this paper can be obtained at \url{https://github.com/thunlp/MSGI}.
null
null
10.18653/v1/2022.findings-acl.15
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,940
inproceedings
wang-etal-2022-query
Query and Extract: Refining Event Extraction as Type-oriented Binary Decoding
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.16/
Wang, Sijia and Yu, Mo and Chang, Shiyu and Sun, Lichao and Huang, Lifu
Findings of the Association for Computational Linguistics: ACL 2022
169--182
Event extraction is typically modeled as a multi-class classification problem where event types and argument roles are treated as atomic symbols. These approaches are usually limited to a set of pre-defined types. We propose a novel event extraction framework that uses event types and argument roles as natural language queries to extract candidate triggers and arguments from the input text. With the rich semantics in the queries, our framework benefits from the attention mechanisms to better capture the semantic correlation between the event types or argument roles and the input text. Furthermore, the query-and-extract formulation allows our approach to leverage all available event annotations from various ontologies as a unified model. Experiments on ACE and ERE demonstrate that our approach achieves state-of-the-art performance on each dataset and significantly outperforms existing methods on zero-shot event extraction.
null
null
10.18653/v1/2022.findings-acl.16
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,941
inproceedings
yao-etal-2022-leven
{LEVEN}: A Large-Scale {C}hinese Legal Event Detection Dataset
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.17/
Yao, Feng and Xiao, Chaojun and Wang, Xiaozhi and Liu, Zhiyuan and Hou, Lei and Tu, Cunchao and Li, Juanzi and Liu, Yun and Shen, Weixing and Sun, Maosong
Findings of the Association for Computational Linguistics: ACL 2022
183--201
Recognizing facts is the most fundamental step in making judgments, so detecting events in legal documents is important for legal case analysis tasks. However, existing Legal Event Detection (LED) datasets cover only an incomprehensive set of event types and have limited annotated data, which restricts the development of LED methods and their downstream applications. To alleviate these issues, we present LEVEN, a large-scale Chinese LEgal eVENt detection dataset, with 8,116 legal documents and 150,977 human-annotated event mentions in 108 event types. In addition to charge-related events, LEVEN also covers general events, which are critical for legal case understanding but neglected in existing LED datasets. To our knowledge, LEVEN is the largest LED dataset, with dozens of times the data scale of others, which should significantly promote the training and evaluation of LED methods. The results of extensive experiments indicate that LED is challenging and needs further effort. Moreover, we simply utilize legal events as side information to promote downstream applications. The method achieves average improvements of 2.2 points in precision for low-resource judgment prediction and 1.5 points in mean average precision for unsupervised case retrieval, which suggests the fundamentality of LED. The source code and dataset can be obtained from \url{https://github.com/thunlp/LEVEN}.
null
null
10.18653/v1/2022.findings-acl.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,942
inproceedings
wallace-etal-2022-analyzing
Analyzing Dynamic Adversarial Training Data in the Limit
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.18/
Wallace, Eric and Williams, Adina and Jia, Robin and Kiela, Douwe
Findings of the Association for Computational Linguistics: ACL 2022
202--217
To create models that are robust across a wide range of test inputs, training datasets should include diverse examples that span numerous phenomena. Dynamic adversarial data collection (DADC), where annotators craft examples that challenge continually improving models, holds promise as an approach for generating such diverse training sets. Prior work has shown that running DADC over 1-3 rounds can help models fix some error types, but it does not necessarily lead to better generalization beyond adversarial test data. We argue that running DADC over many rounds maximizes its training-time benefits, as the different rounds can together cover many of the task-relevant phenomena. We present the first study of longer-term DADC, where we collect 20 rounds of NLI examples for a small set of premise paragraphs, with both adversarial and non-adversarial approaches. Models trained on DADC examples make 26{\%} fewer errors on our expert-curated test set compared to models trained on non-adversarial data. Our analysis shows that DADC yields examples that are more difficult, more lexically and syntactically diverse, and contain fewer annotation artifacts compared to non-adversarial examples.
null
null
10.18653/v1/2022.findings-acl.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,943
inproceedings
young-etal-2022-abductionrules
{A}bduction{R}ules: Training Transformers to Explain Unexpected Inputs
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.19/
Young, Nathan and Bao, Qiming and Bensemann, Joshua and Witbrock, Michael
Findings of the Association for Computational Linguistics: ACL 2022
218--227
Transformers have recently been shown to be capable of reliably performing logical reasoning over facts and rules expressed in natural language, but abductive reasoning - inference to the best explanation of an unexpected observation - has been underexplored despite significant applications to scientific discovery, common-sense reasoning, and model interpretability. This paper presents AbductionRules, a group of natural language datasets designed to train and test generalisable abduction over natural-language knowledge bases. We use these datasets to finetune pretrained Transformers and discuss their performance, finding that our models learned generalisable abductive techniques but also learned to exploit the structure of our data. Finally, we discuss the viability of this approach to abductive reasoning and ways in which it may be improved in future work.
null
null
10.18653/v1/2022.findings-acl.19
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,944
inproceedings
mehrafarin-etal-2022-importance
On the Importance of Data Size in Probing Fine-tuned Models
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.20/
Mehrafarin, Houman and Rajaee, Sara and Pilehvar, Mohammad Taher
Findings of the Association for Computational Linguistics: ACL 2022
228--238
Several studies have investigated the reasons behind the effectiveness of fine-tuning, usually through the lens of probing. However, these studies often neglect the role of the size of the dataset on which the model is fine-tuned. In this paper, we highlight the importance of this factor and its undeniable role in probing performance. We show that the extent of encoded linguistic knowledge depends on the number of fine-tuning samples. The analysis also reveals that larger training data mainly affects higher layers, and that the extent of this change is a function of the number of iterations updating the model during fine-tuning rather than the diversity of the training samples. Finally, we show through a set of experiments that fine-tuning data size affects the recoverability of the changes made to the model's linguistic knowledge.
null
null
10.18653/v1/2022.findings-acl.20
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,945
inproceedings
nesterov-etal-2022-ruccon
{R}u{CC}o{N}: Clinical Concept Normalization in {R}ussian
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.21/
Nesterov, Alexandr and Zubkova, Galina and Miftahutdinov, Zulfat and Kokh, Vladimir and Tutubalina, Elena and Shelmanov, Artem and Alekseev, Anton and Avetisian, Manvel and Chertok, Andrey and Nikolenko, Sergey
Findings of the Association for Computational Linguistics: ACL 2022
239--245
We present RuCCoN, a new dataset for clinical concept normalization in Russian, manually annotated by medical professionals. It contains 16,028 entity mentions manually linked to 2,409 unique concepts from the Russian-language part of the UMLS ontology. We provide train/test splits for different settings (stratified, zero-shot, and CUI-less) and present strong baselines obtained with state-of-the-art models such as SapBERT. At present, Russian medical NLP is lacking in both datasets and trained models, and we view this work as an important step towards filling this gap. Our dataset and annotation guidelines are available at \url{https://github.com/AIRI-Institute/RuCCoN}.
null
null
10.18653/v1/2022.findings-acl.21
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,946
inproceedings
tan-etal-2022-sentence
A Sentence is Worth 128 Pseudo Tokens: A Semantic-Aware Contrastive Learning Framework for Sentence Embeddings
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.22/
Tan, Haochen and Shao, Wei and Wu, Han and Yang, Ke and Song, Linqi
Findings of the Association for Computational Linguistics: ACL 2022
246--256
Contrastive learning has shown great potential in unsupervised sentence embedding tasks, e.g., SimCSE (CITATION). However, these existing solutions are heavily affected by superficial features like the length of sentences or syntactic structures. In this paper, we propose a semantic-aware contrastive learning framework for sentence embeddings, termed Pseudo-Token BERT (PT-BERT), which is able to explore the pseudo-token space (i.e., latent semantic space) representation of a sentence while eliminating the impact of superficial features such as sentence length and syntax. Specifically, we introduce an additional pseudo-token embedding layer, independent of the BERT encoder, to map each sentence into a sequence of pseudo tokens of a fixed length. Leveraging these pseudo sequences, we are able to construct same-length positive and negative pairs based on the attention mechanism to perform contrastive learning. In addition, we utilize both gradient-updating and momentum-updating encoders to encode instances while dynamically maintaining an additional queue to store the sentence embedding representations, enhancing the encoder's learning performance for negative examples. Experiments show that our model outperforms the state-of-the-art baselines on six standard semantic textual similarity (STS) tasks. Furthermore, experiments on alignment and uniformity losses, as well as hard examples with different sentence lengths and syntax, consistently verify the effectiveness of our method.
null
null
10.18653/v1/2022.findings-acl.22
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,947
inproceedings
xie-etal-2022-eider
Eider: Empowering Document-level Relation Extraction with Efficient Evidence Extraction and Inference-stage Fusion
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.23/
Xie, Yiqing and Shen, Jiaming and Li, Sha and Mao, Yuning and Han, Jiawei
Findings of the Association for Computational Linguistics: ACL 2022
257--268
Document-level relation extraction (DocRE) aims to extract semantic relations among entity pairs in a document. Typical DocRE methods blindly take the full document as input, while a subset of the sentences in the document, referred to as the evidence, is often sufficient for humans to predict the relation of an entity pair. In this paper, we propose an evidence-enhanced framework, Eider, that empowers DocRE by efficiently extracting evidence and effectively fusing the extracted evidence during inference. We first jointly train an RE model with a lightweight evidence extraction model, which is efficient in both memory and runtime. Empirically, even training the evidence model on silver labels constructed by our heuristic rules can lead to better RE performance. We further design a simple yet effective inference process that makes RE predictions on both the extracted evidence and the full document, then fuses the predictions through a blending layer. This allows Eider to focus on important sentences while still having access to the complete information in the document. Extensive experiments show that Eider outperforms state-of-the-art methods on three benchmark datasets (e.g., by 1.37/1.26 Ign F1/F1 on DocRED).
null
null
10.18653/v1/2022.findings-acl.23
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,948
inproceedings
maurya-desarkar-2022-meta
Meta-X$_{NLG}$: A Meta-Learning Approach Based on Language Clustering for Zero-Shot Cross-Lingual Transfer and Generation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.24/
Maurya, Kaushal and Desarkar, Maunendra
Findings of the Association for Computational Linguistics: ACL 2022
269--284
Recently, the NLP community has witnessed rapid advances in multilingual and cross-lingual transfer research, where supervision is transferred from high-resource languages (HRLs) to low-resource languages (LRLs). However, cross-lingual transfer is not uniform across languages, particularly in the zero-shot setting. To address this, one promising research direction is to learn shareable structures across multiple tasks with limited annotated data. Downstream multilingual applications may benefit from such a learning setup, as most of the languages across the globe are low-resource and share some structures with other languages. In this paper, we propose a novel meta-learning framework (called Meta-X$_{NLG}$) to learn shareable structures from typologically diverse languages based on meta-learning and language clustering. This is a step towards uniform cross-lingual transfer for unseen languages. We first cluster the languages based on language representations and identify the centroid language of each cluster. Then, a meta-learning algorithm is trained with all centroid languages and evaluated on the other languages in the zero-shot setting. We demonstrate the effectiveness of this modeling on two NLG tasks (Abstractive Text Summarization and Question Generation), 5 popular datasets and 30 typologically diverse languages. Consistent improvements over strong baselines demonstrate the efficacy of the proposed framework. The careful design of the model makes this end-to-end NLG setup less vulnerable to the accidental translation problem, which is a prominent concern in zero-shot cross-lingual NLG tasks.
null
null
10.18653/v1/2022.findings-acl.24
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,949
inproceedings
cheng-zhang-2022-mr
{MR}-{P}: A Parallel Decoding Algorithm for Iterative Refinement Non-Autoregressive Translation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.25/
Cheng, Hao and Zhang, Zhihua
Findings of the Association for Computational Linguistics: ACL 2022
285--296
Non-autoregressive translation (NAT) predicts all the target tokens in parallel and significantly speeds up the inference process. The Conditional Masked Language Model (CMLM) is a strong baseline for NAT. It decodes with the Mask-Predict algorithm, which iteratively refines the output. Most works on CMLM focus on the model structure and the training objective. However, the decoding algorithm is equally important. We propose a simple, effective, and easy-to-implement decoding algorithm that we call MaskRepeat-Predict (MR-P). The MR-P algorithm gives higher priority to consecutive repeated tokens when selecting tokens to mask for the next iteration and stops the iteration after the target tokens converge. We conduct extensive experiments on six translation directions with varying data sizes. The results show that MR-P significantly improves performance with the same model parameters. Specifically, we achieve a BLEU increase of 1.39 points on the WMT'14 En-De translation task.
null
null
10.18653/v1/2022.findings-acl.25
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,950
inproceedings
huang-etal-2022-open
Open Relation Modeling: Learning to Define Relations between Entities
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.26/
Huang, Jie and Chang, Kevin and Xiong, Jinjun and Hwu, Wen-mei
Findings of the Association for Computational Linguistics: ACL 2022
297--308
Relations between entities can be represented by different instances, e.g., a sentence containing both entities or a fact in a Knowledge Graph (KG). However, these instances may not capture the general relations between entities well, may be difficult for humans to understand, or may not even be found due to the incompleteness of the knowledge source. In this paper, we introduce the Open Relation Modeling problem: given two entities, generate a coherent sentence describing the relation between them. To solve this problem, we propose to teach machines to generate definition-like relation descriptions by letting them learn from defining entities. Specifically, we fine-tune Pre-trained Language Models (PLMs) to produce definitions conditioned on extracted entity pairs. To help PLMs reason between entities and to provide additional relational knowledge for open relation modeling, we incorporate reasoning paths in KGs and include a reasoning path selection mechanism. Experimental results show that our model can generate concise but informative relation descriptions that capture the representative characteristics of entities.
null
null
10.18653/v1/2022.findings-acl.26
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,951
inproceedings
zhang-etal-2022-slot
A Slot Is Not Built in One Utterance: Spoken Language Dialogs with Sub-Slots
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.27/
Zhang, Sai and Hu, Yuwei and Wu, Yuchuan and Wu, Jiaman and Li, Yongbin and Sun, Jian and Yuan, Caixia and Wang, Xiaojie
Findings of the Association for Computational Linguistics: ACL 2022
309--321
A slot value might be provided segment by segment over multiple turns of interaction in a dialog, especially for important information such as phone numbers and names. This is a common phenomenon in daily life, but little attention has been paid to it in previous work. To fill the gap, this paper defines a new task named Sub-Slot based Task-Oriented Dialog (SSTOD) and builds a Chinese dialog dataset SSD for boosting research on SSTOD. The dataset includes a total of 40K dialogs and 500K utterances from four different domains: Chinese names, phone numbers, ID numbers and license plate numbers. The data is well annotated with sub-slot values, slot values, dialog states and actions. We find some new linguistic phenomena and interaction patterns in SSTOD, which raise critical challenges for building dialog agents for the task. We test three state-of-the-art dialog models on SSTOD and find that none of them handles the task well on any of the four domains. We also investigate an improved model that incorporates slot knowledge in a plug-in manner. More work remains to be done to meet the new challenges raised by SSTOD, which widely exists in real-life applications. The dataset and code are publicly available via \url{https://github.com/shunjiu/SSTOD}.
null
null
10.18653/v1/2022.findings-acl.27
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,952
inproceedings
mo-etal-2022-towards
Towards Transparent Interactive Semantic Parsing via Step-by-Step Correction
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.28/
Mo, Lingbo and Lewis, Ashley and Sun, Huan and White, Michael
Findings of the Association for Computational Linguistics: ACL 2022
322--342
Existing studies on semantic parsing focus on mapping a natural-language utterance to a logical form (LF) in one turn. However, because natural language may contain ambiguity and variability, this is a difficult challenge. In this work, we investigate an interactive semantic parsing framework that explains the predicted LF step by step in natural language and enables the user to make corrections through natural-language feedback for individual steps. We focus on question answering over knowledge bases (KBQA) as an instantiation of our framework, aiming to increase the transparency of the parsing process and help the user trust the final answer. We construct INSPIRED, a crowdsourced dialogue dataset derived from the ComplexWebQuestions dataset. Our experiments show that this framework has the potential to greatly improve overall parse accuracy. Furthermore, we develop a pipeline for dialogue simulation to evaluate our framework w.r.t. a variety of state-of-the-art KBQA models without further crowdsourcing effort. The results demonstrate that our framework promises to be effective across such models.
null
null
10.18653/v1/2022.findings-acl.28
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,953
inproceedings
li-etal-2022-miner
{MINER}: Multi-Interest Matching Network for News Recommendation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.29/
Li, Jian and Zhu, Jieming and Bi, Qiwei and Cai, Guohao and Shang, Lifeng and Dong, Zhenhua and Jiang, Xin and Liu, Qun
Findings of the Association for Computational Linguistics: ACL 2022
343--352
Personalized news recommendation is an essential technique to help users find news of interest. Accurately matching a user's interests with candidate news is the key to news recommendation. Most existing methods learn a single user embedding from the user's historical behaviors to represent their reading interest. However, user interest is usually diverse and may not be adequately modeled by a single user embedding. In this paper, we propose a poly attention scheme to learn multiple interest vectors for each user, which encodes the different aspects of user interest. We further propose a disagreement regularization to make the learned interest vectors more diverse. Moreover, we design a category-aware attention weighting strategy that incorporates news category information as explicit interest signals into the attention mechanism. Extensive experiments on the MIND news recommendation benchmark demonstrate that our approach significantly outperforms existing state-of-the-art methods.
null
null
10.18653/v1/2022.findings-acl.29
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,954
inproceedings
wu-etal-2022-ksam
{KSAM}: Infusing Multi-Source Knowledge into Dialogue Generation via Knowledge Source Aware Multi-Head Decoding
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.30/
Wu, Sixing and Li, Ying and Zhang, Dawei and Wu, Zhonghai
Findings of the Association for Computational Linguistics: ACL 2022
353--363
Knowledge-enhanced methods have bridged the gap between human beings and machines in generating dialogue responses. However, most previous works seek knowledge from only a single source, and thus often fail to obtain available knowledge because of the insufficient coverage of any single knowledge source. To this end, infusing knowledge from multiple sources has become a trend. This paper proposes a novel approach, Knowledge Source Aware Multi-Head Decoding (KSAM), to infuse multi-source knowledge into dialogue generation more efficiently. Rather than following the traditional single-decoder paradigm, KSAM uses multiple independent source-aware decoder heads to alleviate three challenging problems in infusing multi-source knowledge, namely, the diversity among different knowledge sources, the indefinite knowledge alignment issue, and the insufficient flexibility/scalability in knowledge usage. Experiments on a Chinese multi-source knowledge-aligned dataset demonstrate the superior performance of KSAM against various competitive approaches.
null
null
10.18653/v1/2022.findings-acl.30
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,955
inproceedings
bergman-diab-2022-towards
Towards Responsible Natural Language Annotation for the Varieties of {A}rabic
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.31/
Bergman, A. and Diab, Mona
Findings of the Association for Computational Linguistics: ACL 2022
364--371
When building NLP models, there is a tendency to aim for broader coverage, often overlooking cultural and (socio)linguistic nuance. In this position paper, we make the case for care and attention to such nuances, particularly in dataset annotation, as well as the inclusion of cultural and linguistic expertise in the process. We present a playbook for responsible dataset creation for polyglossic, multidialectal languages. This work is informed by a study on Arabic annotation of social media content.
null
null
10.18653/v1/2022.findings-acl.31
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,956
inproceedings
bose-etal-2022-dynamically
Dynamically Refined Regularization for Improving Cross-corpora Hate Speech Detection
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.32/
Bose, Tulika and Aletras, Nikolaos and Illina, Irina and Fohr, Dominique
Findings of the Association for Computational Linguistics: ACL 2022
372--382
Hate speech classifiers exhibit substantial performance degradation when evaluated on datasets different from the source. This is due to learning spurious correlations between words that are not necessarily relevant to hateful language, and hate speech labels from the training corpus. Previous work has attempted to mitigate this problem by regularizing specific terms from pre-defined static dictionaries. While this has been demonstrated to improve the generalizability of classifiers, the coverage of such methods is limited and the dictionaries require regular manual updates from human experts. In this paper, we propose to automatically identify and reduce spurious correlations using attribution methods with dynamic refinement of the list of terms that need to be regularized during training. Our approach is flexible and improves the cross-corpora performance over previous work independently and in combination with pre-defined dictionaries.
null
null
10.18653/v1/2022.findings-acl.32
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,957
inproceedings
tuan-etal-2022-towards
Towards Large-Scale Interpretable Knowledge Graph Reasoning for Dialogue Systems
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.33/
Tuan, Yi-Lin and Beygi, Sajjad and Fazel-Zarandi, Maryam and Gao, Qiaozi and Cervone, Alessandra and Wang, William Yang
Findings of the Association for Computational Linguistics: ACL 2022
383--395
Users interacting with voice assistants today need to phrase their requests in a very specific manner to elicit an appropriate response. This limits the user experience, and is partly due to the lack of reasoning capabilities of dialogue platforms and the hand-crafted rules that require extensive labor. One possible solution to improve the user experience and relieve the manual efforts of designers is to build an end-to-end dialogue system that can reason on its own while perceiving users' utterances. In this work, we propose a novel method to incorporate knowledge reasoning capability into dialogue systems in a more scalable and generalizable manner. Our proposed method allows a single transformer model to directly walk on a large-scale knowledge graph to generate responses. To the best of our knowledge, this is the first work to have transformer models generate responses by reasoning over differentiable knowledge graphs. We investigate the reasoning abilities of the proposed method on both task-oriented and domain-specific chit-chat dialogues. Empirical results show that this method can effectively and efficiently incorporate a knowledge graph into a dialogue system with fully interpretable reasoning paths.
null
null
10.18653/v1/2022.findings-acl.33
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,958
inproceedings
zhang-etal-2022-mderank
{MDER}ank: A Masked Document Embedding Rank Approach for Unsupervised Keyphrase Extraction
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.34/
Zhang, Linhan and Chen, Qian and Wang, Wen and Deng, Chong and Zhang, ShiLiang and Li, Bing and Wang, Wei and Cao, Xin
Findings of the Association for Computational Linguistics: ACL 2022
396--409
Keyphrase extraction (KPE) automatically extracts phrases in a document that provide a concise summary of the core content, which benefits downstream information retrieval and NLP tasks. Previous state-of-the-art methods select candidate keyphrases based on the similarity between learned representations of the candidates and the document. They suffer performance degradation on long documents due to the discrepancy in sequence lengths, which causes a mismatch between the representations of keyphrase candidates and the document. In this work, we propose a novel unsupervised embedding-based KPE approach, Masked Document Embedding Rank (MDERank), to address this problem by leveraging a mask strategy and ranking candidates by the similarity between the embeddings of the source document and the masked document. We further develop a KPE-oriented BERT (KPEBERT) model with a novel self-supervised contrastive learning method, which is more compatible with MDERank than vanilla BERT. Comprehensive evaluations on six KPE benchmarks demonstrate that the proposed MDERank outperforms the state-of-the-art unsupervised KPE approach by an average improvement of 1.80 $F1@15$. MDERank further benefits from KPEBERT and overall achieves an average improvement of 3.53 $F1@15$ over SIFRank.
null
null
10.18653/v1/2022.findings-acl.34
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,959
inproceedings
xiang-etal-2022-visualizing
Visualizing the Relationship Between Encoded Linguistic Information and Task Performance
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.35/
Xiang, Jiannan and Li, Huayang and Lian, Defu and Huang, Guoping and Watanabe, Taro and Liu, Lemao
Findings of the Association for Computational Linguistics: ACL 2022
410--422
Probing is a popular approach for analyzing whether linguistic information can be captured by a well-trained deep neural model, but it is hard to answer how changes in the encoded linguistic information will affect task performance. To this end, we study the dynamic relationship between the encoded linguistic information and task performance from the viewpoint of Pareto optimality. The key idea is to obtain a set of models that are Pareto-optimal in terms of both objectives. From this viewpoint, we propose a method to optimize for Pareto-optimal models by formalizing the task as a multi-objective optimization problem. We conduct experiments on two popular NLP tasks, i.e., machine translation and language modeling, and investigate the relationship between several kinds of linguistic information and task performance. Experimental results demonstrate that the proposed method is better than a baseline method. Our empirical findings suggest that some syntactic information is helpful for NLP tasks, whereas encoding more syntactic information does not necessarily lead to better performance, because the model architecture is also an important factor.
null
null
10.18653/v1/2022.findings-acl.35
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,960
inproceedings
hua-wang-2022-efficient
Efficient Argument Structure Extraction with Transfer Learning and Active Learning
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.36/
Hua, Xinyu and Wang, Lu
Findings of the Association for Computational Linguistics: ACL 2022
423--437
The automation of extracting argument structures faces two challenges: (1) encoding long-term contexts to facilitate comprehensive understanding, and (2) improving data efficiency, since constructing high-quality argument structures is time-consuming. In this work, we propose a novel context-aware Transformer-based argument structure prediction model which, on five different domains, significantly outperforms models that rely on features or encode only limited contexts. To tackle the difficulty of data annotation, we examine two complementary methods: (i) transfer learning to leverage existing annotated data to boost model performance in a new target domain, and (ii) active learning to strategically identify a small number of samples for annotation. We further propose model-independent sample acquisition strategies, which can be generalized to diverse domains. With extensive experiments, we show that our simple-yet-effective acquisition strategies yield competitive results against three strong comparisons. Combined with transfer learning, a substantial F1 score boost (5-25) can be further achieved during the early iterations of active learning across domains.
null
null
10.18653/v1/2022.findings-acl.36
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,961
inproceedings
lee-etal-2022-plug
Plug-and-Play Adaptation for Continuously-updated {QA}
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.37/
Lee, Kyungjae and Han, Wookje and Hwang, Seung-won and Lee, Hwaran and Park, Joonsuk and Lee, Sang-Woo
Findings of the Association for Computational Linguistics: ACL 2022
438--447
Language models (LMs) have shown great potential as implicit knowledge bases (KBs). For their practical use, the knowledge in LMs needs to be updated periodically. However, existing tasks for assessing LMs' efficacy as KBs do not adequately consider multiple large-scale updates. To this end, we first propose a novel task, Continuously-updated QA (CuQA), in which multiple large-scale updates are made to LMs, and performance is measured with respect to success in adding and updating knowledge while retaining existing knowledge. We then present LMs with plug-in modules that effectively handle the updates. Experiments conducted on the zsRE QA and NQ datasets show that our method outperforms existing approaches. We find that our method is 4x more effective than a fine-tuning baseline in terms of the updates/forgets ratio.
null
null
10.18653/v1/2022.findings-acl.37
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,962
inproceedings
qin-song-2022-reinforced
Reinforced Cross-modal Alignment for Radiology Report Generation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.38/
Qin, Han and Song, Yan
Findings of the Association for Computational Linguistics: ACL 2022
448--458
Medical images are widely used in clinical decision-making, and writing radiology reports is a potential application that can be enhanced by automatic solutions to alleviate physicians' workload. In general, radiology report generation is an image-to-text task, where cross-modal mappings between images and texts play an important role in generating high-quality reports. Although previous studies attempt to facilitate this alignment via the co-attention mechanism under supervised settings, they suffer from a lack of valid and accurate correspondences because such alignment is not annotated. In this paper, we propose an approach with reinforcement learning (RL) over a cross-modal memory (CMM) to better align visual and textual features for radiology report generation. In detail, a shared memory is used to record the mappings between visual and textual information, and the proposed reinforced algorithm is performed to learn the signal from the reports to guide the cross-modal alignment, even though such reports are not directly related to how images and texts are mapped. Experimental results on two English radiology report datasets, i.e., IU X-Ray and MIMIC-CXR, show the effectiveness of our approach, where state-of-the-art results are achieved. We further conduct a human evaluation and a case study, which confirm the validity of the reinforced algorithm in our approach.
null
null
10.18653/v1/2022.findings-acl.38
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,963
inproceedings
li-etal-2022-works
What Works and Doesn't Work, A Deep Decoder for Neural Machine Translation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.39/
Li, Zuchao and Wang, Yiran and Utiyama, Masao and Sumita, Eiichiro and Zhao, Hai and Watanabe, Taro
Findings of the Association for Computational Linguistics: ACL 2022
459--471
Deep learning has demonstrated performance advantages in a wide range of natural language processing tasks, including neural machine translation (NMT). Transformer NMT models are typically strengthened by deeper encoder layers, but deepening their decoder layers usually results in failure. In this paper, we first identify the cause of the failure of the deep decoder in the Transformer model. Inspired by this discovery, we then propose approaches to improving it, with respect to model structure and model training, to make the deep decoder practical in NMT. Specifically, with respect to model structure, we propose a cross-attention drop mechanism to allow the decoder layers to perform their own different roles, to reduce the difficulty of deep-decoder learning. For model training, we propose a collapse reducing training approach to improve the stability and effectiveness of deep-decoder training. We experimentally evaluated our proposed Transformer NMT model structure modification and novel training methods on several popular machine translation benchmarks. The results showed that deepening the NMT model by increasing the number of decoder layers successfully prevented the deepened decoder from degrading to an unconditional language model. In contrast to prior work on deepening an NMT model on the encoder, our method can deepen the model on both the encoder and decoder at the same time, resulting in a deeper model and improved performance.
null
null
10.18653/v1/2022.findings-acl.39
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,964