The table below is a dataset preview of BibTeX-style bibliography records (the rows shown are papers from the Proceedings of the 60th Annual Meeting of the ACL, Volume 1: Long Papers, 2022). It has 38 columns; column types and value statistics as reported by the dataset viewer:

- `entry_type`: string, 4 distinct values
- `citation_key`: string, length 10–110
- `title`: string, length 6–276, nullable
- `editor`: string, 723 distinct values
- `month`: string, 69 distinct values
- `year`: string date, 1963-01-01 through 2022-01-01
- `address`: string, 202 distinct values
- `publisher`: string, 41 distinct values
- `url`: string, length 34–62
- `author`: string, length 6–2.07k, nullable
- `booktitle`: string, 861 distinct values
- `pages`: string, length 1–12, nullable
- `abstract`: string, length 302–2.4k
- `journal`: string, 5 distinct values
- `volume`: string, 24 distinct values
- `doi`: string, length 20–39, nullable
- `n`: string, 3 distinct values
- `wer`: string, 1 distinct value
- `uas`: always null
- `language`: string, 3 distinct values
- `isbn`: string, 34 distinct values
- `recall`: always null
- `number`: string, 8 distinct values
- `a`, `b`, `c`, `k`: always null
- `f1`: string, 4 distinct values
- `r`: string, 2 distinct values
- `mci`: string, 1 distinct value
- `p`: string, 2 distinct values
- `sd`: string, 1 distinct value
- `female`: string, 0 distinct values
- `m`: string, 0 distinct values
- `food`: string, 1 distinct value
- `f`: string, 1 distinct value
- `note`: string, 20 distinct values
- `__index_level_0__`: int64, range 22k–106k

entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | wang-etal-2022-promda | {P}rom{DA}: Prompt-based Data Augmentation for Low-Resource {NLU} Tasks | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.292/ | Wang, Yufei and Xu, Can and Sun, Qingfeng and Hu, Huang and Tao, Chongyang and Geng, Xiubo and Jiang, Daxin | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4242--4255 | This paper focuses on data augmentation for low-resource Natural Language Understanding (NLU) tasks. We propose the Prompt-based Data Augmentation model (PromDA), which trains only a small-scale Soft Prompt (i.e., a set of trainable vectors) in frozen Pre-trained Language Models (PLMs). This avoids human effort in collecting unlabeled in-domain data and maintains the quality of generated synthetic data. In addition, PromDA generates synthetic data via two different views and filters out low-quality data using NLU models. Experiments on four benchmarks show that synthetic data produced by PromDA successfully boost the performance of NLU models, which consistently outperform several competitive baseline models, including a state-of-the-art semi-supervised model using unlabeled in-domain data. The synthetic data from PromDA are also complementary to unlabeled in-domain data; the NLU models can be further improved when the two are combined for training. | null | null | 10.18653/v1/2022.acl-long.292 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,958 |
inproceedings | zheng-lapata-2022-disentangled | Disentangled Sequence to Sequence Learning for Compositional Generalization | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.293/ | Zheng, Hao and Lapata, Mirella | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4256--4268 | There is mounting evidence that existing neural network models, in particular the very popular sequence-to-sequence architecture, struggle to systematically generalize to unseen compositions of seen components. We demonstrate that one of the reasons hindering compositional generalization relates to representations being entangled. We propose an extension to sequence-to-sequence models which encourage disentanglement by adaptively re-encoding (at each time step) the source input. Specifically, we condition the source representations on the newly decoded target context which makes it easier for the encoder to exploit specialized information for each prediction rather than capturing it all in a single forward pass. Experimental results on semantic parsing and machine translation empirically show that our proposal delivers more disentangled representations and better generalization. | null | null | 10.18653/v1/2022.acl-long.293 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,959 |
inproceedings | yu-etal-2022-rst | {RST} Discourse Parsing with Second-Stage {EDU}-Level Pre-training | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.294/ | Yu, Nan and Zhang, Meishan and Fu, Guohong and Zhang, Min | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4269--4280 | Pre-trained language models (PLMs) have shown great potential in natural language processing (NLP), including rhetorical structure theory (RST) discourse parsing. Current PLMs are obtained by sentence-level pre-training, which differs from the basic processing unit, i.e., the elementary discourse unit (EDU). To this end, we propose a second-stage EDU-level pre-training approach in this work, which presents two novel tasks to learn effective EDU representations continually based on well pre-trained language models. Concretely, the two tasks are (1) next EDU prediction (NEP) and (2) discourse marker prediction (DMP). We take a state-of-the-art transition-based neural parser as baseline, and adapt it with a light bi-gram EDU modification to effectively explore the EDU-level pre-trained EDU representations. Experimental results on a benchmark dataset show that our method is highly effective, leading to a 2.1-point improvement in F1-score. All code and pre-trained models will be released publicly to facilitate future studies. | null | null | 10.18653/v1/2022.acl-long.294 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,960 |
inproceedings | wang-etal-2022-simkgc | {S}im{KGC}: Simple Contrastive Knowledge Graph Completion with Pre-trained Language Models | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.295/ | Wang, Liang and Zhao, Wei and Wei, Zhuoyu and Liu, Jingming | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4281--4294 | Knowledge graph completion (KGC) aims to reason over known facts and infer the missing links. Text-based methods such as KGBERT (Yao et al., 2019) learn entity representations from natural language descriptions, and have the potential for inductive KGC. However, the performance of text-based methods still largely lag behind graph embedding-based methods like TransE (Bordes et al., 2013) and RotatE (Sun et al., 2019b). In this paper, we identify that the key issue is efficient contrastive learning. To improve the learning efficiency, we introduce three types of negatives: in-batch negatives, pre-batch negatives, and self-negatives which act as a simple form of hard negatives. Combined with InfoNCE loss, our proposed model SimKGC can substantially outperform embedding-based methods on several benchmark datasets. In terms of mean reciprocal rank (MRR), we advance the state-of-the-art by +19{\%} on WN18RR, +6.8{\%} on the Wikidata5M transductive setting, and +22{\%} on the Wikidata5M inductive setting. Thorough analyses are conducted to gain insights into each component. Our code is available at \url{https://github.com/intfloat/SimKGC} . | null | null | 10.18653/v1/2022.acl-long.295 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,961 |
inproceedings | eberle-etal-2022-transformer | Do Transformer Models Show Similar Attention Patterns to Task-Specific Human Gaze? | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.296/ | Eberle, Oliver and Brandl, Stephanie and Pilot, Jonas and S{\o}gaard, Anders | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4295--4309 | Learned self-attention functions in state-of-the-art NLP models often correlate with human attention. We investigate whether self-attention in large-scale pre-trained language models is as predictive of human eye fixation patterns during task-reading as classical cognitive models of human attention. We compare attention functions across two task-specific reading datasets for sentiment analysis and relation extraction. We find the predictiveness of large-scale pre-trained self-attention for human attention depends on {\textquoteleft}what is in the tail', e.g., the syntactic nature of rare contexts. Further, we observe that task-specific fine-tuning does not increase the correlation with human task-specific reading. Through an input reduction experiment we give complementary insights on the sparsity and fidelity trade-off, showing that lower-entropy attention vectors are more faithful. | null | null | 10.18653/v1/2022.acl-long.296 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,962 |
inproceedings | chalkidis-etal-2022-lexglue | {L}ex{GLUE}: A Benchmark Dataset for Legal Language Understanding in {E}nglish | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.297/ | Chalkidis, Ilias and Jana, Abhik and Hartung, Dirk and Bommarito, Michael and Androutsopoulos, Ion and Katz, Daniel and Aletras, Nikolaos | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4310--4330 | Laws and their interpretations, legal arguments and agreements are typically expressed in writing, leading to the production of vast corpora of legal text. Their analysis, which is at the center of legal practice, becomes increasingly elaborate as these collections grow in size. Natural language understanding (NLU) technologies can be a valuable tool to support legal practitioners in these endeavors. Their usefulness, however, largely depends on whether current state-of-the-art models can generalize across various tasks in the legal domain. To answer this currently open question, we introduce the Legal General Language Understanding Evaluation (LexGLUE) benchmark, a collection of datasets for evaluating model performance across a diverse set of legal NLU tasks in a standardized way. We also provide an evaluation and analysis of several generic and legal-oriented models demonstrating that the latter consistently offer performance improvements across multiple tasks. | null | null | 10.18653/v1/2022.acl-long.297 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,963 |
inproceedings | campolungo-etal-2022-dibimt | {D}i{B}i{MT}: A Novel Benchmark for Measuring {W}ord {S}ense {D}isambiguation Biases in {M}achine {T}ranslation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.298/ | Campolungo, Niccol{\`o} and Martelli, Federico and Saina, Francesco and Navigli, Roberto | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4331--4352 | Lexical ambiguity poses one of the greatest challenges in the field of Machine Translation. Over the last few decades, multiple efforts have been undertaken to investigate incorrect translations caused by the polysemous nature of words. Within this body of research, some studies have posited that models pick up semantic biases existing in the training data, thus producing translation errors. In this paper, we present DiBiMT, the first entirely manually-curated evaluation benchmark which enables an extensive study of semantic biases in Machine Translation of nominal and verbal words in five different language combinations, namely, English and one or other of the following languages: Chinese, German, Italian, Russian and Spanish. Furthermore, we test state-of-the-art Machine Translation systems, both commercial and non-commercial ones, against our new test bed and provide a thorough statistical and linguistic analysis of the results. We release DiBiMT at \url{https://nlp.uniroma1.it/dibimt} as a closed benchmark with a public leaderboard. | null | null | 10.18653/v1/2022.acl-long.298 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,964 |
inproceedings | li-etal-2022-improving | Improving Word Translation via Two-Stage Contrastive Learning | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.299/ | Li, Yaoyiran and Liu, Fangyu and Collier, Nigel and Korhonen, Anna and Vuli{\'c}, Ivan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4353--4374 | Word translation or bilingual lexicon induction (BLI) is a key cross-lingual task, aiming to bridge the lexical gap between different languages. In this work, we propose a robust and effective two-stage contrastive learning framework for the BLI task. At Stage C1, we propose to refine standard cross-lingual linear maps between static word embeddings (WEs) via a contrastive learning objective; we also show how to integrate it into the self-learning procedure for even more refined cross-lingual maps. In Stage C2, we conduct BLI-oriented contrastive fine-tuning of mBERT, unlocking its word translation capability. We also show that static WEs induced from the {\textquoteleft}C2-tuned' mBERT complement static WEs from Stage C1. Comprehensive experiments on standard BLI datasets for diverse languages and different experimental setups demonstrate substantial gains achieved by our framework. While the BLI method from Stage C1 already yields substantial gains over all state-of-the-art BLI methods in our comparison, even stronger improvements are met with the full two-stage framework: e.g., we report gains for 112/112 BLI setups, spanning 28 language pairs. | null | null | 10.18653/v1/2022.acl-long.299 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,965 |
inproceedings | liang-etal-2022-scheduled | Scheduled Multi-task Learning for Neural Chat Translation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.300/ | Liang, Yunlong and Meng, Fandong and Xu, Jinan and Chen, Yufeng and Zhou, Jie | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4375--4388 | Neural Chat Translation (NCT) aims to translate conversational text into different languages. Existing methods mainly focus on modeling the bilingual dialogue characteristics (e.g., coherence) to improve chat translation via multi-task learning on small-scale chat translation data. Although the NCT models have achieved impressive success, it is still far from satisfactory due to insufficient chat translation data and simple joint training manners. To address the above issues, we propose a scheduled multi-task learning framework for NCT. Specifically, we devise a three-stage training framework to incorporate the large-scale in-domain chat translation data into training by adding a second pre-training stage between the original pre-training and fine-tuning stages. Further, we investigate where and how to schedule the dialogue-related auxiliary tasks in multiple training stages to effectively enhance the main chat translation task. Extensive experiments on four language directions (English-Chinese and English-German) verify the effectiveness and superiority of the proposed approach. Additionally, we will make the large-scale in-domain paired bilingual dialogue dataset publicly available for the research community. | null | null | 10.18653/v1/2022.acl-long.300 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,966 |
inproceedings | chalkidis-etal-2022-fairlex | {F}air{L}ex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.301/ | Chalkidis, Ilias and Pasini, Tommaso and Zhang, Sheng and Tomada, Letizia and Schwemer, Sebastian and S{\o}gaard, Anders | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4389--4406 | We present a benchmark suite of four datasets for evaluating the fairness of pre-trained language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Switzerland, and China), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP. | null | null | 10.18653/v1/2022.acl-long.301 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,967 |
inproceedings | song-etal-2022-towards | Towards Abstractive Grounded Summarization of Podcast Transcripts | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.302/ | Song, Kaiqiang and Li, Chen and Wang, Xiaoyang and Yu, Dong and Liu, Fei | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4407--4418 | Podcasts have shown a recent rise in popularity. Summarization of podcasts is of practical benefit to both content providers and consumers. It helps people quickly decide whether they will listen to a podcast and/or reduces the cognitive load of content providers to write summaries. Nevertheless, podcast summarization faces significant challenges including factual inconsistencies of summaries with respect to the inputs. The problem is exacerbated by speech disfluencies and recognition errors in transcripts of spoken language. In this paper, we explore a novel abstractive summarization method to alleviate these issues. Our approach learns to produce an abstractive summary while grounding summary segments in specific regions of the transcript to allow for full inspection of summary details. We conduct a series of analyses of the proposed approach on a large podcast dataset and show that the approach can achieve promising results. Grounded summaries bring clear benefits in locating the summary and transcript segments that contain inconsistent information, and hence improve summarization quality in terms of automatic and human evaluation. | null | null | 10.18653/v1/2022.acl-long.302 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,968 |
inproceedings | loukas-etal-2022-finer | {F}i{NER}: Financial Numeric Entity Recognition for {XBRL} Tagging | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.303/ | Loukas, Lefteris and Fergadiotis, Manos and Chalkidis, Ilias and Spyropoulou, Eirini and Malakasiotis, Prodromos and Androutsopoulos, Ion and Paliouras, Georgios | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4419--4431 | Publicly traded companies are required to submit periodic reports with eXtensible Business Reporting Language (XBRL) word-level tags. Manually tagging the reports is tedious and costly. We, therefore, introduce XBRL tagging as a new entity extraction task for the financial domain and release FiNER-139, a dataset of 1.1M sentences with gold XBRL tags. Unlike typical entity extraction datasets, FiNER-139 uses a much larger label set of 139 entity types. Most annotated tokens are numeric, with the correct tag per token depending mostly on context, rather than the token itself. We show that subword fragmentation of numeric expressions harms BERT's performance, allowing word-level BILSTMs to perform better. To improve BERT's performance, we propose two simple and effective solutions that replace numeric expressions with pseudo-tokens reflecting original token shapes and numeric magnitudes. We also experiment with FIN-BERT, an existing BERT model for the financial domain, and release our own BERT (SEC-BERT), pre-trained on financial filings, which performs best. Through data and error analysis, we finally identify possible limitations to inspire future work on XBRL tagging. | null | null | 10.18653/v1/2022.acl-long.303 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,969 |
inproceedings | li-etal-2022-keywords | Keywords and Instances: A Hierarchical Contrastive Learning Framework Unifying Hybrid Granularities for Text Generation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.304/ | Li, Mingzhe and Lin, XieXiong and Chen, Xiuying and Chang, Jinxiong and Zhang, Qishen and Wang, Feng and Wang, Taifeng and Liu, Zhongyi and Chu, Wei and Zhao, Dongyan and Yan, Rui | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4432--4441 | Contrastive learning has achieved impressive success in generation tasks to mitigate the {\textquotedblleft}exposure bias{\textquotedblright} problem and discriminatively exploit the different quality of references. Existing works mostly focus on contrastive learning at the instance level, without discriminating the contribution of each word, while keywords are the gist of the text and dominate the constrained mapping relationships. Hence, in this work, we propose a hierarchical contrastive learning mechanism, which can unify hybrid granularities of semantic meaning in the input text. Concretely, we first propose a keyword graph via contrastive correlations of positive-negative pairs to iteratively polish the keyword representations. Then, we construct intra-contrasts within the instance level and keyword level, where we assume words are sampled nodes from a sentence distribution. Finally, to bridge the gap between independent contrast levels and tackle the common contrast vanishing problem, we propose an inter-contrast mechanism that measures the discrepancy between contrastive keyword nodes respectively to the instance distribution. Experiments demonstrate that our model outperforms competitive baselines on paraphrasing, dialogue generation, and storytelling tasks. | null | null | 10.18653/v1/2022.acl-long.304 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,970 |
inproceedings | kim-etal-2022-ept | {EPT}-{X}: An Expression-Pointer Transformer model that generates e{X}planations for numbers | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.305/ | Kim, Bugeun and Ki, Kyung Seo and Rhim, Sangkyu and Gweon, Gahgene | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4442--4458 | In this paper, we propose a neural model EPT-X (Expression-Pointer Transformer with Explanations), which utilizes natural language explanations to solve an algebraic word problem. To enhance the explainability of the encoding process of a neural model, EPT-X adopts the concepts of plausibility and faithfulness, which are drawn from math word problem solving strategies by humans. A plausible explanation is one that includes contextual information for the numbers and variables that appear in a given math word problem. A faithful explanation is one that accurately represents the reasoning process behind the model's solution equation. The EPT-X model yields an average baseline performance of 69.59{\%} on our PEN dataset and produces explanations with quality that is comparable to human output. The contribution of this work is two-fold. (1) EPT-X model: An explainable neural model that sets a baseline for the algebraic word problem solving task, in terms of the model's correctness, plausibility, and faithfulness. (2) New dataset: We release a novel dataset PEN (Problems with Explanations for Numbers), which expands the existing datasets by attaching explanations to each number/variable. | null | null | 10.18653/v1/2022.acl-long.305 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,971 |
inproceedings | kiesel-etal-2022-identifying | Identifying the Human Values behind Arguments | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.306/ | Kiesel, Johannes and Alshomary, Milad and Handke, Nicolas and Cai, Xiaoni and Wachsmuth, Henning and Stein, Benno | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4459--4471 | This paper studies the (often implicit) human values behind natural language arguments, such as to have freedom of thought or to be broadminded. Values are commonly accepted answers to why some option is desirable in the ethical sense and are thus essential both in real-world argumentation and theoretical argumentation frameworks. However, their large variety has been a major obstacle to modeling them in argument mining. To overcome this obstacle, we contribute an operationalization of human values, namely a multi-level taxonomy with 54 values that is in line with psychological research. Moreover, we provide a dataset of 5270 arguments from four geographical cultures, manually annotated for human values. First experiments with the automatic classification of human values are promising, with F$_1$-scores up to 0.81 and 0.25 on average. | null | null | 10.18653/v1/2022.acl-long.306 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,972 |
inproceedings | gashteovski-etal-2022-benchie | {B}ench{IE}: A Framework for Multi-Faceted Fact-Based Open Information Extraction Evaluation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.307/ | Gashteovski, Kiril and Yu, Mingying and Kotnis, Bhushan and Lawrence, Carolin and Niepert, Mathias and Glava{\v{s}}, Goran | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4472--4490 | Intrinsic evaluations of OIE systems are carried out either manually{---}with human evaluators judging the correctness of extractions{---}or automatically, on standardized benchmarks. The latter, while much more cost-effective, is less reliable, primarily because of the incompleteness of the existing OIE benchmarks: the ground truth extractions do not include all acceptable variants of the same fact, leading to unreliable assessment of the models' performance. Moreover, the existing OIE benchmarks are available for English only. In this work, we introduce BenchIE: a benchmark and evaluation framework for comprehensive evaluation of OIE systems for English, Chinese, and German. In contrast to existing OIE benchmarks, BenchIE is fact-based, i.e., it takes into account informational equivalence of extractions: our gold standard consists of \textit{fact synsets}, clusters in which we exhaustively list all acceptable surface forms of the same fact. Moreover, having in mind common downstream applications for OIE, we make BenchIE multi-faceted; i.e., we create benchmark variants that focus on different facets of OIE evaluation, e.g., compactness or minimality of extractions. We benchmark several state-of-the-art OIE systems using BenchIE and demonstrate that these systems are significantly less effective than indicated by existing OIE benchmarks. We make BenchIE (data and evaluation code) publicly available. | null | null | 10.18653/v1/2022.acl-long.307 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,973 |
inproceedings | pan-etal-2022-leveraging | Leveraging Unimodal Self-Supervised Learning for Multimodal Audio-Visual Speech Recognition | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.308/ | Pan, Xichen and Chen, Peiyu and Gong, Yichen and Zhou, Helong and Wang, Xinbing and Lin, Zhouhan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4491--4503 | Training Transformer-based models demands a large amount of data, while obtaining aligned and labelled data in multimodality is rather cost-demanding, especially for audio-visual speech recognition (AVSR). Thus it makes a lot of sense to make use of unlabelled unimodal data. On the other side, although the effectiveness of large-scale self-supervised learning is well established in both audio and visual modalities, how to integrate those pre-trained models into a multimodal scenario remains underexplored. In this work, we successfully leverage unimodal self-supervised learning to promote the multimodal AVSR. In particular, audio and visual front-ends are trained on large-scale unimodal datasets, then we integrate components of both front-ends into a larger multimodal framework which learns to recognize parallel audio-visual data into characters through a combination of CTC and seq2seq decoding. We show that both components inherited from unimodal self-supervised learning cooperate well, resulting in that the multimodal framework yields competitive results through fine-tuning. Our model is experimentally validated on both word-level and sentence-level tasks. Especially, even without an external language model, our proposed model raises the state-of-the-art performances on the widely accepted Lip Reading Sentences 2 (LRS2) dataset by a large margin, with a relative improvement of 30{\%}. | null | null | 10.18653/v1/2022.acl-long.308 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,974 |
inproceedings | ravaut-etal-2022-summareranker | {S}umma{R}eranker: A Multi-Task Mixture-of-Experts Re-ranking Framework for Abstractive Summarization | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.309/ | Ravaut, Mathieu and Joty, Shafiq and Chen, Nancy | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4504--4524 | Sequence-to-sequence neural networks have recently achieved great success in abstractive summarization, especially through fine-tuning large pre-trained language models on the downstream dataset. These models are typically decoded with beam search to generate a unique summary. However, the search space is very large, and with the exposure bias, such decoding is not optimal. In this paper, we show that it is possible to directly train a second-stage model performing re-ranking on a set of summary candidates. Our mixture-of-experts SummaReranker learns to select a better candidate and consistently improves the performance of the base model. With a base PEGASUS, we push ROUGE scores by 5.44{\%} on CNN-DailyMail (47.16 ROUGE-1), 1.31{\%} on XSum (48.12 ROUGE-1) and 9.34{\%} on Reddit TIFU (29.83 ROUGE-1), reaching a new state-of-the-art. Our code and checkpoints will be available at \url{https://github.com/ntunlp/SummaReranker}. | null | null | 10.18653/v1/2022.acl-long.309 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,975 |
inproceedings | wu-etal-2022-understanding | Understanding Multimodal Procedural Knowledge by Sequencing Multimodal Instructional Manuals | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.310/ | Wu, Te-Lin and Spangher, Alex and Alipoormolabashi, Pegah and Freedman, Marjorie and Weischedel, Ralph and Peng, Nanyun | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4525--4542 | The ability to sequence unordered events is evidence of comprehension and reasoning about real world tasks/procedures. It is essential for applications such as task planning and multi-source instruction summarization. It often requires thorough understanding of temporal common sense and multimodal information, since these procedures are often conveyed by a combination of texts and images. While humans are capable of reasoning about and sequencing unordered procedural instructions, the extent to which the current machine learning methods possess such capability is still an open question. In this work, we benchmark models' capability of reasoning over and sequencing unordered multimodal instructions by curating datasets from online instructional manuals and collecting comprehensive human annotations. We find current state-of-the-art models not only perform significantly worse than humans but also seem incapable of efficiently utilizing multimodal information. To improve machines' performance on multimodal event sequencing, we propose sequence-aware pretraining techniques exploiting the sequential alignment properties of both texts and images, resulting in {\ensuremath{>}} 5{\%} improvements on perfect match ratio. | null | null | 10.18653/v1/2022.acl-long.310 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,976 |
inproceedings | sheng-etal-2022-zoom | Zoom Out and Observe: News Environment Perception for Fake News Detection | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.311/ | Sheng, Qiang and Cao, Juan and Zhang, Xueyao and Li, Rundong and Wang, Danding and Zhu, Yongchun | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4543--4556 | Fake news detection is crucial for preventing the dissemination of misinformation on social media. To differentiate fake news from real ones, existing methods observe the language patterns of the news post and {\textquotedblleft}zoom in{\textquotedblright} to verify its content with knowledge sources or check its readers' replies. However, these methods neglect the information in the external news environment where a fake news post is created and disseminated. The news environment represents recent mainstream media opinion and public attention, which is an important inspiration of fake news fabrication because fake news is often designed to ride the wave of popular events and catch public attention with unexpected novel content for greater exposure and spread. To capture the environmental signals of news posts, we {\textquotedblleft}zoom out{\textquotedblright} to observe the news environment and propose the News Environment Perception Framework (NEP). For each post, we construct its macro and micro news environment from recent mainstream news. Then we design a popularity-oriented and a novelty-oriented module to perceive useful signals and further assist final prediction. Experiments on our newly built datasets show that the NEP can efficiently improve the performance of basic fake news detectors. | null | null | 10.18653/v1/2022.acl-long.311 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,977 |
inproceedings | lupo-etal-2022-divide | Divide and Rule: Effective Pre-Training for Context-Aware Multi-Encoder Translation Models | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.312/ | Lupo, Lorenzo and Dinarelli, Marco and Besacier, Laurent | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4557--4572 | Multi-encoder models are a broad family of context-aware neural machine translation systems that aim to improve translation quality by encoding document-level contextual information alongside the current sentence. The context encoding is undertaken by contextual parameters, trained on document-level data. In this work, we discuss the difficulty of training these parameters effectively, due to the sparsity of the words in need of context (i.e., the training signal), and their relevant context. We propose to pre-train the contextual parameters over split sentence pairs, which makes an efficient use of the available data for two reasons. Firstly, it increases the contextual training signal by breaking intra-sentential syntactic relations, and thus pushing the model to search the context for disambiguating clues more frequently. Secondly, it eases the retrieval of relevant context, since context segments become shorter. We propose four different splitting methods, and evaluate our approach with BLEU and contrastive test sets. Results show that it consistently improves learning of contextual parameters, both in low and high resource settings. | null | null | 10.18653/v1/2022.acl-long.312 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,978 |
inproceedings | liu-etal-2022-saliency | Saliency as Evidence: Event Detection with Trigger Saliency Attribution | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.313/ | Liu, Jian and Chen, Yufeng and Xu, Jinan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4573--4585 | Event detection (ED) is a critical subtask of event extraction that seeks to identify event triggers of certain types in texts. Despite significant advances in ED, existing methods typically follow a {\textquotedblleft}one model fits all types{\textquotedblright} approach, which sees no differences between event types and often results in a quite skewed performance. Finding the causes of skewed performance is crucial for the robustness of an ED model, but to date there has been little exploration of this problem. This research examines the issue in depth and presents a new concept termed trigger salience attribution, which can explicitly quantify the underlying patterns of events. On this foundation, we develop a new training mechanism for ED, which can distinguish between trigger-dependent and context-dependent types and achieve promising performance on two benchmarks. Finally, by highlighting many distinct characteristics of trigger-dependent and context-dependent types, our work may promote more research into this problem. | null | null | 10.18653/v1/2022.acl-long.313 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,979 |
inproceedings | campagnano-etal-2022-srl4e | {SRL4E} {--} {S}emantic {R}ole {L}abeling for {E}motions: {A} Unified Evaluation Framework | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.314/ | Campagnano, Cesare and Conia, Simone and Navigli, Roberto | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4586--4601 | In the field of sentiment analysis, several studies have highlighted that a single sentence may express multiple, sometimes contrasting, sentiments and emotions, each with its own experiencer, target and/or cause. To this end, over the past few years researchers have started to collect and annotate data manually, in order to investigate the capabilities of automatic systems not only to distinguish between emotions, but also to capture their semantic constituents. However, currently available gold datasets are heterogeneous in size, domain, format, splits, emotion categories and role labels, making comparisons across different works difficult and hampering progress in the area. In this paper, we tackle this issue and present a unified evaluation framework focused on Semantic Role Labeling for Emotions (SRL4E), in which we unify several datasets tagged with emotions and semantic roles by using a common labeling scheme. We use SRL4E as a benchmark to evaluate how modern pretrained language models perform and analyze where we currently stand in this task, hoping to provide the tools to facilitate studies in this complex area. | null | null | 10.18653/v1/2022.acl-long.314 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,980 |
inproceedings | gubelmann-handschuh-2022-context | Context Matters: A Pragmatic Study of {PLM}s' Negation Understanding | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.315/ | Gubelmann, Reto and Handschuh, Siegfried | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4602--4621 | In linguistics, there are two main perspectives on negation: a semantic and a pragmatic view. So far, research in NLP on negation has almost exclusively adhered to the semantic view. In this article, we adopt the pragmatic paradigm to conduct a study of negation understanding focusing on transformer-based PLMs. Our results differ from previous, semantics-based studies and therefore help to contribute a more comprehensive {--} and, given the results, much more optimistic {--} picture of the PLMs' negation understanding. | null | null | 10.18653/v1/2022.acl-long.315 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,981 |
inproceedings | conia-navigli-2022-probing | Probing for Predicate Argument Structures in Pretrained Language Models | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.316/ | Conia, Simone and Navigli, Roberto | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4622--4632 | Thanks to the effectiveness and wide availability of modern pretrained language models (PLMs), recently proposed approaches have achieved remarkable results in dependency- and span-based, multilingual and cross-lingual Semantic Role Labeling (SRL). These results have prompted researchers to investigate the inner workings of modern PLMs with the aim of understanding how, where, and to what extent they encode information about SRL. In this paper, we follow this line of research and probe for predicate argument structures in PLMs. Our study shows that PLMs do encode semantic structures directly into the contextualized representation of a predicate, and also provides insights into the correlation between predicate senses and their structures, the degree of transferability between nominal and verbal structures, and how such structures are encoded across languages. Finally, we look at the practical implications of such insights and demonstrate the benefits of embedding predicate argument structure information into an SRL model. | null | null | 10.18653/v1/2022.acl-long.316 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,982 |
inproceedings | huang-etal-2022-multilingual-generative | Multilingual Generative Language Models for Zero-Shot Cross-Lingual Event Argument Extraction | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.317/ | Huang, Kuan-Hao and Hsu, I-Hung and Natarajan, Prem and Chang, Kai-Wei and Peng, Nanyun | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4633--4646 | We present a study on leveraging multilingual pre-trained generative language models for zero-shot cross-lingual event argument extraction (EAE). By formulating EAE as a language generation task, our method effectively encodes event structures and captures the dependencies between arguments. We design language-agnostic templates to represent the event argument structures, which are compatible with any language, hence facilitating the cross-lingual transfer. Our proposed model finetunes multilingual pre-trained generative language models to generate sentences that fill in the language-agnostic template with arguments extracted from the input passage. The model is trained on source languages and is then directly applied to target languages for event argument extraction. Experiments demonstrate that the proposed model outperforms the current state-of-the-art models on zero-shot cross-lingual EAE. Comprehensive studies and error analyses are presented to better understand the advantages and the current limitations of using generative language models for zero-shot cross-lingual transfer EAE. | null | null | 10.18653/v1/2022.acl-long.317 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,983 |
inproceedings | tsakalidis-etal-2022-identifying | Identifying Moments of Change from Longitudinal User Text | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.318/ | Tsakalidis, Adam and Nanni, Federico and Hills, Anthony and Chim, Jenny and Song, Jiayu and Liakata, Maria | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4647--4660 | Identifying changes in individuals' behaviour and mood, as observed via content shared on online platforms, is increasingly gaining importance. Most research to date on this topic focuses on either: (a) identifying individuals at risk or with a certain mental health condition given a batch of posts or (b) providing equivalent labels at the post level. A disadvantage of such work is the lack of a strong temporal component and the inability to make longitudinal assessments following an individual's trajectory and allowing timely interventions. Here we define a new task, that of identifying moments of change in individuals on the basis of their shared content online. The changes we consider are sudden shifts in mood (switches) or gradual mood progression (escalations). We have created detailed guidelines for capturing moments of change and a corpus of 500 manually annotated user timelines (18.7K posts). We have developed a variety of baseline models drawing inspiration from related tasks and show that the best performance is obtained through context-aware sequential modelling. We also introduce new metrics for capturing rare events in temporal windows. | null | null | 10.18653/v1/2022.acl-long.318 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,984 |
inproceedings | su-etal-2022-multi | Multi-Task Pre-Training for Plug-and-Play Task-Oriented Dialogue System | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.319/ | Su, Yixuan and Shu, Lei and Mansimov, Elman and Gupta, Arshit and Cai, Deng and Lai, Yi-An and Zhang, Yi | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4661--4676 | Pre-trained language models have been recently shown to benefit task-oriented dialogue (TOD) systems. Despite their success, existing methods often formulate this task as a cascaded generation problem which can lead to error accumulation across different sub-tasks and greater data annotation overhead. In this study, we present PPTOD, a unified plug-and-play model for task-oriented dialogue. In addition, we introduce a new dialogue multi-task pre-training strategy that allows the model to learn the primary TOD task completion skills from heterogeneous dialog corpora. We extensively test our model on three benchmark TOD tasks, including end-to-end dialogue modelling, dialogue state tracking, and intent classification. Experimental results show that PPTOD achieves new state of the art on all evaluated tasks in both high-resource and low-resource scenarios. Furthermore, comparisons against previous SOTA methods show that the responses generated by PPTOD are more factually correct and semantically coherent as judged by human annotators. | null | null | 10.18653/v1/2022.acl-long.319 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,985 |
inproceedings | hu-etal-2022-graph | Graph Enhanced Contrastive Learning for Radiology Findings Summarization | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.320/ | Hu, Jinpeng and Li, Zhuo and Chen, Zhihong and Li, Zhen and Wan, Xiang and Chang, Tsung-Hui | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4677--4688 | The impression section of a radiology report summarizes the most prominent observation from the findings section and is the most important section for radiologists to communicate to physicians. Summarizing findings is time-consuming and can be prone to error for inexperienced radiologists, and thus automatic impression generation has attracted substantial attention. With the encoder-decoder framework, most previous studies explore incorporating extra knowledge (e.g., static pre-defined clinical ontologies or extra background information). Yet, they encode such knowledge by a separate encoder to treat it as an extra input to their models, which is limited in leveraging their relations with the original findings. To address the limitation, we propose a unified framework for exploiting both extra knowledge and the original findings in an integrated way so that the critical information (i.e., key words and their relations) can be extracted in an appropriate way to facilitate impression generation. In detail, for each input findings, it is encoded by a text encoder and a graph is constructed through its entities and dependency tree. Then, a graph encoder (e.g., graph neural networks (GNNs)) is adopted to model relation information in the constructed graph. Finally, to emphasize the key words in the findings, contrastive learning is introduced to map positive samples (constructed by masking non-key words) closer and push apart negative ones (constructed by masking key words). The experimental results on two datasets, OpenI and MIMIC-CXR, confirm the effectiveness of our proposed method, where the state-of-the-art results are achieved. | null | null | 10.18653/v1/2022.acl-long.320 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,986 |
inproceedings | liu-etal-2022-semi | Semi-Supervised Formality Style Transfer with Consistency Training | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.321/ | Liu, Ao and Wang, An and Okazaki, Naoaki | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4689--4701 | Formality style transfer (FST) is a task that involves paraphrasing an informal sentence into a formal one without altering its meaning. To address the data-scarcity problem of existing parallel datasets, previous studies tend to adopt a cycle-reconstruction scheme to utilize additional unlabeled data, where the FST model mainly benefits from target-side unlabeled sentences. In this work, we propose a simple yet effective semi-supervised framework to better utilize source-side unlabeled sentences based on consistency training. Specifically, our approach augments pseudo-parallel data obtained from a source-side informal sentence by enforcing the model to generate similar outputs for its perturbed version. Moreover, we empirically examined the effects of various data perturbation methods and propose effective data filtering strategies to improve our framework. Experimental results on the GYAFC benchmark demonstrate that our approach can achieve state-of-the-art results, even with less than 40{\%} of the parallel data. | null | null | 10.18653/v1/2022.acl-long.321 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,987 |
inproceedings | chai-etal-2022-cross | Cross-Lingual Ability of Multilingual Masked Language Models: A Study of Language Structure | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.322/ | Chai, Yuan and Liang, Yaobo and Duan, Nan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4702--4712 | Multilingual pre-trained language models, such as mBERT and XLM-R, have shown impressive cross-lingual ability. Surprisingly, both of them use multilingual masked language model (MLM) without any cross-lingual supervision or aligned data. Despite the encouraging results, we still lack a clear understanding of why cross-lingual ability could emerge from multilingual MLM. In our work, we argue that cross-language ability comes from the commonality between languages. Specifically, we study three language properties: constituent order, composition and word co-occurrence. First, we create an artificial language by modifying property in source language. Then we study the contribution of modified property through the change of cross-language transfer results on target language. We conduct experiments on six languages and two cross-lingual NLP tasks (textual entailment, sentence retrieval). Our main conclusion is that the contribution of constituent order and word co-occurrence is limited, while the composition is more crucial to the success of cross-linguistic transfer. | null | null | 10.18653/v1/2022.acl-long.322 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,988 |
inproceedings | su-etal-2022-rare | Rare and Zero-shot Word Sense Disambiguation using {Z}-Reweighting | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.323/ | Su, Ying and Zhang, Hongming and Song, Yangqiu and Zhang, Tong | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4713--4723 | Word sense disambiguation (WSD) is a crucial problem in the natural language processing (NLP) community. Current methods achieve decent performance by utilizing supervised learning and large pre-trained language models. However, the imbalanced training dataset leads to poor performance on rare senses and zero-shot senses. There are more training instances and senses for words with top frequency ranks than for those with low frequency ranks in the training dataset. We investigate the statistical relation between word frequency rank and word sense number distribution. Based on the relation, we propose a Z-reweighting method on the word level to adjust the training on the imbalanced dataset. The experiments show that the Z-reweighting strategy achieves a performance gain on the standard English all-words WSD benchmark. Moreover, the strategy can help models generalize better on rare and zero-shot senses. | null | null | 10.18653/v1/2022.acl-long.323 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,989
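The entry above describes rank-based reweighting only at a high level. Below is a minimal, hypothetical PyTorch sketch of the idea: instances whose target word has a larger frequency rank (i.e., is rarer) receive a larger training weight. The power-law form and the `alpha` exponent are illustrative assumptions, not the paper's exact Z-reweighting formula.

```python
# Hypothetical sketch of rank-based instance reweighting for WSD training.
# Assumption: rarer words (larger frequency rank) get larger weights via a
# power law with exponent alpha; the paper's exact formula may differ.
import torch
import torch.nn.functional as F

def z_reweight(freq_ranks: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Map word frequency ranks (1 = most frequent) to instance weights."""
    weights = freq_ranks.float() ** alpha          # up-weight rare words
    return weights * len(weights) / weights.sum()  # keep the mean weight at 1

def weighted_wsd_loss(logits, sense_labels, freq_ranks):
    per_instance = F.cross_entropy(logits, sense_labels, reduction="none")
    return (z_reweight(freq_ranks) * per_instance).mean()

# Toy usage: 4 instances, 10 candidate senses.
logits = torch.randn(4, 10)
labels = torch.tensor([0, 3, 7, 2])
ranks = torch.tensor([1, 50, 3000, 12])  # frequency ranks of the target words
print(weighted_wsd_loss(logits, labels, ranks))
```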
inproceedings | maru-etal-2022-nibbling | {N}ibbling at the Hard Core of {W}ord {S}ense {D}isambiguation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.324/ | Maru, Marco and Conia, Simone and Bevilacqua, Michele and Navigli, Roberto | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4724--4737 | With state-of-the-art systems having finally attained estimated human performance, Word Sense Disambiguation (WSD) has now joined the array of Natural Language Processing tasks that have seemingly been solved, thanks to the vast amounts of knowledge encoded into Transformer-based pre-trained language models. And yet, if we look below the surface of raw figures, it is easy to realize that current approaches still make trivial mistakes that a human would never make. In this work, we provide evidence showing why the F1 score metric should not simply be taken at face value and present an exhaustive analysis of the errors that seven of the most representative state-of-the-art systems for English all-words WSD make on traditional evaluation benchmarks. In addition, we produce and release a collection of test sets featuring (a) an amended version of the standard evaluation benchmark that fixes its lexical and semantic inaccuracies, (b) 42D, a challenge set devised to assess the resilience of systems with respect to least frequent word senses and senses not seen at training time, and (c) hardEN, a challenge set made up solely of instances which none of the investigated state-of-the-art systems can solve. We make all of the test sets and model predictions available to the research community at \url{https://github.com/SapienzaNLP/wsd-hard-benchmark}. | null | null | 10.18653/v1/2022.acl-long.324 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,990 |
inproceedings | eyal-etal-2022-large | Large Scale Substitution-based Word Sense Induction | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.325/ | Eyal, Matan and Sadde, Shoval and Taub-Tabib, Hillel and Goldberg, Yoav | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4738--4752 | We present a word-sense induction method based on pre-trained masked language models (MLMs), which can cheaply scale to large vocabularies and large corpora. The result is a corpus which is sense-tagged according to a corpus-derived sense inventory and where each sense is associated with indicative words. Evaluation on English Wikipedia that was sense-tagged using our method shows that both the induced senses, and the per-instance sense assignment, are of high quality even compared to WSD methods, such as Babelfy. Furthermore, by training a static word embeddings algorithm on the sense-tagged corpus, we obtain high-quality static senseful embeddings. These outperform existing senseful embeddings methods on the WiC dataset and on a new outlier detection dataset we developed. The data-driven nature of the algorithm makes it possible to induce corpus-specific senses, which may not appear in standard sense inventories, as we demonstrate using a case study on the scientific domain. | null | null | 10.18653/v1/2022.acl-long.325 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,991
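As a rough illustration of substitution-based sense induction, the sketch below represents each occurrence of a target word by the distribution of its masked-language-model substitutes and clusters occurrences into senses. The model choice, `top_k`, and the clustering algorithm are assumptions; the paper's method operates at far larger scale and derives a sense inventory per lemma.

```python
# Minimal sketch of substitution-based word sense induction (assumptions:
# model choice, number of senses, clustering algorithm).
from collections import Counter
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction import DictVectorizer
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

sentences = [
    "The fisherman sat on the [MASK] of the river.",
    "He deposited the check at the [MASK] downtown.",
    "Erosion reshaped the [MASK] of the stream.",
    "The [MASK] approved her mortgage application.",
]
# One substitute-count vector per occurrence of the (masked) target word.
vecs = [Counter(p["token_str"] for p in fill(s, top_k=20)) for s in sentences]
X = DictVectorizer(sparse=False).fit_transform(vecs)

senses = AgglomerativeClustering(n_clusters=2).fit_predict(X)
print(senses)  # occurrences grouped into induced senses, e.g. [0 1 0 1]
```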
inproceedings | briakou-carpuat-2022-synthetic | Can Synthetic Translations Improve Bitext Quality? | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.326/ | Briakou, Eleftheria and Carpuat, Marine | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4753--4766 | Synthetic translations have been used for a wide range of NLP tasks primarily as a means of data augmentation. This work explores, instead, how synthetic translations can be used to revise potentially imperfect reference translations in mined bitext. We find that synthetic samples can improve bitext quality without any additional bilingual supervision when they replace the originals based on a semantic equivalence classifier that helps mitigate NMT noise. The improved quality of the revised bitext is confirmed intrinsically via human evaluation and extrinsically through bilingual induction and MT tasks. | null | null | 10.18653/v1/2022.acl-long.326 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,992 |
inproceedings | shen-etal-2022-unsupervised | Unsupervised Dependency Graph Network | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.327/ | Shen, Yikang and Tan, Shawn and Sordoni, Alessandro and Li, Peng and Zhou, Jie and Courville, Aaron | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4767--4784 | Recent work has identified properties of pretrained self-attention models that mirror those of dependency parse structures. In particular, some self-attention heads correspond well to individual dependency types. Inspired by these developments, we propose a new competitive mechanism that encourages these attention heads to model different dependency relations. We introduce a new model, the Unsupervised Dependency Graph Network (UDGN), that can induce dependency structures from raw corpora and the masked language modeling task. Experimental results show that UDGN achieves very strong unsupervised dependency parsing performance without gold POS tags or any other external information. The competitive gated heads show a strong correlation with human-annotated dependency types. Furthermore, the UDGN can also achieve competitive performance on masked language modeling and sentence textual similarity tasks. | null | null | 10.18653/v1/2022.acl-long.327 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,993
inproceedings | wang-etal-2022-wikidiverse | {W}iki{D}iverse: A Multimodal Entity Linking Dataset with Diversified Contextual Topics and Entity Types | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.328/ | Wang, Xuwu and Tian, Junfeng and Gui, Min and Li, Zhixu and Wang, Rui and Yan, Ming and Chen, Lihan and Xiao, Yanghua | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4785--4797 | Multimodal Entity Linking (MEL), which aims at linking mentions with multimodal contexts to the referent entities from a knowledge base (e.g., Wikipedia), is an essential task for many multimodal applications. Although much attention has been paid to MEL, the shortcomings of existing MEL datasets, including limited contextual topics and entity types, simplified mention ambiguity, and restricted availability, have posed great obstacles to the research and application of MEL. In this paper, we present WikiDiverse, a high-quality human-annotated MEL dataset with diversified contextual topics and entity types from Wikinews, which uses Wikipedia as the corresponding knowledge base. A well-tailored annotation procedure is adopted to ensure the quality of the dataset. Based on WikiDiverse, a sequence of well-designed MEL models with intra-modality and inter-modality attentions is implemented; these models utilize the visual information of images more adequately than existing MEL models do. Extensive experimental analyses are conducted to investigate the contributions of different modalities in terms of MEL, facilitating future research on this task. | null | null | 10.18653/v1/2022.acl-long.328 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,994
inproceedings | meng-etal-2022-rewire | Rewire-then-Probe: A Contrastive Recipe for Probing Biomedical Knowledge of Pre-trained Language Models | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.329/ | Meng, Zaiqiao and Liu, Fangyu and Shareghi, Ehsan and Su, Yixuan and Collins, Charlotte and Collier, Nigel | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4798--4810 | Knowledge probing is crucial for understanding the knowledge transfer mechanism behind the pre-trained language models (PLMs). Despite the growing progress of probing knowledge for PLMs in the general domain, specialised areas such as the biomedical domain are vastly under-explored. To facilitate this, we release a well-curated biomedical knowledge probing benchmark, MedLAMA, constructed based on the Unified Medical Language System (UMLS) Metathesaurus. We test a wide spectrum of state-of-the-art PLMs and probing approaches on our benchmark, reaching at most 3{\%} acc@10. While highlighting various sources of domain-specific challenges that contribute to this underwhelming performance, we illustrate that the underlying PLMs have a higher potential for probing tasks. To achieve this, we propose Contrastive-Probe, a novel self-supervised contrastive probing approach, that adjusts the underlying PLMs without using any probing data. While Contrastive-Probe pushes the acc@10 to 28{\%}, the performance gap still remains notable. Our human expert evaluation suggests that the probing performance of our Contrastive-Probe is still underestimated, as UMLS does not include the full spectrum of factual knowledge. We hope MedLAMA and Contrastive-Probe facilitate further developments of more suited probing techniques for this domain. Our code and dataset are publicly available at \url{https://github.com/cambridgeltl/medlama}. | null | null | 10.18653/v1/2022.acl-long.329 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,995
inproceedings | zhao-etal-2022-fine | Fine- and Coarse-Granularity Hybrid Self-Attention for Efficient {BERT} | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.330/ | Zhao, Jing and Wang, Yifan and Bao, Junwei and Wu, Youzheng and He, Xiaodong | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4811--4820 | Transformer-based pre-trained models, such as BERT, have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications. However, deploying these models can be prohibitively costly, as the standard self-attention mechanism of the Transformer suffers from quadratic computational cost in the input sequence length. To confront this, we propose FCA, a fine- and coarse-granularity hybrid self-attention that reduces the computation cost through progressively shortening the computational sequence length in self-attention. Specifically, FCA conducts an attention-based scoring strategy to determine the informativeness of tokens at each layer. Then, the informative tokens serve as the fine-granularity computing units in self-attention and the uninformative tokens are replaced with one or several clusters as the coarse-granularity computing units in self-attention. Experiments on the standard GLUE benchmark show that BERT with FCA achieves 2x reduction in FLOPs over original BERT with {\ensuremath{<}}1{\%} loss in accuracy. We show that FCA offers a significantly better trade-off between accuracy and FLOPs compared to prior methods. | null | null | 10.18653/v1/2022.acl-long.330 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,996 |
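A simplified sketch of the fine/coarse idea from the entry above follows: tokens are scored by the attention mass they receive, the top-scoring tokens are kept as fine-granularity units, and the remainder are pooled into a coarse unit. The scoring rule, the keep ratio, and the use of a single pooled cluster are assumptions; FCA's actual scoring and clustering may differ.

```python
# Rough sketch of a fine/coarse hybrid compression step. Assumption: token
# informativeness is scored by the attention mass each token receives, and
# all uninformative tokens collapse into one coarse "cluster" token.
import torch

def fine_coarse_compress(hidden, attn, keep_ratio=0.5):
    # hidden: (seq, dim); attn: (heads, seq, seq) attention probabilities
    scores = attn.mean(0).sum(0)                 # attention received per token
    k = max(1, int(keep_ratio * hidden.size(0)))
    keep = scores.topk(k).indices.sort().values  # indices of fine tokens
    mask = torch.ones(hidden.size(0), dtype=torch.bool)
    mask[keep] = False
    coarse = hidden[mask].mean(0, keepdim=True)  # pool uninformative tokens
    return torch.cat([hidden[keep], coarse], dim=0)

h = torch.randn(128, 768)
a = torch.softmax(torch.randn(12, 128, 128), dim=-1)
print(fine_coarse_compress(h, a).shape)  # shorter sequence: (65, 768)
```

Shrinking the sequence this way at successive layers is what turns the quadratic self-attention cost into the reported FLOPs savings.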
inproceedings | tao-etal-2022-compression | Compression of Generative Pre-trained Language Models via Quantization | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.331/ | Tao, Chaofan and Hou, Lu and Zhang, Wei and Shang, Lifeng and Jiang, Xin and Liu, Qun and Luo, Ping and Wong, Ngai | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4821--4836 | The increasing size of generative Pre-trained Language Models (PLMs) has greatly increased the demand for model compression. Despite various methods to compress BERT or its variants, there are few attempts to compress generative PLMs, and the underlying difficulty remains unclear. In this paper, we compress generative PLMs by quantization. We find that previous quantization methods fail on generative tasks due to the homogeneous word embeddings caused by reduced capacity and the varied distribution of weights. Correspondingly, we propose a token-level contrastive distillation to learn distinguishable word embeddings, and a module-wise dynamic scaling to make quantizers adaptive to different modules. Empirical results on various tasks show that our proposed method outperforms the state-of-the-art compression methods on generative PLMs by a clear margin. With comparable performance with the full-precision models, we achieve 14.4x and 13.4x compression rates on GPT-2 and BART, respectively. | null | null | 10.18653/v1/2022.acl-long.331 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,997
inproceedings | liang-etal-2022-visual | Visual-Language Navigation Pretraining via Prompt-based Environmental Self-exploration | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.332/ | Liang, Xiwen and Zhu, Fengda and Lingling, Li and Xu, Hang and Liang, Xiaodan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4837--4851 | Vision-language navigation (VLN) is a challenging task due to its large searching space in the environment. To address this problem, previous works have proposed some methods of fine-tuning a large model pretrained on large-scale datasets. However, the conventional fine-tuning methods require extra human-labeled navigation data and lack self-exploration capabilities in environments, which hinders their generalization to unseen scenes. To improve the ability of fast cross-domain adaptation, we propose Prompt-based Environmental Self-exploration (ProbES), which can self-explore the environments by sampling trajectories and automatically generate structured instructions via a large-scale cross-modal pretrained model (CLIP). Our method fully utilizes the knowledge learned from CLIP to build an in-domain dataset by self-exploration without human labeling. Unlike the conventional approach of fine-tuning, we introduce prompt tuning to achieve fast adaptation for language embeddings, which substantially improves the learning efficiency by leveraging prior knowledge. By automatically synthesizing trajectory-instruction pairs in any environment without human supervision and by instruction prompt tuning, our model can adapt to diverse vision-language navigation tasks, including VLN and REVERIE. Both qualitative and quantitative results show that our ProbES significantly improves the generalization ability of the navigation model. | null | null | 10.18653/v1/2022.acl-long.332 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,998
inproceedings | chen-etal-2022-dialogved | {D}ialog{VED}: A Pre-trained Latent Variable Encoder-Decoder Model for Dialog Response Generation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.333/ | Chen, Wei and Gong, Yeyun and Wang, Song and Yao, Bolun and Qi, Weizhen and Wei, Zhongyu and Hu, Xiaowu and Zhou, Bartuer and Mao, Yi and Chen, Weizhu and Cheng, Biao and Duan, Nan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4852--4864 | Dialog response generation in open domain is an important research topic where the main challenge is to generate relevant and diverse responses. In this paper, we propose a new dialog pre-training framework called DialogVED, which introduces continuous latent variables into the enhanced encoder-decoder pre-training framework to increase the relevance and diversity of responses. With the help of a large dialog corpus (Reddit), we pre-train the model using the following 4 tasks, drawn from the literature on training language models (LMs) and Variational Autoencoders (VAEs): 1) masked language model; 2) response generation; 3) bag-of-words prediction; and 4) KL divergence reduction. We also add additional parameters to model the turn structure in dialogs to improve the performance of the pre-trained model. We conduct experiments on PersonaChat, DailyDialog, and DSTC7-AVSD benchmarks for response generation. Experimental results show that our model achieves new state-of-the-art results on all these datasets. | null | null | 10.18653/v1/2022.acl-long.333 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,999
inproceedings | chen-etal-2022-contextual | Contextual Fine-to-Coarse Distillation for Coarse-grained Response Selection in Open-Domain Conversations | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.334/ | Chen, Wei and Gong, Yeyun and Xu, Can and Hu, Huang and Yao, Bolun and Wei, Zhongyu and Fan, Zhihao and Hu, Xiaowu and Zhou, Bartuer and Cheng, Biao and Jiang, Daxin and Duan, Nan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4865--4877 | We study the problem of coarse-grained response selection in retrieval-based dialogue systems. The problem is as important as fine-grained response selection, but is less explored in the existing literature. In this paper, we propose a Contextual Fine-to-Coarse (CFC) distilled model for coarse-grained response selection in open-domain conversations. In our CFC model, dense representations of queries, candidate contexts and responses are learned based on the multi-tower architecture using contextual matching, and richer knowledge learned from the one-tower architecture (fine-grained) is distilled into the multi-tower architecture (coarse-grained) to enhance the performance of the retriever. To evaluate the performance of the proposed model, we construct two new datasets based on the Reddit comments dump and Twitter corpus. Extensive experimental results on the two datasets show that the proposed method achieves huge improvements on all evaluation metrics compared with traditional baseline methods. | null | null | 10.18653/v1/2022.acl-long.334 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,000
inproceedings | wang-etal-2022-textomics | Textomics: A Dataset for Genomics Data Summary Generation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.335/ | Wang, Mu-Chun and Liu, Zixuan and Wang, Sheng | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4878--4891 | Summarizing biomedical discoveries from genomics data using natural language is an essential step in biomedical research but is mostly done manually. Here, we introduce Textomics, a novel dataset of genomics data descriptions, which contains 22,273 pairs of genomics data matrices and their summaries. Each summary is written by the researchers who generated the data and is associated with a scientific paper. Based on this dataset, we study two novel tasks: generating a textual summary from a genomics data matrix and vice versa. Inspired by the successful applications of $k$ nearest neighbors in modeling genomics data, we propose a $k$NN-Vec2Text model to address these tasks and observe substantial improvement on our dataset. We further illustrate how Textomics can be used to advance other applications, including evaluating scientific paper embeddings and generating masked templates for scientific paper understanding. Textomics serves as the first benchmark for generating textual summaries for genomics data and we envision it will be broadly applied to other biomedical and natural language processing applications. | null | null | 10.18653/v1/2022.acl-long.335 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,001
inproceedings | zhang-etal-2022-contrastive | A Contrastive Framework for Learning Sentence Representations from Pairwise and Triple-wise Perspective in Angular Space | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.336/ | Zhang, Yuhao and Zhu, Hongji and Wang, Yongliang and Xu, Nan and Li, Xiaobo and Zhao, Binqiang | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4892--4903 | Learning high-quality sentence representations is a fundamental problem of natural language processing which could benefit a wide range of downstream tasks. Though BERT-like pre-trained language models have achieved great success, using their sentence representations directly often results in poor performance on the semantic textual similarity task. Recently, several contrastive learning methods have been proposed for learning sentence representations and have shown promising results. However, most of them focus on the constitution of positive and negative representation pairs and pay little attention to the training objective like NT-Xent, which is not sufficient to acquire the discriminating power and is unable to model the partial order of semantics between sentences. So in this paper, we propose a new method, ArcCSE, with training objectives designed to enhance the pairwise discriminative power and model the entailment relation of triplet sentences. We conduct extensive experiments which demonstrate that our approach outperforms the previous state-of-the-art on diverse sentence-related tasks, including STS and SentEval. | null | null | 10.18653/v1/2022.acl-long.336 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,002
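The sketch below illustrates one way to add an additive angular margin to a contrastive objective over sentence pairs, in the spirit of the pairwise part of ArcCSE. The margin and temperature values are assumptions, and the paper's full objective also includes a triplet-order term that is not shown here.

```python
# Sketch of an additive-angular-margin contrastive loss over sentence pairs
# (margin m and temperature tau are assumptions). The positive pair's angle
# is penalized by m before the softmax, sharpening pairwise discrimination.
import torch
import torch.nn.functional as F

def arc_contrastive_loss(z1, z2, margin=0.1, tau=0.05):
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    cos = z1 @ z2.t()                                    # (batch, batch) cosines
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))  # pairwise angles
    idx = torch.arange(len(z1))
    logits = cos / tau
    logits[idx, idx] = torch.cos(theta[idx, idx] + margin) / tau  # margin on positives
    return F.cross_entropy(logits, idx)

z1, z2 = torch.randn(8, 256), torch.randn(8, 256)  # two views of 8 sentences
print(arc_contrastive_loss(z1, z2))
```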
inproceedings | ye-etal-2022-packed | Packed Levitated Marker for Entity and Relation Extraction | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.337/ | Ye, Deming and Lin, Yankai and Li, Peng and Sun, Maosong | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4904--4917 | Recent entity and relation extraction works focus on investigating how to obtain a better span representation from the pre-trained encoder. However, a major limitation of existing works is that they ignore the interrelation between spans (pairs). In this work, we propose a novel span representation approach, named Packed Levitated Markers (PL-Marker), to consider the interrelation between the spans (pairs) by strategically packing the markers in the encoder. In particular, we propose a neighborhood-oriented packing strategy, which considers the neighbor spans integrally to better model the entity boundary information. Furthermore, for those more complicated span pair classification tasks, we design a subject-oriented packing strategy, which packs each subject and all its objects to model the interrelation between the same-subject span pairs. The experimental results show that, with the enhanced marker feature, our model advances baselines on six NER benchmarks, and obtains a 4.1{\%}-4.3{\%} strict relation F1 improvement with higher speed over previous state-of-the-art models on ACE04 and ACE05. Our code and models are publicly available at \url{https://github.com/thunlp/PL-Marker} | null | null | 10.18653/v1/2022.acl-long.337 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,003 |
inproceedings | yang-etal-2022-interpretable | An Interpretable Neuro-Symbolic Reasoning Framework for Task-Oriented Dialogue Generation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.338/ | Yang, Shiquan and Zhang, Rui and Erfani, Sarah and Lau, Jey Han | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4918--4935 | We study the interpretability issue of task-oriented dialogue systems in this paper. Previously, most neural-based task-oriented dialogue systems employ an implicit reasoning strategy that makes the model predictions uninterpretable to humans. To obtain a transparent reasoning process, we introduce a neuro-symbolic approach to perform explicit reasoning that justifies model decisions with reasoning chains. Since deriving reasoning chains requires multi-hop reasoning for task-oriented dialogues, existing neuro-symbolic approaches would induce error propagation due to the one-phase design. To overcome this, we propose a two-phase approach that consists of a hypothesis generator and a reasoner. We first obtain multiple hypotheses, i.e., potential operations to perform the desired task, through the hypothesis generator. Each hypothesis is then verified by the reasoner, and the valid one is selected to conduct the final prediction. The whole system is trained by exploiting raw textual dialogues without using any reasoning chain annotations. Experimental studies on two public benchmark datasets demonstrate that the proposed approach not only achieves better results, but also introduces an interpretable decision process. | null | null | 10.18653/v1/2022.acl-long.338 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,004
inproceedings | nie-etal-2022-impact | Impact of Evaluation Methodologies on Code Summarization | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.339/ | Nie, Pengyu and Zhang, Jiyang and Li, Junyi Jessy and Mooney, Ray and Gligoric, Milos | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4936--4960 | There has been a growing interest in developing machine learning (ML) models for code summarization tasks, e.g., comment generation and method naming. Despite substantial increase in the effectiveness of ML models, the evaluation methodologies, i.e., the way people split datasets into training, validation, and test sets, were not well studied. Specifically, no prior work on code summarization considered the timestamps of code and comments during evaluation. This may lead to evaluations that are inconsistent with the intended use cases. In this paper, we introduce the time-segmented evaluation methodology, which is novel to the code summarization research community, and compare it with the mixed-project and cross-project methodologies that have been commonly used. Each methodology can be mapped to some use cases, and the time-segmented methodology should be adopted in the evaluation of ML models for code summarization. To assess the impact of methodologies, we collect a dataset of (code, comment) pairs with timestamps to train and evaluate several recent ML models for code summarization. Our experiments show that different methodologies lead to conflicting evaluation results. We invite the community to expand the set of methodologies used in evaluations. | null | null | 10.18653/v1/2022.acl-long.339 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,005 |
inproceedings | yu-etal-2022-kg | {KG}-{F}i{D}: Infusing Knowledge Graph in Fusion-in-Decoder for Open-Domain Question Answering | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.340/ | Yu, Donghan and Zhu, Chenguang and Fang, Yuwei and Yu, Wenhao and Wang, Shuohang and Xu, Yichong and Ren, Xiang and Yang, Yiming and Zeng, Michael | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4961--4974 | Current Open-Domain Question Answering (ODQA) models typically include a retrieving module and a reading module, where the retriever selects potentially relevant passages from open-source documents for a given question, and the reader produces an answer based on the retrieved passages. The recently proposed Fusion-in-Decoder (FiD) framework is a representative example, which is built on top of a dense passage retriever and a generative reader, achieving the state-of-the-art performance. In this paper we further improve the FiD approach by introducing a knowledge-enhanced version, namely KG-FiD. Our new model uses a knowledge graph to establish the structural relationship among the retrieved passages, and a graph neural network (GNN) to re-rank the passages and select only a top few for further processing. Our experiments on common ODQA benchmark datasets (Natural Questions and TriviaQA) demonstrate that KG-FiD can achieve comparable or better performance in answer prediction than FiD, with less than 40{\%} of the computation cost. | null | null | 10.18653/v1/2022.acl-long.340 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,006 |
inproceedings | holur-etal-2022-side | Which side are you on? Insider-Outsider classification in conspiracy-theoretic social media | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.341/ | Holur, Pavan and Wang, Tianyi and Shahsavari, Shadi and Tangherlini, Timothy and Roychowdhury, Vwani | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4975--4987 | Social media is a breeding ground for threat narratives and related conspiracy theories. In these, an \textit{outside} group threatens the integrity of an \textit{inside} group, leading to the emergence of sharply defined group identities: \textit{Insiders} {--} agents with whom the authors identify and \textit{Outsiders} {--} agents who threaten the insiders. Inferring the members of these groups constitutes a challenging new NLP task: (i) Information is distributed over many poorly-constructed posts; (ii) Threats and threat agents are highly contextual, with the same post potentially having multiple agents assigned to membership in either group; (iii) An agent's identity is often implicit and transitive; and (iv) Phrases used to imply Outsider status often do not follow common negative sentiment patterns. To address these challenges, we define a novel Insider-Outsider classification task. Because we are not aware of any appropriate existing datasets or attendant models, we introduce a labeled dataset (CT5K) and design a model (NP2IO) to address this task. NP2IO leverages pretrained language modeling to classify Insiders and Outsiders. NP2IO is shown to be robust, generalizing to noun phrases not seen during training, and exceeding the performance of non-trivial baseline models by 20{\%}. | null | null | 10.18653/v1/2022.acl-long.341 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,007
inproceedings | le-ferrand-etal-2022-learning | Learning From Failure: Data Capture in an {A}ustralian Aboriginal Community | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.342/ | Le Ferrand, Eric and Bird, Steven and Besacier, Laurent | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4988--4998 | Most low resource language technology development is premised on the need to collect data for training statistical models. When we follow the typical process of recording and transcribing text for small Indigenous languages, we hit up against the so-called {\textquotedblleft}transcription bottleneck.{\textquotedblright} Therefore it is worth exploring new ways of engaging with speakers which generate data while avoiding the transcription bottleneck. We have deployed a prototype app for speakers to use for confirming system guesses in an approach to transcription based on word spotting. However, in the process of testing the app we encountered many new problems for engagement with speakers. This paper presents a close-up study of the process of deploying data capture technology on the ground in an Australian Aboriginal community. We reflect on our interactions with participants and draw lessons that apply to anyone seeking to develop methods for language data collection in an Indigenous community. | null | null | 10.18653/v1/2022.acl-long.342 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,008 |
inproceedings | wang-pan-2022-deep | Deep Inductive Logic Reasoning for Multi-Hop Reading Comprehension | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.343/ | Wang, Wenya and Pan, Sinno | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4999--5009 | Multi-hop reading comprehension requires an ability to reason across multiple documents. On the one hand, deep learning approaches only implicitly encode query-related information into distributed embeddings which fail to uncover the discrete relational reasoning process to infer the correct answer. On the other hand, logic-based approaches provide interpretable rules to infer the target answer, but mostly work on structured data where entities and relations are well-defined. In this paper, we propose a deep-learning based inductive logic reasoning method that firstly extracts query-related (candidate-related) information, and then conducts logic reasoning among the filtered information by inducing feasible rules that entail the target relation. The reasoning process is accomplished via attentive memories with novel differentiable logic operators. To demonstrate the effectiveness of our model, we evaluate it on two reading comprehension datasets, namely WikiHop and MedHop. | null | null | 10.18653/v1/2022.acl-long.343 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,009 |
inproceedings | ghosal-etal-2022-cicero | {CICERO}: A Dataset for Contextualized Commonsense Inference in Dialogues | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.344/ | Ghosal, Deepanway and Shen, Siqi and Majumder, Navonil and Mihalcea, Rada and Poria, Soujanya | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5010--5028 | This paper addresses the problem of dialogue reasoning with contextualized commonsense inference. We curate CICERO, a dataset of dyadic conversations with five types of utterance-level reasoning-based inferences: cause, subsequent event, prerequisite, motivation, and emotional reaction. The dataset contains 53,105 such inferences from 5,672 dialogues. We use this dataset to solve relevant generative and discriminative tasks: generation of cause and subsequent event; generation of prerequisite, motivation, and the listener's emotional reaction; and selection of plausible alternatives. Our results ascertain the value of such dialogue-centric commonsense knowledge datasets. It is our hope that CICERO will open new research avenues into commonsense-based dialogue reasoning. | null | null | 10.18653/v1/2022.acl-long.344 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,010
inproceedings | chan-etal-2022-comparative | A Comparative Study of Faithfulness Metrics for Model Interpretability Methods | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.345/ | Chan, Chun Sik and Kong, Huanqi and Guanqing, Liang | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5029--5038 | Interpretable methods to reveal the internal reasoning processes behind machine learning models have attracted increasing attention in recent years. To quantify the extent to which the identified interpretations truly reflect the intrinsic decision-making mechanisms, various faithfulness evaluation metrics have been proposed. However, we find that different faithfulness metrics show conflicting preferences when comparing different interpretations. Motivated by this observation, we aim to conduct a comprehensive and comparative study of the widely adopted faithfulness metrics. In particular, we introduce two assessment dimensions, namely diagnosticity and complexity. Diagnosticity refers to the degree to which the faithfulness metric favors relatively faithful interpretations over randomly generated ones, and complexity is measured by the average number of model forward passes. According to the experimental results, we find that sufficiency and comprehensiveness metrics have higher diagnosticity and lower complexity than the other faithfulness metrics. | null | null | 10.18653/v1/2022.acl-long.345 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,011 |
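Diagnosticity, as described in the entry above, measures how often a faithfulness metric prefers a faithful interpretation over a random one. The toy sketch below makes that computable; the pairing scheme and the toy metric are illustrative assumptions, not the paper's exact estimator.

```python
# Sketch of the "diagnosticity" idea: how often does a faithfulness metric
# prefer a (relatively) faithful interpretation over a random one?
import random

def diagnosticity(metric, faithful_interps, random_interps, inputs):
    wins = 0
    pairs = list(zip(faithful_interps, random_interps, inputs))
    for faithful, rand, x in pairs:
        wins += metric(faithful, x) > metric(rand, x)
    return wins / len(pairs)

# Toy metric: fraction of "important" token positions the interpretation
# recovers for each input (gold importance sets are made up for the demo).
gold = {0: {1, 3}, 1: {0, 2}}
metric = lambda interp, i: len(interp & gold[i]) / len(gold[i])
faithful = [{1, 3}, {0}]
randoms = [set(random.sample(range(6), 2)) for _ in range(2)]
print(diagnosticity(metric, faithful, randoms, [0, 1]))
```

The second dimension in the entry, complexity, would simply count the model forward passes each metric needs per instance.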
inproceedings | vu-etal-2022-spot | {SP}o{T}: Better Frozen Model Adaptation through Soft Prompt Transfer | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.346/ | Vu, Tu and Lester, Brian and Constant, Noah and Al-Rfou{'}, Rami and Cer, Daniel | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5039--5059 | There has been growing interest in parameter-efficient methods to apply pre-trained language models to downstream tasks. Building on the Prompt Tuning approach of Lester et al. (2021), which learns task-specific soft prompts to condition a frozen pre-trained model to perform different tasks, we propose a novel prompt-based transfer learning approach called SPoT: Soft Prompt Transfer. SPoT first learns a prompt on one or more source tasks and then uses it to initialize the prompt for a target task. We show that SPoT significantly boosts the performance of Prompt Tuning across many tasks. More remarkably, across all model sizes, SPoT matches or outperforms standard Model Tuning (which fine-tunes all model parameters) on the SuperGLUE benchmark, while using up to 27,000{\texttimes} fewer task-specific parameters. To understand where SPoT is most effective, we conduct a large-scale study on task transferability with 26 NLP tasks in 160 combinations, and demonstrate that many tasks can benefit each other via prompt transfer. Finally, we propose an efficient retrieval approach that interprets task prompts as task embeddings to identify similar tasks and predict the most transferable source tasks for a novel target task. | null | null | 10.18653/v1/2022.acl-long.346 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,012 |
inproceedings | zhu-etal-2022-pass | Pass off Fish Eyes for Pearls: Attacking Model Selection of Pre-trained Models | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.347/ | Zhu, Biru and Qin, Yujia and Qi, Fanchao and Deng, Yangdong and Liu, Zhiyuan and Sun, Maosong and Gu, Ming | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5060--5072 | Selecting an appropriate pre-trained model (PTM) for a specific downstream task typically requires significant fine-tuning effort. To accelerate this process, researchers propose feature-based model selection (FMS) methods, which assess PTMs' transferability to a specific task in a fast way without fine-tuning. In this work, we argue that current FMS methods are vulnerable, as the assessment mainly relies on the static features extracted from PTMs. However, such features are derived without training PTMs on downstream tasks, and are not necessarily reliable indicators of the PTM's transferability. To validate our viewpoints, we design two methods to evaluate the robustness of FMS: (1) model disguise attack, which post-trains an inferior PTM with a contrastive objective, and (2) evaluation data selection, which selects a subset of the data points for FMS evaluation based on K-means clustering. Experimental results prove that both methods can successfully make FMS mistakenly judge the transferability of PTMs. Moreover, we find that these two methods can further be combined with the backdoor attack to misguide the FMS to select poisoned models. To the best of our knowledge, this is the first work to demonstrate the defects of current FMS algorithms and evaluate their potential security risks. By identifying previously unseen risks of FMS, our study indicates new directions for improving the robustness of FMS. | null | null | 10.18653/v1/2022.acl-long.347 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,013
inproceedings | zhao-etal-2022-educational | Educational Question Generation of Children Storybooks via Question Type Distribution Learning and Event-centric Summarization | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.348/ | Zhao, Zhenjie and Hou, Yufang and Wang, Dakuo and Yu, Mo and Liu, Chengzhong and Ma, Xiaojuan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5073--5085 | Generating educational questions for fairytales or storybooks is vital for improving children's literacy. However, it is challenging to generate questions that capture the interesting aspects of a fairytale story and are educationally meaningful. In this paper, we propose a novel question generation method that first learns the question type distribution of an input story paragraph, and then summarizes salient events which can be used to generate high-cognitive-demand questions. To train the event-centric summarizer, we finetune a pre-trained transformer-based sequence-to-sequence model using silver samples composed of educational question-answer pairs. On a newly proposed educational question-answering dataset, FairytaleQA, we show good performance of our method on both automatic and human evaluation metrics. Our work indicates the necessity of decomposing question type distribution learning and event-centric summary generation for educational question generation. | null | null | 10.18653/v1/2022.acl-long.348 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,014
inproceedings | gu-etal-2022-hetermpc | {H}eter{MPC}: A Heterogeneous Graph Neural Network for Response Generation in Multi-Party Conversations | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.349/ | Gu, Jia-Chen and Tan, Chao-Hong and Tao, Chongyang and Ling, Zhen-Hua and Hu, Huang and Geng, Xiubo and Jiang, Daxin | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5086--5097 | Recently, various response generation models for two-party conversations have achieved impressive improvements, but less attention has been paid to multi-party conversations (MPCs), which are more practical and complicated. Compared with a two-party conversation, where a dialogue context is a sequence of utterances, building a response generation model for MPCs is more challenging, since there exist complicated context structures and the generated responses heavily rely on both interlocutors (i.e., speaker and addressee) and history utterances. To address these challenges, we present HeterMPC, a heterogeneous graph-based neural network for response generation in MPCs which models the semantics of utterances and interlocutors simultaneously with two types of nodes in a graph. Besides, we also design six types of meta relations with node-edge-type-dependent parameters to characterize the heterogeneous interactions within the graph. Through multi-hop updating, HeterMPC can adequately utilize the structural knowledge of conversations for response generation. Experimental results on the Ubuntu Internet Relay Chat (IRC) channel benchmark show that HeterMPC outperforms various baseline models for response generation in MPCs. | null | null | 10.18653/v1/2022.acl-long.349 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,015
inproceedings | otmakhova-etal-2022-patient | The patient is more dead than alive: exploring the current state of the multi-document summarisation of the biomedical literature | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.350/ | Otmakhova, Yulia and Verspoor, Karin and Baldwin, Timothy and Lau, Jey Han | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5098--5111 | Although multi-document summarisation (MDS) of the biomedical literature is a highly valuable task that has recently attracted substantial interest, evaluation of the quality of biomedical summaries lacks consistency and transparency. In this paper, we examine the summaries generated by two current models in order to understand the deficiencies of existing evaluation approaches in the context of the challenges that arise in the MDS task. Based on this analysis, we propose a new approach to human evaluation and identify several challenges that must be overcome to develop effective biomedical MDS systems. | null | null | 10.18653/v1/2022.acl-long.350 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,016 |
inproceedings | parnell-etal-2022-multi | A Multi-Document Coverage Reward for {RELAX}ed Multi-Document Summarization | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.351/ | Parnell, Jacob and Jauregi Unanue, Inigo and Piccardi, Massimo | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5112--5128 | Multi-document summarization (MDS) has made significant progress in recent years, in part facilitated by the availability of new, dedicated datasets and capacious language models. However, a standing limitation of these models is that they are trained against limited references and with plain maximum-likelihood objectives. As for many other generative tasks, reinforcement learning (RL) offers the potential to improve the training of MDS models; yet, it requires a carefully-designed reward that can ensure appropriate leverage of both the reference summaries and the input documents. For this reason, in this paper we propose fine-tuning an MDS baseline with a reward that balances a reference-based metric such as ROUGE with coverage of the input documents. To implement the approach, we utilize RELAX (Grathwohl et al., 2018), a contemporary gradient estimator which is both low-variance and unbiased, and we fine-tune the baseline in a few-shot style for both stability and computational efficiency. Experimental results over the Multi-News and WCEP MDS datasets show significant improvements of up to +0.95 pp average ROUGE score and +3.17 pp METEOR score over the baseline, and competitive results with the literature. In addition, they show that the coverage of the input documents is increased, and evenly across all documents. | null | null | 10.18653/v1/2022.acl-long.351 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,017 |
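A small sketch of the reward shape described in the entry above: a reference-based score is mixed with a coverage term over the input documents. The unigram-overlap coverage proxy and the mixing weight are placeholders; the paper's reward uses ROUGE and a more careful multi-document coverage definition, optimized through the RELAX gradient estimator.

```python
# Sketch of a reward balancing a reference-based score with input-document
# coverage (mixing weight lam and the overlap proxy are assumptions).
def unigram_overlap(summary: str, doc: str) -> float:
    s, d = set(summary.lower().split()), set(doc.lower().split())
    return len(s & d) / max(1, len(s))

def coverage(summary: str, docs: list, threshold: float = 0.1) -> float:
    covered = sum(unigram_overlap(summary, d) > threshold for d in docs)
    return covered / len(docs)  # fraction of source documents reflected

def mixed_reward(rouge_score: float, summary: str, docs: list, lam: float = 0.5) -> float:
    return lam * rouge_score + (1 - lam) * coverage(summary, docs)

docs = ["storm hits the coast overnight", "residents evacuated before the storm"]
print(mixed_reward(0.42, "storm forces residents to evacuate the coast", docs))
```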
inproceedings | zhou-etal-2022-knn | {KNN}-Contrastive Learning for Out-of-Domain Intent Classification | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.352/ | Zhou, Yunhua and Liu, Peiju and Qiu, Xipeng | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5129--5141 | Out-of-Domain (OOD) intent classification is a basic and challenging task for dialogue systems. To learn discriminative semantic features, previous methods commonly restrict the region of In-Domain (IND) intent features in feature space to be compact or simply connected, implicitly assuming that no OOD intents reside there. The distribution of the IND intent features is then often assumed to obey a hypothetical distribution (mostly Gaussian), and samples outside this distribution are regarded as OOD samples. In this paper, we start from the nature of OOD intent classification and explore its optimization objective. We further propose a simple yet effective method, named KNN-contrastive learning. Our approach utilizes the k-nearest neighbors (KNN) of IND intents to learn discriminative semantic features that are more conducive to OOD detection. Notably, the density-based novelty detection algorithm is so well-grounded in the essence of our method that it is reasonable to use it as the OOD detection algorithm without imposing any requirements on the feature distribution. Extensive experiments on four public datasets show that our approach can not only enhance the OOD detection performance substantially but also improve the IND intent classification while requiring no restrictions on the feature distribution. | null | null | 10.18653/v1/2022.acl-long.352 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,018
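A minimal sketch of the KNN-contrastive idea: for each anchor, only its k nearest same-intent neighbors count as positives, so the learned features support density-based OOD detection without distributional assumptions. The values of k and the temperature, and the exact loss normalization, are assumptions rather than the paper's precise formulation.

```python
# Sketch of a KNN-contrastive loss: positives for each anchor are limited to
# its k nearest neighbors among same-intent examples (k, tau are assumptions).
import torch
import torch.nn.functional as F

def knn_contrastive_loss(z, labels, k=3, tau=0.1):
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / tau
    sim.fill_diagonal_(float("-inf"))             # exclude self-pairs
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos_sim = sim.masked_fill(~same, float("-inf"))
    knn_idx = pos_sim.topk(k, dim=1).indices      # k nearest same-label examples
    pos_mask = torch.zeros_like(same).scatter(1, knn_idx, True) & same
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0).sum(1)
    return -(pos_log_prob / pos_mask.sum(1).clamp(min=1)).mean()

z = torch.randn(16, 128)       # IND intent features
labels = torch.arange(16) % 4  # 4 intents, 4 examples each
print(knn_contrastive_loss(z, labels))
```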
inproceedings | zhu-etal-2022-neural | A Neural Network Architecture for Program Understanding Inspired by Human Behaviors | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.353/ | Zhu, Renyu and Yuan, Lei and Li, Xiang and Gao, Ming and Cai, Wenyuan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5142--5153 | Program understanding is a fundamental task in programming language processing. Despite recent successes, existing works fail to take human behaviors as a reference in understanding programs. In this paper, we consider human behaviors and propose the PGNN-EK model that consists of two main components. On the one hand, inspired by the {\textquotedblleft}divide-and-conquer{\textquotedblright} reading behaviors of humans, we present a partitioning-based graph neural network model, PGNN, on the upgraded AST of code. On the other hand, to characterize the human behavior of resorting to other resources to aid code comprehension, we transform raw code with external knowledge and apply pre-training techniques for information extraction. Finally, we combine the two embeddings generated from the two components to output code embeddings. We conduct extensive experiments to show the superior performance of PGNN-EK on the code summarization and code clone detection tasks. In particular, to show the generalization ability of our model, we release a new dataset that is more challenging for code clone detection and could advance the development of the community. Our code and data are publicly available at \url{https://github.com/RecklessRonan/PGNN-EK}. | null | null | 10.18653/v1/2022.acl-long.353 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,019
inproceedings | park-etal-2022-faviq | {F}a{VIQ}: {FA}ct Verification from Information-seeking Questions | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.354/ | Park, Jungsoo and Min, Sewon and Kang, Jaewoo and Zettlemoyer, Luke and Hajishirzi, Hannaneh | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5154--5166 | Despite significant interest in developing general purpose fact checking models, it is challenging to construct a large-scale fact verification dataset with realistic real-world claims. Existing claims are either authored by crowdworkers, thereby introducing subtle biases that are difficult to control for, or manually verified by professional fact checkers, causing them to be expensive and limited in scale. In this paper, we construct a large-scale challenging fact verification dataset called FAVIQ, consisting of 188k claims derived from an existing corpus of ambiguous information-seeking questions. The ambiguities in the questions enable automatically constructing true and false claims that reflect user confusions (e.g., the year of the movie being filmed vs. being released). Claims in FAVIQ are verified to be natural, contain little lexical bias, and require a complete understanding of the evidence for verification. Our experiments show that the state-of-the-art models are far from solving our new task. Moreover, training on our data helps in professional fact-checking, outperforming models trained on the widely used dataset FEVER or in-domain data by up to 17{\%} absolute. Altogether, our data will serve as a challenging benchmark for natural language understanding and support future progress in professional fact checking. | null | null | 10.18653/v1/2022.acl-long.354 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,020
inproceedings | gao-etal-2022-simulating | Simulating Bandit Learning from User Feedback for Extractive Question Answering | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.355/ | Gao, Ge and Choi, Eunsol and Artzi, Yoav | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5167--5179 | We study learning from user feedback for extractive question answering by simulating feedback using supervised data. We cast the problem as contextual bandit learning, and analyze the characteristics of several learning scenarios with a focus on reducing data annotation. We show that systems initially trained on few examples can dramatically improve given feedback from users on model-predicted answers, and that one can use existing datasets to deploy systems in new domains without any annotation effort, instead improving the system on-the-fly via user feedback. | null | null | 10.18653/v1/2022.acl-long.355 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,021
inproceedings | xu-etal-2022-beyond | Beyond Goldfish Memory: Long-Term Open-Domain Conversation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.356/ | Xu, Jing and Szlam, Arthur and Weston, Jason | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5180--5197 | Despite recent improvements in open-domain dialogue models, state-of-the-art models are trained and evaluated on short conversations with little context. In contrast, the long-term conversation setting has hardly been studied. In this work we collect and release a human-human dataset consisting of multiple chat sessions whereby the speaking partners learn about each other's interests and discuss the things they have learnt from past sessions. We show how existing models trained on existing datasets perform poorly in this long-term conversation setting in both automatic and human evaluations, and we study long-context models that can perform much better. In particular, we find retrieval-augmented methods and methods with an ability to summarize and recall previous conversations outperform the standard encoder-decoder architectures currently considered state of the art. | null | null | 10.18653/v1/2022.acl-long.356 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,022
inproceedings | subramanian-etal-2022-reclip | {R}e{CLIP}: A Strong Zero-Shot Baseline for Referring Expression Comprehension | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.357/ | Subramanian, Sanjay and Merrill, William and Darrell, Trevor and Gardner, Matt and Singh, Sameer and Rohrbach, Anna | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5198--5215 | Training a referring expression comprehension (ReC) model for a new visual domain requires collecting referring expressions, and potentially corresponding bounding boxes, for images in the domain. While large-scale pre-trained models are useful for image classification across domains, it remains unclear if they can be applied in a zero-shot manner to more complex tasks like ReC. We present ReCLIP, a simple but strong \textit{zero-shot} baseline that repurposes CLIP, a state-of-the-art large-scale model, for ReC. Motivated by the close connection between ReC and CLIP's contrastive pre-training objective, the first component of ReCLIP is a region-scoring method that isolates object proposals via cropping and blurring, and passes them to CLIP. However, through controlled experiments on a synthetic dataset, we find that CLIP is largely incapable of performing spatial reasoning off-the-shelf. We reduce the gap between zero-shot baselines from prior work and supervised models by as much as 29{\%} on RefCOCOg, and on RefGTA (video game imagery), ReCLIP's relative improvement over supervised ReC models trained on real images is 8{\%}. | null | null | 10.18653/v1/2022.acl-long.357 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,023
inproceedings | liu-etal-2022-dynamic | Dynamic Prefix-Tuning for Generative Template-based Event Extraction | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.358/ | Liu, Xiao and Huang, Heyan and Shi, Ge and Wang, Bo | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5216--5228 | We consider event extraction in a generative manner with template-based conditional generation. Although there is a rising trend of casting the task of event extraction as a sequence generation problem with prompts, these generation-based methods have two significant challenges, including using suboptimal prompts and static event type information. In this paper, we propose a generative template-based event extraction method with dynamic prefix (GTEE-DynPref) by integrating context information with type-specific prefixes to learn a context-specific prefix for each context. Experimental results show that our model achieves competitive results with the state-of-the-art classification-based model OneIE on ACE 2005 and achieves the best performance on ERE. Additionally, our model is shown to be effectively portable to new types of events. | null | null | 10.18653/v1/2022.acl-long.358 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,024
inproceedings | akbari-etal-2022-e | {E}-{LANG}: Energy-Based Joint Inferencing of Super and Swift Language Models | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.359/ | Akbari, Mohammad and Banitalebi-Dehkordi, Amin and Zhang, Yong | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5229--5244 | Building huge and highly capable language models has been a trend in recent years. Despite their great performance, they incur high computational cost. A common solution is to apply model compression or choose light-weight architectures, which often need a separate fixed-size model for each desired computational budget, and may lose performance in the case of heavy compression. This paper proposes an effective dynamic inference approach, called E-LANG, which distributes the inference between large accurate Super-models and light-weight Swift models. To this end, a decision making module routes the inputs to Super or Swift models based on the energy characteristics of the representations in the latent space. This method is easily adoptable and architecture agnostic. As such, it can be applied to black-box pre-trained models without a need for architectural manipulations, reassembling of modules, or re-training. Unlike existing methods that are only applicable to encoder-only backbones and classification tasks, our method also works for encoder-decoder structures and sequence-to-sequence tasks such as translation. The E-LANG performance is verified through a set of experiments with T5 and BERT backbones on GLUE, SuperGLUE, and WMT. In particular, we outperform T5-11B with an average computation speed-up of 3.3X on GLUE and 2.9X on SuperGLUE. We also achieve BERT-based SOTA on GLUE with 3.2X less computation. Code and demo are available in supplementary materials. | null | null | 10.18653/v1/2022.acl-long.359 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,025
inproceedings | xiao-etal-2022-primera | {PRIMERA}: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.360/ | Xiao, Wen and Beltagy, Iz and Carenini, Giuseppe and Cohan, Arman | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5245--5263 | We introduce PRIMERA, a pre-trained model for multi-document representation with a focus on summarization that reduces the need for dataset-specific architectures and large amounts of labeled fine-tuning data. PRIMERA uses our newly proposed pre-training objective designed to teach the model to connect and aggregate information across documents. It also uses efficient encoder-decoder transformers to simplify the processing of concatenated input documents. With extensive experiments on 6 multi-document summarization datasets from 3 different domains in zero-shot, few-shot and fully-supervised settings, PRIMERA outperforms current state-of-the-art dataset-specific and pre-trained models in most of these settings by large margins. | null | null | 10.18653/v1/2022.acl-long.360 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,026
inproceedings | du-etal-2022-dynamic | Dynamic Global Memory for Document-level Argument Extraction | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.361/ | Du, Xinya and Li, Sha and Ji, Heng | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5264--5275 | Extracting informative arguments of events from news articles is a challenging problem in information extraction, which requires a global contextual understanding of each document. While recent work on document-level extraction has gone beyond the single-sentence level and increased the cross-sentence inference capability of end-to-end models, such models are still restricted by certain input sequence length constraints and usually ignore the global context between events. To tackle this issue, we introduce a new global neural generation-based framework for document-level event argument extraction by constructing a document memory store to record the contextual event information and leveraging it to implicitly and explicitly help with decoding of arguments for later events. Empirical results show that our framework outperforms prior methods substantially, and that it is more robust to adversarially annotated examples owing to our constrained decoding design. | null | null | 10.18653/v1/2022.acl-long.361 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,027
inproceedings | wiechmann-kerz-2022-measuring | Measuring the Impact of (Psycho-)Linguistic and Readability Features and Their Spill Over Effects on the Prediction of Eye Movement Patterns | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.362/ | Wiechmann, Daniel and Qiao, Yu and Kerz, Elma and Mattern, Justus | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5276--5290 | There is a growing interest in the combined use of NLP and machine learning methods to predict gaze patterns during naturalistic reading. While promising results have been obtained through the use of transformer-based language models, little work has been undertaken to relate the performance of such models to general text characteristics. In this paper we report on experiments with two eye-tracking corpora of naturalistic reading and two language models (BERT and GPT-2). In all experiments, we test effects of a broad spectrum of features for predicting human reading behavior that fall into five categories (syntactic complexity, lexical richness, register-based multiword combinations, readability and psycholinguistic word properties). Our experiments show that both the features included and the architecture of the transformer-based language models play a role in predicting multiple eye-tracking measures during naturalistic reading. We also report the results of experiments aimed at determining the relative importance of features from different groups using SP-LIME. | null | null | 10.18653/v1/2022.acl-long.362 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,028 |
inproceedings | sun-etal-2022-alternative | Alternative Input Signals Ease Transfer in Multilingual Machine Translation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.363/ | Sun, Simeng and Fan, Angela and Cross, James and Chaudhary, Vishrav and Tran, Chau and Koehn, Philipp and Guzm{\'a}n, Francisco | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5291--5305 | Recent work in multilingual machine translation (MMT) has focused on the potential of positive transfer between languages, particularly cases where higher-resourced languages can benefit lower-resourced ones. While training an MMT model, the supervision signals learned from one language pair can be transferred to the other via the tokens shared by multiple source languages. However, the transfer is inhibited when the token overlap among source languages is small, which manifests naturally when languages use different writing systems. In this paper, we tackle inhibited transfer by augmenting the training data with alternative signals that unify different writing systems, such as phonetic, romanized, and transliterated input. We test these signals on Indic and Turkic languages, two language families where the writing systems differ but languages still share common features. Our results indicate that a straightforward multi-source self-ensemble {--} training a model on a mixture of various signals and ensembling the outputs of the same model fed with different signals during inference {--} outperforms strong ensemble baselines by 1.3 BLEU points on both language families. Further, we find that incorporating alternative inputs via self-ensemble can be particularly effective when the training set is small, leading to +5 BLEU when only 5{\%} of the total training data is accessible. Finally, our analysis demonstrates that including alternative signals yields more consistency and translates named entities more accurately, which is crucial for increased factuality of automated systems. | null | null | 10.18653/v1/2022.acl-long.363 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,029
inproceedings | leong-whitenack-2022-phone | Phone-ing it in: Towards Flexible Multi-Modal Language Model Training by Phonetic Representations of Data | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.364/ | Leong, Colin and Whitenack, Daniel | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5306--5315 | Multi-modal techniques offer significant untapped potential to unlock improved NLP technology for local languages. However, many advances in language model pre-training are focused on text, a fact that only increases systematic inequalities in the performance of NLP tasks across the world`s languages. In this work, we propose a multi-modal approach to train language models using whatever text and/or audio data might be available in a language. Initial experiments using Swahili and Kinyarwanda data suggest the viability of the approach for downstream Named Entity Recognition (NER) tasks, with models pre-trained on phone data showing an improvement of up to 6{\%} F1-score above models that are trained from scratch. Preprocessing and training code will be uploaded to \url{https://github.com/sil-ai/phone-it-in}. | null | null | 10.18653/v1/2022.acl-long.364 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,030 |
inproceedings | min-etal-2022-noisy | Noisy Channel Language Model Prompting for Few-Shot Text Classification | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.365/ | Min, Sewon and Lewis, Mike and Hajishirzi, Hannaneh and Zettlemoyer, Luke | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5316--5330 | We introduce a noisy channel approach for language model prompting in few-shot text classification. Instead of computing the likelihood of the label given the input (referred to as direct models), channel models compute the conditional probability of the input given the label, and are thereby required to explain every word in the input. We use channel models for recently proposed few-shot learning methods with no or very limited updates to the language model parameters, via either in-context demonstration or prompt tuning. Our experiments show that, for both methods, channel models significantly outperform their direct counterparts, which we attribute to their stability, i.e., lower variance and higher worst-case accuracy. We also present extensive ablations that provide recommendations for when to use channel prompt tuning instead of other competitive models (e.g., direct head tuning): channel prompt tuning is preferred when the number of training examples is small, labels in the training data are imbalanced, or generalization to unseen labels is required. | null | null | 10.18653/v1/2022.acl-long.365 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,031
inproceedings | downey-etal-2022-multilingual | Multilingual unsupervised sequence segmentation transfers to extremely low-resource languages | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.366/ | Downey, C. and Drizin, Shannon and Haroutunian, Levon and Thukral, Shivin | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5331--5346 | We show that unsupervised sequence-segmentation performance can be transferred to extremely low-resource languages by pre-training a Masked Segmental Language Model (Downey et al., 2021) multilingually. Further, we show that this transfer can be achieved by training over a collection of low-resource languages that are typologically similar (but phylogenetically unrelated) to the target language. In our experiments, we transfer from a collection of 10 Indigenous American languages (AmericasNLP, Mager et al., 2021) to K'iche', a Mayan language. We compare our multilingual model to a monolingual (from-scratch) baseline, as well as a model pre-trained on Quechua only. We show that the multilingual pre-trained approach yields consistent segmentation quality across target dataset sizes, exceeding the monolingual baseline in 6/10 experimental settings. Our model yields especially strong results at small target sizes, including a zero-shot performance of 20.6 F1. These results have promising implications for low-resource NLP pipelines involving human-like linguistic units, such as the sparse transcription framework proposed by Bird (2020). | null | null | 10.18653/v1/2022.acl-long.366 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,032
inproceedings | nzeyimana-niyongabo-rubungo-2022-kinyabert | {K}inya{BERT}: a Morphology-aware {K}inyarwanda Language Model | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.367/ | Nzeyimana, Antoine and Niyongabo Rubungo, Andre | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5347--5363 | Pre-trained language models such as BERT have been successful at tackling many natural language processing tasks. However, the unsupervised sub-word tokenization methods commonly used in these models (e.g., byte-pair encoding - BPE) are sub-optimal at handling morphologically rich languages. Even given a morphological analyzer, naive sequencing of morphemes into a standard BERT architecture is inefficient at capturing morphological compositionality and expressing word-relative syntactic regularities. We address these challenges by proposing a simple yet effective two-tier BERT architecture that leverages a morphological analyzer and explicitly represents morphological compositionality. Despite the success of BERT, most of its evaluations have been conducted on high-resource languages, obscuring its applicability to low-resource languages. We evaluate our proposed method on the low-resource morphologically rich Kinyarwanda language, naming the proposed model architecture KinyaBERT. A robust set of experimental results reveals that KinyaBERT outperforms solid baselines by 2{\%} in F1 score on a named entity recognition task and by 4.3{\%} in the average score of a machine-translated GLUE benchmark. KinyaBERT fine-tuning has better convergence and achieves more robust results on multiple tasks even in the presence of translation noise. | null | null | 10.18653/v1/2022.acl-long.367 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,033
inproceedings | park-caragea-2022-calibration | On the Calibration of Pre-trained Language Models using Mixup Guided by Area Under the Margin and Saliency | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.368/ | Park, Seo Yeon and Caragea, Cornelia | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5364--5374 | A well-calibrated neural model produces confidence (probability outputs) closely approximated by the expected accuracy. While prior studies have shown that mixup training as a data augmentation technique can improve model calibration on image classification tasks, little is known about using mixup for model calibration on natural language understanding (NLU) tasks. In this paper, we explore mixup for model calibration on several NLU tasks and propose a novel mixup strategy for pre-trained language models that improves model calibration further. Our proposed mixup is guided by both the Area Under the Margin (AUM) statistic (Pleiss et al., 2020) and the saliency map of each sample (Simonyan et al., 2013). Moreover, we combine our mixup strategy with model miscalibration correction techniques (i.e., label smoothing and temperature scaling) and provide detailed analyses of their impact on our proposed mixup. We focus on systematically designing experiments on three NLU tasks: natural language inference, paraphrase detection, and commonsense reasoning. Our method achieves the lowest expected calibration error compared to strong baselines on both in-domain and out-of-domain test samples while maintaining competitive accuracy. | null | null | 10.18653/v1/2022.acl-long.368 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,034 |
inproceedings | stowe-etal-2022-impli | {IMPLI}: Investigating {NLI} Models' Performance on Figurative Language | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.369/ | Stowe, Kevin and Utama, Prasetya and Gurevych, Iryna | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5375--5388 | Natural language inference (NLI) has been widely used as a task to train and evaluate models for language understanding. However, the ability of NLI models to perform inferences requiring understanding of figurative language such as idioms and metaphors remains understudied. We introduce the IMPLI (Idiomatic and Metaphoric Paired Language Inference) dataset, an English dataset consisting of paired sentences spanning idioms and metaphors. We develop novel methods to semi-automatically generate 24k pairs, and we manually create 1.8k gold pairs. We use IMPLI to evaluate NLI models based on RoBERTa fine-tuned on the widely used MNLI dataset. We then show that while they can reliably detect the entailment relationship between figurative phrases and their literal counterparts, they perform poorly on similarly structured examples where pairs are designed to be non-entailing. This suggests the limits of current NLI models with regard to understanding figurative language, and this dataset serves as a benchmark for future improvements in this direction. | null | null | 10.18653/v1/2022.acl-long.369 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,035
inproceedings | wu-etal-2022-qaconv | {QAC}onv: Question Answering on Informative Conversations | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.370/ | Wu, Chien-Sheng and Madotto, Andrea and Liu, Wenhao and Fung, Pascale and Xiong, Caiming | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5389--5411 | This paper introduces QAConv, a new question answering (QA) dataset that uses conversations as a knowledge source. We focus on informative conversations, including business emails, panel discussions, and work channels. Unlike open-domain and task-oriented dialogues, these conversations are usually long, complex, asynchronous, and involve strong domain knowledge. In total, we collect 34,608 QA pairs from 10,259 selected conversations with both human-written and machine-generated questions. We use a question generator and a dialogue summarizer as auxiliary tools to collect and recommend questions. The dataset has two testing scenarios: chunk mode and full mode, depending on whether the grounded partial conversation is provided or retrieved. Experimental results show that state-of-the-art pretrained QA systems have limited zero-shot performance and tend to predict our questions as unanswerable. Our dataset provides a new training and evaluation testbed to facilitate research on QA over conversations. | null | null | 10.18653/v1/2022.acl-long.370 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,036
inproceedings | zhou-etal-2022-prix | Prix-{LM}: Pretraining for Multilingual Knowledge Base Construction | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.371/ | Zhou, Wenxuan and Liu, Fangyu and Vuli{\'c}, Ivan and Collier, Nigel and Chen, Muhao | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5412--5424 | Knowledge bases (KBs) contain plenty of structured world and commonsense knowledge. As such, they often complement distributional text-based information and facilitate various downstream tasks. Since their manual construction is resource- and time-intensive, recent efforts have tried leveraging large pretrained language models (PLMs) to generate additional monolingual knowledge facts for KBs. However, such methods have not been attempted for building and enriching multilingual KBs. Besides wider application, such multilingual KBs can provide richer combined knowledge than monolingual (e.g., English) KBs. Knowledge expressed in different languages may be complementary and unequally distributed: this implies that the knowledge available in high-resource languages can be transferred to low-resource ones. To achieve this, it is crucial to represent multilingual knowledge in a shared/unified space. To this end, we propose a unified representation model, Prix-LM, for multilingual KB construction and completion. We leverage two types of knowledge, monolingual triples and cross-lingual links, extracted from existing multilingual KBs, and tune a multilingual language encoder XLM-R via a causal language modeling objective. Prix-LM integrates useful multilingual and KB-based factual knowledge into a single model. Experiments on standard entity-related tasks, such as link prediction in multiple languages, cross-lingual entity linking and bilingual lexicon induction, demonstrate its effectiveness, with gains reported over strong task-specialised baselines. | null | null | 10.18653/v1/2022.acl-long.371 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,037 |
inproceedings | lo-etal-2022-semantic | Semantic Composition with {PSHRG} for Derivation Tree Reconstruction from Graph-Based Meaning Representations | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.372/ | Lo, Chun Hei and Lam, Wai and Cheng, Hong | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5425--5439 | We introduce a data-driven approach to generating derivation trees from meaning representation graphs with probabilistic synchronous hyperedge replacement grammar (PSHRG). SHRG has been used to produce meaning representation graphs from texts and syntax trees, but little is known about its viability in the reverse direction. In particular, we experiment on Dependency Minimal Recursion Semantics (DMRS) and adapt PSHRG as a formalism that approximates the semantic composition of DMRS graphs and simultaneously recovers the derivations that license the DMRS graphs. Consistent results are obtained when evaluating on a collection of annotated corpora. This work reveals the ability of PSHRG in formalizing a syntax{--}semantics interface, modelling compositional graph-to-tree translations, and channelling explainability to surface realization. | null | null | 10.18653/v1/2022.acl-long.372 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,038
inproceedings | cirik-etal-2022-holm | {HOLM}: Hallucinating Objects with Language Models for Referring Expression Recognition in Partially-Observed Scenes | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.373/ | Cirik, Volkan and Morency, Louis-Philippe and Berg-Kirkpatrick, Taylor | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5440--5453 | AI systems embodied in the physical world face a fundamental challenge of partial observability, operating with only a limited view and knowledge of the environment. This creates challenges when AI systems try to reason about language and its relationship with the environment: objects referred to through language (e.g. giving many instructions) are not immediately visible. Actions by the AI system may be required to bring these objects into view. A good benchmark to study this challenge is the Dynamic Referring Expression Recognition (dRER) task, where the goal is to find a target location by dynamically adjusting the field of view (FoV) in partially observed 360-degree scenes. In this paper, we introduce HOLM, Hallucinating Objects with Language Models, to address the challenge of partial observability. HOLM uses large pre-trained language models (LMs) to infer object hallucinations for the unobserved part of the environment. Our core intuition is that if a pair of objects co-appear in an environment frequently, our usage of language should reflect this fact about the world. Based on this intuition, we prompt language models to extract knowledge about object affinities, which gives us a proxy for spatial relationships of objects. Our experiments show that HOLM performs better than the state-of-the-art approaches on two datasets for dRER, allowing us to study generalization in both indoor and outdoor settings. | null | null | 10.18653/v1/2022.acl-long.373 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,039
inproceedings | ahuja-etal-2022-multi | Multi Task Learning For Zero Shot Performance Prediction of Multilingual Models | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.374/ | Ahuja, Kabir and Kumar, Shanu and Dandapat, Sandipan and Choudhury, Monojit | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5454--5467 | Massively Multilingual Transformer-based Language Models have been observed to be surprisingly effective on zero-shot transfer across languages, though the performance varies from language to language depending on the pivot language(s) used for fine-tuning. In this work, we build upon some of the existing techniques for predicting the zero-shot performance on a task, by modeling it as a multi-task learning problem. We jointly train predictive models for different tasks, which helps us build more accurate predictors for tasks where we have test data in very few languages to measure the actual performance of the model. Our approach also lends us the ability to perform a much more robust feature selection, and identify a common set of features that influence zero-shot performance across a variety of tasks. | null | null | 10.18653/v1/2022.acl-long.374 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,040
inproceedings | martins-etal-2022-former | $\infty$-former: Infinite Memory Transformer | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.375/ | Martins, Pedro Henrique and Marinho, Zita and Martins, Andre | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5468--5485 | Transformers are unable to model long-term memories effectively, since the amount of computation they need to perform grows with the context length. While variations of efficient transformers have been proposed, they all have a finite memory capacity and are forced to drop old information. In this paper, we propose the $\infty$-former, which extends the vanilla transformer with an unbounded long-term memory. By making use of a continuous-space attention mechanism to attend over the long-term memory, the $\infty$-former's attention complexity becomes independent of the context length, trading off memory length with precision. In order to control where precision is more important, $\infty$-former maintains {\textquotedblleft}sticky memories,{\textquotedblright} being able to model arbitrarily long contexts while keeping the computation budget fixed. Experiments on a synthetic sorting task, language modeling, and document grounded dialogue generation demonstrate the $\infty$-former's ability to retain information from long sequences. | null | null | 10.18653/v1/2022.acl-long.375 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,041
inproceedings | blasi-etal-2022-systematic | Systematic Inequalities in Language Technology Performance across the World's Languages | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.376/ | Blasi, Damian and Anastasopoulos, Antonios and Neubig, Graham | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5486--5505 | Natural language processing (NLP) systems have become a central technology in communication, education, medicine, artificial intelligence, and many other domains of research and development. While the performance of NLP methods has grown enormously over the last decade, this progress has been restricted to a minuscule subset of the world's $\approx$6,500 languages. We introduce a framework for estimating the global utility of language technologies as revealed in a comprehensive snapshot of recent publications in NLP. Our analyses involve the field at large, but also more in-depth studies on both user-facing technologies (machine translation, language understanding, question answering, text-to-speech synthesis) and foundational NLP tasks (dependency parsing, morphological inflection). In the process, we (1) quantify disparities in the current state of NLP research, (2) explore some of its associated societal and academic factors, and (3) produce tailored recommendations for evidence-based policy making aimed at promoting more global and equitable language technologies. Data and code to reproduce the findings discussed in this paper are available on GitHub (\url{https://github.com/neubig/globalutility}). | null | null | 10.18653/v1/2022.acl-long.376 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,042
inproceedings | weissweiler-etal-2022-camel | {CaMEL}: {C}ase {M}arker {E}xtraction without {L}abels | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.377/ | Weissweiler, Leonie and Hofmann, Valentin and Jalili Sabet, Masoud and Schuetze, Hinrich | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5506--5516 | We introduce \textbf{CaMEL} (\textbf{Ca}se \textbf{M}arker \textbf{E}xtraction without \textbf{L}abels), a novel and challenging task in computational morphology that is especially relevant for low-resource languages. We propose a first model for CaMEL that uses a massively multilingual corpus to extract case markers in 83 languages based only on a noun phrase chunker and an alignment system. To evaluate CaMEL, we automatically construct a silver standard from UniMorph. The case markers extracted by our model can be used to detect and visualise similarities and differences between the case systems of different languages as well as to annotate fine-grained deep cases in languages in which they are not overtly marked. | null | null | 10.18653/v1/2022.acl-long.377 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,043 |
inproceedings | nejadgholi-etal-2022-improving | Improving Generalizability in Implicitly Abusive Language Detection with Concept Activation Vectors | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.378/ | Nejadgholi, Isar and Fraser, Kathleen and Kiritchenko, Svetlana | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5517--5529 | Robustness of machine learning models on ever-changing real-world data is critical, especially for applications affecting human well-being such as content moderation. New kinds of abusive language continually emerge in online discussions in response to current events (e.g., COVID-19), and the deployed abuse detection systems should be updated regularly to remain accurate. In this paper, we show that general abusive language classifiers tend to be fairly reliable in detecting out-of-domain explicitly abusive utterances but fail to detect new types of more subtle, implicit abuse. Next, we propose an interpretability technique, based on the Testing Concept Activation Vector (TCAV) method from computer vision, to quantify the sensitivity of a trained model to the human-defined concepts of explicit and implicit abusive language, and use that to explain the generalizability of the model on new data, in this case, COVID-related anti-Asian hate speech. Extending this technique, we introduce a novel metric, Degree of Explicitness, for a single instance and show that the new metric is beneficial in suggesting out-of-domain unlabeled examples to effectively enrich the training data with informative, implicitly abusive texts. | null | null | 10.18653/v1/2022.acl-long.378 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,044 |
inproceedings | falk-lapesa-2022-reports | Reports of personal experiences and stories in argumentation: datasets and analysis | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.379/ | Falk, Neele and Lapesa, Gabriella | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5530--5553 | Reports of personal experiences or stories can play a crucial role in argumentation, as they represent an immediate and (often) relatable way to back up one's position with respect to a given topic. They are easy to understand and increase empathy: this makes them powerful in argumentation. The impact of personal reports and stories in argumentation has been studied in the Social Sciences, but it is still largely underexplored in NLP. Our work is the first step towards filling this gap: our goal is to develop robust classifiers to identify documents containing personal experiences and reports. The main challenge is the scarcity of annotated data: our solution is to leverage existing annotations to be able to scale up the analysis. Our contribution is two-fold. First, we conduct a set of in-domain and cross-domain experiments involving three datasets (two from Argument Mining, one from the Social Sciences), modeling architectures, training setups and fine-tuning options tailored to the involved domains. We show that despite the differences among datasets and annotations, robust cross-domain classification is possible. Second, we employ linear regression for performance mining, identifying performance trends both for overall classification performance and individual classifier predictions. | null | null | 10.18653/v1/2022.acl-long.379 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,045
inproceedings | same-etal-2022-non | Non-neural Models Matter: a Re-evaluation of Neural Referring Expression Generation Systems | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.380/ | Same, Fahime and Chen, Guanyi and Van Deemter, Kees | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5554--5567 | In recent years, neural models have often outperformed rule-based and classic Machine Learning approaches in NLG. These classic approaches are now often disregarded, for example when new neural models are evaluated. We argue that they should not be overlooked, since, for some tasks, well-designed non-neural approaches achieve better performance than neural ones. In this paper, the task of generating referring expressions in linguistic context is used as an example. We examined two very different English datasets (WEBNLG and WSJ), and evaluated each algorithm using both automatic and human evaluations. Overall, the results of these evaluations suggest that rule-based systems with simple rule sets achieve on-par or better performance on both datasets compared to state-of-the-art neural REG systems. In the case of the more realistic dataset, WSJ, a machine learning-based system with well-designed linguistic features performed best. We hope that our work can encourage researchers to consider non-neural models in the future. | null | null | 10.18653/v1/2022.acl-long.380 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,046
inproceedings | zhao-etal-2022-bridging | Bridging the Generalization Gap in Text-to-{SQL} Parsing with Schema Expansion | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.381/ | Zhao, Chen and Su, Yu and Pauls, Adam and Platanios, Emmanouil Antonios | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5568--5578 | Text-to-SQL parsers map natural language questions to programs that are executable over tables to generate answers, and are typically evaluated on large-scale datasets like Spider (Yu et al., 2018). We argue that existing benchmarks fail to capture a certain out-of-domain generalization problem that is of significant practical importance: matching domain-specific phrases to composite operations over columns. To study this problem, we first propose a synthetic dataset along with a re-purposed train/test split of the Squall dataset (Shi et al., 2020) as new benchmarks to quantify domain generalization over column operations, and find that existing state-of-the-art parsers struggle on these benchmarks. We propose to address this problem by incorporating prior domain knowledge by preprocessing table schemas, and design a method that consists of two components: schema expansion and schema pruning. This method can be easily applied to multiple existing base parsers, and we show that it significantly outperforms baseline parsers on this domain generalization problem, boosting the underlying parsers' overall performance by up to 13.8{\%} relative accuracy gain (5.1{\%} absolute) on the new Squall data split. | null | null | 10.18653/v1/2022.acl-long.381 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,047
inproceedings | peng-etal-2022-predicate | Predicate-Argument Based Bi-Encoder for Paraphrase Identification | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.382/ | Peng, Qiwei and Weir, David and Weeds, Julie and Chai, Yekun | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5579--5589 | Paraphrase identification involves identifying whether a pair of sentences express the same or similar meanings. While cross-encoders have achieved high performance across several benchmarks, bi-encoders such as SBERT have been widely applied to sentence pair tasks. They exhibit substantially lower computational complexity and are better suited to symmetric tasks. In this work, we adopt a bi-encoder approach to the paraphrase identification task, and investigate the impact of explicitly incorporating predicate-argument information into SBERT through weighted aggregation. Experiments on six paraphrase identification datasets demonstrate that, with a minimal increase in parameters, the proposed model is able to outperform SBERT/SRoBERTa significantly. Further, ablation studies reveal that the predicate-argument based component plays a significant role in the performance gain. | null | null | 10.18653/v1/2022.acl-long.382 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,048
inproceedings | wang-etal-2022-miner | {MINER}: Improving Out-of-Vocabulary Named Entity Recognition from an Information Theoretic Perspective | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.383/ | Wang, Xiao and Dou, Shihan and Xiong, Limao and Zou, Yicheng and Zhang, Qi and Gui, Tao and Qiao, Liang and Cheng, Zhanzhan and Huang, Xuanjing | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5590--5600 | NER models have achieved promising performance on standard NER benchmarks. However, recent studies show that previous approaches may over-rely on entity mention information, resulting in poor performance on out-of-vocabulary (OOV) entity recognition. In this work, we propose MINER, a novel NER learning framework, to remedy this issue from an information-theoretic perspective. The proposed approach contains two mutual information based training objectives: i) generalizing information maximization, which enhances representation via deep understanding of context and entity surface forms; ii) superfluous information minimization, which discourages representation from rote memorizing entity names or exploiting biased cues in data. Experiments on various settings and datasets demonstrate that it achieves better performance in predicting OOV entities. | null | null | 10.18653/v1/2022.acl-long.383 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,049
inproceedings | de-kock-vlachos-2022-leveraging | Leveraging {W}ikipedia article evolution for promotional tone detection | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.384/ | De Kock, Christine and Vlachos, Andreas | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5601--5613 | Detecting biased language is useful for a variety of applications, such as identifying hyperpartisan news sources or flagging one-sided rhetoric. In this work we introduce WikiEvolve, a dataset for document-level promotional tone detection. Unlike previously proposed datasets, WikiEvolve contains seven versions of the same article from Wikipedia, from different points in its revision history; one with promotional tone, and six without it. This allows for obtaining a more precise training signal for learning promotional tone detection models. We adapt the previously proposed gradient reversal layer framework to encode two article versions simultaneously and thus leverage this additional training signal. In our experiments, our proposed adaptation of gradient reversal improves the accuracy of four different architectures on both in-domain and out-of-domain evaluation. | null | null | 10.18653/v1/2022.acl-long.384 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,050
inproceedings | dingemanse-liesenfeld-2022-text | From text to talk: {H}arnessing conversational corpora for humane and diversity-aware language technology | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.385/ | Dingemanse, Mark and Liesenfeld, Andreas | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5614--5633 | Informal social interaction is the primordial home of human language. Linguistically diverse conversational corpora are an important and largely untapped resource for computational linguistics and language technology. Through the efforts of a worldwide language documentation movement, such corpora are increasingly becoming available. We show how interactional data from 63 languages (26 families) harbours insights about turn-taking, timing, sequential structure and social action, with implications for language technology, natural language understanding, and the design of conversational interfaces. Harnessing linguistically diverse conversational corpora will provide the empirical foundations for flexible, localizable, humane language technologies of the future. | null | null | 10.18653/v1/2022.acl-long.385 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,051 |
inproceedings | liu-etal-2022-flooding | Flooding-{X}: Improving {BERT}'s Resistance to Adversarial Attacks via Loss-Restricted Fine-Tuning | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.386/ | Liu, Qin and Zheng, Rui and Rong, Bao and Liu, Jingyi and Liu, ZhiHua and Cheng, Zhanzhan and Qiao, Liang and Gui, Tao and Zhang, Qi and Huang, Xuanjing | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5634--5644 | Adversarial robustness has attracted much attention recently, and the mainstream solution is adversarial training. However, the tradition of generating adversarial perturbations for each input embedding (in the setting of NLP) scales up the training computational complexity by the number of gradient steps it takes to obtain the adversarial samples. To address this problem, we leverage the Flooding method, which primarily aims at better generalization and which we find promising for defending against adversarial attacks. We further propose an effective criterion to bring hyper-parameter-dependent flooding into effect with a narrowed-down search space by measuring how the gradient steps taken within one epoch affect the loss of each batch. Our approach requires zero adversarial samples for training, and its time consumption is equivalent to fine-tuning, which can be 2-15 times faster than standard adversarial training. We experimentally show that our method improves BERT's resistance to textual adversarial attacks by a large margin, and achieves state-of-the-art robust accuracy on various text classification and GLUE tasks. | null | null | 10.18653/v1/2022.acl-long.386 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,052
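The Flooding objective the paper builds on is a one-line change to the training loss: once the loss falls below a flood level b, gradients push it back up, so it hovers around b instead of collapsing to zero. A minimal PyTorch sketch; the flood level here is a plain hyperparameter, whereas the paper's actual contribution is a criterion for choosing it, which is not shown:

```python
import torch

def flooding_loss(loss: torch.Tensor, flood_level: float) -> torch.Tensor:
    """Flooding (Ishida et al., 2020): |loss - b| + b. Below the flood
    level b the gradient direction is reversed, keeping loss near b."""
    return (loss - flood_level).abs() + flood_level

# Toy usage with random data; flood_level=0.1 is an assumed value.
logits = torch.randn(8, 2, requires_grad=True)
labels = torch.randint(0, 2, (8,))
raw = torch.nn.functional.cross_entropy(logits, labels)
flooding_loss(raw, flood_level=0.1).backward()
```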
inproceedings | rony-etal-2022-rome | {R}o{M}e: A Robust Metric for Evaluating Natural Language Generation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.387/ | Rony, Md Rashad Al Hasan and Kovriguina, Liubov and Chaudhuri, Debanjan and Usbeck, Ricardo and Lehmann, Jens | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5645--5657 | Evaluating Natural Language Generation (NLG) systems is a challenging task. Firstly, the metric should ensure that the generated hypothesis reflects the reference's semantics. Secondly, it should consider the grammatical quality of the generated sentence. Thirdly, it should be robust enough to handle various surface forms of the generated sentence. Thus, an effective evaluation metric has to be multifaceted. In this paper, we propose an automatic evaluation metric incorporating several core aspects of natural language understanding (language competence, syntactic and semantic variation). Our proposed metric, RoMe, is trained on language features such as semantic similarity combined with tree edit distance and grammatical acceptability, using a self-supervised neural network to assess the overall quality of the generated sentence. Moreover, we perform an extensive robustness analysis of the state-of-the-art methods and RoMe. Empirical results suggest that RoMe has a stronger correlation to human judgment over state-of-the-art metrics in evaluating system-generated sentences across several NLG tasks. | null | null | 10.18653/v1/2022.acl-long.387 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,053
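RoMe learns to fuse its facets with a self-supervised network; purely for illustration, a hand-weighted fusion of the same three ingredients (semantic similarity, tree edit distance, grammatical acceptability) might look like the following sketch, where the weights and the normalization constant are invented:

```python
def rome_style_score(semantic_sim: float, tree_edit_dist: float,
                     acceptability: float, max_ted: float = 50.0,
                     weights: tuple = (0.5, 0.3, 0.2)) -> float:
    """Fuse three quality facets into one score in [0, 1].
    RoMe itself learns this fusion; the fixed weighted sum here is
    only a stand-in to show how the components interact."""
    # Map tree edit distance (lower = closer syntax) to a [0, 1] score.
    ted_score = 1.0 - min(tree_edit_dist, max_ted) / max_ted
    w_sem, w_ted, w_acc = weights
    return w_sem * semantic_sim + w_ted * ted_score + w_acc * acceptability

# Toy usage: high semantic similarity, moderate syntactic divergence.
print(rome_style_score(semantic_sim=0.82, tree_edit_dist=12.0,
                       acceptability=0.9))
```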
inproceedings | milewski-etal-2022-finding | Finding Structural Knowledge in Multimodal-{BERT} | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.388/ | Milewski, Victor and de Lhoneux, Miryam and Moens, Marie-Francine | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5658--5671 | In this work, we investigate the knowledge learned in the embeddings of multimodal-BERT models. More specifically, we probe their capabilities of storing the grammatical structure of linguistic data and the structure learned over objects in visual data. To reach that goal, we first make the inherent structure of language and visuals explicit by a dependency parse of the sentences that describe the image and by the dependencies between the object regions in the image, respectively. We call this explicit visual structure the scene tree, which is based on the dependency tree of the language description. Extensive probing experiments show that the multimodal-BERT models do not encode these scene trees. | null | null | 10.18653/v1/2022.acl-long.388 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,054
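The probing setup starts from the dependency parse of the image description; the visual scene tree is then derived from it (that mapping onto image regions is not shown here). A small sketch of extracting the language-side tree with spaCy, assuming the en_core_web_sm model is installed:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this English model is installed

def dependency_tree(sentence: str) -> dict:
    """Make a sentence's grammatical structure explicit as a nested
    dict rooted at the syntactic head, mirroring the dependency tree
    the probing setup begins with."""
    doc = nlp(sentence)
    root = next(tok for tok in doc if tok.head == tok)  # root is its own head

    def subtree(tok):
        return {tok.text: [subtree(child) for child in tok.children]}

    return subtree(root)

print(dependency_tree("A dog chases a red ball in the park."))
```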
inproceedings | chen-etal-2022-fully | Fully Hyperbolic Neural Networks | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.389/ | Chen, Weize and Han, Xu and Lin, Yankai and Zhao, Hexu and Liu, Zhiyuan and Li, Peng and Sun, Maosong and Zhou, Jie | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5672--5686 | Hyperbolic neural networks have shown great potential for modeling complex data. However, existing hyperbolic networks are not completely hyperbolic, as they encode features in the hyperbolic space yet formalize most of their operations in the tangent space (a Euclidean subspace) at the origin of the hyperbolic model. This hybrid method greatly limits the modeling ability of networks. In this paper, we propose a fully hyperbolic framework to build hyperbolic networks based on the Lorentz model by adapting the Lorentz transformations (including boost and rotation) to formalize essential operations of neural networks. Moreover, we also prove that the linear transformation in tangent spaces used by existing hyperbolic networks is a relaxation of the Lorentz rotation and does not include the boost, implicitly limiting the capabilities of existing hyperbolic networks. The experimental results on four NLP tasks show that our method has better performance for building both shallow and deep networks. Our code will be released to facilitate follow-up research. | null | null | 10.18653/v1/2022.acl-long.389 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,055
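The paper's Lorentz-boost and Lorentz-rotation layers are not reproduced here; as background, the Lorentz model they operate in rests on two primitives: the Lorentzian inner product and the lift of spatial coordinates onto the hyperboloid. A minimal PyTorch sketch of just those primitives, with unit curvature assumed:

```python
import torch

def lorentz_inner(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Lorentzian inner product <x, y>_L = -x0*y0 + sum_i xi*yi,
    with the time coordinate stored at index 0."""
    prod = x * y
    return prod[..., 1:].sum(dim=-1) - prod[..., 0]

def project_to_hyperboloid(x_space: torch.Tensor, k: float = 1.0) -> torch.Tensor:
    """Lift spatial coordinates onto the hyperboloid <x, x>_L = -1/k by
    solving for the time coordinate x0 = sqrt(1/k + ||x_space||^2)."""
    x0 = torch.sqrt(1.0 / k + (x_space ** 2).sum(dim=-1, keepdim=True))
    return torch.cat([x0, x_space], dim=-1)

# Every lifted point satisfies <x, x>_L = -1 for k = 1.
x = project_to_hyperboloid(torch.randn(5, 3))
print(lorentz_inner(x, x))   # ~ -1.0 for all five points
```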
inproceedings | fang-feng-2022-neural | Neural Machine Translation with Phrase-Level Universal Visual Representations | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.390/ | Fang, Qingkai and Feng, Yang | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5687--5698 | Multimodal machine translation (MMT) aims to improve neural machine translation (NMT) with additional visual information, but most existing MMT methods require paired input of source sentence and image, which makes them suffer from a shortage of sentence-image pairs. In this paper, we propose a phrase-level retrieval-based method for MMT to get visual information for the source input from existing sentence-image data sets so that MMT can break the limitation of paired sentence-image input. Our method performs retrieval at the phrase level and hence learns visual information from pairs of source phrase and grounded region, which can mitigate data sparsity. Furthermore, our method employs a conditional variational auto-encoder to learn visual representations which can filter redundant visual information and only retain visual information related to the phrase. Experiments show that the proposed method significantly outperforms strong baselines on multiple MMT datasets, especially when the textual context is limited. | null | null | 10.18653/v1/2022.acl-long.390 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,056
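At its core, the retrieval step maps source phrases to grounded region features collected offline from a sentence-image dataset; the paper's CVAE filtering of redundant visual information is omitted. A toy sketch with an invented in-memory index (all names and shapes are assumptions):

```python
from collections import defaultdict
import numpy as np

# Hypothetical phrase -> region-feature store, built offline from a
# sentence-image corpus; the real system's index is more elaborate.
phrase_index: dict = defaultdict(list)

def retrieve_visual_context(source_phrases: list, top_k: int = 3):
    """Look up stored region features for each source phrase. Phrases
    never seen during indexing simply contribute no visual context,
    which is what frees the model from paired sentence-image input."""
    feats = []
    for phrase in source_phrases:
        feats.extend(phrase_index.get(phrase, [])[:top_k])
    return np.stack(feats) if feats else None

# Toy usage: index one phrase, then retrieve for a mixed query.
phrase_index["a red ball"].append(np.random.rand(2048).astype(np.float32))
print(retrieve_visual_context(["a red ball", "unseen phrase"]).shape)  # (1, 2048)
```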
inproceedings | zhao-etal-2022-m3ed | {M}3{ED}: Multi-modal Multi-scene Multi-label Emotional Dialogue Database | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.391/ | Zhao, Jinming and Zhang, Tenggan and Hu, Jingwen and Liu, Yuchen and Jin, Qin and Wang, Xinchao and Li, Haizhou | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5699--5710 | The emotional state of a speaker can be influenced by many different factors in dialogues, such as dialogue scene, dialogue topic, and interlocutor stimulus. The currently available data resources to support such multimodal affective analysis in dialogues are however limited in scale and diversity. In this work, we propose a Multi-modal Multi-scene Multi-label Emotional Dialogue dataset, M$^3$ED, which contains 990 dyadic emotional dialogues from 56 different TV series, a total of 9,082 turns and 24,449 utterances. M$^3$ED is annotated with 7 emotion categories (happy, surprise, sad, disgust, anger, fear, and neutral) at utterance level, and encompasses acoustic, visual, and textual modalities. To the best of our knowledge, M$^3$ED is the first multimodal emotional dialogue dataset in Chinese. It is valuable for cross-culture emotion analysis and recognition. We apply several state-of-the-art methods on the M$^3$ED dataset to verify the validity and quality of the dataset. We also propose a general Multimodal Dialogue-aware Interaction framework, MDI, to model the dialogue context for emotion recognition, which achieves comparable performance to the state-of-the-art methods on the M$^3$ED. The full dataset and codes are available. | null | null | 10.18653/v1/2022.acl-long.391 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,057
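As a reading aid only, the annotation schema described above (utterance-level multi-label emotion over acoustic, visual, and textual modalities) could be modeled as follows; all field names are assumptions, not the dataset's actual on-disk format:

```python
from dataclasses import dataclass

# The seven emotion categories M3ED annotates at the utterance level.
EMOTIONS = ("happy", "surprise", "sad", "disgust", "anger", "fear", "neutral")

@dataclass
class Utterance:
    """One M3ED-style utterance record; field names are hypothetical."""
    text: str                  # textual modality
    audio_path: str            # acoustic modality
    video_path: str            # visual modality
    emotions: tuple            # multi-label subset of EMOTIONS

# Toy usage: an utterance carrying two emotion labels.
u = Utterance("I can't believe it!", "ep1_u3.wav", "ep1_u3.mp4",
              ("surprise", "anger"))
```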