entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | chen-etal-2022-shot | Few-shot Named Entity Recognition with Self-describing Networks | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.392/ | Chen, Jiawei and Liu, Qing and Lin, Hongyu and Han, Xianpei and Sun, Le | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5711--5722 | Few-shot NER needs to effectively capture information from limited instances and transfer useful knowledge from external resources. In this paper, we propose a self-describing mechanism for few-shot NER, which can effectively leverage illustrative instances and precisely transfer knowledge from external resources by describing both entity types and mentions using a universal concept set. Specifically, we design Self-describing Networks (SDNet), a Seq2Seq generation model which can universally describe mentions using concepts, automatically map novel entity types to concepts, and adaptively recognize entities on-demand. We pre-train SDNet with large-scale corpus, and conduct experiments on 8 benchmarks from different domains. Experiments show that SDNet achieves competitive performances on all benchmarks and achieves the new state-of-the-art on 6 benchmarks, which demonstrates its effectiveness and robustness. | null | null | 10.18653/v1/2022.acl-long.392 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,058 |
inproceedings | ao-etal-2022-speecht5 | {S}peech{T}5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.393/ | Ao, Junyi and Wang, Rui and Zhou, Long and Wang, Chengyi and Ren, Shuo and Wu, Yu and Liu, Shujie and Ko, Tom and Li, Qing and Zhang, Yu and Wei, Zhihua and Qian, Yao and Li, Jinyu and Wei, Furu | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5723--5738 | Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder. Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder. Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification. | null | null | 10.18653/v1/2022.acl-long.393 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,059 |
inproceedings | moramarco-etal-2022-human | Human Evaluation and Correlation with Automatic Metrics in Consultation Note Generation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.394/ | Moramarco, Francesco and Papadopoulos Korfiatis, Alex and Perera, Mark and Juric, Damir and Flann, Jack and Reiter, Ehud and Belz, Anya and Savkov, Aleksandar | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5739--5754 | In recent years, machine learning models have rapidly become better at generating clinical consultation notes; yet, there is little work on how to properly evaluate the generated consultation notes to understand the impact they may have on both the clinician using them and the patient`s clinical safety. To address this we present an extensive human evaluation study of consultation notes where 5 clinicians (i) listen to 57 mock consultations, (ii) write their own notes, (iii) post-edit a number of automatically generated notes, and (iv) extract all the errors, both quantitative and qualitative. We then carry out a correlation study with 18 automatic quality metrics and the human judgements. We find that a simple, character-based Levenshtein distance metric performs on par if not better than common model-based metrics like BertScore. All our findings and annotations are open-sourced. | null | null | 10.18653/v1/2022.acl-long.394 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,060 |
inproceedings | lu-etal-2022-unified | Unified Structure Generation for Universal Information Extraction | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.395/ | Lu, Yaojie and Liu, Qing and Dai, Dai and Xiao, Xinyan and Lin, Hongyu and Han, Xianpei and Sun, Le and Wu, Hua | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5755--5772 | Information extraction suffers from its varying targets, heterogeneous structures, and demand-specific schemas. In this paper, we propose a unified text-to-structure generation framework, namely UIE, which can universally model different IE tasks, adaptively generate targeted structures, and collaboratively learn general IE abilities from different knowledge sources. Specifically, UIE uniformly encodes different extraction structures via a structured extraction language, adaptively generates target extractions via a schema-based prompt mechanism {--} structural schema instructor, and captures the common IE abilities via a large-scale pretrained text-to-structure model. Experiments show that UIE achieved the state-of-the-art performance on 4 IE tasks, 13 datasets, and on all supervised, low-resource, and few-shot settings for a wide range of entity, relation, event and sentiment extraction tasks and their unification. These results verified the effectiveness, universality, and transferability of UIE. | null | null | 10.18653/v1/2022.acl-long.395 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,061 |
inproceedings | zhang-etal-2022-subgraph | Subgraph Retrieval Enhanced Model for Multi-hop Knowledge Base Question Answering | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.396/ | Zhang, Jing and Zhang, Xiaokang and Yu, Jifan and Tang, Jian and Tang, Jie and Li, Cuiping and Chen, Hong | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5773--5784 | Recent works on knowledge base question answering (KBQA) retrieve subgraphs for easier reasoning. The desired subgraph is crucial as a small one may exclude the answer but a large one might introduce more noises. However, the existing retrieval is either heuristic or interwoven with the reasoning, causing reasoning on the partial subgraphs, which increases the reasoning bias when the intermediate supervision is missing. This paper proposes a trainable subgraph retriever (SR) decoupled from the subsequent reasoning process, which enables a plug-and-play framework to enhance any subgraph-oriented KBQA model. Extensive experiments demonstrate SR achieves significantly better retrieval and QA performance than existing retrieval methods. Via weakly supervised pre-training as well as the end-to-end fine-tuning, SR achieves new state-of-the-art performance when combined with NSM (He et al., 2021), a subgraph-oriented reasoner, for embedding-based KBQA methods. Codes and datasets are available online (\url{https://github.com/RUCKBReasoning/SubgraphRetrievalKBQA}) | null | null | 10.18653/v1/2022.acl-long.396 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,062 |
inproceedings | liu-etal-2022-pre | Pre-training to Match for Unified Low-shot Relation Extraction | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.397/ | Liu, Fangchao and Lin, Hongyu and Han, Xianpei and Cao, Boxi and Sun, Le | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5785--5795 | Low-shot relation extraction (RE) aims to recognize novel relations with very few or even no samples, which is critical in real scenario application. Few-shot and zero-shot RE are two representative low-shot RE tasks, which seem to be with similar target but require totally different underlying abilities. In this paper, we propose Multi-Choice Matching Networks to unify low-shot relation extraction. To fill in the gap between zero-shot and few-shot RE, we propose the triplet-paraphrase meta-training, which leverages triplet paraphrase to pre-train zero-shot label matching ability and uses meta-learning paradigm to learn few-shot instance summarizing ability. Experimental results on three different low-shot RE tasks show that the proposed method outperforms strong baselines by a large margin, and achieve the best performance on few-shot RE leaderboard. | null | null | 10.18653/v1/2022.acl-long.397 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,063 |
inproceedings | cao-etal-2022-prompt | Can Prompt Probe Pretrained Language Models? Understanding the Invisible Risks from a Causal View | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.398/ | Cao, Boxi and Lin, Hongyu and Han, Xianpei and Liu, Fangchao and Sun, Le | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5796--5808 | Prompt-based probing has been widely used in evaluating the abilities of pretrained language models (PLMs). Unfortunately, recent studies have discovered such an evaluation may be inaccurate, inconsistent and unreliable. Furthermore, the lack of understanding its inner workings, combined with its wide applicability, has the potential to lead to unforeseen risks for evaluating and applying PLMs in real-world applications. To discover, understand and quantify the risks, this paper investigates the prompt-based probing from a causal view, highlights three critical biases which could induce biased results and conclusions, and proposes to conduct debiasing via causal intervention. This paper provides valuable insights for the design of unbiased datasets, better probing frameworks and more reliable evaluations of pretrained language models. Furthermore, our conclusions also echo that we need to rethink the criteria for identifying better pretrained language models. | null | null | 10.18653/v1/2022.acl-long.398 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,064 |
inproceedings | amigo-delgado-2022-evaluating | Evaluating Extreme Hierarchical Multi-label Classification | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.399/ | Amigo, Enrique and Delgado, Agust{\'i}n | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5809--5819 | Several natural language processing (NLP) tasks are defined as a classification problem in its most complex form: Multi-label Hierarchical Extreme classification, in which items may be associated with multiple classes from a set of thousands of possible classes organized in a hierarchy and with a highly unbalanced distribution both in terms of class frequency and the number of labels per item. We analyze the state of the art of evaluation metrics based on a set of formal properties and we define an information theoretic based metric inspired by the Information Contrast Model (ICM). Experiments on synthetic data and a case study on real data show the suitability of the ICM for such scenarios. | null | null | 10.18653/v1/2022.acl-long.399 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,065 |
inproceedings | cuesta-lazaro-etal-2022-sea | What does the sea say to the shore? A {BERT} based {DST} style approach for speaker to dialogue attribution in novels | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.400/ | Cuesta-Lazaro, Carolina and Prasad, Animesh and Wood, Trevor | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5820--5829 | We present a complete pipeline to extract characters in a novel and link them to their direct-speech utterances. Our model is divided into three independent components: extracting direct-speech, compiling a list of characters, and attributing those characters to their utterances. Although we find that existing systems can perform the first two tasks accurately, attributing characters to direct speech is a challenging problem due to the narrator`s lack of explicit character mentions, and the frequent use of nominal and pronominal coreference when such explicit mentions are made. We adapt the progress made on Dialogue State Tracking to tackle a new problem: attributing speakers to dialogues. This is the first application of deep learning to speaker attribution, and it shows that is possible to overcome the need for the hand-crafted features and rules used in the past. Our full pipeline improves the performance of state-of-the-art models by a relative 50{\%} in F1-score. | null | null | 10.18653/v1/2022.acl-long.400 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,066 |
inproceedings | krishna-etal-2022-measuring | Measuring Fairness of Text Classifiers via Prediction Sensitivity | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.401/ | Krishna, Satyapriya and Gupta, Rahul and Verma, Apurv and Dhamala, Jwala and Pruksachatkun, Yada and Chang, Kai-Wei | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5830--5842 | With the rapid growth in language processing applications, fairness has emerged as an important consideration in data-driven solutions. Although various fairness definitions have been explored in the recent literature, there is lack of consensus on which metrics most accurately reflect the fairness of a system. In this work, we propose a new formulation {--} accumulated prediction sensitivity, which measures fairness in machine learning models based on the model`s prediction sensitivity to perturbations in input features. The metric attempts to quantify the extent to which a single prediction depends on a protected attribute, where the protected attribute encodes the membership status of an individual in a protected group. We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness. It also correlates well with humans' perception of fairness. We conduct experiments on two text classification datasets {--} Jigsaw Toxicity, and Bias in Bios, and evaluate the correlations between metrics and manual annotations on whether the model produced a fair outcome. We observe that the proposed fairness metric based on prediction sensitivity is statistically significantly more correlated with human annotation than the existing counterfactual fairness metric. | null | null | 10.18653/v1/2022.acl-long.401 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,067 |
inproceedings | chen-etal-2022-rotateqvs | {R}otate{QVS}: Representing Temporal Information as Rotations in Quaternion Vector Space for Temporal Knowledge Graph Completion | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.402/ | Chen, Kai and Wang, Ye and Li, Yitong and Li, Aiping | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5843--5857 | Temporal factors are tied to the growth of facts in realistic applications, such as the progress of diseases and the development of political situation, therefore, research on Temporal Knowledge Graph (TKG) attracks much attention. In TKG, relation patterns inherent with temporality are required to be studied for representation learning and reasoning across temporal facts. However, existing methods can hardly model temporal relation patterns, nor can capture the intrinsic connections between relations when evolving over time, lacking of interpretability. In this paper, we propose a novel temporal modeling method which represents temporal entities as Rotations in Quaternion Vector Space (RotateQVS) and relations as complex vectors in Hamilton`s quaternion space. We demonstrate our method can model key patterns of relations in TKG, such as symmetry, asymmetry, inverse, and can capture time-evolved relations by theory. And empirically, we show that our method can boost the performance of link prediction tasks over four temporal knowledge graph benchmarks. | null | null | 10.18653/v1/2022.acl-long.402 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,068 |
inproceedings | wang-etal-2022-feeding | Feeding What You Need by Understanding What You Learned | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.403/ | Wang, Xiaoqiang and Liu, Bang and Xu, Fangli and Long, Bo and Tang, Siliang and Wu, Lingfei | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5858--5874 | Machine Reading Comprehension (MRC) reveals the ability to understand a given text passage and answer questions based on it. Existing research works in MRC rely heavily on large-size models and corpus to improve the performance evaluated by metrics such as Exact Match ($EM$) and $F_1$. However, such a paradigm lacks sufficient interpretation to model capability and can not efficiently train a model with a large corpus. In this paper, we argue that a deep understanding of model capabilities and data properties can help us feed a model with appropriate training data based on its learning status. Specifically, we design an MRC capability assessment framework that assesses model capabilities in an explainable and multi-dimensional manner. Based on it, we further uncover and disentangle the connections between various data properties and model performance. Finally, to verify the effectiveness of the proposed MRC capability assessment framework, we incorporate it into a curriculum learning pipeline and devise a Capability Boundary Breakthrough Curriculum (CBBC) strategy, which performs a model capability-based training to maximize the data value and improve training efficiency. Extensive experiments demonstrate that our approach significantly improves performance, achieving up to an 11.22{\%} / 8.71{\%} improvement of $EM$ / $F_1$ on MRC tasks. | null | null | 10.18653/v1/2022.acl-long.403 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,069 |
inproceedings | chen-etal-2022-probing | Probing Simile Knowledge from Pre-trained Language Models | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.404/ | Chen, Weijie and Chang, Yongzhu and Zhang, Rongsheng and Pu, Jiashu and Chen, Guandan and Zhang, Le and Xi, Yadong and Chen, Yijiang and Su, Chang | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5875--5887 | Simile interpretation (SI) and simile generation (SG) are challenging tasks for NLP because models require adequate world knowledge to produce predictions. Previous works have employed many hand-crafted resources to bring knowledge-related into models, which is time-consuming and labor-intensive. In recent years, pre-trained language models (PLMs) based approaches have become the de-facto standard in NLP since they learn generic knowledge from a large corpus. The knowledge embedded in PLMs may be useful for SI and SG tasks. Nevertheless, there are few works to explore it. In this paper, we probe simile knowledge from PLMs to solve the SI and SG tasks in the unified framework of simile triple completion for the first time. The backbone of our framework is to construct masked sentences with manual patterns and then predict the candidate words in the masked position. In this framework, we adopt a secondary training process (Adjective-Noun mask Training) with the masked language model (MLM) loss to enhance the prediction diversity of candidate words in the masked position. Moreover, pattern ensemble (PE) and pattern search (PS) are applied to improve the quality of predicted words. Finally, automatic and human evaluations demonstrate the effectiveness of our framework in both SI and SG tasks. | null | null | 10.18653/v1/2022.acl-long.404 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,070 |
inproceedings | mao-etal-2022-effective | An Effective and Efficient Entity Alignment Decoding Algorithm via Third-Order Tensor Isomorphism | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.405/ | Mao, Xin and Ma, Meirong and Yuan, Hao and Zhu, Jianchao and Wang, ZongYu and Xie, Rui and Wu, Wei and Lan, Man | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5888--5898 | Entity alignment (EA) aims to discover the equivalent entity pairs between KGs, which is a crucial step for integrating multi-source KGs.For a long time, most researchers have regarded EA as a pure graph representation learning task and focused on improving graph encoders while paying little attention to the decoding process. In this paper, we propose an effective and efficient EA Decoding Algorithm via Third-order Tensor Isomorphism (DATTI).Specifically, we derive two sets of isomorphism equations: (1) Adjacency tensor isomorphism equations and (2) Gramian tensor isomorphism equations. By combining these equations, DATTI could effectively utilize the adjacency and inner correlation isomorphisms of KGs to enhance the decoding process of EA.Extensive experiments on public datasets indicate that our decoding algorithm can deliver significant performance improvements even on the most advanced EA methods, while the extra required time is less than 3 seconds. | null | null | 10.18653/v1/2022.acl-long.405 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,071 |
inproceedings | chen-etal-2022-entailment | Entailment Graph Learning with Textual Entailment and Soft Transitivity | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.406/ | Chen, Zhibin and Feng, Yansong and Zhao, Dongyan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5899--5910 | Typed entailment graphs try to learn the entailment relations between predicates from text and model them as edges between predicate nodes. The construction of entailment graphs usually suffers from severe sparsity and unreliability of distributional similarity. We propose a two-stage method, Entailment Graph with Textual Entailment and Transitivity (EGT2). EGT2 learns the local entailment relations by recognizing the textual entailment between template sentences formed by typed CCG-parsed predicates. Based on the generated local graph, EGT2 then uses three novel soft transitivity constraints to consider the logical transitivity in entailment structures. Experiments on benchmark datasets show that EGT2 can well model the transitivity in entailment graph to alleviate the sparsity, and leads to signifcant improvement over current state-of-the-art methods. | null | null | 10.18653/v1/2022.acl-long.406 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,072 |
inproceedings | ju-etal-2022-logic | Logic Traps in Evaluating Attribution Scores | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.407/ | Ju, Yiming and Zhang, Yuanzhe and Yang, Zhao and Jiang, Zhongtao and Liu, Kang and Zhao, Jun | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5911--5922 | Modern deep learning models are notoriously opaque, which has motivated the development of methods for interpreting how deep models predict. This goal is usually approached with attribution method, which assesses the influence of features on model predictions. As an explanation method, the evaluation criteria of attribution methods is how accurately it reflects the actual reasoning process of the model (faithfulness). Meanwhile, since the reasoning process of deep models is inaccessible, researchers design various evaluation methods to demonstrate their arguments. However, some crucial logic traps in these evaluation methods are ignored in most works, causing inaccurate evaluation and unfair comparison. This paper systematically reviews existing methods for evaluating attribution scores and summarizes the logic traps in these methods. We further conduct experiments to demonstrate the existence of each logic trap. Through both theoretical and experimental analysis, we hope to increase attention on the inaccurate evaluation of attribution scores. Moreover, with this paper, we suggest stopping focusing on improving performance under unreliable evaluation systems and starting efforts on reducing the impact of proposed logic traps. | null | null | 10.18653/v1/2022.acl-long.407 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,073 |
inproceedings | gong-etal-2022-continual | Continual Pre-training of Language Models for Math Problem Understanding with Syntax-Aware Memory Network | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.408/ | Gong, Zheng and Zhou, Kun and Zhao, Xin and Sha, Jing and Wang, Shijin and Wen, Ji-Rong | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5923--5933 | In this paper, we study how to continually pre-train language models for improving the understanding of math problems. Specifically, we focus on solving a fundamental challenge in modeling math problems, how to fuse the semantics of textual description and formulas, which are highly different in essence. To address this issue, we propose a new approach called \textbf{COMUS} to \textbf{co}ntinually pre-train language models for \textbf{m}ath problem \textbf{u}nderstanding with \textbf{s}yntax-aware memory network. In this approach, we first construct the math syntax graph to model the structural semantic information, by combining the parsing trees of the text and formulas, and then design the syntax-aware memory networks to deeply fuse the features from the graph and text. With the help of syntax relations, we can model the interaction between the token from the text and its semantic-related nodes within the formulas, which is helpful to capture fine-grained semantic correlations between texts and formulas. Besides, we devise three continual pre-training tasks to further align and fuse the representations of the text and math syntax graph. Experimental results on four tasks in the math domain demonstrate the effectiveness of our approach. Our code and data are publicly available at the link: blue\url{https://github.com/RUCAIBox/COMUS}. | null | null | 10.18653/v1/2022.acl-long.408 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,074 |
inproceedings | kong-etal-2022-multitasking | Multitasking Framework for Unsupervised Simple Definition Generation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.409/ | Kong, Cunliang and Chen, Yun and Zhang, Hengyuan and Yang, Liner and Yang, Erhong | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5934--5943 | The definition generation task can help language learners by providing explanations for unfamiliar words. This task has attracted much attention in recent years. We propose a novel task of Simple Definition Generation (SDG) to help language learners and low literacy readers. A significant challenge of this task is the lack of learner`s dictionaries in many languages, and therefore the lack of data for supervised training. We explore this task and propose a multitasking framework SimpDefiner that only requires a standard dictionary with complex definitions and a corpus containing arbitrary simple texts. We disentangle the complexity factors from the text by carefully designing a parameter sharing scheme between two decoders. By jointly training these components, the framework can generate both complex and simple definitions simultaneously. We demonstrate that the framework can generate relevant, simple definitions for the target words through automatic and manual evaluations on English and Chinese datasets. Our method outperforms the baseline model by a 1.77 SARI score on the English dataset, and raises the proportion of the low level (HSK level 1-3) words in Chinese definitions by 3.87{\%}. | null | null | 10.18653/v1/2022.acl-long.409 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,075 |
inproceedings | jie-etal-2022-learning | Learning to Reason Deductively: Math Word Problem Solving as Complex Relation Extraction | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.410/ | Jie, Zhanming and Li, Jierui and Lu, Wei | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5944--5955 | Solving math word problems requires deductive reasoning over the quantities in the text. Various recent research efforts mostly relied on sequence-to-sequence or sequence-to-tree models to generate mathematical expressions without explicitly performing relational reasoning between quantities in the given context. While empirically effective, such approaches typically do not provide explanations for the generated expressions. In this work, we view the task as a complex relation extraction problem, proposing a novel approach that presents explainable deductive reasoning steps to iteratively construct target expressions, where each step involves a primitive operation over two quantities defining their relation. Through extensive experiments on four benchmark datasets, we show that the proposed model significantly outperforms existing strong baselines. We further demonstrate that the deductive procedure not only presents more explainable steps but also enables us to make more accurate predictions on questions that require more complex reasoning. | null | null | 10.18653/v1/2022.acl-long.410 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,076 |
inproceedings | kumar-etal-2022-become | When did you become so smart, oh wise one?! Sarcasm Explanation in Multi-modal Multi-party Dialogues | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.411/ | Kumar, Shivani and Kulkarni, Atharva and Akhtar, Md Shad and Chakraborty, Tanmoy | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5956--5968 | Indirect speech such as sarcasm achieves a constellation of discourse goals in human communication. While the indirectness of figurative language warrants speakers to achieve certain pragmatic goals, it is challenging for AI agents to comprehend such idiosyncrasies of human communication. Though sarcasm identification has been a well-explored topic in dialogue analysis, for conversational systems to truly grasp a conversation`s innate meaning and generate appropriate responses, simply detecting sarcasm is not enough; it is vital to explain its underlying sarcastic connotation to capture its true essence. In this work, we study the discourse structure of sarcastic conversations and propose a novel task {--} Sarcasm Explanation in Dialogue (SED). Set in a multimodal and code-mixed setting, the task aims to generate natural language explanations of satirical conversations. To this end, we curate WITS, a new dataset to support our task. We propose MAF (Modality Aware Fusion), a multimodal context-aware attention and global information fusion module to capture multimodality and use it to benchmark WITS. The proposed attention module surpasses the traditional multimodal fusion baselines and reports the best performance on almost all metrics. Lastly, we carry out detailed analysis both quantitatively and qualitatively. | null | null | 10.18653/v1/2022.acl-long.411 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,077 |
inproceedings | lee-etal-2022-toward | Toward Interpretable Semantic Textual Similarity via Optimal Transport-based Contrastive Sentence Learning | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.412/ | Lee, Seonghyeon and Lee, Dongha and Jang, Seongbo and Yu, Hwanjo | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5969--5979 | Recently, finetuning a pretrained language model to capture the similarity between sentence embeddings has shown the state-of-the-art performance on the semantic textual similarity (STS) task. However, the absence of an interpretation method for the sentence similarity makes it difficult to explain the model output. In this work, we explicitly describe the sentence distance as the weighted sum of contextualized token distances on the basis of a transportation problem, and then present the optimal transport-based distance measure, named RCMD; it identifies and leverages semantically-aligned token pairs. In the end, we propose CLRCMD, a contrastive learning framework that optimizes RCMD of sentence pairs, which enhances the quality of sentence similarity and their interpretation. Extensive experiments demonstrate that our learning framework outperforms other baselines on both STS and interpretable-STS benchmarks, indicating that it computes effective sentence similarity and also provides interpretation consistent with human judgement. | null | null | 10.18653/v1/2022.acl-long.412 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,078 |
inproceedings | zhang-etal-2022-pre | Pre-training and Fine-tuning Neural Topic Model: A Simple yet Effective Approach to Incorporating External Knowledge | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.413/ | Zhang, Linhai and Hu, Xuemeng and Wang, Boyu and Zhou, Deyu and Zhang, Qian-Wen and Cao, Yunbo | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5980--5989 | Recent years have witnessed growing interests in incorporating external knowledge such as pre-trained word embeddings (PWEs) or pre-trained language models (PLMs) into neural topic modeling. However, we found that employing PWEs and PLMs for topic modeling only achieved limited performance improvements but with huge computational overhead. In this paper, we propose a novel strategy to incorporate external knowledge into neural topic modeling where the neural topic model is pre-trained on a large corpus and then fine-tuned on the target dataset. Experiments have been conducted on three datasets and results show that the proposed approach significantly outperforms both current state-of-the-art neural topic models and some topic modeling approaches enhanced with PWEs or PLMs. Moreover, further study shows that the proposed approach greatly reduces the need for the huge size of training data. | null | null | 10.18653/v1/2022.acl-long.413 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,079 |
inproceedings | zhang-etal-2022-multi | Multi-View Document Representation Learning for Open-Domain Dense Retrieval | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.414/ | Zhang, Shunyu and Liang, Yaobo and Gong, Ming and Jiang, Daxin and Duan, Nan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 5990--6000 | Dense retrieval has achieved impressive advances in first-stage retrieval from a large-scale document collection, which is built on bi-encoder architecture to produce single vector representation of query and document. However, a document can usually answer multiple potential queries from different views. So the single vector representation of a document is hard to match with multi-view queries, and faces a semantic mismatch problem. This paper proposes a multi-view document representation learning framework, aiming to produce multi-view embeddings to represent documents and enforce them to align with different queries. First, we propose a simple yet effective method of generating multiple embeddings through viewers. Second, to prevent multi-view embeddings from collapsing to the same one, we further propose a global-local loss with annealed temperature to encourage the multiple viewers to better align with different potential queries. Experiments show our method outperforms recent works and achieves state-of-the-art results. | null | null | 10.18653/v1/2022.acl-long.414 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,080 |
inproceedings | bai-etal-2022-graph | Graph Pre-training for {AMR} Parsing and Generation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.415/ | Bai, Xuefeng and Chen, Yulong and Zhang, Yue | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6001--6015 | Abstract meaning representation (AMR) highlights the core semantic information of text in a graph structure. Recently, pre-trained language models (PLMs) have advanced tasks of AMR parsing and AMR-to-text generation, respectively. However, PLMs are typically pre-trained on textual data, thus are sub-optimal for modeling structural knowledge. To this end, we investigate graph self-supervised training to improve the structure awareness of PLMs over AMR graphs. In particular, we introduce two graph auto-encoding strategies for graph-to-graph pre-training and four tasks to integrate text and graph information during pre-training. We further design a unified framework to bridge the gap between pre-training and fine-tuning tasks. Experiments on both AMR parsing and AMR-to-text generation show the superiority of our model. To our knowledge, we are the first to consider pre-training on semantic graphs. | null | null | 10.18653/v1/2022.acl-long.415 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,081 |
inproceedings | yoran-etal-2022-turning | Turning Tables: Generating Examples from Semi-structured Tables for Endowing Language Models with Reasoning Skills | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.416/ | Yoran, Ori and Talmor, Alon and Berant, Jonathan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6016--6031 | Models pre-trained with a language modeling objective possess ample world knowledge and language skills, but are known to struggle in tasks that require reasoning. In this work, we propose to leverage semi-structured tables, and automatically generate at scale question-paragraph pairs, where answering the question requires reasoning over multiple facts in the paragraph. We add a pre-training step over this synthetic data, which includes examples that require 16 different reasoning skills such as number comparison, conjunction, and fact composition. To improve data efficiency, we sample examples from reasoning skills where the model currently errs. We evaluate our approach on three reasoning-focused reading comprehension datasets, and show that our model, PReasM, substantially outperforms T5, a popular pre-trained encoder-decoder model. Moreover, sampling examples based on model errors leads to faster training and higher performance. | null | null | 10.18653/v1/2022.acl-long.416 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,082 |
inproceedings | ye-etal-2022-rng | {RNG}-{KBQA}: Generation Augmented Iterative Ranking for Knowledge Base Question Answering | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.417/ | Ye, Xi and Yavuz, Semih and Hashimoto, Kazuma and Zhou, Yingbo and Xiong, Caiming | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6032--6043 | Existing KBQA approaches, despite achieving strong performance on i.i.d. test data, often struggle in generalizing to questions involving unseen KB schema items. Prior ranking-based approaches have shown some success in generalization, but suffer from the coverage issue. We present RnG-KBQA, a Rank-and-Generate approach for KBQA, which remedies the coverage issue with a generation model while preserving a strong generalization capability. Our approach first uses a contrastive ranker to rank a set of candidate logical forms obtained by searching over the knowledge graph. It then introduces a tailored generation model conditioned on the question and the top-ranked candidates to compose the final logical form. We achieve new state-of-the-art results on GrailQA and WebQSP datasets. In particular, our method surpasses the prior state-of-the-art by a large margin on the GrailQA leaderboard. In addition, RnG-KBQA outperforms all prior approaches on the popular WebQSP benchmark, even including the ones that use the oracle entity linking. The experimental results demonstrate the effectiveness of the interplay between ranking and generation, which leads to the superior performance of our proposed approach across all settings with especially strong improvements in zero-shot generalization. | null | null | 10.18653/v1/2022.acl-long.417 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,083 |
inproceedings | jwalapuram-etal-2022-rethinking | Rethinking Self-Supervision Objectives for Generalizable Coherence Modeling | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.418/ | Jwalapuram, Prathyusha and Joty, Shafiq and Lin, Xiang | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6044--6059 | Given the claims of improved text generation quality across various pre-trained neural models, we consider the coherence evaluation of machine generated text to be one of the principal applications of coherence models that needs to be investigated. Prior work in neural coherence modeling has primarily focused on devising new architectures for solving the permuted document task. We instead use a basic model architecture and show significant improvements over state of the art within the same training regime. We then design a harder self-supervision objective by increasing the ratio of negative samples within a contrastive learning setup, and enhance the model further through automatic hard negative mining coupled with a large global negative queue encoded by a momentum encoder. We show empirically that increasing the density of negative samples improves the basic model, and using a global negative queue further improves and stabilizes the model while training with hard negative samples. We evaluate the coherence model on task-independent test sets that resemble real-world applications and show significant improvements in coherence evaluations of downstream tasks. | null | null | 10.18653/v1/2022.acl-long.418 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,084 |
inproceedings | wang-etal-2022-just | Just Rank: Rethinking Evaluation with Word and Sentence Similarities | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.419/ | Wang, Bin and Kuo, C.-C. Jay and Li, Haizhou | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6060--6077 | Word and sentence embeddings are useful feature representations in natural language processing. However, intrinsic evaluation for embeddings lags far behind, and there has been no significant update since the past decade. Word and sentence similarity tasks have become the de facto evaluation method. It leads models to overfit to such evaluations, negatively impacting embedding models' development. This paper first points out the problems using semantic similarity as the gold standard for word and sentence embedding evaluations. Further, we propose a new intrinsic evaluation method called EvalRank, which shows a much stronger correlation with downstream tasks. Extensive experiments are conducted based on 60+ models and popular datasets to certify our judgments. Finally, the practical evaluation toolkit is released for future benchmarking purposes. | null | null | 10.18653/v1/2022.acl-long.419 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,085 |
inproceedings | li-etal-2022-markuplm | {M}arkup{LM}: Pre-training of Text and Markup Language for Visually Rich Document Understanding | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.420/ | Li, Junlong and Xu, Yiheng and Cui, Lei and Wei, Furu | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6078--6087 | Multimodal pre-training with text, layout, and image has made significant progress for Visually Rich Document Understanding (VRDU), especially the fixed-layout documents such as scanned document images. While, there are still a large number of digital documents where the layout information is not fixed and needs to be interactively and dynamically rendered for visualization, making existing layout-based pre-training approaches not easy to apply. In this paper, we propose MarkupLM for document understanding tasks with markup languages as the backbone, such as HTML/XML-based documents, where text and markup information is jointly pre-trained. Experiment results show that the pre-trained MarkupLM significantly outperforms the existing strong baseline models on several document understanding tasks. The pre-trained model and code will be publicly available at \url{https://aka.ms/markuplm}. | null | null | 10.18653/v1/2022.acl-long.420 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,086 |
inproceedings | song-etal-2022-clip | {CLIP} Models are Few-Shot Learners: Empirical Studies on {VQA} and Visual Entailment | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.421/ | Song, Haoyu and Dong, Li and Zhang, Weinan and Liu, Ting and Wei, Furu | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6088--6100 | CLIP has shown a remarkable zero-shot capability on a wide range of vision tasks. Previously, CLIP is only regarded as a powerful visual encoder. However, after being pre-trained by language supervision from a large amount of image-caption pairs, CLIP itself should also have acquired some few-shot abilities for vision-language tasks. In this work, we empirically show that CLIP can be a strong vision-language few-shot learner by leveraging the power of language. We first evaluate CLIP`s zero-shot performance on a typical visual question answering task and demonstrate a zero-shot cross-modality transfer capability of CLIP on the visual entailment task. Then we propose a parameter-efficient fine-tuning strategy to boost the few-shot performance on the vqa task. We achieve competitive zero/few-shot results on the visual question answering and visual entailment tasks without introducing any additional pre-training procedure. | null | null | 10.18653/v1/2022.acl-long.421 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,087 |
inproceedings | cao-etal-2022-kqa | {KQA} Pro: A Dataset with Explicit Compositional Programs for Complex Question Answering over Knowledge Base | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.422/ | Cao, Shulin and Shi, Jiaxin and Pan, Liangming and Nie, Lunyiu and Xiang, Yutong and Hou, Lei and Li, Juanzi and He, Bin and Zhang, Hanwang | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6101--6119 | Complex question answering over knowledge base (Complex KBQA) is challenging because it requires various compositional reasoning capabilities, such as multi-hop inference, attribute comparison, set operation, etc. Existing benchmarks have some shortcomings that limit the development of Complex KBQA: 1) they only provide QA pairs without explicit reasoning processes; 2) questions are poor in diversity or scale. To this end, we introduce KQA Pro, a dataset for Complex KBQA including around 120K diverse natural language questions. We introduce a compositional and interpretable programming language KoPL to represent the reasoning process of complex questions. For each question, we provide the corresponding KoPL program and SPARQL query, so that KQA Pro can serve for both KBQA and semantic parsing tasks. Experimental results show that state-of-the-art KBQA methods cannot achieve promising results on KQA Pro as on current datasets, which suggests that KQA Pro is challenging and Complex KBQA requires further research efforts. We also treat KQA Pro as a diagnostic dataset for testing multiple reasoning skills, conduct a thorough evaluation of existing models and discuss further directions for Complex KBQA. Our codes and datasets can be obtained from \url{https://github.com/shijx12/KQAPro_Baselines}. | null | null | 10.18653/v1/2022.acl-long.422 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,088 |
inproceedings | zhou-etal-2022-debiased | Debiased Contrastive Learning of Unsupervised Sentence Representations | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.423/ | Zhou, Kun and Zhang, Beichen and Zhao, Xin and Wen, Ji-Rong | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6120--6130 | Recently, contrastive learning has been shown to be effective in improving pre-trained language models (PLM) to derive high-quality sentence representations. It aims to pull close positive examples to enhance the alignment while pushing apart irrelevant negatives for the uniformity of the whole representation space. However, previous works mostly adopt in-batch negatives or sample from training data at random. Such a way may cause a sampling bias in which improper negatives (false negatives and anisotropy representations) are used to learn sentence representations, which will hurt the uniformity of the representation space. To address it, we present a new framework \textbf{DCLR} (Debiased Contrastive Learning of unsupervised sentence Representations) to alleviate the influence of these improper negatives. In DCLR, we design an instance weighting method to punish false negatives and generate noise-based negatives to guarantee the uniformity of the representation space. Experiments on seven semantic textual similarity tasks show that our approach is more effective than competitive baselines. Our code and data are publicly available at the link: \url{https://github.com/RUCAIBox/DCLR}. | null | null | 10.18653/v1/2022.acl-long.423 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,089
inproceedings | tan-etal-2022-msp | {MSP}: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.424/ | Tan, Zhixing and Zhang, Xiangwen and Wang, Shuo and Liu, Yang | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6131--6142 | Prompting has recently been shown to be a promising approach for applying pre-trained language models to perform downstream tasks. We present Multi-Stage Prompting, a simple and automatic approach for leveraging pre-trained language models for translation tasks. To better mitigate the discrepancy between pre-training and translation, MSP divides the translation process via pre-trained language models into three separate stages: the encoding stage, the re-encoding stage, and the decoding stage. During each stage, we independently apply different continuous prompts to allow pre-trained language models to better shift to translation tasks. We conduct extensive experiments on three translation tasks. Experiments show that our method can significantly improve the translation performance of pre-trained language models. | null | null | 10.18653/v1/2022.acl-long.424 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,090
inproceedings | chiu-etal-2022-salesbot | {S}ales{B}ot: Transitioning from Chit-Chat to Task-Oriented Dialogues | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.425/ | Chiu, Ssu and Li, Maolin and Lin, Yen-Ting and Chen, Yun-Nung | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6143--6158 | Dialogue systems are usually categorized into two types, open-domain and task-oriented. The first one focuses on chatting with users and making them engage in the conversations, where selecting a proper topic to fit the dialogue context is essential for a successful dialogue. The other one focuses on a specific task instead of casual talks, e.g., finding a movie on Friday night, playing a song. These two directions have been studied separately due to their different purposes. However, how to smoothly transition from social chatting to task-oriented dialogues is important for triggering business opportunities, and there is no public data focusing on such scenarios. Hence, this paper focuses on investigating the conversations starting from open-domain social chatting and then gradually transitioning to task-oriented purposes, and releases a large-scale dataset with detailed annotations for encouraging this research direction. To achieve this goal, this paper proposes a framework to automatically generate many dialogues without human involvement, in which any powerful open-domain dialogue generation model can be easily leveraged. The human evaluation shows that our generated dialogue data has a natural flow at a reasonable quality, showing that our released data has great potential for guiding future research directions and commercial activities. Furthermore, the released models allow researchers to automatically generate unlimited dialogues in the target scenarios, which can greatly benefit semi-supervised and unsupervised approaches. | null | null | 10.18653/v1/2022.acl-long.425 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,091
inproceedings | li-etal-2022-uctopic | {UCT}opic: Unsupervised Contrastive Learning for Phrase Representations and Topic Mining | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.426/ | Li, Jiacheng and Shang, Jingbo and McAuley, Julian | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6159--6169 | High-quality phrase representations are essential to finding topics and related terms in documents (a.k.a. topic mining). Existing phrase representation learning methods either simply combine unigram representations in a context-free manner or rely on extensive annotations to learn context-aware knowledge. In this paper, we propose UCTopic, a novel unsupervised contrastive learning framework for context-aware phrase representations and topic mining. UCTopic is pretrained at a large scale to distinguish if the contexts of two phrase mentions have the same semantics. The key to the pretraining is positive pair construction from our phrase-oriented assumptions. However, we find traditional in-batch negatives cause performance decay when finetuning on a dataset with small topic numbers. Hence, we propose cluster-assisted contrastive learning (CCL) which largely reduces noisy negatives by selecting negatives from clusters and further improves phrase representations for topics accordingly. UCTopic outperforms the state-of-the-art phrase representation model by 38.2{\%} NMI on average on four entity clustering tasks. Comprehensive evaluation on topic mining shows that UCTopic can extract coherent and diverse topical phrases. | null | null | 10.18653/v1/2022.acl-long.426 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,092
inproceedings | chi-etal-2022-xlm | {XLM}-{E}: Cross-lingual Language Model Pre-training via {ELECTRA} | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.427/ | Chi, Zewen and Huang, Shaohan and Dong, Li and Ma, Shuming and Zheng, Bo and Singhal, Saksham and Bajaj, Payal and Song, Xia and Mao, Xian-Ling and Huang, Heyan and Wei, Furu | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6170--6182 | In this paper, we introduce ELECTRA-style tasks to cross-lingual language model pre-training. Specifically, we present two pre-training tasks, namely multilingual replaced token detection, and translation replaced token detection. Besides, we pretrain the model, named as XLM-E, on both multilingual and parallel corpora. Our model outperforms the baseline models on various cross-lingual understanding tasks with much less computation cost. Moreover, analysis shows that XLM-E tends to obtain better cross-lingual transferability. | null | null | 10.18653/v1/2022.acl-long.427 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,093 |
inproceedings | lou-etal-2022-nested | Nested Named Entity Recognition as Latent Lexicalized Constituency Parsing | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.428/ | Lou, Chao and Yang, Songlin and Tu, Kewei | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6183--6198 | Nested named entity recognition (NER) has been receiving increasing attention. Recently, Fu et al. (2020) adapt a span-based constituency parser to tackle nested NER. They treat nested entities as partially-observed constituency trees and propose the masked inside algorithm for partial marginalization. However, their method cannot leverage entity heads, which have been shown useful in entity mention detection and entity typing. In this work, we resort to more expressive structures, lexicalized constituency trees in which constituents are annotated by headwords, to model nested entities. We leverage the Eisner-Satta algorithm to perform partial marginalization and inference efficiently. In addition, we propose to use (1) a two-stage strategy (2) a head regularization loss and (3) a head-aware labeling loss in order to enhance the performance. We make a thorough ablation study to investigate the functionality of each component. Experimentally, our method achieves the state-of-the-art performance on ACE2004, ACE2005 and NNE, and competitive performance on GENIA, and meanwhile has a fast inference speed. | null | null | 10.18653/v1/2022.acl-long.428 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,094 |
inproceedings | ye-durrett-2022-explanations | Can Explanations Be Useful for Calibrating Black Box Models? | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.429/ | Ye, Xi and Durrett, Greg | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6199--6212 | NLP practitioners often want to take existing trained models and apply them to data from new domains. While fine-tuning or few-shot learning can be used to adapt a base model, there is no single recipe for making these techniques work; moreover, one may not have access to the original model weights if it is deployed as a black box. We study how to improve a black box model's performance on a new domain by leveraging explanations of the model's behavior. Our approach first extracts a set of features combining human intuition about the task with model attributions generated by black box interpretation techniques, then uses a simple calibrator, in the form of a classifier, to predict whether the base model was correct or not. We experiment with our method on two tasks, extractive question answering and natural language inference, covering adaptation from several pairs of domains with limited target-domain data. The experimental results across all the domain pairs show that explanations are useful for calibrating these models, boosting accuracy when predictions do not have to be returned on every example. We further show that the calibration model transfers to some extent between tasks. | null | null | 10.18653/v1/2022.acl-long.429 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,095
inproceedings | wang-etal-2022-oie | {OIE}@{OIA}: an Adaptable and Efficient Open Information Extraction Framework | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.430/ | Wang, Xin and Peng, Minlong and Sun, Mingming and Li, Ping | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6213--6226 | Different Open Information Extraction (OIE) tasks require different types of information, so the OIE field requires strong adaptability of OIE algorithms to meet different task requirements. This paper discusses the adaptability problem in existing OIE systems and designs a new adaptable and efficient OIE system - OIE@OIA as a solution. OIE@OIA follows the methodology of Open Information eXpression (OIX): parsing a sentence to an Open Information Annotation (OIA) Graph and then adapting the OIA graph to different OIE tasks with simple rules. As the core of our OIE@OIA system, we implement an end-to-end OIA generator by annotating a dataset (we make it openly available) and designing an efficient learning algorithm for the complex OIA graph. We easily adapt the OIE@OIA system to accomplish three popular OIE tasks. The experimental results show that our OIE@OIA achieves new SOTA performances on these tasks, showing the great adaptability of our OIE@OIA system. Furthermore, compared to other end-to-end OIE baselines that need millions of samples for training, our OIE@OIA needs much fewer training samples (12K), showing a significant advantage in terms of efficiency. | null | null | 10.18653/v1/2022.acl-long.430 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,096
inproceedings | lu-etal-2022-reacc | {R}e{ACC}: A Retrieval-Augmented Code Completion Framework | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.431/ | Lu, Shuai and Duan, Nan and Han, Hojae and Guo, Daya and Hwang, Seung-won and Svyatkovskiy, Alexey | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6227--6240 | Code completion, which aims to predict the following code token(s) according to the code context, can improve the productivity of software development. Recent work has proved that statistical language modeling with transformers can greatly improve the performance in the code completion task via learning from large-scale source code datasets. However, current approaches focus only on code context within the file or project, i.e. internal context. Our distinction is utilizing {\textquotedblleft}external{\textquotedblright} context, inspired by human behaviors of copying from the related code snippets when writing code. Specifically, we propose a retrieval-augmented code completion framework, leveraging both lexical copying and referring to code with similar semantics by retrieval. We adopt a stage-wise training approach that combines a source code retriever and an auto-regressive language model for programming language. We evaluate our approach in the code completion task in Python and Java programming languages, achieving a state-of-the-art performance on CodeXGLUE benchmark. | null | null | 10.18653/v1/2022.acl-long.431 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,097
inproceedings | huang-etal-2022-recommend | Does Recommend-Revise Produce Reliable Annotations? An Analysis on Missing Instances in {D}oc{RED} | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.432/ | Huang, Quzhe and Hao, Shibo and Ye, Yuan and Zhu, Shengqi and Feng, Yansong and Zhao, Dongyan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6241--6252 | DocRED is a widely used dataset for document-level relation extraction. In the large-scale annotation, a recommend-revise scheme is adopted to reduce the workload. Within this scheme, annotators are provided with candidate relation instances from distant supervision, and they then manually supplement and remove relational facts based on the recommendations. However, when comparing DocRED with a subset relabeled from scratch, we find that this scheme results in a considerable amount of false negative samples and an obvious bias towards popular entities and relations. Furthermore, we observe that the models trained on DocRED have low recall on our relabeled dataset and inherit the same bias in the training data. Through the analysis of annotators' behaviors, we figure out the underlying reason for the problems above: the scheme actually discourages annotators from supplementing adequate instances in the revision phase. We appeal to future research to take into consideration the issues with the recommend-revise scheme when designing new models and annotation schemes. The relabeled dataset is released at \url{https://github.com/AndrewZhe/Revisit-DocRED}, to serve as a more reliable test set of document RE models. | null | null | 10.18653/v1/2022.acl-long.432 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,098 |
inproceedings | mao-etal-2022-unipelt | {U}ni{PELT}: A Unified Framework for Parameter-Efficient Language Model Tuning | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.433/ | Mao, Yuning and Mathias, Lambert and Hou, Rui and Almahairi, Amjad and Ma, Hao and Han, Jiawei and Yih, Scott and Khabsa, Madian | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6253--6264 | Recent parameter-efficient language model tuning (PELT) methods manage to match the performance of fine-tuning with much fewer trainable parameters and perform especially well when training data is limited. However, different PELT methods may perform rather differently on the same task, making it nontrivial to select the most appropriate method for a specific task, especially considering the fast-growing number of new PELT methods and tasks. In light of model diversity and the difficulty of model selection, we propose a unified framework, UniPELT, which incorporates different PELT methods as submodules and learns to activate the ones that best suit the current data or task setup via a gating mechanism. On the GLUE benchmark, UniPELT consistently achieves 1{--}4{\%} gains compared to the best individual PELT method that it incorporates and even outperforms fine-tuning under different setups. Moreover, UniPELT generally surpasses the upper bound that takes the best performance of all its submodules used individually on each task, indicating that a mixture of multiple PELT methods may be inherently more effective than single methods. | null | null | 10.18653/v1/2022.acl-long.433 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,099
inproceedings | zheng-jiang-2022-empirical | An Empirical Study of Memorization in {NLP} | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.434/ | Zheng, Xiaosen and Jiang, Jing | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6265--6278 | A recent study by Feldman (2020) proposed a long-tail theory to explain the memorization behavior of deep learning models. However, memorization has not been empirically verified in the context of NLP, a gap addressed by this work. In this paper, we use three different NLP tasks to check if the long-tail theory holds. Our experiments demonstrate that top-ranked memorized training instances are likely atypical, and removing the top-memorized training instances leads to a more serious drop in test accuracy compared with removing training instances randomly. Furthermore, we develop an attribution method to better understand why a training instance is memorized. We empirically show that our memorization attribution method is faithful, and share our interesting finding that the top-memorized parts of a training instance tend to be features negatively correlated with the class label. | null | null | 10.18653/v1/2022.acl-long.434 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,100 |
inproceedings | ebrahimi-etal-2022-americasnli | {A}mericas{NLI}: Evaluating Zero-shot Natural Language Understanding of Pretrained Multilingual Models in Truly Low-resource Languages | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.435/ | Ebrahimi, Abteen and Mager, Manuel and Oncevay, Arturo and Chaudhary, Vishrav and Chiruzzo, Luis and Fan, Angela and Ortega, John and Ramos, Ricardo and Rios, Annette and Meza Ruiz, Ivan Vladimir and Gim{\'e}nez-Lugo, Gustavo and Mager, Elisabeth and Neubig, Graham and Palmer, Alexis and Coto-Solano, Rolando and Vu, Thang and Kann, Katharina | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6279--6299 | Pretrained multilingual models are able to perform cross-lingual transfer in a zero-shot setting, even for languages unseen during pretraining. However, prior work evaluating performance on unseen languages has largely been limited to low-level, syntactic tasks, and it remains unclear if zero-shot learning of high-level, semantic tasks is possible for unseen languages. To explore this question, we present AmericasNLI, an extension of XNLI (Conneau et al., 2018) to 10 Indigenous languages of the Americas. We conduct experiments with XLM-R, testing multiple zero-shot and translation-based approaches. Additionally, we explore model adaptation via continued pretraining and provide an analysis of the dataset by considering hypothesis-only models. We find that XLM-R's zero-shot performance is poor for all 10 languages, with an average performance of 38.48{\%}. Continued pretraining offers improvements, with an average accuracy of 43.85{\%}. Surprisingly, training on poorly translated data by far outperforms all other methods with an accuracy of 49.12{\%}. | null | null | 10.18653/v1/2022.acl-long.435 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,101
inproceedings | ding-etal-2022-towards | Towards Learning (Dis)-Similarity of Source Code from Program Contrasts | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.436/ | Ding, Yangruibo and Buratti, Luca and Pujar, Saurabh and Morari, Alessandro and Ray, Baishakhi and Chakraborty, Saikat | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6300--6312 | Understanding the functional (dis)-similarity of source code is significant for code modeling tasks such as software vulnerability and code clone detection. We present DISCO (DIS-similarity of COde), a novel self-supervised model focusing on identifying (dis)similar functionalities of source code. Different from existing works, our approach does not require a huge amount of randomly collected datasets. Rather, we design structure-guided code transformation algorithms to generate synthetic code clones and inject real-world security bugs, augmenting the collected datasets in a targeted way. We propose to pre-train the Transformer model with such automatically generated program contrasts to better identify similar code in the wild and differentiate vulnerable programs from benign ones. To better capture the structural features of source code, we propose a new cloze objective to encode the local tree-based context (e.g., parents or sibling nodes). We pre-train our model with a much smaller dataset, the size of which is only 5{\%} of the state-of-the-art models' training datasets, to illustrate the effectiveness of our data augmentation and the pre-training approach. The evaluation shows that, even with much less data, DISCO can still outperform the state-of-the-art models in vulnerability and code clone detection tasks. | null | null | 10.18653/v1/2022.acl-long.436 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,102 |
inproceedings | ang-lim-2022-guided | Guided Attention Multimodal Multitask Financial Forecasting with Inter-Company Relationships and Global and Local News | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.437/ | Ang, Gary and Lim, Ee-Peng | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6313--6326 | Most works on financial forecasting use information directly associated with individual companies (e.g., stock prices, news on the company) to predict stock returns for trading. We refer to such company-specific information as local information. Stock returns may also be influenced by global information (e.g., news on the economy in general), and inter-company relationships. Capturing such diverse information is challenging due to the low signal-to-noise ratios, different time-scales, sparsity and distributions of global and local information from different modalities. In this paper, we propose a model that captures both global and local multimodal information for investment and risk management-related forecasting tasks. Our proposed Guided Attention Multimodal Multitask Network (GAME) model addresses these challenges by using novel attention modules to guide learning with global and local information from different modalities and dynamic inter-company relationship networks. Our extensive experiments show that GAME outperforms other state-of-the-art models in several forecasting tasks and important real-world application case studies. | null | null | 10.18653/v1/2022.acl-long.437 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,103 |
inproceedings | li-etal-2022-vision | On Vision Features in Multimodal Machine Translation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.438/ | Li, Bei and Lv, Chuanhao and Zhou, Zefan and Zhou, Tao and Xiao, Tong and Ma, Anxiang and Zhu, JingBo | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6327--6337 | Previous work on multimodal machine translation (MMT) has focused on the way of incorporating vision features into translation but little attention is on the quality of vision models. In this work, we investigate the impact of vision models on MMT. Given the fact that Transformer is becoming popular in computer vision, we experiment with various strong models (such as Vision Transformer) and enhanced features (such as object-detection and image captioning). We develop a selective attention model to study the patch-level contribution of an image in MMT. On detailed probing tasks, we find that stronger vision models are helpful for learning translation from the visual modality. Our results also suggest the need of carefully examining MMT models, especially when current benchmarks are small-scale and biased. | null | null | 10.18653/v1/2022.acl-long.438 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,104 |
inproceedings | das-etal-2022-container | {CONT}ai{NER}: Few-Shot Named Entity Recognition via Contrastive Learning | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.439/ | Das, Sarkar Snigdha Sarathi and Katiyar, Arzoo and Passonneau, Rebecca and Zhang, Rui | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6338--6353 | Named Entity Recognition (NER) in Few-Shot setting is imperative for entity tagging in low resource domains. Existing approaches only learn class-specific semantic features and intermediate representations from source domains. This affects generalizability to unseen target domains, resulting in suboptimal performances. To this end, we present CONTaiNER, a novel contrastive learning technique that optimizes the inter-token distribution distance for Few-Shot NER. Instead of optimizing class-specific attributes, CONTaiNER optimizes a generalized objective of differentiating between token categories based on their Gaussian-distributed embeddings. This effectively alleviates overfitting issues originating from training domains. Our experiments in several traditional test domains (OntoNotes, CoNLL'03, WNUT '17, GUM) and a new large scale Few-Shot NER dataset (Few-NERD) demonstrate that on average, CONTaiNER outperforms previous methods by 3{\%}-13{\%} absolute F1 points while showing consistent performance trends, even in challenging scenarios where previous approaches could not achieve appreciable performance. | null | null | 10.18653/v1/2022.acl-long.439 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,105
inproceedings | teodorescu-etal-2022-cree | {C}ree Corpus: A Collection of n{\^e}hiyaw{\^e}win Resources | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.440/ | Teodorescu, Daniela and Matalski, Josie and Lothian, Delaney and Barbosa, Denilson and Demmans Epp, Carrie | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6354--6364 | Plains Cree (n{\^e}hiyaw{\^e}win) is an Indigenous language that is spoken in Canada and the USA. It is the most widely spoken dialect of Cree and a morphologically complex language that is polysynthetic, highly inflective, and agglutinative. It is an extremely low resource language, with no existing corpus that is both available and prepared for supporting the development of language technologies. To support n{\^e}hiyaw{\^e}win revitalization and preservation, we developed a corpus covering diverse genres, time periods, and texts for a variety of intended audiences. The data has been verified and cleaned; it is ready for use in developing language technologies for n{\^e}hiyaw{\^e}win. The corpus includes the corresponding English phrases or audio files where available. We demonstrate the utility of the corpus through its community use and its use to build language technologies that can provide the types of support that community members have expressed are desirable. The corpus is available for public use. | null | null | 10.18653/v1/2022.acl-long.440 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,106 |
inproceedings | hsu-etal-2022-learning | Learning to Rank Visual Stories From Human Ranking Data | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.441/ | Hsu, Chi-Yang and Chu, Yun-Wei and Chen, Vincent and Lo, Kuan-Chieh and Chen, Chacha and Huang, Ting-Hao and Ku, Lun-Wei | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6365--6378 | Visual storytelling (VIST) is a typical vision and language task that has seen extensive development in the natural language generation research domain. However, it remains unclear whether conventional automatic evaluation metrics for text generation are applicable on VIST. In this paper, we present the VHED (VIST Human Evaluation Data) dataset, which first re-purposes human evaluation results for automatic evaluation; hence we develop Vrank (VIST Ranker), a novel reference-free VIST metric for story evaluation. We first show that the results from commonly adopted automatic metrics for text generation have little correlation with those obtained from human evaluation, which motivates us to directly utilize human evaluation results to learn the automatic evaluation model. In the experiments, we evaluate the generated texts to predict story ranks using our model as well as other reference-based and reference-free metrics. Results show that Vrank prediction is significantly more aligned to human evaluation than other metrics with almost 30{\%} higher accuracy when ranking story pairs. Moreover, we demonstrate that only Vrank shows human-like behavior in its strong ability to find better stories when the quality gap between two stories is high. Finally, we show the superiority of Vrank by its generalizability to pure textual stories, and conclude that this reuse of human evaluation results puts Vrank in a strong position for continued future advances. | null | null | 10.18653/v1/2022.acl-long.441 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,107 |
inproceedings | li-etal-2022-universal | Universal Conditional Masked Language Pre-training for Neural Machine Translation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.442/ | Li, Pengfei and Li, Liangyou and Zhang, Meng and Wu, Minghao and Liu, Qun | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6379--6391 | Pre-trained sequence-to-sequence models have significantly improved Neural Machine Translation (NMT). Different from prior works where pre-trained models usually adopt an unidirectional decoder, this paper demonstrates that pre-training a sequence-to-sequence model but with a bidirectional decoder can produce notable performance gains for both Autoregressive and Non-autoregressive NMT. Specifically, we propose CeMAT, a conditional masked language model pre-trained on large-scale bilingual and monolingual corpora in many languages. We also introduce two simple but effective methods to enhance the CeMAT, aligned code-switching {\&} masking and dynamic dual-masking. We conduct extensive experiments and show that our CeMAT can achieve significant performance improvement for all scenarios from low- to extremely high-resource languages, i.e., up to +14.4 BLEU on low resource and +7.9 BLEU improvements on average for Autoregressive NMT. For Non-autoregressive NMT, we demonstrate it can also produce consistent performance gains, i.e., up to +5.3 BLEU. To the best of our knowledge, this is the first work to pre-train a unified model for fine-tuning on both NMT tasks. Code, data, and pre-trained models are available at \url{https://github.com/huawei-noah/Pretrained-Language-Model/CeMAT} | null | null | 10.18653/v1/2022.acl-long.442 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,108 |
inproceedings | jimenez-etal-2022-carets | {CARETS}: A Consistency And Robustness Evaluative Test Suite for {VQA} | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.443/ | Jimenez, Carlos E. and Russakovsky, Olga and Narasimhan, Karthik | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6392--6405 | We introduce CARETS, a systematic test suite to measure consistency and robustness of modern VQA models through a series of six fine-grained capability tests. In contrast to existing VQA test sets, CARETS features balanced question generation to create pairs of instances to test models, with each pair focusing on a specific capability such as rephrasing, logical symmetry or image obfuscation. We evaluate six modern VQA systems on CARETS and identify several actionable weaknesses in model comprehension, especially with concepts such as negation, disjunction, or hypernym invariance. Interestingly, even the most sophisticated models are sensitive to aspects such as swapping the order of terms in a conjunction or varying the number of answer choices mentioned in the question. We release CARETS to be used as an extensible tool for evaluating multi-modal model robustness. | null | null | 10.18653/v1/2022.acl-long.443 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,109 |
inproceedings | gu-etal-2022-phrase | Phrase-aware Unsupervised Constituency Parsing | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.444/ | Gu, Xiaotao and Shen, Yikang and Shen, Jiaming and Shang, Jingbo and Han, Jiawei | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6406--6415 | Recent studies have achieved inspiring success in unsupervised grammar induction using masked language modeling (MLM) as the proxy task. Despite their high accuracy in identifying low-level structures, prior arts tend to struggle in capturing high-level structures like clauses, since the MLM task usually only requires information from local context. In this work, we revisit LM-based constituency parsing from a phrase-centered perspective. Inspired by the natural reading process of humans, we propose to regularize the parser with phrases extracted by an unsupervised phrase tagger to help the LM model quickly manage low-level structures. For a better understanding of high-level structures, we propose a phrase-guided masking strategy for LM to emphasize more on reconstructing non-phrase words. We show that the initial phrase regularization serves as an effective bootstrap, and phrase-guided masking improves the identification of high-level structures. Experiments on the public benchmark with two different backbone models demonstrate the effectiveness and generality of our method. | null | null | 10.18653/v1/2022.acl-long.444 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,110
inproceedings | ji-etal-2022-achieving | Achieving Reliable Human Assessment of Open-Domain Dialogue Systems | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.445/ | Ji, Tianbo and Graham, Yvette and Jones, Gareth and Lyu, Chenyang and Liu, Qun | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6416--6437 | Evaluation of open-domain dialogue systems is highly challenging and development of better techniques is highlighted time and again as desperately needed. Despite substantial efforts to carry out reliable live evaluation of systems in recent competitions, annotations have been abandoned and reported as too unreliable to yield sensible results. This is a serious problem since automatic metrics are not known to provide a good indication of what may or may not be a high-quality conversation. Answering the distress call of competitions that have emphasized the urgent need for better evaluation techniques in dialogue, we present the successful development of human evaluation that is highly reliable while still remaining feasible and low cost. Self-replication experiments reveal almost perfectly repeatable results with a correlation of $r=0.969$. Furthermore, due to the lack of appropriate methods of statistical significance testing, the likelihood of potential improvements to systems occurring due to chance is rarely taken into account in dialogue evaluation, and the evaluation we propose facilitates application of standard tests. Since we have developed a highly reliable evaluation method, new insights into system performance can be revealed. We therefore include a comparison of state-of-the-art models (i) with and without personas, to measure the contribution of personas to conversation quality, as well as (ii) prescribed versus freely chosen topics. Interestingly with respect to personas, results indicate that personas do not positively contribute to conversation quality as expected. | null | null | 10.18653/v1/2022.acl-long.445 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,111 |
inproceedings | panthaplackel-etal-2022-updated | Updated Headline Generation: Creating Updated Summaries for Evolving News Stories | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.446/ | Panthaplackel, Sheena and Benton, Adrian and Dredze, Mark | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6438--6461 | We propose the task of updated headline generation, in which a system generates a headline for an updated article, considering both the previous article and headline. The system must identify the novel information in the article update, and modify the existing headline accordingly. We create data for this task using the NewsEdits corpus by automatically identifying contiguous article versions that are likely to require a substantive headline update. We find that models conditioned on the prior headline and body revisions produce headlines judged by humans to be as factual as gold headlines while making fewer unnecessary edits compared to a standard headline generation model. Our experiments establish benchmarks for this new contextual summarization task. | null | null | 10.18653/v1/2022.acl-long.446 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,112 |
inproceedings | ung-etal-2022-saferdialogues | {S}a{F}e{RD}ialogues: Taking Feedback Gracefully after Conversational Safety Failures | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.447/ | Ung, Megan and Xu, Jing and Boureau, Y-Lan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6462--6481 | Current open-domain conversational models can easily be made to talk in inadequate ways. Online learning from conversational feedback given by the conversation partner is a promising avenue for a model to improve and adapt, so as to generate fewer of these safety failures. However, current state-of-the-art models tend to react to feedback with defensive or oblivious responses. This makes for an unpleasant experience and may discourage conversation partners from giving feedback in the future. This work proposes SaFeRDialogues, a task and dataset of graceful responses to conversational feedback about safety failures. We collect a dataset of 8k dialogues demonstrating safety failures, feedback signaling them, and a response acknowledging the feedback. We show how fine-tuning on this dataset results in conversations that human raters deem considerably more likely to lead to a civil conversation, without sacrificing engagingness or general conversational ability. | null | null | 10.18653/v1/2022.acl-long.447 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,113 |
inproceedings | goodwin-etal-2022-compositional | Compositional Generalization in Dependency Parsing | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.448/ | Goodwin, Emily and Reddy, Siva and O{'}Donnell, Timothy and Bahdanau, Dzmitry | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6482--6493 | Compositionality{---}the ability to combine familiar units like words into novel phrases and sentences{---}has been the focus of intense interest in artificial intelligence in recent years. To test compositional generalization in semantic parsing, Keysers et al. (2020) introduced Compositional Freebase Queries (CFQ). This dataset maximizes the similarity between the test and train distributions over primitive units, like words, while maximizing the compound divergence: the dissimilarity between test and train distributions over larger structures, like phrases. Dependency parsing, however, lacks a compositional generalization benchmark. In this work, we introduce a gold-standard set of dependency parses for CFQ, and use this to analyze the behaviour of a state-of-the-art dependency parser (Qi et al., 2020) on the CFQ dataset. We find that increasing compound divergence degrades dependency parsing performance, although not as dramatically as semantic parsing performance. Additionally, we find the performance of the dependency parser does not uniformly degrade relative to compound divergence, and the parser performs differently on different splits with the same compound divergence. We explore a number of hypotheses for what causes the non-uniform degradation in dependency parsing performance, and identify a number of syntactic structures that drive the dependency parser's lower performance on the most challenging splits. | null | null | 10.18653/v1/2022.acl-long.448 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,114
inproceedings | ahuja-etal-2022-aspectnews | {ASPECTNEWS}: Aspect-Oriented Summarization of News Documents | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.449/ | Ahuja, Ojas and Xu, Jiacheng and Gupta, Akshay and Horecka, Kevin and Durrett, Greg | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6494--6506 | Generic summaries try to cover an entire document and query-based summaries try to answer document-specific questions. But real users' needs often fall in between these extremes and correspond to aspects, high-level topics discussed among similar types of documents. In this paper, we collect a dataset of realistic aspect-oriented summaries, AspectNews, which covers different subtopics about articles in news sub-domains. We annotate data across two domains of articles, earthquakes and fraud investigations, where each article is annotated with two distinct summaries focusing on different aspects for each domain. A system producing a single generic summary cannot concisely satisfy both aspects. Our focus in evaluation is how well existing techniques can generalize to these domains without seeing in-domain training data, so we turn to techniques to construct synthetic training data that have been used in query-focused summarization work. We compare several training schemes that differ in how strongly keywords are used and how oracle summaries are extracted. Our evaluation shows that our final approach yields (a) focused summaries, better than those from a generic summarization system or from keyword matching; (b) a system sensitive to the choice of keywords. | null | null | 10.18653/v1/2022.acl-long.449 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,115 |
inproceedings | gu-etal-2022-memsum | {M}em{S}um: Extractive Summarization of Long Documents Using Multi-Step Episodic {M}arkov Decision Processes | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.450/ | Gu, Nianlong and Ash, Elliott and Hahnloser, Richard | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6507--6522 | We introduce MemSum (Multi-step Episodic Markov decision process extractive SUMmarizer), a reinforcement-learning-based extractive summarizer enriched at each step with information on the current extraction history. When MemSum iteratively selects sentences into the summary, it considers a broad information set that would intuitively also be used by humans in this task: 1) the text content of the sentence, 2) the global text context of the rest of the document, and 3) the extraction history consisting of the set of sentences that have already been extracted. With a lightweight architecture, MemSum obtains state-of-the-art test-set performance (ROUGE) in summarizing long documents taken from PubMed, arXiv, and GovReport. Ablation studies demonstrate the importance of local, global, and history information. A human evaluation confirms the high quality and low redundancy of the generated summaries, stemming from MemSum's awareness of extraction history. | null | null | 10.18653/v1/2022.acl-long.450 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,116
inproceedings | menon-etal-2022-clues | {CLUES}: A Benchmark for Learning Classifiers using Natural Language Explanations | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.451/ | R. Menon, Rakesh and Ghosh, Sayan and Srivastava, Shashank | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6523--6546 | Supervised learning has traditionally focused on inductive learning by observing labeled examples of a task. In contrast, a hallmark of human intelligence is the ability to learn new concepts purely from language. Here, we explore training zero-shot classifiers for structured data purely from language. For this, we introduce CLUES, a benchmark for Classifier Learning Using natural language ExplanationS, consisting of a range of classification tasks over structured data along with natural language supervision in the form of explanations. CLUES consists of 36 real-world and 144 synthetic classification tasks. It contains crowdsourced explanations describing real-world tasks from multiple teachers and programmatically generated explanations for the synthetic tasks. To model the influence of explanations in classifying an example, we develop ExEnt, an entailment-based model that learns classifiers using explanations. ExEnt generalizes up to 18{\%} better (relative) on novel tasks than a baseline that does not use explanations. We delineate key challenges for automated learning from explanations, addressing which can lead to progress on CLUES in the future. Code and datasets are available at: \url{https://clues-benchmark.github.io}. | null | null | 10.18653/v1/2022.acl-long.451 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,117 |
inproceedings | shi-etal-2022-substructure | Substructure Distribution Projection for Zero-Shot Cross-Lingual Dependency Parsing | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.452/ | Shi, Freda and Gimpel, Kevin and Livescu, Karen | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6547--6563 | We present substructure distribution projection (SubDP), a technique that projects a distribution over structures in one domain to another, by projecting substructure distributions separately. Models for the target domain can then be trained, using the projected distributions as soft silver labels. We evaluate SubDP on zero shot cross-lingual dependency parsing, taking dependency arcs as substructures: we project the predicted dependency arc distributions in the source language(s) to target language(s), and train a target language parser on the resulting distributions. Given an English tree bank as the only source of human supervision, SubDP achieves better unlabeled attachment score than all prior work on the Universal Dependencies v2.2 (Nivre et al., 2020) test set across eight diverse target languages, as well as the best labeled attachment score on six languages. In addition, SubDP improves zero shot cross-lingual dependency parsing with very few (e.g., 50) supervised bitext pairs, across a broader range of target languages. | null | null | 10.18653/v1/2022.acl-long.452 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,118 |
inproceedings | tonneau-etal-2022-multilingual | Multilingual Detection of Personal Employment Status on {T}witter | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.453/ | Tonneau, Manuel and Adjodah, Dhaval and Palotti, Joao and Grinberg, Nir and Fraiberger, Samuel | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6564--6587 | Detecting disclosures of individuals' employment status on social media can provide valuable information to match job seekers with suitable vacancies, offer social protection, or measure labor market flows. However, identifying such personal disclosures is a challenging task due to their rarity in a sea of social media content and the variety of linguistic forms used to describe them. Here, we examine three Active Learning (AL) strategies in real-world settings of extreme class imbalance, and identify five types of disclosures about individuals' employment status (e.g. job loss) in three languages using BERT-based classification models. Our findings show that, even under extreme imbalance settings, a small number of AL iterations is sufficient to obtain large and significant gains in precision, recall, and diversity of results compared to a supervised baseline with the same number of labels. We also find that no AL strategy consistently outperforms the rest. Qualitative analysis suggests that AL helps focus the attention mechanism of BERT on core terms and adjust the boundaries of semantic expansion, highlighting the importance of interpretable models to provide greater control and visibility into this dynamic learning process. | null | null | 10.18653/v1/2022.acl-long.453 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,119 |
inproceedings | zhao-etal-2022-multihiertt | {M}ulti{H}iertt: Numerical Reasoning over Multi Hierarchical Tabular and Textual Data | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.454/ | Zhao, Yilun and Li, Yunxiang and Li, Chenying and Zhang, Rui | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6588--6600 | Numerical reasoning over hybrid data containing both textual and tabular content (e.g., financial reports) has recently attracted much attention in the NLP community. However, existing question answering (QA) benchmarks over hybrid data only include a single flat table in each document and thus lack examples of multi-step numerical reasoning across multiple hierarchical tables. To facilitate data analytical progress, we construct a new large-scale benchmark, MultiHiertt, with QA pairs over Multi Hierarchical Tabular and Textual data. MultiHiertt is built from a wealth of financial reports and has the following unique characteristics: 1) each document contains multiple tables and longer unstructured texts; 2) most of the tables contained are hierarchical; 3) the reasoning process required for each question is more complex and challenging than existing benchmarks; and 4) fine-grained annotations of reasoning processes and supporting facts are provided to reveal complex numerical reasoning. We further introduce a novel QA model termed MT2Net, which first applies facts retrieving to extract relevant supporting facts from both tables and text and then uses a reasoning module to perform symbolic reasoning over retrieved facts. We conduct comprehensive experiments on various baselines. The experimental results show that MultiHiertt presents a strong challenge for existing baselines whose results lag far behind the performance of human experts. The dataset and code are publicly available at \url{https://github.com/psunlpgroup/MultiHiertt}. | null | null | 10.18653/v1/2022.acl-long.454 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,120
inproceedings | bylinina-tikhonov-2022-transformers | Transformers in the loop: Polarity in neural models of language | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.455/ | Bylinina, Lisa and Tikhonov, Alexey | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6601--6610 | Representation of linguistic phenomena in computational language models is typically assessed against the predictions of existing linguistic theories of these phenomena. Using the notion of polarity as a case study, we show that this is not always the most adequate set-up. We probe polarity via so-called 'negative polarity items' (in particular, English 'any') in two pre-trained Transformer-based models (BERT and GPT-2). We show that – at least for polarity – metrics derived from language models are more consistent with data from psycholinguistic experiments than linguistic theory predictions. Establishing this allows us to more adequately evaluate the performance of language models and also to use language models to discover new insights into natural language grammar beyond existing linguistic theories. This work contributes to establishing closer ties between psycholinguistic experiments and experiments with language models. | null | null | 10.18653/v1/2022.acl-long.455 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,121
inproceedings | he-etal-2022-bridging | Bridging the Data Gap between Training and Inference for Unsupervised Neural Machine Translation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.456/ | He, Zhiwei and Wang, Xing and Wang, Rui and Shi, Shuming and Tu, Zhaopeng | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6611--6623 | Back-translation is a critical component of Unsupervised Neural Machine Translation (UNMT), which generates pseudo parallel data from target monolingual data. A UNMT model is trained on the pseudo parallel data with translated source, and translates natural source sentences in inference. The source discrepancy between training and inference hinders the translation performance of UNMT models. By carefully designing experiments, we identify two representative characteristics of the data gap in source: (1) style gap (i.e., translated vs. natural text style) that leads to poor generalization capability; (2) content gap that induces the model to produce hallucination content biased towards the target language. To narrow the data gap, we propose an online self-training approach, which simultaneously uses the pseudo parallel data {natural source, translated target} to mimic the inference scenario. Experimental results on several widely-used language pairs show that our approach outperforms two strong baselines (XLM and MASS) by remedying the style and content gaps. | null | null | 10.18653/v1/2022.acl-long.456 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,122
inproceedings | cohen-etal-2022-sdr | {SDR}: Efficient Neural Re-ranking using Succinct Document Representation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.457/ | Cohen, Nachshon and Portnoy, Amit and Fetahu, Besnik and Ingber, Amir | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6624--6637 | BERT based ranking models have achieved superior performance on various information retrieval tasks. However, the large number of parameters and complex self-attention operations come at a significant latency overhead. To remedy this, recent works propose late-interaction architectures, which allow pre-computation of intermediate document representations, thus reducing latency. Nonetheless, having solved the immediate latency issue, these methods now introduce storage costs and network fetching latency, which limit their adoption in real-life production systems. In this work, we propose the Succinct Document Representation (SDR) scheme that computes highly compressed intermediate document representations, mitigating the storage/network issue. Our approach first reduces the dimension of token representations by encoding them using a novel autoencoder architecture that uses the document's textual content in both the encoding and decoding phases. After this token encoding step, we further reduce the size of the document representations using modern quantization techniques. Evaluation on MSMARCO's passage re-ranking task shows that compared to existing approaches using compressed document representations, our method is highly efficient, achieving 4x–11.6x higher compression rates for the same ranking quality. Similarly, on the TREC CAR dataset, we achieve 7.7x higher compression rate for the same ranking quality. | null | null | 10.18653/v1/2022.acl-long.457 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,123
inproceedings | valizadeh-parde-2022-ai | The {AI} Doctor Is In: A Survey of Task-Oriented Dialogue Systems for Healthcare Applications | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.458/ | Valizadeh, Mina and Parde, Natalie | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6638--6660 | Task-oriented dialogue systems are increasingly prevalent in healthcare settings, and have been characterized by a diverse range of architectures and objectives. Although these systems have been surveyed in the medical community from a non-technical perspective, a systematic review from a rigorous computational perspective has to date remained noticeably absent. As a result, many important implementation details of healthcare-oriented dialogue systems remain limited or underspecified, slowing the pace of innovation in this area. To fill this gap, we investigated an initial pool of 4070 papers from well-known computer science, natural language processing, and artificial intelligence venues, identifying 70 papers discussing the system-level implementation of task-oriented dialogue systems for healthcare applications. We conducted a comprehensive technical review of these papers, and present our key findings including identified gaps and corresponding recommendations. | null | null | 10.18653/v1/2022.acl-long.458 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,124 |
inproceedings | le-etal-2022-shield | {SHIELD}: Defending Textual Neural Networks against Multiple Black-Box Adversarial Attacks with Stochastic Multi-Expert Patcher | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.459/ | Le, Thai and Park, Noseong and Lee, Dongwon | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6661--6674 | Even though several methods have been proposed to defend textual neural network (NN) models against black-box adversarial attacks, they often defend against a specific text perturbation strategy and/or require re-training the models from scratch. This leads to a lack of generalization in practice and redundant computation. In particular, the state-of-the-art transformer models (e.g., BERT, RoBERTa) require great time and computation resources. By borrowing an idea from software engineering, in order to address these limitations, we propose a novel algorithm, SHIELD, which modifies and re-trains only the last layer of a textual NN, and thus it "patches" and "transforms" the NN into a stochastic weighted ensemble of multi-expert prediction heads. Considering that most of current black-box attacks rely on iterative search mechanisms to optimize their adversarial perturbations, SHIELD confuses the attackers by automatically utilizing different weighted ensembles of predictors depending on the input. In other words, SHIELD breaks a fundamental assumption of the attack, which is that a victim NN model remains constant during an attack. By conducting comprehensive experiments, we demonstrate that all of CNN, RNN, BERT, and RoBERTa-based textual NNs, once patched by SHIELD, exhibit a relative enhancement of 15%–70% in accuracy on average against 14 different black-box attacks, outperforming 6 defensive baselines across 3 public datasets. All codes are to be released. | null | null | 10.18653/v1/2022.acl-long.459 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,125
inproceedings | chatterjee-etal-2022-accurate | Accurate Online Posterior Alignments for Principled Lexically-Constrained Decoding | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.460/ | Chatterjee, Soumya and Sarawagi, Sunita and Jyothi, Preethi | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6675--6689 | Online alignment in machine translation refers to the task of aligning a target word to a source word when the target sequence has only been partially decoded. Good online alignments facilitate important applications such as lexically constrained translation where user-defined dictionaries are used to inject lexical constraints into the translation model. We propose a novel posterior alignment technique that is truly online in its execution and superior in terms of alignment error rates compared to existing methods. Our proposed inference technique jointly considers alignment and token probabilities in a principled manner and can be seamlessly integrated within existing constrained beam-search decoding algorithms. On five language pairs, including two distant language pairs, we achieve consistent drop in alignment error rates. When deployed on seven lexically constrained translation tasks, we achieve significant improvements in BLEU specifically around the constrained positions. | null | null | 10.18653/v1/2022.acl-long.460 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,126 |
inproceedings | chen-etal-2022-leveraging | Leveraging Task Transferability to Meta-learning for Clinical Section Classification with Limited Data | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.461/ | Chen, Zhuohao and Kim, Jangwon and Bhakta, Ram and Sir, Mustafa | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6690--6702 | Identifying sections is one of the critical components of understanding medical information from unstructured clinical notes and developing assistive technologies for clinical note-writing tasks. Most state-of-the-art text classification systems require thousands of in-domain text data to achieve high performance. However, collecting in-domain and recent clinical note data with section labels is challenging given the high level of privacy and sensitivity. The present paper proposes an algorithmic way to improve the task transferability of meta-learning-based text classification in order to address the issue of low-resource target data. Specifically, we explore how to make the best use of the source dataset and propose a unique task transferability measure named Normalized Negative Conditional Entropy (NNCE). Leveraging the NNCE, we develop strategies for selecting clinical categories and sections from source task data to boost cross-domain meta-learning accuracy. Experimental results show that our task selection strategies improve section classification accuracy significantly compared to meta-learning algorithms. | null | null | 10.18653/v1/2022.acl-long.461 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,127 |
inproceedings | pujari-etal-2022-reinforcement | Reinforcement Guided Multi-Task Learning Framework for Low-Resource Stereotype Detection | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.462/ | Pujari, Rajkumar and Oveson, Erik and Kulkarni, Priyanka and Nouri, Elnaz | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6703--6712 | As large Pre-trained Language Models (PLMs) trained on large amounts of data in an unsupervised manner become more ubiquitous, identifying various types of bias in the text has come into sharp focus. Existing 'Stereotype Detection' datasets mainly adopt a diagnostic approach toward large PLMs. Blodgett et al. (2021) show that there are significant reliability issues with the existing benchmark datasets. Annotating a reliable dataset requires a precise understanding of the subtle nuances of how stereotypes manifest in text. In this paper, we annotate a focused evaluation set for 'Stereotype Detection' that addresses those pitfalls by de-constructing various ways in which stereotypes manifest in text. Further, we present a multi-task model that leverages the abundance of data-rich neighboring tasks such as hate speech detection, offensive language detection, misogyny detection, etc., to improve the empirical performance on 'Stereotype Detection'. We then propose a reinforcement-learning agent that guides the multi-task learning model by learning to identify the training examples from the neighboring tasks that help the target task the most. We show that the proposed models achieve significant empirical gains over existing baselines on all the tasks. | null | null | 10.18653/v1/2022.acl-long.462 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,128
inproceedings | boldsen-paggio-2022-letters | Letters From the Past: Modeling Historical Sound Change Through Diachronic Character Embeddings | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.463/ | Boldsen, Sidsel and Paggio, Patrizia | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6713--6722 | While a great deal of work has been done on NLP approaches to lexical semantic change detection, other aspects of language change have received less attention from the NLP community. In this paper, we address the detection of sound change through historical spelling. We propose that a sound change can be captured by comparing the relative distance through time between the distributions of the characters involved before and after the change has taken place. We model these distributions using PPMI character embeddings. We verify this hypothesis in synthetic data and then test the method's ability to trace the well-known historical change of lenition of plosives in Danish historical sources. We show that the models are able to identify several of the changes under consideration and to uncover meaningful contexts in which they appeared. The methodology has the potential to contribute to the study of open questions such as the relative chronology of sound shifts and their geographical distribution. | null | null | 10.18653/v1/2022.acl-long.463 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,129
inproceedings | liu-etal-2022-token | A Token-level Reference-free Hallucination Detection Benchmark for Free-form Text Generation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.464/ | Liu, Tianyu and Zhang, Yizhe and Brockett, Chris and Mao, Yi and Sui, Zhifang and Chen, Weizhu and Dolan, Bill | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6723--6737 | Large pretrained generative models like GPT-3 often suffer from hallucinating non-existent or incorrect content, which undermines their potential merits in real applications. Existing work usually attempts to detect these hallucinations based on a corresponding oracle reference at a sentence or document level. However ground-truth references may not be readily available for many free-form text generation applications, and sentence- or document-level detection may fail to provide the fine-grained signals that would prevent fallacious content in real time. As a first step to addressing these issues, we propose a novel token-level, reference-free hallucination detection task and an associated annotated dataset named HaDeS (HAllucination DEtection dataSet). To create this dataset, we first perturb a large number of text segments extracted from English language Wikipedia, and then verify these with crowd-sourced annotations. To mitigate label imbalance during annotation, we utilize an iterative model-in-loop strategy. We conduct comprehensive data analyses and create multiple baseline models. | null | null | 10.18653/v1/2022.acl-long.464 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,130 |
inproceedings | grivas-etal-2022-low | Low-Rank Softmax Can Have Unargmaxable Classes in Theory but Rarely in Practice | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.465/ | Grivas, Andreas and Bogoychev, Nikolay and Lopez, Adam | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6738--6758 | Classifiers in natural language processing (NLP) often have a large number of output classes. For example, neural language models (LMs) and machine translation (MT) models both predict tokens from a vocabulary of thousands. The Softmax output layer of these models typically receives as input a dense feature representation, which has much lower dimensionality than the output. In theory, the result is that some words may be impossible to predict via argmax, irrespective of input features, and empirically, there is evidence this happens in small language models (Demeter et al., 2020). In this paper we ask whether it can happen in practical large language models and translation models. To do so, we develop algorithms to detect such unargmaxable tokens in public models. We find that 13 out of 150 models do indeed have such tokens; however, they are very infrequent and unlikely to impact model quality. We release our algorithms and code to the public. | null | null | 10.18653/v1/2022.acl-long.465 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,131
inproceedings | ma-etal-2022-prompt | {P}rompt for Extraction? {PAIE}: {P}rompting Argument Interaction for Event Argument Extraction | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.466/ | Ma, Yubo and Wang, Zehao and Cao, Yixin and Li, Mukai and Chen, Meiqi and Wang, Kun and Shao, Jing | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6759--6774 | In this paper, we propose an effective yet efficient model PAIE for both sentence-level and document-level Event Argument Extraction (EAE), which also generalizes well when there is a lack of training data. On the one hand, PAIE utilizes prompt tuning for extractive objectives to take the best advantages of Pre-trained Language Models (PLMs). It introduces two span selectors based on the prompt to select start/end tokens among input texts for each role. On the other hand, it captures argument interactions via multi-role prompts and conducts joint optimization with optimal span assignments via a bipartite matching loss. Also, with a flexible prompt design, PAIE can extract multiple arguments with the same role instead of conventional heuristic threshold tuning. We have conducted extensive experiments on three benchmarks, including both sentence- and document-level EAE. The results present promising improvements from PAIE (3.5% and 2.3% F1 gains on average on three benchmarks, for PAIE-base and PAIE-large respectively). Further analysis demonstrates the efficiency, generalization to few-shot settings, and effectiveness of different extractive prompt tuning strategies. Our code is available at \url{https://github.com/mayubo2333/PAIE}. | null | null | 10.18653/v1/2022.acl-long.466 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,132
inproceedings | zhang-feng-2022-reducing | Reducing Position Bias in Simultaneous Machine Translation with Length-Aware Framework | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.467/ | Zhang, Shaolei and Feng, Yang | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6775--6788 | Simultaneous machine translation (SiMT) starts translating while receiving the streaming source inputs, and hence the source sentence is always incomplete during translating. Different from the full-sentence MT using the conventional seq-to-seq architecture, SiMT often applies prefix-to-prefix architecture, which forces each target word to only align with a partial source prefix to adapt to the incomplete source in streaming inputs. However, the source words in the front positions are always illusorily considered more important since they appear in more prefixes, resulting in position bias, which makes the model pay more attention to the front source positions in testing. In this paper, we first analyze the phenomenon of position bias in SiMT, and develop a Length-Aware Framework to reduce the position bias by bridging the structural gap between SiMT and full-sentence MT. Specifically, given the streaming inputs, we first predict the full-sentence length and then fill the future source position with positional encoding, thereby turning the streaming inputs into a pseudo full-sentence. The proposed framework can be integrated into most existing SiMT methods to further improve performance. Experiments on two representative SiMT methods, including the state-of-the-art adaptive policy, show that our method successfully reduces the position bias and thereby achieves better SiMT performance. | null | null | 10.18653/v1/2022.acl-long.467 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,133
inproceedings | louis-spanakis-2022-statutory | A Statutory Article Retrieval Dataset in {F}rench | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.468/ | Louis, Antoine and Spanakis, Gerasimos | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6789--6803 | Statutory article retrieval is the task of automatically retrieving law articles relevant to a legal question. While recent advances in natural language processing have sparked considerable interest in many legal tasks, statutory article retrieval remains primarily untouched due to the scarcity of large-scale and high-quality annotated datasets. To address this bottleneck, we introduce the Belgian Statutory Article Retrieval Dataset (BSARD), which consists of 1,100+ French native legal questions labeled by experienced jurists with relevant articles from a corpus of 22,600+ Belgian law articles. Using BSARD, we benchmark several state-of-the-art retrieval approaches, including lexical and dense architectures, both in zero-shot and supervised setups. We find that fine-tuned dense retrieval models significantly outperform other systems. Our best performing baseline achieves 74.8% R@100, which is promising for the feasibility of the task and indicates there is still room for improvement. By the specificity of the domain and addressed task, BSARD presents a unique challenge problem for future research on legal information retrieval. Our dataset and source code are publicly available. | null | null | 10.18653/v1/2022.acl-long.468 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,134
inproceedings | logacheva-etal-2022-paradetox | {P}ara{D}etox: Detoxification with Parallel Data | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.469/ | Logacheva, Varvara and Dementieva, Daryna and Ustyantsev, Sergey and Moskovskiy, Daniil and Dale, David and Krotova, Irina and Semenov, Nikita and Panchenko, Alexander | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6804--6818 | We present a novel pipeline for the collection of parallel data for the detoxification task. We collect non-toxic paraphrases for over 10,000 English toxic sentences. We also show that this pipeline can be used to distill a large existing corpus of paraphrases to get toxic-neutral sentence pairs. We release two parallel corpora which can be used for the training of detoxification models. To the best of our knowledge, these are the first parallel datasets for this task. We describe our pipeline in detail to make it fast to set up for a new language or domain, thus contributing to faster and easier development of new parallel resources. We train several detoxification models on the collected data and compare them with several baselines and state-of-the-art unsupervised approaches. We conduct both automatic and manual evaluations. All models trained on parallel data outperform the state-of-the-art unsupervised models by a large margin. This suggests that our novel datasets can boost the performance of detoxification systems. | null | null | 10.18653/v1/2022.acl-long.469 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,135 |
inproceedings | boldsen-etal-2022-interpreting | Interpreting Character Embeddings With Perceptual Representations: The Case of Shape, Sound, and Color | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.470/ | Boldsen, Sidsel and Agirrezabal, Manex and Hollenstein, Nora | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6819--6836 | Character-level information is included in many NLP models, but evaluating the information encoded in character representations is an open issue. We leverage perceptual representations in the form of shape, sound, and color embeddings and perform a representational similarity analysis to evaluate their correlation with textual representations in five languages. This cross-lingual analysis shows that textual character representations correlate strongly with sound representations for languages using an alphabetic script, while shape correlates with featural scripts. We further develop a set of probing classifiers to intrinsically evaluate what phonological information is encoded in character embeddings. Our results suggest that information on features such as voicing are embedded in both LSTM and transformer-based representations. | null | null | 10.18653/v1/2022.acl-long.470 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,136 |
inproceedings | carlsson-etal-2022-fine | Fine-Grained Controllable Text Generation Using Non-Residual Prompting | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.471/ | Carlsson, Fredrik and Öhman, Joey and Liu, Fangyu and Verlinden, Severine and Nivre, Joakim and Sahlgren, Magnus | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6837--6857 | The introduction of immensely large Causal Language Models (CLMs) has rejuvenated the interest in open-ended text generation. However, controlling the generative process for these Transformer-based models is at large an unsolved problem. Earlier work has explored either plug-and-play decoding strategies, or more powerful but blunt approaches such as prompting. There hence currently exists a trade-off between fine-grained control, and the capability for more expressive high-level instructions. To alleviate this trade-off, we propose an encoder-decoder architecture that enables intermediate text prompts at arbitrary time steps. We propose a resource-efficient method for converting a pre-trained CLM into this architecture, and demonstrate its potential on various experiments, including the novel task of contextualized word inclusion. Our method provides strong results on multiple experimental settings, proving itself to be both expressive and versatile. | null | null | 10.18653/v1/2022.acl-long.471 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,137
inproceedings | lux-vu-2022-language | Language-Agnostic Meta-Learning for Low-Resource Text-to-Speech with Articulatory Features | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.472/ | Lux, Florian and Vu, Thang | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6858--6868 | While neural text-to-speech systems perform remarkably well in high-resource scenarios, they cannot be applied to the majority of the over 6,000 spoken languages in the world due to a lack of appropriate training data. In this work, we use embeddings derived from articulatory vectors rather than embeddings derived from phoneme identities to learn phoneme representations that hold across languages. In conjunction with language agnostic meta learning, this enables us to fine-tune a high-quality text-to-speech model on just 30 minutes of data in a previously unseen language spoken by a previously unseen speaker. | null | null | 10.18653/v1/2022.acl-long.472 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,138 |
inproceedings | cassidy-etal-2022-twittirish | {T}witt{I}rish: A {U}niversal {D}ependencies Treebank of Tweets in {M}odern {I}rish | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.473/ | Cassidy, Lauren and Lynn, Teresa and Barry, James and Foster, Jennifer | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6869--6884 | Modern Irish is a minority language lacking sufficient computational resources for the task of accurate automatic syntactic parsing of user-generated content such as tweets. Although language technology for the Irish language has been developing in recent years, these tools tend to perform poorly on user-generated content. As with other languages, the linguistic style observed in Irish tweets differs, in terms of orthography, lexicon, and syntax, from that of standard texts more commonly used for the development of language models and parsers. We release the first Universal Dependencies treebank of Irish tweets, facilitating natural language processing of user-generated content in Irish. In this paper, we explore the differences between Irish tweets and standard Irish text, and the challenges associated with dependency parsing of Irish tweets. We describe our bootstrapping method of treebank development and report on preliminary parsing experiments. | null | null | 10.18653/v1/2022.acl-long.473 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,139 |
inproceedings | liu-etal-2022-length | Length Control in Abstractive Summarization by Pretraining Information Selection | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.474/ | Liu, Yizhu and Jia, Qi and Zhu, Kenny | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6885--6895 | Previous length-controllable summarization models mostly control lengths at the decoding stage, whereas the encoding or the selection of information from the source document is not sensitive to the designed length. They also tend to generate summaries as long as those in the training data. In this paper, we propose a length-aware attention mechanism (LAAM) to adapt the encoding of the source based on the desired length. Our approach works by training LAAM on a summary length balanced dataset built from the original training data, and then fine-tuning as usual. Results show that this approach is effective in generating high-quality summaries with desired lengths and even those short lengths never seen in the original training set. | null | null | 10.18653/v1/2022.acl-long.474 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,140 |
inproceedings | fei-etal-2022-cqg | {CQG}: A Simple and Effective Controlled Generation Framework for Multi-hop Question Generation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.475/ | Fei, Zichu and Zhang, Qi and Gui, Tao and Liang, Di and Wang, Sirui and Wu, Wei and Huang, Xuanjing | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6896--6906 | Multi-hop question generation focuses on generating complex questions that require reasoning over multiple pieces of information of the input passage. Current models with state-of-the-art performance have been able to generate the correct questions corresponding to the answers. However, most models can not ensure the complexity of generated questions, so they may generate shallow questions that can be answered without multi-hop reasoning. To address this challenge, we propose the CQG, which is a simple and effective controlled framework. CQG employs a simple method to generate the multi-hop questions that contain key entities in multi-hop reasoning chains, which ensure the complexity and quality of the questions. In addition, we introduce a novel controlled Transformer-based decoder to guarantee that key entities appear in the questions. Experiment results show that our model greatly improves performance, which also outperforms the state-of-the-art model about 25% by 5 BLEU points on HotpotQA. | null | null | 10.18653/v1/2022.acl-long.475 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,141
inproceedings | abdou-etal-2022-word | Word Order Does Matter and Shuffled Language Models Know It | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.476/ | Abdou, Mostafa and Ravishankar, Vinit and Kulmizev, Artur and Søgaard, Anders | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6907--6919 | Recent studies have shown that language models pretrained and/or fine-tuned on randomly permuted sentences exhibit competitive performance on GLUE, putting into question the importance of word order information. Somewhat counter-intuitively, some of these studies also report that position embeddings appear to be crucial for models' good performance with shuffled text. We probe these language models for word order information and investigate what position embeddings learned from shuffled text encode, showing that these models retain a notion of word order information. We show this is in part due to a subtlety in how shuffling is implemented in previous work – before rather than after subword segmentation. Surprisingly, we find that even language models trained on text shuffled after subword segmentation retain some semblance of information about word order because of the statistical dependencies between sentence length and unigram probabilities. Finally, we show that beyond GLUE, a variety of language understanding tasks do require word order information, often to an extent that cannot be learned through fine-tuning. | null | null | 10.18653/v1/2022.acl-long.476 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,142
inproceedings | chrysostomou-aletras-2022-empirical | An Empirical Study on Explanations in Out-of-Domain Settings | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.477/ | Chrysostomou, George and Aletras, Nikolaos | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6920--6938 | Recent work in Natural Language Processing has focused on developing approaches that extract faithful explanations, either via identifying the most important tokens in the input (i.e. post-hoc explanations) or by designing inherently faithful models that first select the most important tokens and then use them to predict the correct label (i.e. select-then-predict models). Currently, these approaches are largely evaluated on in-domain settings. Yet, little is known about how post-hoc explanations and inherently faithful models perform in out-of-domain settings. In this paper, we conduct an extensive empirical study that examines: (1) the out-of-domain faithfulness of post-hoc explanations, generated by five feature attribution methods; and (2) the out-of-domain performance of two inherently faithful models over six datasets. Contrary to our expectations, results show that in many cases out-of-domain post-hoc explanation faithfulness measured by sufficiency and comprehensiveness is higher compared to in-domain. We find this misleading and suggest using a random baseline as a yardstick for evaluating post-hoc explanation faithfulness. Our findings also show that select-then-predict models demonstrate comparable predictive performance in out-of-domain settings to full-text trained models. | null | null | 10.18653/v1/2022.acl-long.477 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,143
inproceedings | kotnis-etal-2022-milie | {MILIE}: Modular {\&} Iterative Multilingual Open Information Extraction | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.478/ | Kotnis, Bhushan and Gashteovski, Kiril and Rubio, Daniel and Shaker, Ammar and Rodriguez-Tembras, Vanesa and Takamoto, Makoto and Niepert, Mathias and Lawrence, Carolin | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6939--6950 | Open Information Extraction (OpenIE) is the task of extracting (subject, predicate, object) triples from natural language sentences. Current OpenIE systems extract all triple slots independently. In contrast, we explore the hypothesis that it may be beneficial to extract triple slots iteratively: first extract easy slots, followed by the difficult ones by conditioning on the easy slots, and therefore achieve a better overall extraction. Based on this hypothesis, we propose a neural OpenIE system, MILIE, that operates in an iterative fashion. Due to the iterative nature, the system is also modular: it is possible to seamlessly integrate rule-based extraction systems with a neural end-to-end system, thereby allowing rule-based systems to supply extraction slots which MILIE can leverage for extracting the remaining slots. We confirm our hypothesis empirically: MILIE outperforms SOTA systems on multiple languages ranging from Chinese to Arabic. Additionally, we are the first to provide an OpenIE test dataset for Arabic and Galician. | null | null | 10.18653/v1/2022.acl-long.478 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,144
inproceedings | sugawara-etal-2022-makes | What Makes Reading Comprehension Questions Difficult? | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.479/ | Sugawara, Saku and Nangia, Nikita and Warstadt, Alex and Bowman, Samuel | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6951--6971 | For a natural language understanding benchmark to be useful in research, it has to consist of examples that are diverse and difficult enough to discriminate among current and near-future state-of-the-art systems. However, we do not yet know how best to select text sources to collect a variety of challenging examples. In this study, we crowdsource multiple-choice reading comprehension questions for passages taken from seven qualitatively distinct sources, analyzing what attributes of passages contribute to the difficulty and question types of the collected examples. To our surprise, we find that passage source, length, and readability measures do not significantly affect question difficulty. Through our manual annotation of seven reasoning types, we observe several trends between passage sources and reasoning types, e.g., logical reasoning is more often required in questions written for technical passages. These results suggest that when creating a new benchmark dataset, selecting a diverse set of passages can help ensure a diverse range of question types, but that passage difficulty need not be a priority. | null | null | 10.18653/v1/2022.acl-long.479 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,145 |
inproceedings | iranzo-sanchez-etal-2022-simultaneous | From Simultaneous to Streaming Machine Translation by Leveraging Streaming History | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.480/ | Iranzo-Sánchez, Javier and Civera, Jorge and Juan, Alfons | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6972--6985 | Simultaneous Machine Translation is the task of incrementally translating an input sentence before it is fully available. Currently, simultaneous translation is carried out by translating each sentence independently of the previously translated text. More generally, Streaming MT can be understood as an extension of Simultaneous MT to the incremental translation of a continuous input text stream. In this work, a state-of-the-art simultaneous sentence-level MT system is extended to the streaming setup by leveraging the streaming history. Extensive empirical results are reported on IWSLT Translation Tasks, showing that leveraging the streaming history leads to significant quality gains. In particular, the proposed system proves to compare favorably to the best performing systems. | null | null | 10.18653/v1/2022.acl-long.480 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,146
inproceedings | lu-etal-2022-rationale | A Rationale-Centric Framework for Human-in-the-loop Machine Learning | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.481/ | Lu, Jinghui and Yang, Linyi and Namee, Brian and Zhang, Yue | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6986--6996 | We present a novel rationale-centric framework with human-in-the-loop – Rationales-centric Double-robustness Learning (RDL) – to boost model out-of-distribution performance in few-shot learning scenarios. By using static semi-factual generation and dynamic human-intervened correction, RDL, acting like a sensible "inductive bias", exploits rationales (i.e. phrases that cause the prediction), human interventions and semi-factual augmentations to decouple spurious associations and bias models towards generally applicable underlying distributions, which enables fast and accurate generalisation. Experimental results show that RDL leads to significant prediction benefits on both in-distribution and out-of-distribution tests, especially for few-shot learning scenarios, compared to many state-of-the-art benchmarks. We also perform extensive ablation studies to support in-depth analyses of each component in our framework. | null | null | 10.18653/v1/2022.acl-long.481 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,147
inproceedings | hershcovich-etal-2022-challenges | Challenges and Strategies in Cross-Cultural {NLP} | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.482/ | Hershcovich, Daniel and Frank, Stella and Lent, Heather and de Lhoneux, Miryam and Abdou, Mostafa and Brandl, Stephanie and Bugliarello, Emanuele and Cabello Piqueras, Laura and Chalkidis, Ilias and Cui, Ruixiang and Fierro, Constanza and Margatina, Katerina and Rust, Phillip and Søgaard, Anders | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 6997--7013 | Various efforts in the Natural Language Processing (NLP) community have been made to accommodate linguistic diversity and serve speakers of many different languages. However, it is important to acknowledge that speakers and the content they produce and require, vary not just by language, but also by culture. Although language and culture are tightly linked, there are important differences. Analogous to cross-lingual and multilingual NLP, cross-cultural and multicultural NLP considers these differences in order to better serve users of NLP systems. We propose a principled framework to frame these efforts, and survey existing and potential strategies. | null | null | 10.18653/v1/2022.acl-long.482 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,148
inproceedings | cui-etal-2022-prototypical | Prototypical Verbalizer for Prompt-based Few-shot Tuning | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.483/ | Cui, Ganqu and Hu, Shengding and Ding, Ning and Huang, Longtao and Liu, Zhiyuan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 7014--7024 | Prompt-based tuning for pre-trained language models (PLMs) has shown its effectiveness in few-shot learning. Typically, prompt-based tuning wraps the input text into a cloze question. To make predictions, the model maps the output words to labels via a verbalizer, which is either manually designed or automatically built. However, manual verbalizers heavily depend on domain-specific prior knowledge and human efforts, while finding appropriate label words automatically still remains challenging. In this work, we propose the prototypical verbalizer (ProtoVerb) which is built directly from training data. Specifically, ProtoVerb learns prototype vectors as verbalizers by contrastive learning. In this way, the prototypes summarize training instances and are able to enclose rich class-level semantics. We conduct experiments on both topic classification and entity typing tasks, and the results demonstrate that ProtoVerb significantly outperforms current automatic verbalizers, especially when training data is extremely scarce. More surprisingly, ProtoVerb consistently boosts prompt-based tuning even on untuned PLMs, indicating an elegant non-tuning way to utilize PLMs. Our codes are available at \url{https://github.com/thunlp/OpenPrompt}. | null | null | 10.18653/v1/2022.acl-long.483 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,149
inproceedings | hagen-etal-2022-clickbait | Clickbait Spoiling via Question Answering and Passage Retrieval | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.484/ | Hagen, Matthias and Fröbe, Maik and Jurk, Artur and Potthast, Martin | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 7025--7036 | We introduce and study the task of clickbait spoiling: generating a short text that satisfies the curiosity induced by a clickbait post. Clickbait links to a web page and advertises its contents by arousing curiosity instead of providing an informative summary. Our contributions are approaches to classify the type of spoiler needed (i.e., a phrase or a passage), and to generate appropriate spoilers. A large-scale evaluation and error analysis on a new corpus of 5,000 manually spoiled clickbait posts – the Webis Clickbait Spoiling Corpus 2022 – shows that our spoiler type classifier achieves an accuracy of 80%, while the question answering model DeBERTa-large outperforms all others in generating spoilers for both types. | null | null | 10.18653/v1/2022.acl-long.484 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,150
inproceedings | zhou-etal-2022-bert | {BERT} Learns to Teach: Knowledge Distillation with Meta Learning | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.485/ | Zhou, Wangchunshu and Xu, Canwen and McAuley, Julian | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 7037--7049 | We present Knowledge Distillation with Meta Learning (MetaDistil), a simple yet effective alternative to traditional knowledge distillation (KD) methods where the teacher model is fixed during training. We show the teacher network can learn to better transfer knowledge to the student network (i.e., learning to teach) with the feedback from the performance of the distilled student network in a meta learning framework. Moreover, we introduce a pilot update mechanism to improve the alignment between the inner-learner and meta-learner in meta learning algorithms that focus on an improved inner-learner. Experiments on various benchmarks show that MetaDistil can yield significant improvements compared with traditional KD algorithms and is less sensitive to the choice of different student capacity and hyperparameters, facilitating the use of KD on different tasks and models. | null | null | 10.18653/v1/2022.acl-long.485 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,151
inproceedings | fang-etal-2022-stemm | {STEMM}: Self-learning with Speech-text Manifold Mixup for Speech Translation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.486/ | Fang, Qingkai and Ye, Rong and Li, Lei and Feng, Yang and Wang, Mingxuan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 7050--7062 | How can we learn a better speech representation for end-to-end speech-to-text translation (ST) with limited labeled data? Existing techniques often attempt to transfer powerful machine translation (MT) capabilities to ST, but neglect the representation discrepancy across modalities. In this paper, we propose the Speech-TExt Manifold Mixup (STEMM) method to calibrate such discrepancy. Specifically, we mix up the representation sequences of different modalities, take both unimodal speech sequences and multimodal mixed sequences as input to the translation model in parallel, and regularize their output predictions with a self-learning framework. Experiments on the MuST-C speech translation benchmark and further analysis show that our method effectively alleviates the cross-modal representation discrepancy and achieves significant improvements over a strong baseline on eight translation directions. | null | null | 10.18653/v1/2022.acl-long.486 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,152
inproceedings | wang-etal-2022-integrating | Integrating Vectorized Lexical Constraints for Neural Machine Translation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.487/ | Wang, Shuo and Tan, Zhixing and Liu, Yang | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 7063--7073 | Lexically constrained neural machine translation (NMT), which controls the generation of NMT models with pre-specified constraints, is important in many practical scenarios. Due to the representation gap between discrete constraints and continuous vectors in NMT models, most existing works choose to construct synthetic data or modify the decoding algorithm to impose lexical constraints, treating the NMT model as a black box. In this work, we propose to open this black box by directly integrating the constraints into NMT models. Specifically, we vectorize source and target constraints into continuous keys and values, which can be utilized by the attention modules of NMT models. The proposed integration method is based on the assumption that the correspondence between keys and values in attention modules is naturally suitable for modeling constraint pairs. Experimental results show that our method consistently outperforms several representative baselines on four language pairs, demonstrating the superiority of integrating vectorized lexical constraints. | null | null | 10.18653/v1/2022.acl-long.487 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,153 |
inproceedings | liu-etal-2022-mpii | {MPII}: Multi-Level Mutual Promotion for Inference and Interpretation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.488/ | Liu, Yan and Chen, Sanyuan and Yang, Yazheng and Dai, Qi | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 7074--7084 | To better understand the rationale behind model behavior, recent works have explored providing interpretations to support inference predictions. However, existing methods tend to provide human-unfriendly interpretations and are prone to sub-optimal performance due to one-sided promotion, i.e., either promoting inference with interpretation or vice versa. In this paper, we propose a multi-level Mutual Promotion mechanism for self-evolved Inference and sentence-level Interpretation (MPII). Specifically, at the model level, we propose a Step-wise Integration Mechanism to jointly perform and deeply integrate inference and interpretation in an autoregressive manner. At the optimization level, we propose an Adversarial Fidelity Regularization to improve the fidelity between inference and interpretation with the Adversarial Mutual Information training strategy. Extensive experiments on NLI and CQA tasks reveal that the proposed MPII approach can significantly outperform baseline models in both inference performance and interpretation quality. | null | null | 10.18653/v1/2022.acl-long.488 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,154
inproceedings | dai-etal-2022-stablemoe | {S}table{M}o{E}: Stable Routing Strategy for Mixture of Experts | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.489/ | Dai, Damai and Dong, Li and Ma, Shuming and Zheng, Bo and Sui, Zhifang and Chang, Baobao and Wei, Furu | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 7085--7095 | The Mixture-of-Experts (MoE) technique can scale up the model size of Transformers with an affordable computational overhead. We point out that existing learning-to-route MoE methods suffer from the routing fluctuation issue, i.e., the target expert of the same input may change along with training, but only one expert will be activated for the input during inference. The routing fluctuation tends to harm sample efficiency because the same input updates different experts but only one is finally used. In this paper, we propose StableMoE with two training stages to address the routing fluctuation problem. In the first training stage, we learn a balanced and cohesive routing strategy and distill it into a lightweight router decoupled from the backbone model. In the second training stage, we utilize the distilled router to determine the token-to-expert assignment and freeze it for a stable routing strategy. We validate our method on language modeling and multilingual machine translation. The results show that StableMoE outperforms existing MoE methods in terms of both convergence speed and performance. | null | null | 10.18653/v1/2022.acl-long.489 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,155 |
inproceedings | zhu-li-2022-boundary | Boundary Smoothing for Named Entity Recognition | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.490/ | Zhu, Enwei and Li, Jinpeng | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 7096--7108 | Neural named entity recognition (NER) models may easily encounter the over-confidence issue, which degrades the performance and calibration. Inspired by label smoothing and driven by the ambiguity of boundary annotation in NER engineering, we propose boundary smoothing as a regularization technique for span-based neural NER models. It re-assigns entity probabilities from annotated spans to the surrounding ones. Built on a simple but strong baseline, our model achieves results better than or competitive with previous state-of-the-art systems on eight well-known NER benchmarks. Further empirical analysis suggests that boundary smoothing effectively mitigates over-confidence, improves model calibration, and brings flatter neural minima and more smoothed loss landscapes. | null | null | 10.18653/v1/2022.acl-long.490 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,156 |
inproceedings | wang-etal-2022-incorporating | Incorporating Hierarchy into Text Encoder: a Contrastive Learning Approach for Hierarchical Text Classification | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.491/ | Wang, Zihan and Wang, Peiyi and Huang, Lianzhe and Sun, Xin and Wang, Houfeng | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 7109--7119 | Hierarchical text classification is a challenging subtask of multi-label classification due to its complex label hierarchy. Existing methods encode text and label hierarchy separately and mix their representations for classification, where the hierarchy remains unchanged for all input text. Instead of modeling them separately, in this work, we propose Hierarchy-guided Contrastive Learning (HGCLR) to directly embed the hierarchy into a text encoder. During training, HGCLR constructs positive samples for input text under the guidance of the label hierarchy. By pulling together the input text and its positive sample, the text encoder can learn to generate the hierarchy-aware text representation independently. Therefore, after training, the HGCLR enhanced text encoder can dispense with the redundant hierarchy. Extensive experiments on three benchmark datasets verify the effectiveness of HGCLR. | null | null | 10.18653/v1/2022.acl-long.491 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 30,157 |