Dataset features:

| Feature | Type | Values / range |
|---|---|---|
| `entry_type` | stringclasses | 4 values |
| `citation_key` | stringlengths | 10–110 characters |
| `title` | stringlengths | 6–276 characters |
| `editor` | stringclasses | 723 values |
| `month` | stringclasses | 69 values |
| `year` | stringdate | 1963-01-01 to 2022-01-01 |
| `address` | stringclasses | 202 values |
| `publisher` | stringclasses | 41 values |
| `url` | stringlengths | 34–62 characters |
| `author` | stringlengths | 6–2.07k characters |
| `booktitle` | stringclasses | 861 values |
| `pages` | stringlengths | 1–12 characters |
| `abstract` | stringlengths | 302–2.4k characters |
| `journal` | stringclasses | 5 values |
| `volume` | stringclasses | 24 values |
| `doi` | stringlengths | 20–39 characters |
| `n` | stringclasses | 3 values |
| `wer` | stringclasses | 1 value |
| `uas` | null | no values |
| `language` | stringclasses | 3 values |
| `isbn` | stringclasses | 34 values |
| `recall` | null | no values |
| `number` | stringclasses | 8 values |
| `a` | null | no values |
| `b` | null | no values |
| `c` | null | no values |
| `k` | null | no values |
| `f1` | stringclasses | 4 values |
| `r` | stringclasses | 2 values |
| `mci` | stringclasses | 1 value |
| `p` | stringclasses | 2 values |
| `sd` | stringclasses | 1 value |
| `female` | stringclasses | 0 values |
| `m` | stringclasses | 0 values |
| `food` | stringclasses | 1 value |
| `f` | stringclasses | 1 value |
| `note` | stringclasses | 20 values |
| `__index_level_0__` | int64 | 22k–106k |
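The core features above correspond to standard BibTeX fields, so a row can be loaded and rendered back into a citation entry with a few lines of Python. The sketch below is illustrative only, not part of this dataset card: the Hub id `user/acl-anthology-bibtex`, the `train` split name, and the `row_to_bibtex` helper are assumptions.

```python
# Minimal sketch: load the dataset and rebuild a BibTeX entry from one row.
# Assumptions (not stated in the card): the Hub id "user/acl-anthology-bibtex",
# the "train" split, and the row_to_bibtex helper are hypothetical.
from datasets import load_dataset

ds = load_dataset("user/acl-anthology-bibtex", split="train")  # hypothetical id

# Core BibTeX-style fields, in roughly the order they appear in the schema above.
BIB_FIELDS = [
    "title", "author", "editor", "booktitle", "journal", "volume", "number",
    "pages", "month", "year", "address", "publisher", "url", "doi", "isbn",
    "note", "abstract",
]

def row_to_bibtex(row: dict) -> str:
    """Render one row as a BibTeX entry, skipping fields that are null/empty."""
    lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
    for field in BIB_FIELDS:
        value = row.get(field)
        if value:  # null columns come back as None and are dropped here
            lines.append(f"  {field} = {{{value}}},")
    lines.append("}")
    return "\n".join(lines)

print(row_to_bibtex(ds[0]))
```

Null columns (including the metric-style fields such as `wer`, `f1`, or `mci`) come back as Python `None` and are skipped by the truthiness check, so only populated bibliographic fields appear in the output.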
Sample rows:

entry_type: inproceedings
citation_key: xing-etal-2022-automatic
title: Automatic Explanation Generation For Climate Science Claims
editor: Parameswaran, Pradeesh and Biggs, Jennifer and Powers, David
month: dec
year: 2022
address: Adelaide, Australia
publisher: Australasian Language Technology Association
url: https://aclanthology.org/2022.alta-1.16/
author: Xing, Rui and Bhatia, Shraey and Baldwin, Timothy and Lau, Jey Han
booktitle: Proceedings of the 20th Annual Workshop of the Australasian Language Technology Association
pages: 122--129
abstract:
Climate change is an existential threat to humanity, the proliferation of unsubstantiated claims relating to climate science is manipulating public perception, motivating the need for fact-checking in climate science. In this work, we draw on recent work that uses retrieval-augmented generation for veracity prediction and explanation generation, in framing explanation generation as a query-focused multi-document summarization task. We adapt PRIMERA to the climate science domain by adding additional global attention on claims. Through automatic evaluation and qualitative analysis, we demonstrate that our method is effective at generating explanations.
journal, volume, doi, and all remaining optional fields: null
__index_level_0__: 29,654

entry_type: inproceedings
citation_key: huang-hyslop-2022-zhangzhou
title: Zhangzhou Implosives and Their Variations
editor: Parameswaran, Pradeesh and Biggs, Jennifer and Powers, David
month: dec
year: 2022
address: Adelaide, Australia
publisher: Australasian Language Technology Association
url: https://aclanthology.org/2022.alta-1.17/
author: Huang, Yishan and Hyslop, Gwendolyn
booktitle: Proceedings of the 20th Annual Workshop of the Australasian Language Technology Association
pages: 122--129
abstract:
Zhangzhou Southern Min employs the airstream mechanism of glottalic ingressive as a contrastive feature in its onset system. However, their realisations are highly diverse with eleven phonetic variants that can be derived from three implosive phonemes (/ɓ, ɗ, ɠ/). The allophonic variations are regressively motivated by three driving factors comprising the nasal [Ṽ], labial-velar [u, w], and palatal [i, j] characteristics of subsequent segments. Several processes that include labialisation, nasalisation, lenition, laminalisation, dentalisation and palatalisation have been found to trigger alternation on the airstream mechanism, manner of articulation, and place of articulation of related sounds, resulting in diverse phonetic outputs of the three implosives phonemes that can be captured using phonological rules.
journal, volume, doi, and all remaining optional fields: null
__index_level_0__: 29,655

entry_type: inproceedings
citation_key: vallejo-etal-2022-evaluating
title: Evaluating the Examiner: The Perils of {P}earson Correlation for Validating Text Similarity Metrics
editor: Parameswaran, Pradeesh and Biggs, Jennifer and Powers, David
month: dec
year: 2022
address: Adelaide, Australia
publisher: Australasian Language Technology Association
url: https://aclanthology.org/2022.alta-1.18/
author: Vallejo, Gisela and Baldwin, Timothy and Frermann, Lea
booktitle: Proceedings of the 20th Annual Workshop of the Australasian Language Technology Association
pages: 130--138
abstract:
In recent years, researchers have developed question-answering based approaches to automatically evaluate system summaries, reporting improved validity compared to word overlap-based metrics like ROUGE, in terms of correlation with human ratings of criteria including fluency and hallucination. In this paper, we take a closer look at one particular metric, QuestEval, and ask whether: (1) it can serve as a more general metric for long document similarity assessment; and (2) a single correlation score between metric scores and human ratings, as the currently standard approach, is sufficient for metric validation. We find that correlation scores can be misleading, and that score distributions and outliers should be taken into account. With these caveats in mind, QuestEval can be a promising candidate for long document similarity assessment.
journal, volume, doi, and all remaining optional fields: null
__index_level_0__: 29,656

entry_type: inproceedings
citation_key: almodovar-etal-2022-language
title: Can Language Models Help in System Security? Investigating Log Anomaly Detection using {BERT}
editor: Parameswaran, Pradeesh and Biggs, Jennifer and Powers, David
month: dec
year: 2022
address: Adelaide, Australia
publisher: Australasian Language Technology Association
url: https://aclanthology.org/2022.alta-1.19/
author: Almodovar, Crispin and Sabrina, Fariza and Karimi, Sarvnaz and Azad, Salahuddin
booktitle: Proceedings of the 20th Annual Workshop of the Australasian Language Technology Association
pages: 139--147
abstract:
The log files generated by networked computer systems contain valuable information that can be used to monitor system security and stability. Recently, techniques based on Deep Learning and Natural Language Processing have been proven effective in detecting anomalous activities from system logs. The current approaches, however, have limited practical application because they rely on log templates which cannot handle variability in log content, or they require supervised training to be effective. In this paper, a novel log anomaly detection approach named LogFiT is proposed. The LogFiT model inherits the linguistic {\textquotedblleft}knowledge{\textquotedblright} encoded within a pretrained BERT-based language model and fine-tunes it towards learning the linguistic structure of system logs. The LogFiT model is trained in a self-supervised manner using normal log data only. Using masked token prediction and centroid distance minimisation as training objectives, the LogFiT model learns to recognise the linguistic patterns associated with the normal log data. During inference, a discriminator function uses the LogFiT model`s top-k token prediction accuracy and computed centroid distance to determine if the input is normal or anomaly. Experiments show that LogFiT`s F1 score and specificity exceeds that of baseline models on the HDFS dataset and comparable on the BGL dataset.
journal, volume, doi, and all remaining optional fields: null
__index_level_0__: 29,657

entry_type: inproceedings
citation_key: domingos-santos-2022-semantics
title: A Semantics of Spatial Expressions for interacting with unmanned aerial vehicles
editor: Parameswaran, Pradeesh and Biggs, Jennifer and Powers, David
month: dec
year: 2022
address: Adelaide, Australia
publisher: Australasian Language Technology Association
url: https://aclanthology.org/2022.alta-1.20/
author: Domingos, Lucas and Santos, Paulo
booktitle: Proceedings of the 20th Annual Workshop of the Australasian Language Technology Association
pages: 148--155
abstract:
This paper describes an investigation of establishing communication between a quadrotor and a human by means of qualitative spatial relations using speech recognition. It is based on a system capable to receive, interpret, process, act, transmit and execute the commands given. This system is composed of a quadrotor equipped with a GPS, IMU sensors and radio communication, and a computer acting as a ground station, that is capable of understanding and interpreting the received commands and correctly provide answers according to an underlying qualitative reasoning formalism. Tests were performed, whose results show that the error rate was less than five percent for vertical and radial dimensions, otherwise, in horizontal dimension, we had an error rate of almost ten percent.
journal, volume, doi, and all remaining optional fields: null
__index_level_0__: 29,658

entry_type: inproceedings
citation_key: brock-etal-2022-textstar
title: Textstar: a Fast and Lightweight Graph-Based Algorithm for Extractive Summarization and Keyphrase Extraction
editor: Parameswaran, Pradeesh and Biggs, Jennifer and Powers, David
month: dec
year: 2022
address: Adelaide, Australia
publisher: Australasian Language Technology Association
url: https://aclanthology.org/2022.alta-1.22/
author: Brock, David and Khan, Ali and Doan, Tam and Lin, Alicia and Guo, Yifan and Tarau, Paul
booktitle: Proceedings of the 20th Annual Workshop of the Australasian Language Technology Association
pages: 161--169
abstract:
We introduce Textstar, a graph-based summarization and keyphrase extraction system that builds a document graph using only lemmatization and POS tagging. The document graph aggregates connections between lemma and sentence identifier nodes. Consecutive lemmas in each sentence, as well as consecutive sentences themselves, are connected in rings to form a ring of rings representing the document. We iteratively apply a centrality algorithm of our choice to the document graph and trim the lowest ranked nodes at each step. After the desired number of remaining sentences and lemmas is reached, we extract the sentences as the summary, and the remaining lemmas are aggregated into keyphrases using their context. Our algorithm is efficient enough to one-shot process large document graphs without any training, and empirical evaluation on several benchmarks indicates that our performance is higher than most other graph based algorithms.
journal, volume, doi, and all remaining optional fields: null
__index_level_0__: 29,660

entry_type: inproceedings
citation_key: tran-etal-2022-contrastive
title: Contrastive Visual and Language Learning for Visual Relationship Detection
editor: Parameswaran, Pradeesh and Biggs, Jennifer and Powers, David
month: dec
year: 2022
address: Adelaide, Australia
publisher: Australasian Language Technology Association
url: https://aclanthology.org/2022.alta-1.23/
author: Tran, Thanh and Neau, Maelic and Santos, Paulo and Powers, David
booktitle: Proceedings of the 20th Annual Workshop of the Australasian Language Technology Association
pages: 170--177
abstract:
Visual Relationship Detection aims to understand real-world objects' interactions by grounding visual concepts to compositional visual relation triples, written in the form of (subject, predicate, object). Previous works have explored the use of contrastive learning to implicitly predict the predicates from the relevant image regions. However, these models often directly leverage in-distribution spatial and language co-occurrences biases during training, preventing the models from generalizing to out-of-distribution compositions. In this work, we examine whether contrastive vision and language models pre-trained on large-scale external image and text dataset can assist the detection of compositional visual relationships. To this end, we propose a semi-supervised contrastive fine-tuning approach for the visual relationship detection task. The results show that fine-tuned models that were pre-trained on larger datasets do not yield better performance when performing visual relationship detection, and larger models can yield lower performance when compared with their smaller counterparts.
journal, volume, doi, and all remaining optional fields: null
__index_level_0__: 29,661

entry_type: inproceedings
citation_key: molla-2022-overview
title: Overview of the 2022 {ALTA} Shared task: {PIBOSO} sentence classification, 10 years later
editor: Parameswaran, Pradeesh and Biggs, Jennifer and Powers, David
month: dec
year: 2022
address: Adelaide, Australia
publisher: Australasian Language Technology Association
url: https://aclanthology.org/2022.alta-1.24/
author: Moll{\'a}, Diego
booktitle: Proceedings of the 20th Annual Workshop of the Australasian Language Technology Association
pages: 178--182
abstract:
The 2022 ALTA shared task has been running annually since 2010. This year, the shared task is a re-visit of the 2012 ALTA shared task. The purpose of this task is to classify sentences of medical publications using the PIBOSO taxonomy. This is a multi-label classification task which can help medical researchers and practitioners conduct Evidence Based Medicine (EBM). In this paper we present the task, the evaluation criteria, and the results of the systems participating in the shared task.
journal, volume, doi, and all remaining optional fields: null
__index_level_0__: 29,662

entry_type: inproceedings
citation_key: ishihara-etal-2022-estimating
title: Estimating the Strength of Authorship Evidence with a Deep-Learning-Based Approach
editor: Parameswaran, Pradeesh and Biggs, Jennifer and Powers, David
month: dec
year: 2022
address: Adelaide, Australia
publisher: Australasian Language Technology Association
url: https://aclanthology.org/2022.alta-1.25/
author: Ishihara, Shunichi and Tsuge, Satoru and Inaba, Mitsuyuki and Zaitsu, Wataru
booktitle: Proceedings of the 20th Annual Workshop of the Australasian Language Technology Association
pages: 183--187
abstract:
This study is the first likelihood ratio (LR)-based forensic text comparison study in which each text is mapped onto an embedding vector using RoBERTa as the pre-trained model. The scores obtained with Cosine distance and probabilistic linear discriminant analysis (PLDA) were calibrated to LRs with logistic regression; the quality of the LRs was assessed by log LR cost (Cllr). Although the documents in the experiments were very short (maximum 100 words), the systems reached the Cllr values of 0.55595 and 0.71591 for the Cosine and PLDA systems, respectively. The effectiveness of deep-learning-based text representation is discussed by comparing the results of the current study to those of the previous studies of systems based on conventional feature engineering tested with longer documents.
journal, volume, doi, and all remaining optional fields: null
__index_level_0__: 29,663

entry_type: inproceedings
citation_key: modarressi-etal-2022-adapler
title: {A}dap{L}e{R}: Speeding up Inference by Adaptive Length Reduction
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.1/
author: Modarressi, Ali and Mohebbi, Hosein and Pilehvar, Mohammad Taher
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 1--15
abstract:
Pre-trained language models have shown stellar performance in various downstream tasks. But, this usually comes at the cost of high latency and computation, hindering their usage in resource-limited settings. In this work, we propose a novel approach for reducing the computational cost of BERT with minimal loss in downstream performance. Our method dynamically eliminates less contributing tokens through layers, resulting in shorter lengths and consequently lower computational cost. To determine the importance of each token representation, we train a Contribution Predictor for each layer using a gradient-based saliency method. Our experiments on several diverse classification tasks show speedups up to 22x during inference time without much sacrifice in performance. We also validate the quality of the selected tokens in our method using human annotations in the ERASER benchmark. In comparison to other widely used strategies for selecting important tokens, such as saliency and attention, our proposed method has a significantly lower false positive rate in generating rationales. Our code is freely available at \url{https://github.com/amodaresi/AdapLeR}.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.1
all remaining optional fields: null
__index_level_0__: 29,667

entry_type: inproceedings
citation_key: belz-etal-2022-quantified
title: Quantified Reproducibility Assessment of {NLP} Results
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.2/
author: Belz, Anya and Popovic, Maja and Mille, Simon
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 16--28
abstract:
This paper describes and tests a method for carrying out quantified reproducibility assessment (QRA) that is based on concepts and definitions from metrology. QRA produces a single score estimating the degree of reproducibility of a given system and evaluation measure, on the basis of the scores from, and differences between, different reproductions. We test QRA on 18 different system and evaluation measure combinations (involving diverse NLP tasks and types of evaluation), for each of which we have the original results and one to seven reproduction results. The proposed QRA method produces degree-of-reproducibility scores that are comparable across multiple reproductions not only of the same, but also of different, original studies. We find that the proposed method facilitates insights into causes of variation between reproductions, and as a result, allows conclusions to be drawn about what aspects of system and/or evaluation design need to be changed in order to improve reproducibility.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.2
all remaining optional fields: null
__index_level_0__: 29,668

entry_type: inproceedings
citation_key: yu-etal-2022-rare
title: Rare Tokens Degenerate All Tokens: Improving Neural Text Generation via Adaptive Gradient Gating for Rare Token Embeddings
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.3/
author: Yu, Sangwon and Song, Jongyoon and Kim, Heeseung and Lee, Seongmin and Ryu, Woo-Jong and Yoon, Sungroh
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 29--45
abstract:
Recent studies have determined that the learned token embeddings of large-scale neural language models are degenerated to be anisotropic with a narrow-cone shape. This phenomenon, called the representation degeneration problem, facilitates an increase in the overall similarity between token embeddings that negatively affect the performance of the models. Although the existing methods that address the degeneration problem based on observations of the phenomenon triggered by the problem improves the performance of the text generation, the training dynamics of token embeddings behind the degeneration problem are still not explored. In this study, we analyze the training dynamics of the token embeddings focusing on rare token embedding. We demonstrate that the specific part of the gradient for rare token embeddings is the key cause of the degeneration problem for all tokens during training stage. Based on the analysis, we propose a novel method called, adaptive gradient gating(AGG). AGG addresses the degeneration problem by gating the specific part of the gradient for rare token embeddings. Experimental results from language modeling, word similarity, and machine translation tasks quantitatively and qualitatively verify the effectiveness of AGG.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.3
all remaining optional fields: null
__index_level_0__: 29,669

entry_type: inproceedings
citation_key: seker-etal-2022-alephbert
title: {A}leph{BERT}: Language Model Pre-training and Evaluation from Sub-Word to Sentence Level
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.4/
author: Seker, Amit and Bandel, Elron and Bareket, Dan and Brusilovsky, Idan and Greenfeld, Refael and Tsarfaty, Reut
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 46--56
abstract:
Large Pre-trained Language Models (PLMs) have become ubiquitous in the development of language understanding technology and lie at the heart of many artificial intelligence advances. While advances reported for English using PLMs are unprecedented, reported advances using PLMs for Hebrew are few and far between. The problem is twofold. First, so far, Hebrew resources for training large language models are not of the same magnitude as their English counterparts. Second, most benchmarks available to evaluate progress in Hebrew NLP require morphological boundaries which are not available in the output of standard PLMs. In this work we remedy both aspects. We present AlephBERT, a large PLM for Modern Hebrew, trained on larger vocabulary and a larger dataset than any Hebrew PLM before. Moreover, we introduce a novel neural architecture that recovers the morphological segments encoded in contextualized embedding vectors. Based on this new morphological component we offer an evaluation suite consisting of multiple tasks and benchmarks that cover sentence-level, word-level and sub-word level analyses. On all tasks, AlephBERT obtains state-of-the-art results beyond contemporary Hebrew baselines. We make our AlephBERT model, the morphological extraction model, and the Hebrew evaluation suite publicly available, for evaluating future Hebrew PLMs.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.4
all remaining optional fields: null
__index_level_0__: 29,670

entry_type: inproceedings
citation_key: li-etal-2022-learning
title: Learning to Imagine: Integrating Counterfactual Thinking in Neural Discrete Reasoning
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.5/
author: Li, Moxin and Feng, Fuli and Zhang, Hanwang and He, Xiangnan and Zhu, Fengbin and Chua, Tat-Seng
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 57--69
abstract:
Neural discrete reasoning (NDR) has shown remarkable progress in combining deep models with discrete reasoning. However, we find that existing NDR solution suffers from large performance drop on hypothetical questions, e.g. {\textquotedblleft}what the annualized rate of return would be if the revenue in 2020 was doubled{\textquotedblright}. The key to hypothetical question answering (HQA) is counterfactual thinking, which is a natural ability of human reasoning but difficult for deep models. In this work, we devise a Learning to Imagine (L2I) module, which can be seamlessly incorporated into NDR models to perform the imagination of unseen counterfactual. In particular, we formulate counterfactual thinking into two steps: 1) identifying the fact to intervene, and 2) deriving the counterfactual from the fact and assumption, which are designed as neural networks. Based on TAT-QA, we construct a very challenging HQA dataset with 8,283 hypothetical questions. We apply the proposed L2I to TAGOP, the state-of-the-art solution on TAT-QA, validating the rationality and effectiveness of our approach.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.5
all remaining optional fields: null
__index_level_0__: 29,671

entry_type: inproceedings
citation_key: zaharia-etal-2022-domain
title: Domain Adaptation in Multilingual and Multi-Domain Monolingual Settings for Complex Word Identification
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.6/
author: Zaharia, George-Eduard and Sm{\u{a}}du, R{\u{a}}zvan-Alexandru and Cercel, Dumitru and Dascalu, Mihai
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 70--80
abstract:
Complex word identification (CWI) is a cornerstone process towards proper text simplification. CWI is highly dependent on context, whereas its difficulty is augmented by the scarcity of available datasets which vary greatly in terms of domains and languages. As such, it becomes increasingly more difficult to develop a robust model that generalizes across a wide array of input examples. In this paper, we propose a novel training technique for the CWI task based on domain adaptation to improve the target character and context representations. This technique addresses the problem of working with multiple domains, inasmuch as it creates a way of smoothing the differences between the explored datasets. Moreover, we also propose a similar auxiliary task, namely text simplification, that can be used to complement lexical complexity prediction. Our model obtains a boost of up to 2.42{\%} in terms of Pearson Correlation Coefficients in contrast to vanilla training techniques, when considering the CompLex from the Lexical Complexity Prediction 2021 dataset. At the same time, we obtain an increase of 3{\%} in Pearson scores, while considering a cross-lingual setup relying on the Complex Word Identification 2018 dataset. In addition, our model yields state-of-the-art results in terms of Mean Absolute Error.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.6
all remaining optional fields: null
__index_level_0__: 29,672

entry_type: inproceedings
citation_key: liang-etal-2022-jointcl
title: {J}oint{CL}: A Joint Contrastive Learning Framework for Zero-Shot Stance Detection
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.7/
author: Liang, Bin and Zhu, Qinglin and Li, Xiang and Yang, Min and Gui, Lin and He, Yulan and Xu, Ruifeng
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 81--91
abstract:
Zero-shot stance detection (ZSSD) aims to detect the stance for an unseen target during the inference stage. In this paper, we propose a joint contrastive learning (JointCL) framework, which consists of stance contrastive learning and target-aware prototypical graph contrastive learning. Specifically, a stance contrastive learning strategy is employed to better generalize stance features for unseen targets. Further, we build a prototypical graph for each instance to learn the target-based representation, in which the prototypes are deployed as a bridge to share the graph structures between the known targets and the unseen ones. Then a novel target-aware prototypical graph contrastive learning strategy is devised to generalize the reasoning ability of target-based stance representations to the unseen targets. Extensive experiments on three benchmark datasets show that the proposed approach achieves state-of-the-art performance in the ZSSD task.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.7
all remaining optional fields: null
__index_level_0__: 29,673

entry_type: inproceedings
citation_key: ramachandran-etal-2022-caspi
title: [{CASPI}] Causal-aware Safe Policy Improvement for Task-oriented Dialogue
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.8/
author: Ramachandran, Govardana Sachithanandam and Hashimoto, Kazuma and Xiong, Caiming
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 92--102
abstract:
The recent success of reinforcement learning (RL) in solving complex tasks is often attributed to its capacity to explore and exploit an environment. Sample efficiency is usually not an issue for tasks with cheap simulators to sample data online. On the other hand, Task-oriented Dialogues (ToD) are usually learnt from offline data collected using human demonstrations. Collecting diverse demonstrations and annotating them is expensive. Unfortunately, RL policy trained on off-policy data are prone to issues of bias and generalization, which are further exacerbated by stochasticity in human response and non-markovian nature of annotated belief state of a dialogue management system. To this end, we propose a batch-RL framework for ToD policy learning: Causal-aware Safe Policy Improvement (CASPI). CASPI includes a mechanism to learn fine-grained reward that captures intention behind human response and also offers guarantee on dialogue policy`s performance against a baseline. We demonstrate the effectiveness of this framework on end-to-end dialogue task of the Multiwoz2.0 dataset. The proposed method outperforms the current state of the art. Further more we demonstrate sample efficiency, where our method trained only on 20{\%} of the data, are comparable to current state of the art method trained on 100{\%} data on two out of there evaluation metrics.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.8
all remaining optional fields: null
__index_level_0__: 29,674

entry_type: inproceedings
citation_key: ma-etal-2022-unitranser
title: {U}ni{T}ran{S}e{R}: A Unified Transformer Semantic Representation Framework for Multimodal Task-Oriented Dialog System
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.9/
author: Ma, Zhiyuan and Li, Jianjun and Li, Guohui and Cheng, Yongjing
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 103--114
abstract:
As a more natural and intelligent interaction manner, multimodal task-oriented dialog system recently has received great attention and many remarkable progresses have been achieved. Nevertheless, almost all existing studies follow the pipeline to first learn intra-modal features separately and then conduct simple feature concatenation or attention-based feature fusion to generate responses, which hampers them from learning inter-modal interactions and conducting cross-modal feature alignment for generating more intention-aware responses. To address these issues, we propose UniTranSeR, a Unified Transformer Semantic Representation framework with feature alignment and intention reasoning for multimodal dialog systems. Specifically, we first embed the multimodal features into a unified Transformer semantic space to prompt inter-modal interactions, and then devise a feature alignment and intention reasoning (FAIR) layer to perform cross-modal entity alignment and fine-grained key-value reasoning, so as to effectively identify user`s intention for generating more accurate responses. Experimental results verify the effectiveness of UniTranSeR, showing that it significantly outperforms state-of-the-art approaches on the representative MMD dataset.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.9
all remaining optional fields: null
__index_level_0__: 29,675

entry_type: inproceedings
citation_key: feng-etal-2022-dynamic
title: Dynamic Schema Graph Fusion Network for Multi-Domain Dialogue State Tracking
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.10/
author: Feng, Yue and Lipani, Aldo and Ye, Fanghua and Zhang, Qiang and Yilmaz, Emine
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 115--126
abstract:
Dialogue State Tracking (DST) aims to keep track of users' intentions during the course of a conversation. In DST, modelling the relations among domains and slots is still an under-studied problem. Existing approaches that have considered such relations generally fall short in: (1) fusing prior slot-domain membership relations and dialogue-aware dynamic slot relations explicitly, and (2) generalizing to unseen domains. To address these issues, we propose a novel \textbf{D}ynamic \textbf{S}chema \textbf{G}raph \textbf{F}usion \textbf{Net}work (\textbf{DSGFNet}), which generates a dynamic schema graph to explicitly fuse the prior slot-domain membership relations and dialogue-aware dynamic slot relations. It also uses the schemata to facilitate knowledge transfer to new domains. DSGFNet consists of a dialogue utterance encoder, a schema graph encoder, a dialogue-aware schema graph evolving network, and a schema graph enhanced dialogue state decoder. Empirical results on benchmark datasets (i.e., SGD, MultiWOZ2.1, and MultiWOZ2.2), show that DSGFNet outperforms existing methods.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.10
all remaining optional fields: null
__index_level_0__: 29,676

entry_type: inproceedings
citation_key: zhang-etal-2022-attention
title: Attention Temperature Matters in Abstractive Summarization Distillation
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.11/
author: Zhang, Shengqiang and Zhang, Xingxing and Bao, Hangbo and Wei, Furu
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 127--141
abstract:
Recent progress of abstractive text summarization largely relies on large pre-trained sequence-to-sequence Transformer models, which are computationally expensive. This paper aims to distill these large models into smaller ones for faster inference and with minimal performance loss. Pseudo-labeling based methods are popular in sequence-to-sequence model distillation. In this paper, we find simply manipulating attention temperatures in Transformers can make pseudo labels easier to learn for student models. Our experiments on three summarization datasets show our proposed method consistently improves vanilla pseudo-labeling based methods. Further empirical analysis shows that both pseudo labels and summaries produced by our students are shorter and more abstractive.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.11
all remaining optional fields: null
__index_level_0__: 29,677

entry_type: inproceedings
citation_key: chen-etal-2022-towards
title: Towards Making the Most of Cross-Lingual Transfer for Zero-Shot Neural Machine Translation
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.12/
author: Chen, Guanhua and Ma, Shuming and Chen, Yun and Zhang, Dongdong and Pan, Jia and Wang, Wenping and Wei, Furu
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 142--157
abstract:
This paper demonstrates that multilingual pretraining and multilingual fine-tuning are both critical for facilitating cross-lingual transfer in zero-shot translation, where the neural machine translation (NMT) model is tested on source languages unseen during supervised training. Following this idea, we present SixT+, a strong many-to-English NMT model that supports 100 source languages but is trained with a parallel dataset in only six source languages. SixT+ initializes the decoder embedding and the full encoder with XLM-R large and then trains the encoder and decoder layers with a simple two-stage training strategy. SixT+ achieves impressive performance on many-to-English translation. It significantly outperforms CRISS and m2m-100, two strong multilingual NMT systems, with an average gain of 7.2 and 5.0 BLEU respectively. Additionally, SixT+ offers a set of model parameters that can be further fine-tuned to other unsupervised tasks. We demonstrate that adding SixT+ initialization outperforms state-of-the-art explicitly designed unsupervised NMT models on Si{\ensuremath{<}}-{\ensuremath{>}}En and Ne{\ensuremath{<}}-{\ensuremath{>}}En by over 1.2 average BLEU. When applied to zero-shot cross-lingual abstractive summarization, it produces an average performance gain of 12.3 ROUGE-L over mBART-ft. We conduct detailed analyses to understand the key ingredients of SixT+, including multilinguality of the auxiliary parallel data, positional disentangled encoder, and the cross-lingual transferability of its encoder.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.12
all remaining optional fields: null
__index_level_0__: 29,678

entry_type: inproceedings
citation_key: pan-etal-2022-topwords
title: {T}op{WORDS}-Seg: Simultaneous Text Segmentation and Word Discovery for Open-Domain {C}hinese Texts via {B}ayesian Inference
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.13/
author: Pan, Changzai and Sun, Maosong and Deng, Ke
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 158--169
abstract:
Processing open-domain Chinese texts has been a critical bottleneck in computational linguistics for decades, partially because text segmentation and word discovery often entangle with each other in this challenging scenario. No existing methods yet can achieve effective text segmentation and word discovery simultaneously in open domain. This study fills in this gap by proposing a novel method called TopWORDS-Seg based on Bayesian inference, which enjoys robust performance and transparent interpretation when no training corpus and domain vocabulary are available. Advantages of TopWORDS-Seg are demonstrated by a series of experimental studies.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.13
all remaining optional fields: null
__index_level_0__: 29,679

entry_type: inproceedings
citation_key: li-etal-2022-unsupervised-multiple
title: An Unsupervised Multiple-Task and Multiple-Teacher Model for Cross-lingual Named Entity Recognition
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.14/
author: Li, Zhuoran and Hu, Chunming and Guo, Xiaohui and Chen, Junfan and Qin, Wenyi and Zhang, Richong
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 170--179
abstract:
Cross-lingual named entity recognition task is one of the critical problems for evaluating the potential transfer learning techniques on low resource languages. Knowledge distillation using pre-trained multilingual language models between source and target languages have shown their superiority in transfer. However, existing cross-lingual distillation models merely consider the potential transferability between two identical single tasks across both domains. Other possible auxiliary tasks to improve the learning performance have not been fully investigated. In this study, based on the knowledge distillation framework and multi-task learning, we introduce the similarity metric model as an auxiliary task to improve the cross-lingual NER performance on the target domain. Specifically, an entity recognizer and a similarity evaluator are first trained in parallel as two teachers from the source domain. Then, two tasks in the student model are supervised by these teachers simultaneously. Empirical studies on the three datasets across 7 different languages confirm the effectiveness of the proposed model.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.14
all remaining optional fields: null
__index_level_0__: 29,680

entry_type: inproceedings
citation_key: moro-etal-2022-discriminative
title: Discriminative Marginalized Probabilistic Neural Method for Multi-Document Summarization of Medical Literature
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.15/
author: Moro, Gianluca and Ragazzi, Luca and Valgimigli, Lorenzo and Freddi, Davide
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 180--189
abstract:
Although current state-of-the-art Transformer-based solutions succeeded in a wide range for single-document NLP tasks, they still struggle to address multi-input tasks such as multi-document summarization. Many solutions truncate the inputs, thus ignoring potential summary-relevant contents, which is unacceptable in the medical domain where each information can be vital. Others leverage linear model approximations to apply multi-input concatenation, worsening the results because all information is considered, even if it is conflicting or noisy with respect to a shared background. Despite the importance and social impact of medicine, there are no ad-hoc solutions for multi-document summarization. For this reason, we propose a novel discriminative marginalized probabilistic method (DAMEN) trained to discriminate critical information from a cluster of topic-related medical documents and generate a multi-document summary via token probability marginalization. Results prove we outperform the previous state-of-the-art on a biomedical dataset for multi-document summarization of systematic literature reviews. Moreover, we perform extensive ablation studies to motivate the design choices and prove the importance of each module of our method.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.15
all remaining optional fields: null
__index_level_0__: 29,681

entry_type: inproceedings
citation_key: huang-etal-2022-sparse
title: Sparse Progressive Distillation: Resolving Overfitting under Pretrain-and-Finetune Paradigm
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.16/
author: Huang, Shaoyi and Xu, Dongkuan and Yen, Ian and Wang, Yijue and Chang, Sung-En and Li, Bingbing and Chen, Shiyang and Xie, Mimi and Rajasekaran, Sanguthevar and Liu, Hang and Ding, Caiwen
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 190--200
abstract:
Conventional wisdom in pruning Transformer-based language models is that pruning reduces the model expressiveness and thus is more likely to underfit rather than overfit. However, under the trending pretrain-and-finetune paradigm, we postulate a counter-traditional hypothesis, that is: pruning increases the risk of overfitting when performed at the fine-tuning phase. In this paper, we aim to address the overfitting problem and improve pruning performance via progressive knowledge distillation with error-bound properties. We show for the first time that reducing the risk of overfitting can help the effectiveness of pruning under the pretrain-and-finetune paradigm. Ablation studies and experiments on the GLUE benchmark show that our method outperforms the leading competitors across different tasks.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.16
all remaining optional fields: null
__index_level_0__: 29,682

entry_type: inproceedings
citation_key: kambhatla-etal-2022-cipherdaug
title: {C}ipher{DA}ug: Ciphertext based Data Augmentation for Neural Machine Translation
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.17/
author: Kambhatla, Nishant and Born, Logan and Sarkar, Anoop
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 201--218
abstract:
We propose a novel data-augmentation technique for neural machine translation based on ROT-$k$ ciphertexts. ROT-$k$ is a simple letter substitution cipher that replaces a letter in the plaintext with the $k$th letter after it in the alphabet. We first generate multiple ROT-$k$ ciphertexts using different values of $k$ for the plaintext which is the source side of the parallel data. We then leverage this enciphered training data along with the original parallel data via multi-source training to improve neural machine translation. Our method, CipherDAug, uses a co-regularization-inspired training procedure, requires no external data sources other than the original training data, and uses a standard Transformer to outperform strong data augmentation techniques on several datasets by a significant margin. This technique combines easily with existing approaches to data augmentation, and yields particularly strong results in low-resource settings.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.17
all remaining optional fields: null
__index_level_0__: 29,683

entry_type: inproceedings
citation_key: patil-etal-2022-overlap
title: Overlap-based Vocabulary Generation Improves Cross-lingual Transfer Among Related Languages
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.18/
author: Patil, Vaidehi and Talukdar, Partha and Sarawagi, Sunita
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 219--233
abstract:
Pre-trained multilingual language models such as mBERT and XLM-R have demonstrated great potential for zero-shot cross-lingual transfer to low web-resource languages (LRL). However, due to limited model capacity, the large difference in the sizes of available monolingual corpora between high web-resource languages (HRL) and LRLs does not provide enough scope of co-embedding the LRL with the HRL, thereby affecting the downstream task performance of LRLs. In this paper, we argue that relatedness among languages in a language family along the dimension of lexical overlap may be leveraged to overcome some of the corpora limitations of LRLs. We propose Overlap BPE (OBPE), a simple yet effective modification to the BPE vocabulary generation algorithm which enhances overlap across related languages. Through extensive experiments on multiple NLP tasks and datasets, we observe that OBPE generates a vocabulary that increases the representation of LRLs via tokens shared with HRLs. This results in improved zero-shot transfer from related HRLs to LRLs without reducing HRL representation and accuracy. Unlike previous studies that dismissed the importance of token-overlap, we show that in the low-resource related language setting, token overlap matters. Synthetically reducing the overlap to zero can cause as much as a four-fold drop in zero-shot transfer accuracy.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.18
all remaining optional fields: null
__index_level_0__: 29,684

entry_type: inproceedings
citation_key: zhuang-etal-2022-long
title: Long-range Sequence Modeling with Predictable Sparse Attention
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.19/
author: Zhuang, Yimeng and Zhang, Jing and Tu, Mei
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 234--243
abstract:
Self-attention mechanism has been shown to be an effective approach for capturing global context dependencies in sequence modeling, but it suffers from quadratic complexity in time and memory usage. Due to the sparsity of the attention matrix, much computation is redundant. Therefore, in this paper, we design an efficient Transformer architecture, named Fourier Sparse Attention for Transformer (FSAT), for fast long-range sequence modeling. We provide a brand-new perspective for constructing sparse attention matrix, i.e. making the sparse attention matrix predictable. Two core sub-modules are: (1) A fast Fourier transform based hidden state cross module, which captures and pools $L^2$ semantic combinations in $\mathcal{O}(L\log L)$ time complexity. (2) A sparse attention matrix estimation module, which predicts dominant elements of an attention matrix based on the output of the previous hidden state cross module. By reparameterization and gradient truncation, FSAT successfully learned the index of dominant elements. The overall complexity about the sequence length is reduced from $\mathcal{O}(L^2)$ to $\mathcal{O}(L\log L)$. Extensive experiments (natural language, vision, and math) show that FSAT remarkably outperforms the standard multi-head attention and its variants in various long-sequence tasks with low computational costs, and achieves new state-of-the-art results on the Long Range Arena benchmark.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.19
all remaining optional fields: null
__index_level_0__: 29,685

entry_type: inproceedings
citation_key: geng-etal-2022-improving
title: Improving Personalized Explanation Generation through Visualization
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.20/
author: Geng, Shijie and Fu, Zuohui and Ge, Yingqiang and Li, Lei and de Melo, Gerard and Zhang, Yongfeng
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 244--255
abstract:
In modern recommender systems, there are usually comments or reviews from users that justify their ratings for different items. Trained on such textual corpus, explainable recommendation models learn to discover user interests and generate personalized explanations. Though able to provide plausible explanations, existing models tend to generate repeated sentences for different items or empty sentences with insufficient details. This begs an interesting question: can we immerse the models in a multimodal environment to gain proper awareness of real-world concepts and alleviate above shortcomings? To this end, we propose a visually-enhanced approach named METER with the help of visualization generation and text{--}image matching discrimination: the explainable recommendation model is encouraged to visualize what it refers to while incurring a penalty if the visualization is incongruent with the textual explanation. Experimental results and a manual assessment demonstrate that our approach can improve not only the text quality but also the diversity and explainability of the generated explanations.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.20
all remaining optional fields: null
__index_level_0__: 29,686

entry_type: inproceedings
citation_key: zhang-etal-2022-new
title: New Intent Discovery with Pre-training and Contrastive Learning
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.21/
author: Zhang, Yuwei and Zhang, Haode and Zhan, Li-Ming and Wu, Xiao-Ming and Lam, Albert
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 256--269
abstract:
New intent discovery aims to uncover novel intent categories from user utterances to expand the set of supported intent classes. It is a critical task for the development and service expansion of a practical dialogue system. Despite its importance, this problem remains under-explored in the literature. Existing approaches typically rely on a large amount of labeled utterances and employ pseudo-labeling methods for representation learning and clustering, which are label-intensive, inefficient, and inaccurate. In this paper, we provide new solutions to two important research questions for new intent discovery: (1) how to learn semantic utterance representations and (2) how to better cluster utterances. Particularly, we first propose a multi-task pre-training strategy to leverage rich unlabeled data along with external labeled data for representation learning. Then, we design a new contrastive loss to exploit self-supervisory signals in unlabeled data for clustering. Extensive experiments on three intent recognition benchmarks demonstrate the high effectiveness of our proposed method, which outperforms state-of-the-art methods by a large margin in both unsupervised and semi-supervised scenarios. The source code will be available at \url{https://github.com/zhang-yu-wei/MTP-CLNN}.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.21
all remaining optional fields: null
__index_level_0__: 29,687

entry_type: inproceedings
citation_key: davoodi-etal-2022-modeling
title: {M}odeling {U.S.} State-Level Policies by Extracting Winners and Losers from Legislative Texts
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.22/
author: Davoodi, Maryam and Waltenburg, Eric and Goldwasser, Dan
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 270--284
abstract:
Decisions on state-level policies have a deep effect on many aspects of our everyday life, such as health-care and education access. However, there is little understanding of how these policies and decisions are being formed in the legislative process. We take a data-driven approach by decoding the impact of legislation on relevant stakeholders (e.g., teachers in education bills) to understand legislators' decision-making process and votes. We build a new dataset for multiple US states that interconnects multiple sources of data including bills, stakeholders, legislators, and money donors. Next, we develop a textual graph-based model to embed and analyze state bills. Our model predicts winners/losers of bills and then utilizes them to better determine the legislative body`s vote breakdown according to demographic/ideological criteria, e.g., gender.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.22
all remaining optional fields: null
__index_level_0__: 29,688

entry_type: inproceedings
citation_key: ma-etal-2022-structural
title: Structural Characterization for Dialogue Disentanglement
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.23/
author: Ma, Xinbei and Zhang, Zhuosheng and Zhao, Hai
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 285--297
abstract:
Tangled multi-party dialogue contexts lead to challenges for dialogue reading comprehension, where multiple dialogue threads flow simultaneously within a common dialogue record, increasing difficulties in understanding the dialogue history for both human and machine. Previous studies mainly focus on utterance encoding methods with carefully designed features but pay inadequate attention to characteristic features of the structure of dialogues. We specially take structure factors into account and design a novel model for dialogue disentangling. Based on the fact that dialogues are constructed on successive participation and interactions between speakers, we model structural information of dialogues in two aspects: 1)speaker property that indicates whom a message is from, and 2) reference dependency that shows whom a message may refer to. The proposed method achieves new state-of-the-art on the Ubuntu IRC benchmark dataset and contributes to dialogue-related comprehension.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.23
all remaining optional fields: null
__index_level_0__: 29,689

entry_type: inproceedings
citation_key: zhu-etal-2022-multi
title: Multi-Party Empathetic Dialogue Generation: A New Task for Dialog Systems
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.24/
author: Zhu, Ling.Yu and Zhang, Zhengkun and Wang, Jun and Wang, Hongbin and Wu, Haiying and Yang, Zhenglu
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 298--307
abstract:
Empathetic dialogue assembles emotion understanding, feeling projection, and appropriate response generation. Existing work for empathetic dialogue generation concentrates on the two-party conversation scenario. Multi-party dialogues, however, are pervasive in reality. Furthermore, emotion and sensibility are typically confused; a refined empathy analysis is needed for comprehending fragile and nuanced human feelings. We address these issues by proposing a novel task called Multi-Party Empathetic Dialogue Generation in this study. Additionally, a Static-Dynamic model for Multi-Party Empathetic Dialogue Generation, SDMPED, is introduced as a baseline by exploring the static sensibility and dynamic emotion for the multi-party empathetic dialogue learning, the aspects that help SDMPED achieve the state-of-the-art performance.
journal, volume: null
doi: 10.18653/v1/2022.acl-long.24
all remaining optional fields: null
__index_level_0__: 29,690

entry_type: inproceedings
citation_key: tu-etal-2022-misc
title: {MISC}: A Mixed Strategy-Aware Model integrating {COMET} for Emotional Support Conversation
editor: Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
month: may
year: 2022
address: Dublin, Ireland
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.acl-long.25/
author: Tu, Quan and Li, Yanran and Cui, Jianwei and Wang, Bin and Wen, Ji-Rong and Yan, Rui
booktitle: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
pages: 308--319
abstract:
Applying existing methods to emotional support conversation{---}which provides valuable assistance to people who are in need{---}has two major limitations: (a) they generally employ a conversation-level emotion label, which is too coarse-grained to capture user's instant mental state; (b) most of them focus on expressing empathy in the response(s) rather than gradually reducing user's distress. To address the problems, we propose a novel model $\textbf{MISC}$, which firstly infers the user's fine-grained emotional status, and then responds skillfully using a mixture of strategy. Experimental results on the benchmark dataset demonstrate the effectiveness of our method and reveal the benefits of fine-grained emotion understanding as well as mixed-up strategy modeling.
null
null
10.18653/v1/2022.acl-long.25
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,691
inproceedings
du-etal-2022-glm
{GLM}: General Language Model Pretraining with Autoregressive Blank Infilling
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.26/
Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
320--335
There have been various types of pretraining architectures including autoencoding models (e.g., BERT), autoregressive models (e.g., GPT), and encoder-decoder models (e.g., T5). However, none of the pretraining frameworks performs the best for all tasks of three main categories including natural language understanding (NLU), unconditional generation, and conditional generation. We propose a General Language Model (GLM) based on autoregressive blank infilling to address this challenge. GLM improves blank filling pretraining by adding 2D positional encodings and allowing an arbitrary order to predict spans, which results in performance gains over BERT and T5 on NLU tasks. Meanwhile, GLM can be pretrained for different types of tasks by varying the number and lengths of blanks. On a wide range of tasks across NLU, conditional and unconditional generation, GLM outperforms BERT, T5, and GPT given the same model sizes and data, and achieves the best performance from a single pretrained model with 1.25{\texttimes} parameters of BERT Large, demonstrating its generalizability to different downstream tasks.
null
null
10.18653/v1/2022.acl-long.26
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,692
inproceedings
qi-etal-2022-quoter
{Q}uote{R}: A Benchmark of Quote Recommendation for Writing
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.27/
Qi, Fanchao and Yang, Yanhui and Yi, Jing and Cheng, Zhili and Liu, Zhiyuan and Sun, Maosong
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
336--348
It is very common to use quotations (quotes) to make our writings more elegant or convincing. To help people find appropriate quotes efficiently, the task of quote recommendation is presented, aiming to recommend quotes that fit the current context of writing. There have been various quote recommendation approaches, but they are evaluated on different unpublished datasets. To facilitate the research on this task, we build a large and fully open quote recommendation dataset called QuoteR, which comprises three parts including English, standard Chinese and classical Chinese. Any part of it is larger than previous unpublished counterparts. We conduct an extensive evaluation of existing quote recommendation methods on QuoteR. Furthermore, we propose a new quote recommendation model that significantly outperforms previous methods on all three parts of QuoteR. All the code and data of this paper can be obtained at \url{https://github.com/thunlp/QuoteR}.
null
null
10.18653/v1/2022.acl-long.27
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,693
inproceedings
gao-etal-2022-towards
Towards Comprehensive Patent Approval Predictions: Beyond Traditional Document Classification
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.28/
Gao, Xiaochen and Hou, Zhaoyi and Ning, Yifei and Zhao, Kewen and He, Beilei and Shang, Jingbo and Krishnan, Vish
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
349--372
Predicting the approval chance of a patent application is a challenging problem involving multiple facets. The most crucial facet is arguably the novelty {---} \textit{35 U.S. Code {\textsection} 102} rejects more recent applications that have very similar prior arts. Such novelty evaluations differ the patent approval prediction from conventional document classification {---} Successful patent applications may share similar writing patterns; however, too-similar newer applications would receive the opposite label, thus confusing standard document classifiers (e.g., BERT). To address this issue, we propose a novel framework that unifies the document classifier with handcrafted features, particularly time-dependent novelty scores. Specifically, we formulate the novelty scores by comparing each application with millions of prior arts using a hybrid of efficient filters and a neural bi-encoder. Moreover, we impose a new regularization term into the classification objective to enforce the monotonic change of approval prediction w.r.t. novelty scores. From extensive experiments on a large-scale USPTO dataset, we find that standard BERT fine-tuning can partially learn the correct relationship between novelty and approvals from inconsistent data. However, our time-dependent novelty features offer a boost on top of it. Also, our monotonic regularization, while shrinking the search space, can drive the optimizer to better local optima, yielding a further small performance gain.
null
null
10.18653/v1/2022.acl-long.28
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,694
inproceedings
heo-etal-2022-hypergraph
Hypergraph {T}ransformer: {W}eakly-Supervised Multi-hop Reasoning for Knowledge-based Visual Question Answering
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.29/
Heo, Yu-Jung and Kim, Eun-Sol and Choi, Woo Suk and Zhang, Byoung-Tak
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
373--390
Knowledge-based visual question answering (QA) aims to answer a question which requires visually-grounded external knowledge beyond image content itself. Answering complex questions that require multi-hop reasoning under weak supervision is considered a challenging problem since i) no supervision is given to the reasoning process and ii) high-order semantics of multi-hop knowledge facts need to be captured. In this paper, we introduce a concept of hypergraph to encode high-level semantics of a question and a knowledge base, and to learn high-order associations between them. The proposed model, Hypergraph Transformer, constructs a question hypergraph and a query-aware knowledge hypergraph, and infers an answer by encoding inter-associations between the two hypergraphs and intra-associations within each hypergraph. Extensive experiments on two knowledge-based visual QA and two knowledge-based textual QA datasets demonstrate the effectiveness of our method, especially for the multi-hop reasoning problem. Our source code is available at \url{https://github.com/yujungheo/kbvqa-public}.
null
null
10.18653/v1/2022.acl-long.29
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,695
inproceedings
li-etal-2022-cross-utterance
Cross-Utterance Conditioned {VAE} for Non-Autoregressive Text-to-Speech
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.30/
Li, Yang and Yu, Cheng and Sun, Guangzhi and Jiang, Hua and Sun, Fanglei and Zu, Weiqin and Wen, Ying and Yang, Yang and Wang, Jun
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
391--400
Modelling prosody variation is critical for synthesizing natural and expressive speech in end-to-end text-to-speech (TTS) systems. In this paper, a cross-utterance conditional VAE (CUC-VAE) is proposed to estimate a posterior probability distribution of the latent prosody features for each phoneme by conditioning on acoustic features, speaker information, and text features obtained from both past and future sentences. At inference time, instead of the standard Gaussian distribution used by VAE, CUC-VAE allows sampling from an utterance-specific prior distribution conditioned on cross-utterance information, which allows the prosody features generated by the TTS system to be related to the context and is more similar to how humans naturally produce prosody. The performance of CUC-VAE is evaluated via a qualitative listening test for naturalness, intelligibility and quantitative measurements, including word error rates and the standard deviation of prosody attributes. Experimental results on LJ-Speech and LibriTTS data show that the proposed CUC-VAE TTS system improves naturalness and prosody diversity with clear margins.
null
null
10.18653/v1/2022.acl-long.30
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,696
inproceedings
mireshghallah-etal-2022-mix
Mix and Match: Learning-free Controllable Text Generation using Energy Language Models
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.31/
Mireshghallah, Fatemehsadat and Goyal, Kartik and Berg-Kirkpatrick, Taylor
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
401--415
Recent work on controlled text generation has either required attribute-based fine-tuning of the base language model (LM), or has restricted the parameterization of the attribute discriminator to be compatible with the base autoregressive LM. In this work, we propose Mix and Match LM, a global score-based alternative for controllable text generation that combines arbitrary pre-trained black-box models for achieving the desired attributes in the generated text without involving any fine-tuning or structural assumptions about the black-box models. We interpret the task of controllable generation as drawing samples from an energy-based model whose energy values are a linear combination of scores from black-box models that are separately responsible for fluency, the control attribute, and faithfulness to any conditioning context. We use a Metropolis-Hastings sampling scheme to sample from this energy-based model using bidirectional context and global attribute features. We validate the effectiveness of our approach on various controlled generation and style-based text revision tasks by outperforming recently proposed methods that involve extra training, fine-tuning, or restrictive assumptions over the form of models.
null
null
10.18653/v1/2022.acl-long.31
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,697
inproceedings
ramesh-kashyap-etal-2022-different
So Different Yet So Alike! Constrained Unsupervised Text Style Transfer
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.32/
Ramesh Kashyap, Abhinav and Hazarika, Devamanyu and Kan, Min-Yen and Zimmermann, Roger and Poria, Soujanya
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
416--431
Automatic transfer of text between domains has become popular in recent times. One of its aims is to preserve the semantic content while adapting to the target domain. However, it does not explicitly maintain other attributes between the source and translated text: e.g., text length and descriptiveness. Maintaining constraints in transfer has several downstream applications, including data augmentation and debiasing. We introduce a method for such constrained unsupervised text style transfer by introducing two complementary losses to the generative adversarial network (GAN) family of models. Unlike the competing losses used in GANs, we introduce cooperative losses where the discriminator and the generator cooperate and reduce the same loss. The first is a contrastive loss and the second is a classification loss {---} aiming to regularize the latent space further and bring similar sentences closer together. We demonstrate that such training retains lexical, syntactic and domain-specific constraints between domains for multiple benchmark datasets, including ones where more than one attribute change. We show that the complementary cooperative losses improve text quality, according to both automated and human evaluation measures.
null
null
10.18653/v1/2022.acl-long.32
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,698
inproceedings
du-etal-2022-e
e-{CARE}: a New Dataset for Exploring Explainable Causal Reasoning
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.33/
Du, Li and Ding, Xiao and Xiong, Kai and Liu, Ting and Qin, Bing
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
432--446
Understanding causality has vital importance for various Natural Language Processing (NLP) applications. Beyond the labeled instances, conceptual explanations of the causality can provide deep understanding of the causal fact to facilitate the causal reasoning process. However, such explanation information still remains absent in existing causal reasoning resources. In this paper, we fill this gap by presenting a human-annotated explainable CAusal REasoning dataset (e-CARE), which contains over 20K causal reasoning questions, together with natural language formed explanations of the causal questions. Experimental results show that generating valid explanations for causal facts still remains especially challenging for the state-of-the-art models, and the explanation information can be helpful for promoting the accuracy and stability of causal reasoning models.
null
null
10.18653/v1/2022.acl-long.33
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,699
inproceedings
xu-etal-2022-fantastic
Fantastic Questions and Where to Find Them: {F}airytale{QA} {--} An Authentic Dataset for Narrative Comprehension
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.34/
Xu, Ying and Wang, Dakuo and Yu, Mo and Ritchie, Daniel and Yao, Bingsheng and Wu, Tongshuang and Zhang, Zheng and Li, Toby Jia-Jun and Bradford, Nora and Sun, Branda and Hoang, Tran Bao and Sang, Yisi and Hou, Yufang and Ma, Xiaojuan and Yang, Diyi and Peng, Nanyun and Yu, Zhou and Warschauer, Mark
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
447--460
Question answering (QA) is a fundamental means to facilitate assessment and training of narrative comprehension skills for both machines and young children, yet there is scarcity of high-quality QA datasets carefully designed to serve this purpose. In particular, existing datasets rarely distinguish fine-grained reading skills, such as the understanding of varying narrative elements. Drawing on the reading education research, we introduce FairytaleQA, a dataset focusing on narrative comprehension of kindergarten to eighth-grade students. Generated by educational experts based on an evidence-based theoretical framework, FairytaleQA consists of 10,580 explicit and implicit questions derived from 278 children-friendly stories, covering seven types of narrative elements or relations. Our dataset is valuable in two folds: First, we ran existing QA models on our dataset and confirmed that this annotation helps assess models' fine-grained learning skills. Second, the dataset supports question generation (QG) task in the education domain. Through benchmarking with QG models, we show that the QG model trained on FairytaleQA is capable of asking high-quality and more diverse questions.
null
null
10.18653/v1/2022.acl-long.34
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,700
inproceedings
li-xiong-2022-kafsp
{K}a{FSP}: Knowledge-Aware Fuzzy Semantic Parsing for Conversational Question Answering over a Large-Scale Knowledge Base
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.35/
Li, Junzhuo and Xiong, Deyi
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
461--473
In this paper, we study two issues of semantic parsing approaches to conversational question answering over a large-scale knowledge base: (1) The actions defined in grammar are not sufficient to handle uncertain reasoning common in real-world scenarios. (2) Knowledge base information is not well exploited and incorporated into semantic parsing. To mitigate the two issues, we propose a knowledge-aware fuzzy semantic parsing framework (KaFSP). It defines fuzzy comparison operations in the grammar system for uncertain reasoning based on the fuzzy set theory. In order to enhance the interaction between semantic parsing and knowledge base, we incorporate entity triples from the knowledge base into a knowledge-aware entity disambiguation module. Additionally, we propose a multi-label classification framework to not only capture correlations between entity types and relations but also detect knowledge base information relevant to the current utterance. Both enhancements are based on pre-trained language models. Experiments on a large-scale conversational question answering benchmark demonstrate that the proposed KaFSP achieves significant improvements over previous state-of-the-art models, setting new SOTA results on 8 out of 10 question types, gaining improvements of over 10{\%} F1 or accuracy on 3 question types, and improving overall F1 from 83.01{\%} to 85.33{\%}. The source code of KaFSP is available at \url{https://github.com/tjunlp-lab/KaFSP}.
null
null
10.18653/v1/2022.acl-long.35
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,701
inproceedings
huang-etal-2022-multilingual
Multilingual Knowledge Graph Completion with Self-Supervised Adaptive Graph Alignment
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.36/
Huang, Zijie and Li, Zheng and Jiang, Haoming and Cao, Tianyu and Lu, Hanqing and Yin, Bing and Subbian, Karthik and Sun, Yizhou and Wang, Wei
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
474--485
Predicting missing facts in a knowledge graph (KG) is crucial as modern KGs are far from complete. Due to labor-intensive human labeling, this phenomenon deteriorates when handling knowledge represented in various languages. In this paper, we explore multilingual KG completion, which leverages limited seed alignment as a bridge, to embrace the collective knowledge from multiple languages. However, language alignment used in prior works is still not fully exploited: (1) alignment pairs are treated equally to maximally push parallel entities to be close, which ignores KG capacity inconsistency; (2) seed alignment is scarce and new alignment identification is usually in a noisily unsupervised manner. To tackle these issues, we propose a novel self-supervised adaptive graph alignment (SS-AGA) method. Specifically, SS-AGA fuses all KGs as a whole graph by regarding alignment as a new edge type. As such, information propagation and noise influence across KGs can be adaptively controlled via relation-aware attention weights. Meanwhile, SS-AGA features a new pair generator that dynamically captures potential alignment pairs in a self-supervised paradigm. Extensive experiments on both the public multilingual DBPedia KG and newly-created industrial multilingual E-commerce KG empirically demonstrate the effectiveness of SS-AGA.
null
null
10.18653/v1/2022.acl-long.36
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,702
inproceedings
guo-etal-2022-modeling
Modeling Hierarchical Syntax Structure with Triplet Position for Source Code Summarization
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.37/
Guo, Juncai and Liu, Jin and Wan, Yao and Li, Li and Zhou, Pingyi
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
486--500
Automatic code summarization, which aims to describe the source code in natural language, has become an essential task in software maintenance. Our fellow researchers have attempted to achieve such a purpose through various machine learning-based approaches. One key challenge keeping these approaches from being practical lies in the lacking of retaining the semantic structure of source code, which has unfortunately been overlooked by the state-of-the-art. Existing approaches resort to representing the syntax structure of code by modeling the Abstract Syntax Trees (ASTs). However, the hierarchical structures of ASTs have not been well explored. In this paper, we propose CODESCRIBE to model the hierarchical syntax structure of code by introducing a novel triplet position for code summarization. Specifically, CODESCRIBE leverages the graph neural network and Transformer to preserve the structural and sequential information of code, respectively. In addition, we propose a pointer-generator network that pays attention to both the structure and sequential tokens of code for a better summary generation. Experiments on two real-world datasets in Java and Python demonstrate the effectiveness of our proposed approach when compared with several state-of-the-art baselines.
null
null
10.18653/v1/2022.acl-long.37
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,703
inproceedings
zheng-etal-2022-fewnlu
{F}ew{NLU}: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.38/
Zheng, Yanan and Zhou, Jing and Qian, Yujie and Ding, Ming and Liao, Chonghua and Jian, Li and Salakhutdinov, Ruslan and Tang, Jie and Ruder, Sebastian and Yang, Zhilin
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
501--516
The few-shot natural language understanding (NLU) task has attracted much recent attention. However, prior methods have been evaluated under a disparate set of protocols, which hinders fair comparison and measuring the progress of the field. To address this issue, we introduce an evaluation framework that improves previous evaluation procedures in three key aspects, i.e., test performance, dev-test correlation, and stability. Under this new evaluation framework, we re-evaluate several state-of-the-art few-shot methods for NLU tasks. Our framework reveals new insights: (1) both the absolute performance and relative gap of the methods were not accurately estimated in prior literature; (2) no single method dominates most tasks with consistent performance; (3) improvements of some methods diminish with a larger pretrained model; and (4) gains from different methods are often complementary and the best combined model performs close to a strong fully-supervised baseline. We open-source our toolkit, FewNLU, that implements our evaluation framework along with a number of state-of-the-art methods.
null
null
10.18653/v1/2022.acl-long.38
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,704
inproceedings
zhang-etal-2022-learn
Learn to Adapt for Generalized Zero-Shot Text Classification
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.39/
Zhang, Yiwen and Yuan, Caixia and Wang, Xiaojie and Bai, Ziwei and Liu, Yongbin
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
517--527
Generalized zero-shot text classification aims to classify textual instances from both previously seen classes and incrementally emerging unseen classes. Most existing methods generalize poorly since the learned parameters are only optimal for seen classes rather than for both classes, and the parameters keep stationary in predicting procedures. To address these challenges, we propose a novel Learn to Adapt (LTA) network using a variant meta-learning framework. Specifically, LTA trains an adaptive classifier by using both seen and virtual unseen classes to simulate a generalized zero-shot learning (GZSL) scenario in accordance with the test time, and simultaneously learns to calibrate the class prototypes and sample representations to make the learned parameters adaptive to incoming unseen classes. We claim that the proposed model is capable of representing all prototypes and samples from both classes to a more consistent distribution in a global space. Extensive experiments on five text classification datasets show that our model outperforms several competitive previous approaches by large margins. The code and the whole datasets are available at \url{https://github.com/Quareia/LTA}.
null
null
10.18653/v1/2022.acl-long.39
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,705
inproceedings
yang-etal-2022-tableformer
{T}able{F}ormer: Robust Transformer Modeling for Table-Text Encoding
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.40/
Yang, Jingfeng and Gupta, Aditya and Upadhyay, Shyam and He, Luheng and Goel, Rahul and Paul, Shachi
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
528--537
Understanding tables is an important aspect of natural language understanding. Existing models for table understanding require linearization of the table structure, where row or column order is encoded as an unwanted bias. Such spurious biases make the model vulnerable to row and column order perturbations. Additionally, prior work has not thoroughly modeled the table structures or table-text alignments, hindering the table-text understanding ability. In this work, we propose a robust and structurally aware table-text encoding architecture TableFormer, where tabular structural biases are incorporated completely through learnable attention biases. TableFormer is (1) strictly invariant to row and column orders, and, (2) could understand tables better due to its tabular inductive biases. Our evaluations showed that TableFormer outperforms strong baselines in all settings on SQA, WTQ and TabFact table reasoning datasets, and achieves state-of-the-art performance on SQA, especially when facing answer-invariant row and column order perturbations (6{\%} improvement over the best baseline), because previous SOTA models' performance drops by 4{\%} - 6{\%} when facing such perturbations while TableFormer is not affected.
null
null
10.18653/v1/2022.acl-long.40
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,706
inproceedings
xu-etal-2022-perceiving
Perceiving the World: Question-guided Reinforcement Learning for Text-based Games
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.41/
Xu, Yunqiu and Fang, Meng and Chen, Ling and Du, Yali and Zhou, Joey and Zhang, Chengqi
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
538--560
Text-based games provide an interactive way to study natural language processing. While deep reinforcement learning has shown effectiveness in developing the game playing agent, the low sample efficiency and the large action space remain to be the two major challenges that hinder the DRL from being applied in the real world. In this paper, we address the challenges by introducing world-perceiving modules, which automatically decompose tasks and prune actions by answering questions about the environment. We then propose a two-phase training framework to decouple language learning from reinforcement learning, which further improves the sample efficiency. The experimental results show that the proposed method significantly improves the performance and sample efficiency. Besides, it shows robustness against compound error and limited pre-training data.
null
null
10.18653/v1/2022.acl-long.41
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,707
inproceedings
jia-etal-2022-neural
Neural Label Search for Zero-Shot Multi-Lingual Extractive Summarization
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.42/
Jia, Ruipeng and Zhang, Xingxing and Cao, Yanan and Lin, Zheng and Wang, Shi and Wei, Furu
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
561--570
In zero-shot multilingual extractive text summarization, a model is typically trained on English summarization dataset and then applied on summarization datasets of other languages. Given English gold summaries and documents, sentence-level labels for extractive summarization are usually generated using heuristics. However, these monolingual labels created on English datasets may not be optimal on datasets of other languages, for that there is the syntactic or semantic discrepancy between different languages. In this way, it is possible to translate the English dataset to other languages and obtain different sets of labels again using heuristics. To fully leverage the information of these different sets of labels, we propose NLSSum (Neural Label Search for Summarization), which jointly learns hierarchical weights for these different sets of labels together with our summarization model. We conduct multilingual zero-shot summarization experiments on MLSUM and WikiLingua datasets, and we achieve state-of-the-art results using both human and automatic evaluations across these two datasets.
null
null
10.18653/v1/2022.acl-long.42
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,708
inproceedings
wang-etal-2022-shot
Few-Shot Class-Incremental Learning for Named Entity Recognition
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.43/
Wang, Rui and Yu, Tong and Zhao, Handong and Kim, Sungchul and Mitra, Subrata and Zhang, Ruiyi and Henao, Ricardo
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
571--582
Previous work of class-incremental learning for Named Entity Recognition (NER) relies on the assumption that there exists abundance of labeled data for the training of new classes. In this work, we study a more challenging but practical problem, \textit{i.e.}, few-shot class-incremental learning for NER, where an NER model is trained with only few labeled samples of the new classes, without forgetting knowledge of the old ones. To alleviate the problem of catastrophic forgetting in few-shot class-incremental learning, we reconstruct synthetic training data of the old classes using the trained NER model, augmenting the training of new classes. We further develop a framework that distills from the existing model with both synthetic data, and real data from the current training set. Experimental results show that our approach achieves significant improvements over existing baselines.
null
null
10.18653/v1/2022.acl-long.43
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,709
inproceedings
zhao-etal-2022-improving
Improving Meta-learning for Low-resource Text Classification and Generation via Memory Imitation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.44/
Zhao, Yingxiu and Tian, Zhiliang and Yao, Huaxiu and Zheng, Yinhe and Lee, Dongkyu and Song, Yiping and Sun, Jian and Zhang, Nevin
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
583--595
Building models of natural language processing (NLP) is challenging in low-resource scenarios where limited data are available. Optimization-based meta-learning algorithms achieve promising results in low-resource scenarios by adapting a well-generalized model initialization to handle new tasks. Nonetheless, these approaches suffer from the memorization overfitting issue, where the model tends to memorize the meta-training tasks while ignoring support sets when adapting to new tasks. To address this issue, we propose a memory imitation meta-learning (MemIML) method that enhances the model's reliance on support sets for task adaptation. Specifically, we introduce a task-specific memory module to store support set information and construct an imitation module to force query sets to imitate the behaviors of support sets stored in the memory. A theoretical analysis is provided to prove the effectiveness of our method, and empirical results also demonstrate that our method outperforms competitive baselines on both text classification and generation tasks.
null
null
10.18653/v1/2022.acl-long.44
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,710
inproceedings
bandel-etal-2022-quality
Quality Controlled Paraphrase Generation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.45/
Bandel, Elron and Aharonov, Ranit and Shmueli-Scheuer, Michal and Shnayderman, Ilya and Slonim, Noam and Ein-Dor, Liat
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
596--609
Paraphrase generation has been widely used in various downstream tasks. Most tasks benefit mainly from high quality paraphrases, namely those that are semantically similar to, yet linguistically diverse from, the original sentence. Generating high-quality paraphrases is challenging as it becomes increasingly hard to preserve meaning as linguistic diversity increases. Recent works achieve nice results by controlling specific aspects of the paraphrase, such as its syntactic tree. However, they do not allow to directly control the quality of the generated paraphrase, and suffer from low flexibility and scalability. Here we propose QCPG, a quality-guided controlled paraphrase generation model, that allows directly controlling the quality dimensions. Furthermore, we suggest a method that given a sentence, identifies points in the quality control space that are expected to yield optimal generated paraphrases. We show that our method is able to generate paraphrases which maintain the original meaning while achieving higher diversity than the uncontrolled baseline. The models, the code, and the data can be found in \url{https://github.com/IBM/quality-controlled-paraphrase-generation}.
null
null
10.18653/v1/2022.acl-long.45
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,711
inproceedings
he-yiu-2022-controllable
Controllable Dictionary Example Generation: Generating Example Sentences for Specific Targeted Audiences
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.46/
He, Xingwei and Yiu, Siu Ming
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
610--627
Example sentences for targeted words in a dictionary play an important role to help readers understand the usage of words. Traditionally, example sentences in a dictionary are usually created by linguistics experts, which are labor-intensive and knowledge-intensive. In this paper, we introduce the problem of dictionary example sentence generation, aiming to automatically generate dictionary example sentences for targeted words according to the corresponding definitions. This task is challenging especially for polysemous words, because the generated sentences need to reflect different usages and meanings of these targeted words. Targeted readers may also have different backgrounds and educational levels. It is essential to generate example sentences that can be understandable for different backgrounds and levels of audiences. To solve these problems, we propose a controllable target-word-aware model for this task. Our proposed model can generate reasonable examples for targeted words, even for polysemous words. In addition, our model allows users to provide explicit control over attributes related to readability, such as length and lexical complexity, thus generating suitable examples for targeted audiences. Automatic and human evaluations on the Oxford dictionary dataset show that our model can generate suitable examples for targeted words with specific definitions while meeting the desired readability.
null
null
10.18653/v1/2022.acl-long.46
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,712
inproceedings
nagoudi-etal-2022-arat5
{A}ra{T}5: Text-to-Text Transformers for {A}rabic Language Generation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.47/
Nagoudi, El Moatez Billah and Elmadany, AbdelRahim and Abdul-Mageed, Muhammad
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
628--647
Transfer learning with a unified Transformer framework (T5) that converts all language problems into a text-to-text format was recently proposed as a simple and effective transfer learning approach. Although a multilingual version of the T5 model (mT5) was also introduced, it is not clear how well it can fare on non-English tasks involving diverse data. To investigate this question, we apply mT5 on a language with a wide variety of dialects{--}Arabic. For evaluation, we introduce a novel benchmark for ARabic language GENeration (ARGEN), covering seven important tasks. For model comparison, we pre-train three powerful Arabic T5-style models and evaluate them on ARGEN. Although pre-trained with {\textasciitilde}49 less data, our new models perform significantly better than mT5 on all ARGEN tasks (in 52 out of 59 test sets) and set several new SOTAs. Our models also establish new SOTA on the recently-proposed, large Arabic language understanding evaluation benchmark ARLUE (Abdul-Mageed et al., 2021). Our new models are publicly available. We also link to ARGEN datasets through our repository: \url{https://github.com/UBC-NLP/araT5}.
null
null
10.18653/v1/2022.acl-long.47
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,713
inproceedings
feng-etal-2022-legal
Legal Judgment Prediction via Event Extraction with Constraints
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.48/
Feng, Yi and Li, Chuanyi and Ng, Vincent
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
648--664
While significant progress has been made on the task of Legal Judgment Prediction (LJP) in recent years, the incorrect predictions made by SOTA LJP models can be attributed in part to their failure to (1) locate the key event information that determines the judgment, and (2) exploit the cross-task consistency constraints that exist among the subtasks of LJP. To address these weaknesses, we propose EPM, an Event-based Prediction Model with constraints, which surpasses existing SOTA models in performance on a standard LJP dataset.
null
null
10.18653/v1/2022.acl-long.48
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,714
inproceedings
kumar-2022-answer
Answer-level Calibration for Free-form Multiple Choice Question Answering
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.49/
Kumar, Sawan
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
665--679
Pre-trained language models have recently shown that training on large corpora using the language modeling objective enables few-shot and zero-shot capabilities on a variety of NLP tasks, including commonsense reasoning tasks. This is achieved using text interactions with the model, usually by posing the task as a natural language text completion problem. While using language model probabilities to obtain task specific scores has been generally useful, it often requires task-specific heuristics such as length normalization, or probability calibration. In this work, we consider the question answering format, where we need to choose from a set of (free-form) textual choices of unspecified lengths given a context. We present ALC (Answer-Level Calibration), where our main suggestion is to model context-independent biases in terms of the probability of a choice without the associated context and to subsequently remove it using an unsupervised estimate of similarity with the full context. We show that our unsupervised answer-level calibration consistently improves over or is competitive with baselines using standard evaluation metrics on a variety of tasks including commonsense reasoning tasks. Further, we show that popular datasets potentially favor models biased towards easy cues which are available independent of the context. We analyze such biases using an associated F1-score. Our analysis indicates that answer-level calibration is able to remove such biases and leads to a more robust measure of model capability.
null
null
10.18653/v1/2022.acl-long.49
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,715
inproceedings
dong-etal-2022-learning
Learning When to Translate for Streaming Speech
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.50/
Dong, Qian and Zhu, Yaoming and Wang, Mingxuan and Li, Lei
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
680--694
How to find proper moments to generate partial sentence translation given a streaming speech input? Existing approaches waiting-and-translating for a fixed duration often break the acoustic units in speech, since the boundaries between acoustic units in speech are not even. In this paper, we propose MoSST, a simple yet effective method for translating streaming speech content. Given a usually long speech sequence, we develop an efficient monotonic segmentation module inside an encoder-decoder model to accumulate acoustic information incrementally and detect proper speech unit boundaries for the input in speech translation task. Experiments on multiple translation directions of the MuST-C dataset show that MoSST outperforms existing methods and achieves the best trade-off between translation quality (BLEU) and latency. Our code is available at \url{https://github.com/dqqcasia/mosst}.
null
null
10.18653/v1/2022.acl-long.50
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,716
inproceedings
yang-etal-2022-compact
Compact Token Representations with Contextual Quantization for Efficient Document Re-ranking
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.51/
Yang, Yingrui and Qiao, Yifan and Yang, Tao
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
695--707
Transformer based re-ranking models can achieve high search relevance through context- aware soft matching of query tokens with document tokens. To alleviate runtime complexity of such inference, previous work has adopted a late interaction architecture with pre-computed contextual token representations at the cost of a large online storage. This paper proposes contextual quantization of token embeddings by decoupling document-specific and document-independent ranking contributions during codebook-based compression. This allows effective online decompression and embedding composition for better search relevance. This paper presents an evaluation of the above compact token representation model in terms of relevance and space efficiency.
null
null
10.18653/v1/2022.acl-long.51
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,717
inproceedings
choi-etal-2022-early
Early Stopping Based on Unlabeled Samples in Text Classification
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.52/
Choi, HongSeok and Choi, Dongha and Lee, Hyunju
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
708--718
Early stopping, which is widely used to prevent overfitting, is generally based on a separate validation set. However, in low resource settings, validation-based stopping can be risky because a small validation set may not be sufficiently representative, and the reduction in the number of samples by validation split may result in insufficient samples for training. In this study, we propose an early stopping method that uses unlabeled samples. The proposed method is based on confidence and class distribution similarities. To further improve the performance, we present a calibration method to better estimate the class distribution of the unlabeled samples. The proposed method is advantageous because it does not require a separate validation set and provides a better stopping point by using a large unlabeled set. Extensive experiments are conducted on five text classification datasets and several stop-methods are compared. Our results show that the proposed model even performs better than using an additional validation set as well as the existing stop-methods, in both balanced and imbalanced data settings. Our code is available at \url{https://github.com/DMCB-GIST/BUS-stop}.
null
null
10.18653/v1/2022.acl-long.52
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,718
inproceedings
chen-etal-2022-meta
Meta-learning via Language Model In-context Tuning
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.53/
Chen, Yanda and Zhong, Ruiqi and Zha, Sheng and Karypis, George and He, He
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
719--730
The goal of meta-learning is to learn to adapt to a new task with only a few labeled examples. Inspired by the recent progress in large language models, we propose $\textit{in-context tuning}$ (ICT), which recasts task adaptation and prediction as a simple sequence prediction problem: to form the input sequence, we concatenate the task instruction, labeled in-context examples, and the target input to predict; to meta-train the model to learn from in-context examples, we fine-tune a pre-trained language model (LM) to predict the target label given the input sequence on a collection of tasks.We benchmark our method on two collections of text classification tasks: LAMA and BinaryClfs. Compared to MAML which adapts the model through gradient descent, our method leverages the inductive bias of pre-trained LMs to perform pattern matching, and outperforms MAML by an absolute 6{\%} average AUC-ROC score on BinaryClfs, gaining more advantage with increasing model size. Compared to non-fine-tuned in-context learning (i.e. prompting a raw LM), in-context tuning meta-trains the model to learn from in-context examples. On BinaryClfs, ICT improves the average AUC-ROC score by an absolute 10{\%}, and reduces the variance due to example ordering by 6x and example choices by 2x.
null
null
10.18653/v1/2022.acl-long.53
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,719
inproceedings
yao-etal-2022-ais
It is {AI}'s Turn to Ask Humans a Question: Question-Answer Pair Generation for Children's Story Books
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.54/
Yao, Bingsheng and Wang, Dakuo and Wu, Tongshuang and Zhang, Zheng and Li, Toby Jia-Jun and Yu, Mo and Xu, Ying
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
731--744
Existing question answering (QA) techniques are created mainly to answer questions asked by humans. But in educational applications, teachers often need to decide what questions they should ask, in order to help students to improve their narrative understanding capabilities. We design an automated question-answer generation (QAG) system for this education scenario: given a story book at the kindergarten to eighth-grade level as input, our system can automatically generate QA pairs that are capable of testing a variety of dimensions of a student's comprehension skills. Our proposed QAG model architecture is demonstrated using a new expert-annotated FairytaleQA dataset, which has 278 child-friendly storybooks with 10,580 QA pairs. Automatic and human evaluations show that our model outperforms state-of-the-art QAG baseline systems. On top of our QAG system, we also start to build an interactive story-telling application for the future real-world deployment in this educational scenario.
null
null
10.18653/v1/2022.acl-long.54
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,720
inproceedings
zhang-etal-2022-prompt
Prompt-Based Rule Discovery and Boosting for Interactive Weakly-Supervised Learning
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.55/
Zhang, Rongzhi and Yu, Yue and Shetty, Pranav and Song, Le and Zhang, Chao
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
745--758
Weakly-supervised learning (WSL) has shown promising results in addressing label scarcity on many NLP tasks, but manually designing a comprehensive, high-quality labeling rule set is tedious and difficult. We study interactive weakly-supervised learning{---}the problem of iteratively and automatically discovering novel labeling rules from data to improve the WSL model. Our proposed model, named PRBoost, achieves this goal via iterative prompt-based rule discovery and model boosting. It uses boosting to identify large-error instances and discovers candidate rules from them by prompting pre-trained LMs with rule templates. The candidate rules are judged by human experts, and the accepted rules are used to generate complementary weak labels and strengthen the current model. Experiments on four tasks show PRBoost outperforms state-of-the-art WSL baselines up to 7.1{\%}, and bridges the gaps with fully supervised models.
null
null
10.18653/v1/2022.acl-long.55
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,721
inproceedings
kobayashi-etal-2022-constrained
Constrained Multi-Task Learning for Bridging Resolution
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.56/
Kobayashi, Hideo and Hou, Yufang and Ng, Vincent
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
759--770
We examine the extent to which supervised bridging resolvers can be improved without employing additional labeled bridging data by proposing a novel constrained multi-task learning framework for bridging resolution, within which we (1) design cross-task consistency constraints to guide the learning process; (2) pre-train the entity coreference model in the multi-task framework on the large amount of publicly available coreference data; and (3) integrate prior knowledge encoded in rule-based resolvers. Our approach achieves state-of-the-art results on three standard evaluation corpora.
null
null
10.18653/v1/2022.acl-long.56
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,722
inproceedings
ghazarian-etal-2022-deam
{DEAM}: Dialogue Coherence Evaluation using {AMR}-based Semantic Manipulations
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.57/
Ghazarian, Sarik and Wen, Nuan and Galstyan, Aram and Peng, Nanyun
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
771--785
Automatic evaluation metrics are essential for the rapid development of open-domain dialogue systems as they facilitate hyper-parameter tuning and comparison between models. Although recently proposed trainable conversation-level metrics have shown encouraging results, the quality of the metrics is strongly dependent on the quality of training data. Prior works mainly resort to heuristic text-level manipulations (e.g. utterances shuffling) to bootstrap incoherent conversations (negative examples) from coherent dialogues (positive examples). Such approaches are insufficient to appropriately reflect the incoherence that occurs in interactions between advanced dialogue models and humans. To tackle this problem, we propose DEAM, a Dialogue coherence Evaluation metric that relies on Abstract Meaning Representation (AMR) to apply semantic-level Manipulations for incoherent (negative) data generation. AMRs naturally facilitate the injection of various types of incoherence sources, such as coreference inconsistency, irrelevancy, contradictions, and decreased engagement, at the semantic level, thus resulting in more natural incoherent samples. Our experiments show that DEAM achieves higher correlations with human judgments compared to baseline methods on several dialog datasets by significant margins. We also show that DEAM can distinguish between coherent and incoherent dialogues generated by baseline manipulations, whereas those baseline models cannot detect incoherent examples generated by DEAM. Our results demonstrate the potential of AMR-based semantic manipulations for natural negative example generation.
null
null
10.18653/v1/2022.acl-long.57
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,723
inproceedings
cao-wang-2022-hibrids
{HIBRIDS}: Attention with Hierarchical Biases for Structure-aware Long Document Summarization
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.58/
Cao, Shuyang and Wang, Lu
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
786--807
Document structure is critical for efficient information consumption. However, it is challenging to encode it efficiently into the modern Transformer architecture. In this work, we present HIBRIDS, which injects Hierarchical Biases foR Incorporating Document Structure into attention score calculation. We further present a new task, hierarchical question-summary generation, for summarizing salient content in the source document into a hierarchy of questions and summaries, where each follow-up question inquires about the content of its parent question-summary pair. We also annotate a new dataset with 6,153 question-summary hierarchies labeled on government reports. Experiment results show that our model produces better question-summary hierarchies than comparisons on both hierarchy quality and content coverage, a finding also echoed by human judges. Additionally, our model improves the generation of long-form summaries from long government reports and Wikipedia articles, as measured by ROUGE scores.
null
null
10.18653/v1/2022.acl-long.58
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,724
inproceedings
zhang-etal-2022-de
De-Bias for Generative Extraction in Unified {NER} Task
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.59/
Zhang, Shuai and Shen, Yongliang and Tan, Zeqi and Wu, Yiquan and Lu, Weiming
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
808--818
Named entity recognition (NER) is a fundamental task to recognize specific types of entities from a given sentence. Depending on how the entities appear in the sentence, it can be divided into three subtasks, namely, Flat NER, Nested NER, and Discontinuous NER. Among the existing approaches, only the generative model can be uniformly adapted to these three subtasks. However, when the generative model is applied to NER, its optimization objective is not consistent with the task, which makes the model vulnerable to the incorrect biases. In this paper, we analyze the incorrect biases in the generation process from a causality perspective and attribute them to two confounders: pre-context confounder and entity-order confounder. Furthermore, we design Intra- and Inter-entity Deconfounding Data Augmentation methods to eliminate the above confounders according to the theory of backdoor adjustment. Experiments show that our method can improve the performance of the generative NER model in various datasets.
null
null
10.18653/v1/2022.acl-long.59
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,725
inproceedings
sorensen-etal-2022-information
An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.60/
Sorensen, Taylor and Robinson, Joshua and Rytting, Christopher and Shaw, Alexander and Rogers, Kyle and Delorey, Alexia and Khalil, Mahmoud and Fulda, Nancy and Wingate, David
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
819--862
Pre-trained language models derive substantial linguistic and factual knowledge from the massive corpora on which they are trained, and prompt engineering seeks to align these models to specific tasks. Unfortunately, existing prompt engineering methods require significant amounts of labeled data, access to model parameters, or both. We introduce a new method for selecting prompt templates \textit{without labeled examples} and \textit{without direct access to the model}. Specifically, over a set of candidate templates, we choose the template that maximizes the mutual information between the input and the corresponding model output. Across 8 datasets representing 7 distinct NLP tasks, we show that when a template has high mutual information, it also has high accuracy on the task. On the largest model, selecting prompts with our method gets 90{\%} of the way from the average prompt accuracy to the best prompt accuracy and requires no ground truth labels.
null
null
10.18653/v1/2022.acl-long.60
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,726
inproceedings
wang-etal-2022-expanding
Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.61/
Wang, Xinyi and Ruder, Sebastian and Neubig, Graham
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
863--877
The performance of multilingual pretrained models is highly dependent on the availability of monolingual or parallel text present in a target language. Thus, the majority of the world`s languages cannot benefit from recent progress in NLP as they have no or limited textual data. To expand possibilities of using NLP technology in these under-represented languages, we systematically study strategies that relax the reliance on conventional language resources through the use of bilingual lexicons, an alternative resource with much better language coverage. We analyze different strategies to synthesize textual or labeled data using lexicons, and how this data can be combined with monolingual or parallel text when available. For 19 under-represented languages across 3 tasks, our methods lead to consistent improvements of up to 5 and 15 points with and without extra monolingual text respectively. Overall, our study highlights how NLP methods can be adapted to thousands more languages that are under-served by current technology.
null
null
10.18653/v1/2022.acl-long.61
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,727
inproceedings
feng-etal-2022-language
Language-agnostic {BERT} Sentence Embedding
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.62/
Feng, Fangxiaoyu and Yang, Yinfei and Cer, Daniel and Arivazhagan, Naveen and Wang, Wei
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
878--891
While BERT is an effective method for learning monolingual sentence embeddings for semantic similarity and embedding-based transfer learning, BERT-based cross-lingual sentence embeddings have yet to be explored. We systematically investigate methods for learning multilingual sentence embeddings by combining the best methods for learning monolingual and cross-lingual representations including: masked language modeling (MLM), translation language modeling (TLM), dual encoder translation ranking, and additive margin softmax. We show that introducing a pre-trained multilingual language model dramatically reduces the amount of parallel training data required to achieve good performance by 80{\%}. Composing the best of these methods produces a model that achieves 83.7{\%} bi-text retrieval accuracy over 112 languages on Tatoeba, well above the 65.5{\%} achieved by LASER, while still performing competitively on monolingual transfer learning benchmarks. Parallel data mined from CommonCrawl using our best model is shown to train competitive NMT models for en-zh and en-de. We publicly release our best multilingual sentence embedding model for 109+ languages at \url{https://tfhub.dev/google/LaBSE}.
null
null
10.18653/v1/2022.acl-long.62
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,728
inproceedings
wan-etal-2022-nested
Nested Named Entity Recognition with Span-level Graphs
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.63/
Wan, Juncheng and Ru, Dongyu and Zhang, Weinan and Yu, Yong
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
892--903
Span-based methods with the neural networks backbone have great potential for the nested named entity recognition (NER) problem. However, they face problems such as degenerating when positive instances and negative instances largely overlap. Besides, the generalization ability matters a lot in nested NER, as a large proportion of entities in the test set hardly appear in the training set. In this work, we try to improve the span representation by utilizing retrieval-based span-level graphs, connecting spans and entities in the training data based on $n$-gram features. Specifically, we build the entity-entity graph and span-entity graph globally based on $n$-gram similarity to integrate the information of similar neighbor entities into the span representation. To evaluate our method, we conduct experiments on three common nested NER datasets, ACE2004, ACE2005, and GENIA datasets. Experimental results show that our method achieves general improvements on all three benchmarks (+$0.30 \sim 0.85$ micro-F1), and obtains special superiority on low frequency entities (+$0.56 \sim 2.08$ recall).
null
null
10.18653/v1/2022.acl-long.63
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,729
inproceedings
luo-etal-2022-cogtaskonomy
{C}og{T}askonomy: Cognitively Inspired Task Taxonomy Is Beneficial to Transfer Learning in {NLP}
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.64/
Luo, Yifei and Xu, Minghui and Xiong, Deyi
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
904--920
Is there a principle to guide transfer learning across tasks in natural language processing (NLP)? Taskonomy (Zamir et al., 2018) finds that a structure exists among visual tasks, as a principle underlying transfer learning for them. In this paper, we propose a cognitively inspired framework, CogTaskonomy, to learn taxonomy for NLP tasks. The framework consists of Cognitive Representation Analytics (CRA) and Cognitive-Neural Mapping (CNM). The former employs Representational Similarity Analysis, which is commonly used in computational neuroscience to find a correlation between brain-activity measurement and computational modeling, to estimate task similarity with task-specific sentence representations. The latter learns to detect task relations by projecting neural representations from NLP models to cognitive signals (i.e., fMRI voxels). Experiments on 12 NLP tasks, where BERT/TinyBERT are used as the underlying models for transfer learning, demonstrate that the proposed CogTaskonomy is able to guide transfer learning, achieving performance competitive to the Analytic Hierarchy Process (Saaty, 1987) used in visual Taskonomy (Zamir et al., 2018) but without requiring exhaustive pairwise $O(m^2)$ task transferring. Analyses further discover that CNM is capable of learning model-agnostic task taxonomy.
null
null
10.18653/v1/2022.acl-long.64
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,730
inproceedings
su-etal-2022-rocbert
{R}o{CB}ert: Robust {C}hinese Bert with Multimodal Contrastive Pretraining
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.65/
Su, Hui and Shi, Weiwei and Shen, Xiaoyu and Xiao, Zhou and Ji, Tuo and Fang, Jiarui and Zhou, Jie
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
921--931
Large-scale pretrained language models have achieved SOTA results on NLP tasks. However, they have been shown vulnerable to adversarial attacks especially for logographic languages like Chinese. In this work, we propose RoCBert: a pretrained Chinese Bert that is robust to various forms of adversarial attacks like word perturbation, synonyms, typos, etc. It is pretrained with the contrastive learning objective which maximizes the label consistency under different synthesized adversarial examples. The model takes as input multimodal information including the semantic, phonetic and visual features. We show all these features are important to the model robustness since the attack can be performed in all the three forms. Across 5 Chinese NLU tasks, RoCBert outperforms strong baselines under three blackbox adversarial algorithms without sacrificing the performance on clean test set. It also performs the best in the toxic content detection task under human-made attacks.
null
null
10.18653/v1/2022.acl-long.65
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,731
inproceedings
dong-etal-2022-premise
Premise-based Multimodal Reasoning: Conditional Inference on Joint Textual and Visual Clues
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.66/
Dong, Qingxiu and Qin, Ziwei and Xia, Heming and Feng, Tian and Tong, Shoujie and Meng, Haoran and Xu, Lin and Wei, Zhongyu and Zhan, Weidong and Chang, Baobao and Li, Sujian and Liu, Tianyu and Sui, Zhifang
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
932--946
It is a common practice for recent works in vision language cross-modal reasoning to adopt a binary or multi-choice classification formulation taking as input a set of source image(s) and textual query. In this work, we take a sober look at such an {\textquotedblleft}unconditional{\textquotedblright} formulation in the sense that no prior knowledge is specified with respect to the source image(s). Inspired by the designs of both visual commonsense reasoning and natural language inference tasks, we propose a new task termed {\textquotedblleft}Premise-based Multi-modal Reasoning{\textquotedblright} (PMR) where a textual premise is the background presumption on each source image. The PMR dataset contains 15,360 manually annotated samples which are created by a multi-phase crowd-sourcing process. With selected high-quality movie screenshots and human-curated premise templates from 6 pre-defined categories, we ask crowd-source workers to write one true hypothesis and three distractors (4 choices) given the premise and image through a cross-check procedure.
null
null
10.18653/v1/2022.acl-long.66
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,732
inproceedings
shen-etal-2022-parallel
Parallel Instance Query Network for Named Entity Recognition
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.67/
Shen, Yongliang and Wang, Xiaobin and Tan, Zeqi and Xu, Guangwei and Xie, Pengjun and Huang, Fei and Lu, Weiming and Zhuang, Yueting
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
947--961
Named entity recognition (NER) is a fundamental task in natural language processing. Recent works treat named entity recognition as a reading comprehension task, constructing type-specific queries manually to extract entities. This paradigm suffers from three issues. First, type-specific queries can only extract one type of entities per inference, which is inefficient. Second, the extraction for different types of entities is isolated, ignoring the dependencies between them. Third, query construction relies on external knowledge and is difficult to apply to realistic scenarios with hundreds of entity types. To deal with them, we propose Parallel Instance Query Network (PIQN), which sets up global and learnable instance queries to extract entities from a sentence in a parallel manner. Each instance query predicts one entity, and by feeding all instance queries simultaneously, we can query all entities in parallel. Instead of being constructed from external knowledge, instance queries can learn their different query semantics during training. For training the model, we treat label assignment as a one-to-many Linear Assignment Problem (LAP) and dynamically assign gold entities to instance queries with minimal assignment cost. Experiments on both nested and flat NER datasets demonstrate that our proposed method outperforms previous state-of-the-art models.
null
null
10.18653/v1/2022.acl-long.67
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,733
inproceedings
liu-etal-2022-prophetchat
{P}rophet{C}hat: Enhancing Dialogue Generation with Simulation of Future Conversation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.68/
Liu, Chang and Tan, Xu and Tao, Chongyang and Fu, Zhenxin and Zhao, Dongyan and Liu, Tie-Yan and Yan, Rui
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
962--973
Typical generative dialogue models utilize the dialogue history to generate the response. However, since one dialogue utterance can often be appropriately answered by multiple distinct responses, generating a desired response solely based on the historical information is not easy. Intuitively, if the chatbot can foresee in advance what the user would talk about (i.e., the dialogue future) after receiving its response, it could possibly provide a more informative response. Accordingly, we propose a novel dialogue generation framework named ProphetChat that utilizes the simulated dialogue futures in the inference phase to enhance response generation. To enable the chatbot to foresee the dialogue future, we design a beam-search-like roll-out strategy for dialogue future simulation using a typical dialogue generation model and a dialogue selector. With the simulated futures, we then utilize the ensemble of a history-to-response generator and a future-to-response generator to jointly generate a more informative response. Experiments on two popular open-domain dialogue datasets demonstrate that ProphetChat can generate better responses over strong baselines, which validates the advantages of incorporating the simulated dialogue futures.
null
null
10.18653/v1/2022.acl-long.68
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,734
inproceedings
yavuz-etal-2022-modeling
Modeling Multi-hop Question Answering as Single Sequence Prediction
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.69/
Yavuz, Semih and Hashimoto, Kazuma and Zhou, Yingbo and Keskar, Nitish Shirish and Xiong, Caiming
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
974--990
Fusion-in-decoder (Fid) (Izacard and Grave, 2020) is a generative question answering (QA) model that leverages passage retrieval with a pre-trained transformer and pushed the state of the art on single-hop QA. However, the complexity of multi-hop QA hinders the effectiveness of the generative QA approach. In this work, we propose a simple generative approach (PathFid) that extends the task beyond just answer generation by explicitly modeling the reasoning process to resolve the answer for multi-hop questions. By linearizing the hierarchical reasoning path of supporting passages, their key sentences, and finally the factoid answer, we cast the problem as a single sequence prediction task. To facilitate complex reasoning with multiple clues, we further extend the unified flat representation of multiple input documents by encoding cross-passage interactions. Our extensive experiments demonstrate that PathFid leads to strong performance gains on two multi-hop QA datasets: HotpotQA and IIRC. Besides the performance gains, PathFid is more interpretable, which in turn yields answers that are more faithfully grounded to the supporting passages and facts compared to the baseline Fid model.
null
null
10.18653/v1/2022.acl-long.69
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,735
inproceedings
wu-etal-2022-learning
Learning Disentangled Semantic Representations for Zero-Shot Cross-Lingual Transfer in Multilingual Machine Reading Comprehension
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.70/
Wu, Linjuan and Wu, Shaojuan and Zhang, Xiaowang and Xiong, Deyi and Chen, Shizhan and Zhuang, Zhiqiang and Feng, Zhiyong
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
991--1000
Multilingual pre-trained models are able to zero-shot transfer knowledge from rich-resource to low-resource languages in machine reading comprehension (MRC). However, inherent linguistic discrepancies in different languages could make answer spans predicted by zero-shot transfer violate syntactic constraints of the target language. In this paper, we propose a novel multilingual MRC framework equipped with a Siamese Semantic Disentanglement Model (S2DM) to disassociate semantics from syntax in representations learned by multilingual pre-trained models. To explicitly transfer only semantic knowledge to the target language, we propose two groups of losses tailored for semantic and syntactic encoding and disentanglement. Experimental results on three multilingual MRC datasets (i.e., XQuAD, MLQA, and TyDi QA) demonstrate the effectiveness of our proposed approach over models based on mBERT and XLM-100.
null
null
10.18653/v1/2022.acl-long.70
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,736
inproceedings
liu-etal-2022-multi-granularity
Multi-Granularity Structural Knowledge Distillation for Language Model Compression
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.71/
Liu, Chang and Tao, Chongyang and Feng, Jiazhan and Zhao, Dongyan
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1001--1011
Transferring the knowledge to a small model through distillation has raised great interest in recent years. Prevailing methods transfer the knowledge derived from mono-granularity language units (e.g., token-level or sample-level), which is not enough to represent the rich semantics of a text and may lose some vital knowledge. Besides, these methods form the knowledge as individual representations or their simple dependencies, neglecting abundant structural relations among intermediate representations. To overcome the problems, we present a novel knowledge distillation framework that gathers intermediate representations from multiple semantic granularities (e.g., tokens, spans and samples) and forms the knowledge as more sophisticated structural relations specified as the pair-wise interactions and the triplet-wise geometric angles based on multi-granularity representations. Moreover, we propose distilling the well-organized multi-granularity structural knowledge to the student hierarchically across layers. Experimental results on GLUE benchmark demonstrate that our method outperforms advanced distillation methods.
null
null
10.18653/v1/2022.acl-long.71
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,737
inproceedings
guo-etal-2022-auto
Auto-Debias: Debiasing Masked Language Models with Automated Biased Prompts
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.72/
Guo, Yue and Yang, Yi and Abbasi, Ahmed
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1012--1023
Human-like biases and undesired social stereotypes exist in large pretrained language models. Given the wide adoption of these models in real-world applications, mitigating such biases has become an emerging and important task. In this paper, we propose an automatic method to mitigate the biases in pretrained language models. Different from previous debiasing work that uses external corpora to fine-tune the pretrained models, we instead directly probe the biases encoded in pretrained models through prompts. Specifically, we propose a variant of the beam search method to automatically search for \textit{biased prompts} such that the cloze-style completions are the most different with respect to different demographic groups. Given the identified biased prompts, we then propose a distribution alignment loss to mitigate the biases. Experiment results on standard datasets and metrics show that our proposed \textbf{Auto-Debias} approach can significantly reduce biases, including gender and racial bias, in pretrained language models such as BERT, RoBERTa and ALBERT. Moreover, the improvement in fairness does not decrease the language models' understanding abilities, as shown using the GLUE benchmark.
null
null
10.18653/v1/2022.acl-long.72
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,738
inproceedings
liu-etal-2022-go
Where to Go for the Holidays: Towards Mixed-Type Dialogs for Clarification of User Goals
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.73/
Liu, Zeming and Xu, Jun and Lei, Zeyang and Wang, Haifeng and Niu, Zheng-Yu and Wu, Hua
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1024--1034
Most dialog systems posit that users have figured out clear and specific goals before starting an interaction. For example, users have determined the departure, the destination, and the travel time for booking a flight. However, in many scenarios, limited by experience and knowledge, users may know what they need, but still struggle to figure out clear and specific goals by determining all the necessary slots. In this paper, we identify this challenge, and make a step forward by collecting a new human-to-human mixed-type dialog corpus. It contains 5k dialog sessions and 168k utterances for 4 dialog types and 5 domains. Within each session, an agent first provides user-goal-related knowledge to help figure out clear and specific goals, and then help achieve them. Furthermore, we propose a mixed-type dialog model with a novel Prompt-based continual learning mechanism. Specifically, the mechanism enables the model to continually strengthen its ability on any specific type by utilizing existing dialog corpora effectively.
null
null
10.18653/v1/2022.acl-long.73
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,739
inproceedings
li-etal-2022-semi
Semi-supervised Domain Adaptation for Dependency Parsing with Dynamic Matching Network
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.74/
Li, Ying and Li, Shuaike and Zhang, Min
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1035--1045
Supervised parsing models have achieved impressive results on in-domain texts. However, their performances drop drastically on out-of-domain texts due to the data distribution shift. The shared-private model has shown its promising advantages for alleviating this problem via feature separation, whereas prior works pay more attention to enhance shared features but neglect the in-depth relevance of specific ones. To address this issue, we for the first time apply a dynamic matching network on the shared-private model for semi-supervised cross-domain dependency parsing. Meanwhile, considering the scarcity of target-domain labeled data, we leverage unlabeled data from two aspects, i.e., designing a new training strategy to improve the capability of the dynamic matching network and fine-tuning BERT to obtain domain-related contextualized representations. Experiments on benchmark datasets show that our proposed model consistently outperforms various baselines, leading to new state-of-the-art results on all domains. Detailed analysis on different matching strategies demonstrates that it is essential to learn suitable matching weights to emphasize useful features and ignore useless or even harmful ones. Besides, our proposed model can be directly extended to multi-source domain adaptation and achieves best performances among various baselines, further verifying the effectiveness and robustness.
null
null
10.18653/v1/2022.acl-long.74
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,740
inproceedings
zhou-srikumar-2022-closer
A Closer Look at How Fine-tuning Changes {BERT}
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.75/
Zhou, Yichu and Srikumar, Vivek
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1046--1061
Given the prevalence of pre-trained contextualized representations in today`s NLP, there have been many efforts to understand what information they contain, and why they seem to be universally successful. The most common approach to use these representations involves fine-tuning them for an end task. Yet, how fine-tuning changes the underlying embedding space is less studied. In this work, we study the English BERT family and use two probing techniques to analyze how fine-tuning changes the space. We hypothesize that fine-tuning affects classification performance by increasing the distances between examples associated with different labels. We confirm this hypothesis with carefully designed experiments on five different NLP tasks. Via these experiments, we also discover an exception to the prevailing wisdom that {\textquotedblleft}fine-tuning always improves performance{\textquotedblright}. Finally, by comparing the representations before and after fine-tuning, we discover that fine-tuning does not introduce arbitrary changes to representations; instead, it adjusts the representations to downstream tasks while largely preserving the original spatial structure of the data points.
null
null
10.18653/v1/2022.acl-long.75
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,741
inproceedings
hong-etal-2022-sentence
Sentence-aware Contrastive Learning for Open-Domain Passage Retrieval
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.76/
Wu, Bohong and Zhang, Zhuosheng and Wang, Jinyuan and Zhao, Hai
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1062--1074
Training dense passage representations via contrastive learning has been shown effective for Open-Domain Passage Retrieval (ODPR). Existing studies focus on further optimizing by improving negative sampling strategy or extra pretraining. However, these studies overlook passages with internal representation conflicts that arise from improper modeling granularity. Specifically, under our observation that a passage can be organized by multiple semantically different sentences, modeling such a passage as a unified dense vector is not optimal. This work thus presents a refined model on the basis of a smaller granularity, contextual sentences, to alleviate the concerned conflicts. In detail, we introduce an in-passage negative sampling strategy to encourage a diverse generation of sentence representations within the same passage. Experiments on three benchmark datasets verify the efficacy of our method, especially on datasets where conflicts are severe. Extensive experiments further present good transferability of our method across datasets.
null
null
10.18653/v1/2022.acl-long.76
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,742
inproceedings
sanyal-etal-2022-fairr
{F}ai{RR}: Faithful and Robust Deductive Reasoning over Natural Language
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.77/
Sanyal, Soumya and Singh, Harman and Ren, Xiang
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1075--1093
Transformers have been shown to be able to perform deductive reasoning on a logical rulebase containing rules and statements written in natural language. Recent works show that such models can also produce the reasoning steps (i.e., the proof graph) that emulate the model`s logical reasoning process. Currently, these black-box models generate both the proof graph and intermediate inferences within the same model and thus may be unfaithful. In this work, we frame the deductive logical reasoning task by defining three modular components: rule selection, fact selection, and knowledge composition. The rule and fact selection steps select the candidate rule and facts to be used and then the knowledge composition combines them to generate new inferences. This ensures model faithfulness through an assured causal relation from the proof step to the resulting inference. To test our framework, we propose FaiRR (Faithful and Robust Reasoner) where the above three components are independently modeled by transformers. We observe that FaiRR is robust to novel language perturbations, and is faster at inference than previous works on existing reasoning datasets. Additionally, in contrast to black-box generative models, the errors made by FaiRR are more interpretable due to the modular approach.
null
null
10.18653/v1/2022.acl-long.77
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,743
inproceedings
cheng-etal-2022-hitab
{H}i{T}ab: A Hierarchical Table Dataset for Question Answering and Natural Language Generation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.78/
Cheng, Zhoujun and Dong, Haoyu and Wang, Zhiruo and Jia, Ran and Guo, Jiaqi and Gao, Yan and Han, Shi and Lou, Jian-Guang and Zhang, Dongmei
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1094--1110
Tables are often created with hierarchies, but existing works on table reasoning mainly focus on flat tables and neglect hierarchical tables. Hierarchical tables challenge numerical reasoning by complex hierarchical indexing, as well as implicit relationships of calculation and semantics. We present a new dataset, HiTab, to study question answering (QA) and natural language generation (NLG) over hierarchical tables. HiTab is a cross-domain dataset constructed from a wealth of statistical reports and Wikipedia pages, and has unique characteristics: (1) nearly all tables are hierarchical; (2) QA pairs are not proposed by annotators from scratch, but are revised from real and meaningful sentences authored by analysts; and (3) to reveal complex numerical reasoning in statistical reports, we provide fine-grained annotations of quantity and entity alignment. Experiments suggest that HiTab presents a strong challenge for existing baselines and a valuable benchmark for future research. Targeting hierarchical structure, we devise a hierarchy-aware logical form for symbolic reasoning over tables, which shows high effectiveness. Targeting table reasoning, we leverage entity and quantity alignment to explore partially supervised training in QA and conditional generation in NLG, and largely reduce spurious predictions in QA and produce better descriptions in NLG.
null
null
10.18653/v1/2022.acl-long.78
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,744
inproceedings
lu-etal-2022-doctor
Doctor Recommendation in Online Health Forums via Expertise Learning
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.79/
Lu, Xiaoxin and Zhang, Yubo and Li, Jing and Zong, Shi
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1111--1123
Huge volumes of patient queries are generated daily on online health forums, rendering manual doctor allocation a labor-intensive task. To better help patients, this paper studies a novel task of doctor recommendation to enable automatic pairing of a patient to a doctor with relevant expertise. While most prior work in recommendation focuses on modeling target users from their past behavior, we can only rely on the limited words in a query to infer a patient`s needs for privacy reasons. For doctor modeling, we study the joint effects of their profiles and previous dialogues with other patients and explore their interactions via self-learning. The learned doctor embeddings are further employed to estimate their capabilities of handling a patient query with a multi-head attention mechanism. For experiments, a large-scale dataset is collected from Chunyu Yisheng, a Chinese online health forum, where our model exhibits state-of-the-art results, outperforming baselines that only consider profiles and past dialogues to characterize a doctor.
null
null
10.18653/v1/2022.acl-long.79
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,745
inproceedings
zhu-etal-2022-continual
Continual Prompt Tuning for Dialog State Tracking
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.80/
Zhu, Qi and Li, Bing and Mi, Fei and Zhu, Xiaoyan and Huang, Minlie
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1124--1137
A desirable dialog system should be able to continually learn new skills without forgetting old ones, and thereby adapt to new domains or tasks in its life cycle. However, continually training a model often leads to a well-known catastrophic forgetting issue. In this paper, we present Continual Prompt Tuning, a parameter-efficient framework that not only avoids forgetting but also enables knowledge transfer between tasks. To avoid forgetting, we only learn and store a few prompt tokens' embeddings for each task while freezing the backbone pre-trained model. To achieve bi-directional knowledge transfer among tasks, we propose several techniques (continual prompt initialization, query fusion, and memory replay) to transfer knowledge from preceding tasks and a memory-guided technique to transfer knowledge from subsequent tasks. Extensive experiments demonstrate the effectiveness and efficiency of our proposed method on continual learning for dialog state tracking, compared with state-of-the-art baselines.
null
null
10.18653/v1/2022.acl-long.80
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,746
inproceedings
fu-etal-2022-theres
There`s a Time and Place for Reasoning Beyond the Image
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.81/
Fu, Xingyu and Zhou, Ben and Chandratreya, Ishaan and Vondrick, Carl and Roth, Dan
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1138--1149
Images are often more significant than only the pixels to human eyes, as we can infer, associate, and reason with contextual information from other sources to establish a more complete picture. For example, in Figure 1, we can find a way to identify the news articles related to the picture through segment-wise understandings of the signs, the buildings, the crowds, and more. This reasoning could provide the time and place the image was taken, which will help us in subsequent tasks, such as automatic storyline construction, correction of image source in intended effect photographs, and upper-stream processing such as image clustering for certain location or time. In this work, we formulate this problem and introduce TARA: a dataset with 16k images with their associated news, time, and location, automatically extracted from New York Times, and an additional 61k examples as distant supervision from WIT. On top of the extractions, we present a crowdsourced subset in which we believe it is possible to find the images' spatio-temporal information for evaluation purpose. We show that there exists a 70{\%} gap between a state-of-the-art joint model and human performance, which is slightly filled by our proposed model that uses segment-wise reasoning, motivating higher-level vision-language joint models that can conduct open-ended reasoning with world knowledge. The data and code are publicly available at \url{https://github.com/zeyofu/TARA}.
null
null
10.18653/v1/2022.acl-long.81
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,747
inproceedings
cheng-etal-2022-fortap
{FORTAP}: Using Formulas for Numerical-Reasoning-Aware Table Pretraining
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.82/
Cheng, Zhoujun and Dong, Haoyu and Jia, Ran and Wu, Pengfei and Han, Shi and Cheng, Fan and Zhang, Dongmei
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1150--1166
Tables store rich numerical data, but numerical reasoning over tables is still a challenge. In this paper, we find that the spreadsheet formula, a commonly used language to perform computations on numerical values in spreadsheets, is a valuable supervision for numerical reasoning in tables. Considering large amounts of spreadsheets available on the web, we propose FORTAP, the first exploration to leverage spreadsheet formulas for table pretraining. Two novel self-supervised pretraining objectives are derived from formulas, numerical reference prediction (NRP) and numerical calculation prediction (NCP). While our proposed objectives are generic for encoders, to better capture spreadsheet table layouts and structures, FORTAP is built upon TUTA, the first transformer-based method for spreadsheet table pretraining with tree attention. FORTAP outperforms state-of-the-art methods by large margins on three representative datasets of formula prediction, question answering, and cell type classification, showing the great potential of leveraging formulas for table pretraining.
null
null
10.18653/v1/2022.acl-long.82
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,748
inproceedings
shankar-2022-multimodal
Multimodal fusion via cortical network inspired losses
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.83/
Shankar, Shiv
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1167--1178
Information integration from different modalities is an active area of research. Human beings and, in general, biological neural systems are quite adept at using a multitude of signals from different sensory perceptive fields to interact with the environment and each other. Recent work in deep fusion models via neural networks has led to substantial improvements over unimodal approaches in areas like speech recognition, emotion recognition and analysis, captioning and image description. However, such research has mostly focused on architectural changes allowing for fusion of different modalities while keeping the model complexity manageable. Inspired by neuroscientific ideas about multisensory integration and processing, we investigate the effect of introducing neural dependencies in the loss functions. Experiments on multimodal sentiment analysis tasks with different models show that our approach provides a consistent performance boost.
null
null
10.18653/v1/2022.acl-long.83
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,749
inproceedings
zhang-etal-2022-modeling
Modeling Temporal-Modal Entity Graph for Procedural Multimodal Machine Comprehension
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.84/
Zhang, Huibin and Zhang, Zhengkun and Zhang, Yao and Wang, Jun and Li, Yufan and Jiang, Ning and Wei, Xin and Yang, Zhenglu
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1179--1189
Procedural Multimodal Documents (PMDs) organize textual instructions and corresponding images step by step. Comprehending PMDs and inducing their representations for the downstream reasoning tasks is designated as Procedural MultiModal Machine Comprehension (M3C). In this study, we approach Procedural M3C at a fine-grained level (compared with existing explorations at a document or sentence level), that is, entity. With delicate consideration, we model entity both in its temporal and cross-modal relation and propose a novel Temporal-Modal Entity Graph (TMEG). Specifically, graph structure is formulated to capture textual and visual entities and trace their temporal-modal evolution. In addition, a graph aggregation module is introduced to conduct graph encoding and reasoning. Comprehensive experiments across three Procedural M3C tasks are conducted on a traditional dataset RecipeQA and our new dataset CraftQA, which can better evaluate the generalization of TMEG.
null
null
10.18653/v1/2022.acl-long.84
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,750
inproceedings
saha-etal-2022-explanation
Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.85/
Saha, Swarnadeep and Yadav, Prateek and Bansal, Mohit
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1190--1208
Pre-trained sequence-to-sequence language models have led to widespread success in many natural language generation tasks. However, there has been relatively less work on analyzing their ability to generate structured outputs such as graphs. Unlike natural language, graphs have distinct structural and semantic properties in the context of a downstream NLP task, e.g., generating a graph that is connected and acyclic can be attributed to its structural constraints, while the semantics of a graph can refer to how meaningfully an edge represents the relation between two node concepts. In this work, we study pre-trained language models that generate explanation graphs in an end-to-end manner and analyze their ability to learn the structural constraints and semantics of such graphs. We first show that with limited supervision, pre-trained language models often generate graphs that either violate these constraints or are semantically incoherent. Since curating large amount of human-annotated graphs is expensive and tedious, we propose simple yet effective ways of graph perturbations via node and edge edit operations that lead to structurally and semantically positive and negative graphs. Next, we leverage these graphs in different contrastive learning models with Max-Margin and InfoNCE losses. Our methods lead to significant improvements in both structural and semantic accuracy of explanation graphs and also generalize to other similar graph generation tasks. Lastly, we show that human errors are the best negatives for contrastive learning and also that automatically generating more such human-like negative graphs can lead to further improvements.
null
null
10.18653/v1/2022.acl-long.85
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,751
inproceedings
basu-roy-chowdhury-etal-2022-unsupervised
Unsupervised Extractive Opinion Summarization Using Sparse Coding
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.86/
Basu Roy Chowdhury, Somnath and Zhao, Chao and Chaturvedi, Snigdha
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1209--1225
Opinion summarization is the task of automatically generating summaries that encapsulate information expressed in multiple user reviews. We present Semantic Autoencoder (SemAE) to perform extractive opinion summarization in an unsupervised manner. SemAE uses dictionary learning to implicitly capture semantic information from the review text and learns a latent representation of each sentence over semantic units. Our extractive summarization algorithm leverages the representations to identify representative opinions among hundreds of reviews. SemAE is also able to perform controllable summarization to generate aspect-specific summaries using only a few samples. We report strong performance on SPACE and AMAZON datasets and perform experiments to investigate the functioning of our model.
null
null
10.18653/v1/2022.acl-long.86
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,752
inproceedings
michalopoulos-etal-2022-lexsubcon
{L}ex{S}ub{C}on: Integrating Knowledge from Lexical Resources into Contextual Embeddings for Lexical Substitution
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.87/
Michalopoulos, George and McKillop, Ian and Wong, Alexander and Chen, Helen
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1226--1236
Lexical substitution is the task of generating meaningful substitutes for a word in a given textual context. Contextual word embedding models have achieved state-of-the-art results in the lexical substitution task by relying on contextual information extracted from the replaced word within the sentence. However, such models do not take into account structured knowledge that exists in external lexical databases. We introduce LexSubCon, an end-to-end lexical substitution framework based on contextual embedding models that can identify highly-accurate substitute candidates. This is achieved by combining contextual information with knowledge from structured lexical resources. Our approach involves: (i) introducing a novel mix-up embedding strategy to the target word`s embedding through linearly interpolating the pair of the target input embedding and the average embedding of its probable synonyms; (ii) considering the similarity of the sentence-definition embeddings of the target word and its proposed candidates; and, (iii) calculating the effect of each substitution on the semantics of the sentence through a fine-tuned sentence similarity model. Our experiments show that LexSubCon outperforms previous state-of-the-art methods by at least 2{\%} over all the official lexical substitution metrics on LS07 and CoInCo benchmark datasets that are widely used for lexical substitution tasks.
null
null
10.18653/v1/2022.acl-long.87
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,753
inproceedings
zhou-etal-2022-think
Think Before You Speak: Explicitly Generating Implicit Commonsense Knowledge for Response Generation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.88/
Zhou, Pei and Gopalakrishnan, Karthik and Hedayatnia, Behnam and Kim, Seokhwan and Pujara, Jay and Ren, Xiang and Liu, Yang and Hakkani-Tur, Dilek
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1237--1252
Implicit knowledge, such as common sense, is key to fluid human conversations. Current neural response generation (RG) models are trained to generate responses directly, omitting unstated implicit knowledge. In this paper, we present Think-Before-Speaking (TBS), a generative approach to first externalize implicit commonsense knowledge ($think$) and use this knowledge to generate responses ($speak$). We argue that externalizing implicit knowledge allows more efficient learning, produces more informative responses, and enables more explainable models. We analyze different choices to collect knowledge-aligned dialogues, represent implicit knowledge, and transition between knowledge and dialogues. Empirical results show TBS models outperform end-to-end and knowledge-augmented RG baselines on most automatic metrics and generate more informative, specific, and commonsense-following responses, as evaluated by human annotators. TBS also generates $knowledge$ that makes sense and is relevant to the dialogue around 85{\%} of the time.
null
null
10.18653/v1/2022.acl-long.88
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,754
inproceedings
liu-etal-2022-flow
Flow-Adapter Architecture for Unsupervised Machine Translation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.89/
Liu, Yihong and Jabbar, Haris and Schuetze, Hinrich
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1253--1266
In this work, we propose a flow-adapter architecture for unsupervised NMT. It leverages normalizing flows to explicitly model the distributions of sentence-level latent representations, which are subsequently used in conjunction with the attention mechanism for the translation task. The primary novelties of our model are: (a) capturing language-specific sentence representations separately for each language using normalizing flows and (b) using a simple transformation of these latent representations for translating from one language to another. This architecture allows for unsupervised training of each language independently. While there is prior work on latent variables for supervised MT, to the best of our knowledge, this is the first work that uses latent variables and normalizing flows for unsupervised MT. We obtain competitive results on several unsupervised MT benchmarks.
null
null
10.18653/v1/2022.acl-long.89
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,755
inproceedings
ghalandari-etal-2022-efficient
Efficient Unsupervised Sentence Compression by Fine-tuning Transformers with Reinforcement Learning
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.90/
Ghalandari, Demian and Hokamp, Chris and Ifrim, Georgiana
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1267--1280
Sentence compression reduces the length of text by removing non-essential content while preserving important facts and grammaticality. Unsupervised, objective-driven methods for sentence compression can be used to create customized models without the need for ground-truth training data, while allowing flexibility in the objective function(s) that are used for learning and inference. Recent unsupervised sentence compression approaches use custom objectives to guide discrete search; however, guided search is expensive at inference time. In this work, we explore the use of reinforcement learning to train effective sentence compression models that are also fast when generating predictions. In particular, we cast the task as binary sequence labelling and fine-tune a pre-trained transformer using a simple policy gradient approach. Our approach outperforms other unsupervised models while also being more efficient at inference time.
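Cast as binary sequence labelling, the training step reduces to sampling a keep/drop mask per token, scoring the compression with the unsupervised objective, and applying a REINFORCE-style policy-gradient update. The sketch below shows that loop with a deliberately tiny token classifier; the model, reward function, and hyperparameters are placeholders, not the fine-tuned transformer from the paper.

```python
import torch
import torch.nn as nn

class TinyTokenLabeller(nn.Module):
    """Placeholder for a pre-trained transformer token classifier:
    outputs a keep-probability for every token."""
    def __init__(self, vocab_size=1000, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, token_ids):
        return torch.sigmoid(self.head(self.emb(token_ids))).squeeze(-1)

def reinforce_step(model, optimizer, token_ids, reward_fn):
    """One policy-gradient update: sample a binary keep/drop mask and
    reinforce it in proportion to the unsupervised compression reward."""
    probs = model(token_ids)
    dist = torch.distributions.Bernoulli(probs)
    mask = dist.sample()                    # 1 = keep token, 0 = drop
    reward = reward_fn(token_ids, mask)     # e.g. a fluency + brevity score
    loss = -(reward * dist.log_prob(mask).sum())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return mask, reward

model = TinyTokenLabeller()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tokens = torch.randint(0, 1000, (12,))
# Assumed toy reward: prefer keeping roughly half of the tokens.
reinforce_step(model, opt, tokens, lambda t, m: -abs(m.mean() - 0.5))
```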
null
null
10.18653/v1/2022.acl-long.90
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,756
inproceedings
huang-etal-2022-tracing
Tracing Origins: Coreference-aware Machine Reading Comprehension
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.acl-long.91/
Huang, Baorong and Zhang, Zhuosheng and Zhao, Hai
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1281--1292
Machine reading comprehension is a heavily-studied research and test field for evaluating new pre-trained language models (PrLMs) and fine-tuning strategies, and recent studies have enriched the pre-trained language models with syntactic, semantic and other linguistic information to improve the performance of the models. In this paper, we imitate the human reading process in connecting the anaphoric expressions and explicitly leverage the coreference information of the entities to enhance the word embeddings from the pre-trained language model, in order to highlight the coreference mentions of the entities that must be identified for coreference-intensive question answering in QUOREF, a relatively new dataset that is specifically designed to evaluate the coreference-related performance of a model. We use two strategies to fine-tune a pre-trained language model, namely, placing an additional encoder layer after a pre-trained language model to focus on the coreference mentions or constructing a relational graph convolutional network to model the coreference relations. We demonstrate that the explicit incorporation of coreference information in the fine-tuning stage performs better than the incorporation of the coreference information in pre-training a language model.
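The core idea of highlighting coreference mentions can be pictured as pushing each mention's contextual embedding toward the mean of its coreference cluster before the answer-extraction head. The function below is only a hedged sketch of that enhancement; the mixing weight `alpha` and the cluster format are assumptions, and the paper's actual strategies (an extra encoder layer or a relational graph convolutional network) are more involved.

```python
import torch

def inject_coreference(token_embeddings, clusters, alpha=0.5):
    """Sketch: blend each coreferent token's embedding with the mean
    embedding of its cluster so that coreference mentions stand out.

    token_embeddings: (seq_len, hidden) tensor from a pre-trained LM
    clusters: list of lists of token indices that corefer
    alpha: assumed mixing weight between the original and cluster mean
    """
    enhanced = token_embeddings.clone()
    for cluster in clusters:
        cluster_mean = token_embeddings[cluster].mean(dim=0)
        for idx in cluster:
            enhanced[idx] = alpha * token_embeddings[idx] + (1 - alpha) * cluster_mean
    return enhanced

# Toy usage: 10 tokens, one cluster linking tokens 2, 5 and 8.
embs = torch.randn(10, 64)
enhanced = inject_coreference(embs, clusters=[[2, 5, 8]])
```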
null
null
10.18653/v1/2022.acl-long.91
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,757