entry_type: stringclasses (4 values)
citation_key: stringlengths (10–110)
title: stringlengths (6–276)
editor: stringclasses (723 values)
month: stringclasses (69 values)
year: stringdate (1963-01-01 00:00:00 – 2022-01-01 00:00:00)
address: stringclasses (202 values)
publisher: stringclasses (41 values)
url: stringlengths (34–62)
author: stringlengths (6–2.07k)
booktitle: stringclasses (861 values)
pages: stringlengths (1–12)
abstract: stringlengths (302–2.4k)
journal: stringclasses (5 values)
volume: stringclasses (24 values)
doi: stringlengths (20–39)
n: stringclasses (3 values)
wer: stringclasses (1 value)
uas: null
language: stringclasses (3 values)
isbn: stringclasses (34 values)
recall: null
number: stringclasses (8 values)
a: null
b: null
c: null
k: null
f1: stringclasses (4 values)
r: stringclasses (2 values)
mci: stringclasses (1 value)
p: stringclasses (2 values)
sd: stringclasses (1 value)
female: stringclasses (0 values)
m: stringclasses (0 values)
food: stringclasses (1 value)
f: stringclasses (1 value)
note: stringclasses (20 values)
__index_level_0__: int64 (22k–106k)
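Each row below is a flat record following the schema above: the BibTeX-style bibliographic fields come first, followed by the mostly-null metric columns and a trailing `__index_level_0__`. A minimal sketch of how such rows might be handled once parsed back into dictionaries — the `by_venue` helper is illustrative, and only a few fields from the first rows are transcribed:

```python
# Two sample records transcribed (partially) from the rows below.
# Field names follow the schema; the helper is an assumption, not
# part of the dataset itself.
rows = [
    {
        "entry_type": "inproceedings",
        "citation_key": "fraser-etal-2022-moral",
        "year": "2022",
        "booktitle": "Proceedings of the 2nd Workshop on Trustworthy "
                     "Natural Language Processing (TrustNLP 2022)",
        "doi": "10.18653/v1/2022.trustnlp-1.3",
    },
    {
        "entry_type": "inproceedings",
        "citation_key": "patil-etal-2022-l3cube",
        "year": "2022",
        "booktitle": "Proceedings of the Third Workshop on Threat, "
                     "Aggression and Cyberbullying (TRAC 2022)",
        "doi": None,  # many rows leave the metric/identifier columns null
    },
]

def by_venue(records, keyword):
    """Return citation keys of records whose booktitle contains `keyword`."""
    return [r["citation_key"] for r in records if keyword in (r["booktitle"] or "")]

trustnlp_keys = by_venue(rows, "TrustNLP")
```

Filtering on `booktitle` works here because every row carries the full proceedings title; a real loader would reconstruct all 38 schema fields per row.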
inproceedings
fraser-etal-2022-moral
Does Moral Code have a Moral Code? Probing Delphi's Moral Philosophy
Verma, Apurv and Pruksachatkun, Yada and Chang, Kai-Wei and Galstyan, Aram and Dhamala, Jwala and Cao, Yang Trista
jul
2022
Seattle, U.S.A.
Association for Computational Linguistics
https://aclanthology.org/2022.trustnlp-1.3/
Fraser, Kathleen C. and Kiritchenko, Svetlana and Balkir, Esma
Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022)
26--42
In an effort to guarantee that machine learning model outputs conform with human moral values, recent work has begun exploring the possibility of explicitly training models to learn the difference between right and wrong. This is typically done in a bottom-up fashion, by exposing the model to different scenarios, annotated with human moral judgements. One question, however, is whether the trained models actually learn any consistent, higher-level ethical principles from these datasets {--} and if so, what? Here, we probe the Allen AI Delphi model with a set of standardized morality questionnaires, and find that, despite some inconsistencies, Delphi tends to mirror the moral principles associated with the demographic groups involved in the annotation process. We question whether this is desirable and discuss how we might move forward with this knowledge.
null
null
10.18653/v1/2022.trustnlp-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,412
inproceedings
castelli-moreau-ph-d-2022-cycle
The Cycle of Trust and Responsibility in Outsourced {AI}
Verma, Apurv and Pruksachatkun, Yada and Chang, Kai-Wei and Galstyan, Aram and Dhamala, Jwala and Cao, Yang Trista
jul
2022
Seattle, U.S.A.
Association for Computational Linguistics
https://aclanthology.org/2022.trustnlp-1.4/
Castelli, Maximilian and Moreau, Linda C.
Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022)
43--48
Artificial Intelligence (AI) and Machine Learning (ML) are rapidly becoming must-have capabilities. According to a 2019 Forbes Insights Report, {\textquotedblleft}seventy-nine percent [of executives] agree that AI is already having a transformational impact on workflows and tools for knowledge workers, but only 5{\%} of executives consider their companies to be industry-leading in terms of taking advantage of AI-powered processes.{\textquotedblright} (Forbes 2019) A major reason for this may be a shortage of on-staff expertise in AI/ML. This paper explores the intertwined issues of trust, adoption, training, and ethics of outsourcing AI development to a third party. We describe our experiences as a provider of outsourced natural language processing (NLP). We discuss how trust and accountability co-evolve as solutions mature from proof-of-concept to production-ready.
null
null
10.18653/v1/2022.trustnlp-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,413
inproceedings
mosca-etal-2022-explaining
Explaining Neural {NLP} Models for the Joint Analysis of Open-and-Closed-Ended Survey Answers
Verma, Apurv and Pruksachatkun, Yada and Chang, Kai-Wei and Galstyan, Aram and Dhamala, Jwala and Cao, Yang Trista
jul
2022
Seattle, U.S.A.
Association for Computational Linguistics
https://aclanthology.org/2022.trustnlp-1.5/
Mosca, Edoardo and Harmann, Katharina and Eder, Tobias and Groh, Georg
Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022)
49--63
Large-scale surveys are a widely used instrument to collect data from a target audience. Beyond the single individual, an appropriate analysis of the answers can reveal trends and patterns and thus generate new insights and knowledge for researchers. Current analysis practices employ shallow machine learning methods or rely on (biased) human judgment. This work investigates the usage of state-of-the-art NLP models such as BERT to automatically extract information from both open- and closed-ended questions. We also leverage explainability methods at different levels of granularity to further derive knowledge from the analysis model. Experiments on EMS{---}a survey-based study researching influencing factors affecting a student's career goals{---}show that the proposed approach can identify such factors both at the input- and higher concept-level.
null
null
10.18653/v1/2022.trustnlp-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,414
inproceedings
zheng-etal-2022-irrationality
The Irrationality of Neural Rationale Models
Verma, Apurv and Pruksachatkun, Yada and Chang, Kai-Wei and Galstyan, Aram and Dhamala, Jwala and Cao, Yang Trista
jul
2022
Seattle, U.S.A.
Association for Computational Linguistics
https://aclanthology.org/2022.trustnlp-1.6/
Zheng, Yiming and Booth, Serena and Shah, Julie and Zhou, Yilun
Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022)
64--73
Neural rationale models are popular for interpretable predictions of NLP tasks. In these, a selector extracts segments of the input text, called rationales, and passes these segments to a classifier for prediction. Since the rationale is the only information accessible to the classifier, it is plausibly defined as the explanation. Is such a characterization unconditionally correct? In this paper, we argue to the contrary, with both philosophical perspectives and empirical evidence suggesting that rationale models are, perhaps, less rational and interpretable than expected. We call for more rigorous evaluations of these models to ensure desired properties of interpretability are indeed achieved. The code for our experiments is at \url{https://github.com/yimingz89/Neural-Rationale-Analysis}.
null
null
10.18653/v1/2022.trustnlp-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,415
inproceedings
kwon-mihindukulasooriya-2022-empirical
An Empirical Study on Pseudo-log-likelihood Bias Measures for Masked Language Models Using Paraphrased Sentences
Verma, Apurv and Pruksachatkun, Yada and Chang, Kai-Wei and Galstyan, Aram and Dhamala, Jwala and Cao, Yang Trista
jul
2022
Seattle, U.S.A.
Association for Computational Linguistics
https://aclanthology.org/2022.trustnlp-1.7/
Kwon, Bum Chul and Mihindukulasooriya, Nandana
Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022)
74--79
In this paper, we conduct an empirical study of a bias measure, log-likelihood Masked Language Model (MLM) scoring, on a benchmark dataset. Previous work evaluates whether MLMs are biased with respect to certain protected attributes (e.g., race) by comparing the log-likelihood scores of sentences that contain stereotypical characteristics with one category (e.g., black) versus another (e.g., white). We hypothesized that this approach might be more sensitive to the choice of contextual words than to the meaning of the sentence. Therefore, we computed the same measure after paraphrasing the sentences with different words but with the same meaning. Our results demonstrate that log-likelihood scoring can be more sensitive to the utterance of specific words than to the meaning behind a given sentence. Our paper reveals a shortcoming of current log-likelihood-based bias measures for MLMs and calls for new ways to improve their robustness.
null
null
10.18653/v1/2022.trustnlp-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,416
inproceedings
balkir-etal-2022-challenges
Challenges in Applying Explainability Methods to Improve the Fairness of {NLP} Models
Verma, Apurv and Pruksachatkun, Yada and Chang, Kai-Wei and Galstyan, Aram and Dhamala, Jwala and Cao, Yang Trista
jul
2022
Seattle, U.S.A.
Association for Computational Linguistics
https://aclanthology.org/2022.trustnlp-1.8/
Balkir, Esma and Kiritchenko, Svetlana and Nejadgholi, Isar and Fraser, Kathleen
Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022)
80--92
Motivations for methods in explainable artificial intelligence (XAI) often include detecting, quantifying and mitigating bias, and contributing to making machine learning models fairer. However, exactly how an XAI method can help in combating biases is often left unspecified. In this paper, we briefly review trends in explainability and fairness in NLP research, identify the current practices in which explainability methods are applied to detect and mitigate bias, and investigate the barriers preventing XAI methods from being used more widely in tackling fairness issues.
null
null
10.18653/v1/2022.trustnlp-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,417
inproceedings
patil-etal-2022-l3cube
{L}3{C}ube-{M}aha{H}ate: A Tweet-based {M}arathi Hate Speech Detection Dataset and {BERT} Models
Kumar, Ritesh and Ojha, Atul Kr. and Zampieri, Marcos and Malmasi, Shervin and Kadar, Daniel
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.trac-1.1/
Patil, Hrushikesh and Velankar, Abhishek and Joshi, Raviraj
Proceedings of the Third Workshop on Threat, Aggression and Cyberbullying (TRAC 2022)
1--9
Social media platforms are used by a large number of people prominently to express their thoughts and opinions. However, these platforms have contributed to a substantial amount of hateful and abusive content as well. Therefore, it is important to curb the spread of hate speech on these platforms. In India, Marathi is one of the most popular languages used by a wide audience. In this work, we present L3Cube-MahaHate, the first major Hate Speech Dataset in Marathi. The dataset is curated from Twitter, annotated manually. Our dataset consists of over 00 distinct tweets labeled into four major classes i.e. hate, offensive, profane, and not. We present the approaches used for collecting and annotating the data and the challenges faced during the process. Finally, we present baseline classification results using deep learning models based on CNN, LSTM, and Transformers. We explore mono-lingual and multi-lingual variants of BERT like MahaBERT, IndicBERT, mBERT, and xlm-RoBERTa and show that mono-lingual models perform better than their multi-lingual counterparts. The MahaBERT model provides the best results on the L3Cube-MahaHate Corpus.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,419
inproceedings
das-etal-2022-one
Which One Is More Toxic? Findings from Jigsaw Rate Severity of Toxic Comments
Kumar, Ritesh and Ojha, Atul Kr. and Zampieri, Marcos and Malmasi, Shervin and Kadar, Daniel
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.trac-1.2/
Das, Millon and Saha, Punyajoy and Das, Mithun
Proceedings of the Third Workshop on Threat, Aggression and Cyberbullying (TRAC 2022)
10--15
The proliferation of online hate speech has necessitated the creation of algorithms which can detect toxicity. Most of the past research focuses on this detection as a classification task, but assigning an absolute toxicity label is often tricky. Hence, few of the past works transform the same task into a regression. This paper shows the comparative evaluation of different transformers and traditional machine learning models on a recently released toxicity severity measurement dataset by Jigsaw. We further demonstrate the issues with the model predictions using explainability analysis.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,420
inproceedings
verma-etal-2022-attention
Can Attention-based Transformers Explain or Interpret Cyberbullying Detection?
Kumar, Ritesh and Ojha, Atul Kr. and Zampieri, Marcos and Malmasi, Shervin and Kadar, Daniel
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.trac-1.3/
Verma, Kanishk and Milosevic, Tijana and Davis, Brian
Proceedings of the Third Workshop on Threat, Aggression and Cyberbullying (TRAC 2022)
16--29
Automated textual cyberbullying detection is known to be a challenging task. It is sometimes expected that messages associated with bullying will either be a) abusive, b) targeted at a specific individual or group, or c) have a negative sentiment. Transfer learning by fine-tuning pre-trained attention-based transformer language models (LMs) has achieved near state-of-the-art (SOA) precision in identifying textual fragments as being bullying-related or not. This study looks closely at two SOA LMs, BERT and HateBERT, fine-tuned on real-life cyberbullying datasets from multiple social networking platforms. We intend to determine whether these finely calibrated pre-trained LMs learn textual cyberbullying attributes or syntactical features in the text. The results of our comprehensive experiments show that despite the fact that attention weights are drawn more strongly to syntactical features of the text at every layer, attention weights cannot completely account for the decision-making of such attention-based transformers.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,421
inproceedings
kumari-etal-2022-bias
Bias, Threat and Aggression Identification Using Machine Learning Techniques on Multilingual Comments
Kumar, Ritesh and Ojha, Atul Kr. and Zampieri, Marcos and Malmasi, Shervin and Kadar, Daniel
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.trac-1.4/
Kumari, Kirti and Srivastav, Shaury and Suman, Rajiv Ranjan
Proceedings of the Third Workshop on Threat, Aggression and Cyberbullying (TRAC 2022)
30--36
In this paper, we present our team {\textquotedblleft}IIITRanchi{\textquotedblright} for the Trolling, Aggression and Cyberbullying (TRAC-3) 2022 shared tasks. Aggression in its different forms has grown tremendously on social media and other platforms on the Internet. In this work, we examined different aspects of aggression, aggression intensity, and bias of different forms, their usage online, and their identification using different Machine Learning techniques. We classified each sample on seven different tasks, namely aggression level, aggression intensity, discursive role, gender bias, religious bias, caste/class bias and ethnicity/racial bias, as specified in the shared tasks. We tried several machine learning classifiers and achieved good results. Overall, our team {\textquotedblleft}IIITRanchi{\textquotedblright} ranked first in this shared task competition.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,422
inproceedings
markov-daelemans-2022-role
The Role of Context in Detecting the Target of Hate Speech
Kumar, Ritesh and Ojha, Atul Kr. and Zampieri, Marcos and Malmasi, Shervin and Kadar, Daniel
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.trac-1.5/
Markov, Ilia and Daelemans, Walter
Proceedings of the Third Workshop on Threat, Aggression and Cyberbullying (TRAC 2022)
37--42
Online hate speech detection is an inherently challenging task that has recently received much attention from the natural language processing community. Despite a substantial increase in performance, considerable challenges remain and include encoding contextual information into automated hate speech detection systems. In this paper, we focus on detecting the target of hate speech in Dutch social media: whether a hateful Facebook comment is directed against migrants or not (i.e., against someone else). We manually annotate the relevant conversational context and investigate the effect of different aspects of context on performance when adding it to a Dutch transformer-based pre-trained language model, BERTje. We show that performance of the model can be significantly improved by integrating relevant contextual information.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,423
inproceedings
barbarestani-etal-2022-annotating
Annotating Targets of Toxic Language at the Span Level
Kumar, Ritesh and Ojha, Atul Kr. and Zampieri, Marcos and Malmasi, Shervin and Kadar, Daniel
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.trac-1.6/
Barbarestani, Baran and Maks, Isa and Vossen, Piek
Proceedings of the Third Workshop on Threat, Aggression and Cyberbullying (TRAC 2022)
43--51
In this paper, we discuss an interpretable framework to integrate toxic language annotations. Most data sets address only one aspect of the complex relationship in toxic communication and are inconsistent with each other. Enriching annotations with more details and information is however of great importance in order to develop high-performing and comprehensive explainable language models. Such systems should recognize and interpret both expressions that are toxic as well as expressions that make reference to specific targets to combat toxic language. We therefore created a crowd-annotation task to mark the spans of words that refer to target communities as an extension of the HateXplain data set. We present a quantitative and qualitative analysis of the annotations. We also fine-tuned RoBERTa-base on our data and experimented with different data thresholds to measure their effect on the classification. The F1-score of our best model on the test set is 79{\%}. The annotations are freely available and can be combined with the existing HateXplain annotation to build richer and more complete models.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,424
inproceedings
kirk-etal-2022-data
Is More Data Better? Re-thinking the Importance of Efficiency in Abusive Language Detection with Transformers-Based Active Learning
Kumar, Ritesh and Ojha, Atul Kr. and Zampieri, Marcos and Malmasi, Shervin and Kadar, Daniel
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.trac-1.7/
Kirk, Hannah and Vidgen, Bertie and Hale, Scott A.
Proceedings of the Third Workshop on Threat, Aggression and Cyberbullying (TRAC 2022)
52--61
Annotating abusive language is expensive, logistically complex and creates a risk of psychological harm. However, most machine learning research has prioritized maximizing effectiveness (i.e., F1 or accuracy score) rather than data efficiency (i.e., minimizing the amount of data that is annotated). In this paper, we use simulated experiments over two datasets at varying percentages of abuse to demonstrate that transformers-based active learning is a promising approach to substantially raise efficiency whilst still maintaining high effectiveness, especially when abusive content is a smaller percentage of the dataset. This approach requires a fraction of labeled data to reach performance equivalent to training over the full dataset.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,425
inproceedings
barrett-etal-2022-lightweight
A Lightweight Yet Robust Approach to Textual Anomaly Detection
Kumar, Ritesh and Ojha, Atul Kr. and Zampieri, Marcos and Malmasi, Shervin and Kadar, Daniel
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.trac-1.8/
Barrett, Leslie and Kingan, Robert and Ortan, Alexandra and Seshadri, Madhavan
Proceedings of the Third Workshop on Threat, Aggression and Cyberbullying (TRAC 2022)
62--67
Highly imbalanced textual datasets continue to pose a challenge for supervised learning models. However, viewing such imbalanced text data as an anomaly detection (AD) problem has advantages for certain tasks such as detecting hate speech, or inappropriate and/or offensive language in large social media feeds. There the unwanted content tends to be both rare and non-uniform with respect to its thematic character, and better fits the definition of an anomaly than a class. Several recent approaches to textual AD use transformer models, achieving good results but with trade-offs in pre-training and inflexibility with respect to new domains. In this paper we compare two linear models within the NMF family, which also have a recent history in textual AD. We introduce a new approach based on an alternative regularization of the NMF objective. Our results surpass other linear AD models and are on par with deep models, performing comparably well even in very small outlier concentrations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,426
inproceedings
litvak-etal-2022-detection
Detection of Negative Campaign in Israeli Municipal Elections
Kumar, Ritesh and Ojha, Atul Kr. and Zampieri, Marcos and Malmasi, Shervin and Kadar, Daniel
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.trac-1.9/
Litvak, Marina and Vanetik, Natalia and Talker, Sagiv and Machlouf, Or
Proceedings of the Third Workshop on Threat, Aggression and Cyberbullying (TRAC 2022)
68--74
Political competitions are complex settings where candidates use campaigns to promote their chances to be elected. One choice focuses on conducting a positive campaign that highlights the candidate's achievements, leadership skills, and future programs. The alternative is to focus on a negative campaign that emphasizes the negative aspects of the competing person and is aimed at offending opponents or the opponent's supporters. In this proposal, we concentrate on negative campaigns in Israeli elections. This work introduces an empirical case study on automatic detection of negative campaigns, using machine learning and natural language processing approaches, applied to the Hebrew-language data from Israeli municipal elections. Our contribution is multi-fold: (1) We provide TONIC{---}daTaset fOr Negative polItical Campaign in Hebrew{---}which consists of annotated posts from Facebook related to Israeli municipal elections; (2) We introduce results of a case study that explored several research questions. \textbf{RQ1}: Which classifier and representation perform best for this task? We employed several traditional classifiers which are known for their good performance in IR tasks and two pre-trained models based on BERT architecture; several standard representations were employed with traditional ML models. \textbf{RQ2}: Does a negative campaign always contain offensive language? Can a model, trained to detect offensive language, also detect negative campaigns? We are trying to answer this question by reporting results for the transfer learning from a dataset annotated with offensive language to our dataset.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,427
inproceedings
goldzycher-schneider-2022-hypothesis
Hypothesis Engineering for Zero-Shot Hate Speech Detection
Kumar, Ritesh and Ojha, Atul Kr. and Zampieri, Marcos and Malmasi, Shervin and Kadar, Daniel
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.trac-1.10/
Goldzycher, Janis and Schneider, Gerold
Proceedings of the Third Workshop on Threat, Aggression and Cyberbullying (TRAC 2022)
75--90
Standard approaches to hate speech detection rely on sufficient available hate speech annotations. Extending previous work that repurposes natural language inference (NLI) models for zero-shot text classification, we propose a simple approach that combines multiple hypotheses to improve English NLI-based zero-shot hate speech detection. We first conduct an error analysis for vanilla NLI-based zero-shot hate speech detection and then develop four strategies based on this analysis. The strategies use multiple hypotheses to predict various aspects of an input text and combine these predictions into a final verdict. We find that the zero-shot baseline used for the initial error analysis already outperforms commercial systems and fine-tuned BERT-based hate speech detection models on HateCheck. The combination of the proposed strategies further increases the zero-shot accuracy of 79.4{\%} on HateCheck by 7.9 percentage points (pp), and the accuracy of 69.6{\%} on ETHOS by 10.0pp.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,428
inproceedings
montero-etal-2022-multilevel
Multilevel Hypernode Graphs for Effective and Efficient Entity Linking
Ustalov, Dmitry and Gao, Yanjun and Panchenko, Alexander and Valentino, Marco and Thayaparan, Mokanarangan and Nguyen, Thien Huu and Penn, Gerald and Ramesh, Arti and Jana, Abhik
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.textgraphs-1.1/
Montero, David and Mart{\'i}nez, Javier and Yebes, Javier
Proceedings of TextGraphs-16: Graph-based Methods for Natural Language Processing
1--10
Information extraction on documents still remains a challenge, especially when dealing with unstructured documents with complex and variable layouts. Graph Neural Networks seem to be a promising approach to overcome these difficulties due to their flexible and sparse nature, but they have not been exploited yet. In this work, we present a multi-level graph-based model that performs entity building and linking on unstructured documents, purely based on GNNs, and extremely light (0.3 million parameters). We also propose a novel strategy for an optimal propagation of the information between the graph levels based on hypernodes. The conducted experiments on public and private datasets demonstrate that our model is suitable for solving the tasks, and that the proposed propagation strategy is optimal and outperforms other approaches.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,430
inproceedings
nikishina-etal-2022-cross
Cross-Modal Contextualized Hidden State Projection Method for Expanding of Taxonomic Graphs
Ustalov, Dmitry and Gao, Yanjun and Panchenko, Alexander and Valentino, Marco and Thayaparan, Mokanarangan and Nguyen, Thien Huu and Penn, Gerald and Ramesh, Arti and Jana, Abhik
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.textgraphs-1.2/
Nikishina, Irina and Vakhitova, Alsu and Tutubalina, Elena and Panchenko, Alexander
Proceedings of TextGraphs-16: Graph-based Methods for Natural Language Processing
11--24
Taxonomy is a graph of terms organized hierarchically using is-a (hypernymy) relations. We suggest a novel candidate-free formulation of the taxonomy enrichment task. To solve the task, we leverage lexical knowledge from pre-trained models to predict new words missing from the taxonomic resource. We propose a method that combines graph- and text-based contextualized representations from transformer networks to predict new entries to the taxonomy. We evaluated the method suggested for this task against text-only baselines based on BERT and fastText representations. The results demonstrate that incorporating graph embeddings is beneficial in the task of hyponym prediction using contextualized models. We hope the new challenging task will foster further research in automatic text graph construction methods.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,431
inproceedings
feng-etal-2022-sharing
Sharing Parameter by Conjugation for Knowledge Graph Embeddings in Complex Space
Ustalov, Dmitry and Gao, Yanjun and Panchenko, Alexander and Valentino, Marco and Thayaparan, Mokanarangan and Nguyen, Thien Huu and Penn, Gerald and Ramesh, Arti and Jana, Abhik
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.textgraphs-1.3/
Feng, Xincan and Qu, Zhi and Cheng, Yuchang and Watanabe, Taro and Yugami, Nobuhiro
Proceedings of TextGraphs-16: Graph-based Methods for Natural Language Processing
25--34
A Knowledge Graph (KG) is the directed graphical representation of entities and relations in the real world. KG can be applied in diverse Natural Language Processing (NLP) tasks where knowledge is required. The need to scale up and complete KG automatically yields Knowledge Graph Embedding (KGE), a shallow machine learning model that is suffering from memory and training time consumption issues. To mitigate the computational load, we propose a parameter-sharing method, i.e., using conjugate parameters for complex numbers employed in KGE models. Our method improves memory efficiency by 2x in relation embedding while achieving comparable performance to the state-of-the-art non-conjugate models, with faster, or at least comparable, training time. We demonstrated the generalizability of our method on two best-performing KGE models $5^{\bigstar}\mathrm{E}$ (CITATION) and $\mathrm{ComplEx}$ (CITATION) on five benchmark datasets.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,432
inproceedings
heja-ligeti-nagy-2022-clique
A Clique-based Graphical Approach to Detect Interpretable Adjectival Senses in {H}ungarian
Ustalov, Dmitry and Gao, Yanjun and Panchenko, Alexander and Valentino, Marco and Thayaparan, Mokanarangan and Nguyen, Thien Huu and Penn, Gerald and Ramesh, Arti and Jana, Abhik
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.textgraphs-1.4/
H{\'e}ja, Enik{\H{o}} and Ligeti-Nagy, No{\'e}mi
Proceedings of TextGraphs-16: Graph-based Methods for Natural Language Processing
35--43
The present paper introduces ongoing research which aims to detect interpretable adjectival senses from monolingual corpora using an unsupervised WSI approach. We expect the findings of our investigation to contribute to the work of lexicographers and linguists, and also to facilitate the creation of benchmarks with semantic information for the NLP community. To this end, we set up four criteria to distinguish between senses. We experiment with a graphical approach to model our criteria and then perform a detailed, linguistically motivated manual evaluation of the results.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,433
inproceedings
gokhan-etal-2022-gusum
{GUSUM}: Graph-based Unsupervised Summarization Using Sentence Features Scoring and Sentence-{BERT}
Ustalov, Dmitry and Gao, Yanjun and Panchenko, Alexander and Valentino, Marco and Thayaparan, Mokanarangan and Nguyen, Thien Huu and Penn, Gerald and Ramesh, Arti and Jana, Abhik
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.textgraphs-1.5/
Gokhan, Tuba and Smith, Phillip and Lee, Mark
Proceedings of TextGraphs-16: Graph-based Methods for Natural Language Processing
44--53
Unsupervised extractive document summarization aims to extract salient sentences from a document without requiring a labelled corpus. In existing graph-based methods, vertex and edge weights are usually created by calculating sentence similarities. In this paper, we develop a Graph-Based Unsupervised Summarization (GUSUM) method for extractive text summarization based on the principle of including the most important sentences while excluding sentences with similar meanings in the summary. We modify traditional graph ranking algorithms with recent sentence embedding models and sentence features and change how sentence centrality is computed. We first define the sentence feature scores represented at the vertices, indicating the importance of each sentence in the document. After this stage, we use Sentence-BERT to obtain sentence embeddings to better capture sentence meaning. In this way, we define the edges of a graph where semantic similarities are represented. Next, we create an undirected graph that includes sentence significance and similarities between sentences. In the last stage, we determine the most important sentences in the document with the ranking method we propose on the resulting graph. Experiments on the CNN/Daily Mail, New York Times, arXiv, and PubMed datasets show our approach achieves high performance on unsupervised graph-based summarization when evaluated both automatically and by humans.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,434
inproceedings
wold-2022-effectiveness
The Effectiveness of Masked Language Modeling and Adapters for Factual Knowledge Injection
Ustalov, Dmitry and Gao, Yanjun and Panchenko, Alexander and Valentino, Marco and Thayaparan, Mokanarangan and Nguyen, Thien Huu and Penn, Gerald and Ramesh, Arti and Jana, Abhik
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.textgraphs-1.6/
Wold, Sondre
Proceedings of TextGraphs-16: Graph-based Methods for Natural Language Processing
54--59
This paper studies the problem of injecting factual knowledge into large pre-trained language models. We train adapter modules on parts of the ConceptNet knowledge graph using the masked language modeling objective and evaluate the success of the method by a series of probing experiments on the LAMA probe. Mean P@K curves for different configurations indicate that the technique is effective, increasing the performance on sub-sets of the LAMA probe for large values of k by adding as little as 2.1{\%} additional parameters to the original models.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,435
inproceedings
dong-etal-2022-text
Text-Aware Graph Embeddings for Donation Behavior Prediction
Ustalov, Dmitry and Gao, Yanjun and Panchenko, Alexander and Valentino, Marco and Thayaparan, Mokanarangan and Nguyen, Thien Huu and Penn, Gerald and Ramesh, Arti and Jana, Abhik
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.textgraphs-1.7/
Dong, MeiXing and Xu, Xueming and Mihalcea, Rada
Proceedings of TextGraphs-16: Graph-based Methods for Natural Language Processing
60--69
Predicting user behavior is essential for a large number of applications including recommender and dialog systems, and more broadly in domains such as healthcare, education, and economics. In this paper, we show that we can effectively predict donation behavior by using text-aware graph models, building upon graphs that connect user behaviors and their interests. Using a university donation dataset, we show that the graph representation significantly improves over learning from textual representations. Moreover, we show how incorporating implicit information inferred from text associated with the graph entities brings additional improvements. Our results demonstrate the role played by text-aware graph representations in predicting donation behavior.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,436
inproceedings
sinha-etal-2022-word
Word Sense Disambiguation of {F}rench Lexicographical Examples Using Lexical Networks
Ustalov, Dmitry and Gao, Yanjun and Panchenko, Alexander and Valentino, Marco and Thayaparan, Mokanarangan and Nguyen, Thien Huu and Penn, Gerald and Ramesh, Arti and Jana, Abhik
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.textgraphs-1.8/
Sinha, Aman and Ollinger, Sandrine and Constant, Mathieu
Proceedings of TextGraphs-16: Graph-based Methods for Natural Language Processing
70--76
This paper focuses on the task of word sense disambiguation (WSD) on lexicographic examples relying on the French Lexical Network (fr-LN). For this purpose, we exploit the lexical and relational properties of the network, that we integrated in a feedforward neural WSD model on top of pretrained French BERT embeddings. We provide a comparative study with various models and further show the impact of our approach regarding polysemic units.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,437
inproceedings
aksenova-etal-2022-rudsi
{R}u{DSI}: Graph-based Word Sense Induction Dataset for {R}ussian
Ustalov, Dmitry and Gao, Yanjun and Panchenko, Alexander and Valentino, Marco and Thayaparan, Mokanarangan and Nguyen, Thien Huu and Penn, Gerald and Ramesh, Arti and Jana, Abhik
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.textgraphs-1.9/
Aksenova, Anna and Gavrishina, Ekaterina and Rykov, Elisei and Kutuzov, Andrey
Proceedings of TextGraphs-16: Graph-based Methods for Natural Language Processing
77--88
We present RuDSI, a new benchmark for word sense induction (WSI) in Russian. The dataset was created using manual annotation and semi-automatic clustering of Word Usage Graphs (WUGs). RuDSI is completely data-driven (based on texts from the Russian National Corpus), with no external word senses imposed on annotators. We present and analyze RuDSI, describe our annotation workflow, show how graph clustering parameters affect the dataset, report the performance that several baseline WSI methods obtain on RuDSI, and discuss possibilities for improving these scores.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,438
inproceedings
sakketou-etal-2022-temporal
Temporal Graph Analysis of Misinformation Spreaders in Social Media
Ustalov, Dmitry and Gao, Yanjun and Panchenko, Alexander and Valentino, Marco and Thayaparan, Mokanarangan and Nguyen, Thien Huu and Penn, Gerald and Ramesh, Arti and Jana, Abhik
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.textgraphs-1.10/
Plepi, Joan and Sakketou, Flora and Geiss, Henri-Jacques and Flek, Lucie
Proceedings of TextGraphs-16: Graph-based Methods for Natural Language Processing
89--104
Proactively identifying misinformation spreaders is an important step towards mitigating the impact of fake news on our society. Although the news domain is subject to rapid changes over time, the temporal dynamics of the spreaders' language and network have not been explored yet. In this paper, we analyze the users' time-evolving semantic similarities and social interactions and show that such patterns can, on their own, indicate misinformation spreading. Building on these observations, we propose a dynamic graph-based framework that leverages the dynamic nature of the users' network for detecting fake news spreaders. We validate our design choice through qualitative analysis and demonstrate the contributions of our model's components through a series of exploratory and ablative experiments on two datasets.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,439
inproceedings
valentino-etal-2022-textgraphs
{T}ext{G}raphs 2022 Shared Task on Natural Language Premise Selection
Ustalov, Dmitry and Gao, Yanjun and Panchenko, Alexander and Valentino, Marco and Thayaparan, Mokanarangan and Nguyen, Thien Huu and Penn, Gerald and Ramesh, Arti and Jana, Abhik
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.textgraphs-1.11/
Valentino, Marco and Ferreira, Deborah and Thayaparan, Mokanarangan and Freitas, Andr{\'e} and Ustalov, Dmitry
Proceedings of TextGraphs-16: Graph-based Methods for Natural Language Processing
105--113
The Shared Task on Natural Language Premise Selection (NLPS) asks participants to retrieve the set of premises that are most likely to be useful for proving a given mathematical statement from a supporting knowledge base. While previous editions of the TextGraphs shared tasks series targeted multi-hop inference for explanation regeneration in the context of science questions (Thayaparan et al., 2021; Jansen and Ustalov, 2020, 2019), NLPS aims to assess the ability of state-of-the-art approaches to operate on a mixture of natural and mathematical language and model complex multi-hop reasoning dependencies between statements. To this end, this edition of the shared task makes use of a large set of approximately 21k mathematical statements extracted from the PS-ProofWiki dataset (Ferreira and Freitas, 2020a). In this summary paper, we present the results of the 1st edition of the NLPS task, providing a description of the evaluation data, and the participating systems. Additionally, we perform a detailed analysis of the results, evaluating various aspects involved in mathematical language processing and multi-hop inference. The best-performing system achieved a MAP of 15.39, improving the performance of a TF-IDF baseline by approximately 3.0 MAP.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,440
inproceedings
tran-etal-2022-ijs
{IJS} at {T}ext{G}raphs-16 Natural Language Premise Selection Task: Will Contextual Information Improve Natural Language Premise Selection?
Ustalov, Dmitry and Gao, Yanjun and Panchenko, Alexander and Valentino, Marco and Thayaparan, Mokanarangan and Nguyen, Thien Huu and Penn, Gerald and Ramesh, Arti and Jana, Abhik
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.textgraphs-1.12/
Tran, Thi Hong Hanh and Martinc, Matej and Doucet, Antoine and Pollak, Senja
Proceedings of TextGraphs-16: Graph-based Methods for Natural Language Processing
114--118
Natural Language Premise Selection (NLPS) is a mathematical Natural Language Processing (NLP) task that retrieves a set of applicable relevant premises to support the end-user in finding the proof for a particular statement. In this research, we evaluate the impact of Transformer-based contextual information and different fundamental similarity scores on NLPS. The results demonstrate that the contextual representation is better at capturing meaningful information than the statistical approach (e.g., TF-IDF), despite not being pretrained on a mathematical background, with a boost of around 3.00{\%} MAP@500.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,441
inproceedings
trust-etal-2022-snlp
{SNLP} at {T}ext{G}raphs 2022 Shared Task: Unsupervised Natural Language Premise Selection in Mathematical Texts Using Sentence-{MPN}et
Ustalov, Dmitry and Gao, Yanjun and Panchenko, Alexander and Valentino, Marco and Thayaparan, Mokanarangan and Nguyen, Thien Huu and Penn, Gerald and Ramesh, Arti and Jana, Abhik
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.textgraphs-1.13/
Trust, Paul and Kadusabe, Provia and Younis, Haseeb and Minghim, Rosane and Milios, Evangelos and Zahran, Ahmed
Proceedings of TextGraphs-16: Graph-based Methods for Natural Language Processing
119--123
This paper describes our system for the submission to the TextGraphs 2022 shared task at COLING 2022: Natural Language Premise Selection (NLPS) from mathematical texts. The task of NLPS is about selecting mathematical statements, called premises, in a knowledge base written in natural language and mathematical formulae that are most likely to be used to prove a particular mathematical statement. We formulated this task as an unsupervised semantic similarity task by first obtaining contextualized embeddings of both the premises and mathematical proofs using sentence transformers. We then obtained the cosine similarity between the embeddings of premises and proofs and selected the premises with the highest cosine scores as the most probable. Our system improves over the baseline system that uses bag-of-words models based on term frequency-inverse document frequency in terms of mean average precision (MAP) by about 23.5{\%} (0.1516 versus 0.1228).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,442
inproceedings
dastgheib-asgari-2022-keyword
Keyword-based Natural Language Premise Selection for an Automatic Mathematical Statement Proving
Ustalov, Dmitry and Gao, Yanjun and Panchenko, Alexander and Valentino, Marco and Thayaparan, Mokanarangan and Nguyen, Thien Huu and Penn, Gerald and Ramesh, Arti and Jana, Abhik
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.textgraphs-1.14/
Dastgheib, Doratossadat and Asgari, Ehsaneddin
Proceedings of TextGraphs-16: Graph-based Methods for Natural Language Processing
124--126
Extraction of supportive premises for a mathematical problem can contribute substantially to improving automatic reasoning systems. One bottleneck in automated theorem proving is the lack of a proper semantic information retrieval system for mathematical texts. In this paper, we show the effect of keyword extraction in the natural language premise selection (NLPS) shared task proposed at TextGraphs-16, which seeks to select the most relevant sentences supporting a given mathematical statement.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,443
inproceedings
kovriguina-etal-2022-textgraphs
{T}ext{G}raphs-16 Natural Language Premise Selection Task: Zero-Shot Premise Selection with Prompting Generative Language Models
Ustalov, Dmitry and Gao, Yanjun and Panchenko, Alexander and Valentino, Marco and Thayaparan, Mokanarangan and Nguyen, Thien Huu and Penn, Gerald and Ramesh, Arti and Jana, Abhik
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.textgraphs-1.15/
Kovriguina, Liubov and Teucher, Roman and Wardenga, Robert
Proceedings of TextGraphs-16: Graph-based Methods for Natural Language Processing
127--132
Automated theorem proving can benefit greatly from methods employed in natural language processing, knowledge graphs, and information retrieval: this non-trivial task combines formal language understanding, reasoning, and similarity search. We tackle this task by enhancing semantic similarity ranking with prompt engineering, which has become a new paradigm in natural language understanding. None of our approaches requires additional training. Despite encouraging results reported by prompt engineering approaches for a range of NLP tasks, for the premise selection task vanilla re-ranking by prompting GPT-3 doesn't outperform semantic similarity ranking with SBERT, but merging the two rankings shows better results.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,444
inproceedings
lamberti-arraes-2022-lexicon
Lexicon-driven approach for Terminology: specialized resources on the environment in {B}razilian {P}ortuguese
Costa, Rute and Carvalho, Sara and Ani{\'c}, Ana Ostro{\v{s}}ki and Khan, Anas Fahad
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.term-1.1/
Lamberti Arraes, Fl{\'a}via
Proceedings of the Workshop on Terminology in the 21st century: many faces, many places
1--7
This paper presents terminological research carried out to account for terms of the environment in Brazilian Portuguese based on a lexico-semantic perspective for Terminology (L'Homme, 2015, 2016, 2017, 2020; L'Homme et al., 2014, 2020). This work takes place in the context of a collaboration for the development of DiCoEnviro (Dictionnaire Fondamental de l'Environnement {--} Fundamental Dictionary of the Environment), a multilingual terminological resource developed by the Observatoire de Linguistique Sens-Texte at the University of Montreal, Canada. By following a methodology especially devised for terminological work based on a lexicon-driven approach (L'Homme et al., 2020), the terminological analysis reveals how the linguistic behavior of terms may be unveiled and how this is effective for identifying the meaning of a term and supporting meaning distinctions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,446
inproceedings
silecchia-etal-2022-knowledge
Knowledge Representation and Language Simplification of Human Rights
Costa, Rute and Carvalho, Sara and Ani{\'c}, Ana Ostro{\v{s}}ki and Khan, Anas Fahad
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.term-1.2/
Silecchia, Sara and Vezzani, Federica and Di Nunzio, Giorgio Maria
Proceedings of the Workshop on Terminology in the 21st century: many faces, many places
8--12
In this paper, we propose the description of a very recent interdisciplinary project aiming at analysing both the conceptual and linguistic dimensions of humanitarian rights terminology. This analysis will result in the form of a new knowledge-based multilingual terminological resource which is designed in order to meet the FAIR principles for Open Science and will serve, in the future, as a prototype for the development of a new software for the simplified rewriting of international legal texts relating to human rights. Given the early stage of the project, we will focus on the description of its rationale, the planned workflow, and the theoretical approach which will be adopted to achieve the main goal of this ambitious research project.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,447
inproceedings
skeppstedt-etal-2022-converting
Converting from the {N}ordic Terminological Record Format to the {TBX} Format
Costa, Rute and Carvalho, Sara and Ani{\'c}, Ana Ostro{\v{s}}ki and Khan, Anas Fahad
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.term-1.3/
Skeppstedt, Maria and Mattson, Marie and Ahltorp, Magnus and Domeij, Rickard
Proceedings of the Workshop on Terminology in the 21st century: many faces, many places
13--18
Rikstermbanken (Sweden's National Term Bank), which was launched in 2009, uses the Nordic Terminological Record Format (NTRF) for organising its terminological data. Since then, new terminology formats have been established as standards, e.g., the Termbase eXchange format (TBX). We here describe work carried out by the Institute for Language and Folklore within the Federated eTranslation TermBank Network Action. This network develops a technical infrastructure for facilitating sharing of terminology resources throughout Europe. To be able to share some of the term collections of Rikstermbanken within this network and export them to Eurotermbank, we have implemented a conversion from the Nordic Terminological Record Format, as used in Rikstermbanken, to the TBX format.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,448
inproceedings
banerjee-etal-2022-dataset
A Dataset for Term Extraction in {H}indi
Costa, Rute and Carvalho, Sara and Ani{\'c}, Ana Ostro{\v{s}}ki and Khan, Anas Fahad
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.term-1.4/
Banerjee, Shubhanker and Chakravarthi, Bharathi Raja and McCrae, John Philip
Proceedings of the Workshop on Terminology in the 21st century: many faces, many places
19--25
Automatic Term Extraction (ATE) is one of the core problems in natural language processing and forms a key component of text mining pipelines of domain specific corpora. Complex low-level tasks such as machine translation and summarization for domain specific texts necessitate the use of term extraction systems. However, the development of these systems requires the use of large annotated datasets and thus there has been little progress made on this front for under-resourced languages. As a part of ongoing research, we present a dataset for term extraction from Hindi texts in this paper. To the best of our knowledge, this is the first dataset that provides term annotated documents for Hindi. Furthermore, we have evaluated this dataset on statistical term extraction methods and the results obtained indicate the problems associated with development of term extractors for under-resourced languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,449
inproceedings
nazar-lindemann-2022-terminology
Terminology extraction using co-occurrence patterns as predictors of semantic relevance
Costa, Rute and Carvalho, Sara and Ani{\'c}, Ana Ostro{\v{s}}ki and Khan, Anas Fahad
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.term-1.5/
Nazar, Rogelio and Lindemann, David
Proceedings of the Workshop on Terminology in the 21st century: many faces, many places
26--29
We propose a method for automatic term extraction based on a statistical measure that ranks term candidates according to their semantic relevance to a specialised domain. As a measure of relevance we use term co-occurrence, defined as the repeated instantiation of two terms in the same sentences, in either order and at variable distances. In this way, term candidates are ranked higher if they show a tendency to co-occur with a selected group of other units, as opposed to those showing more uniform distributions. No external resources are needed for the application of the method, but performance improves when a pre-existing term list is provided. We present results of the application of this method to a Spanish-English Linguistics corpus, and the evaluation compares favourably with a standard method based on reference corpora.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,450
inproceedings
jerdhaf-etal-2022-evaluating
Evaluating Pre-Trained Language Models for Focused Terminology Extraction from {S}wedish Medical Records
Costa, Rute and Carvalho, Sara and Ani{\'c}, Ana Ostro{\v{s}}ki and Khan, Anas Fahad
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.term-1.6/
Jerdhaf, Oskar and Santini, Marina and Lundberg, Peter and Bjerner, Tomas and Al-Abasse, Yosef and Jonsson, Arne and Vakili, Thomas
Proceedings of the Workshop on Terminology in the 21st century: many faces, many places
30--32
In the experiments briefly presented in this abstract, we compare the performance of a generalist Swedish pre-trained language model with a domain-specific Swedish pre-trained model on the downstream task of focussed terminology extraction of implant terms, which are terms that indicate the presence of implants in the body of patients. The fine-tuning is identical for both models. For the search strategy we rely on a KD-Tree that we feed with two different lists of term seeds, one with noise and one without noise. Results show that the use of a domain-specific pre-trained language model has a positive impact on focussed terminology extraction only when using term seeds without noise.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,451
inproceedings
rigouts-terryn-etal-2022-terminer
{D}-Terminer: Online Demo for Monolingual and Bilingual Automatic Term Extraction
Costa, Rute and Carvalho, Sara and Ani{\'c}, Ana Ostro{\v{s}}ki and Khan, Anas Fahad
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.term-1.7/
Rigouts Terryn, Ayla and Hoste, Veronique and Lefever, Els
Proceedings of the Workshop on Terminology in the 21st century: many faces, many places
33--40
This contribution presents D-Terminer: an open access, online demo for monolingual and multilingual automatic term extraction from parallel corpora. The monolingual term extraction is based on a recurrent neural network, with a supervised methodology that relies on pretrained embeddings. Candidate terms can be tagged in their original context and there is no need for a large corpus, as the methodology will work even for single sentences. With the bilingual term extraction from parallel corpora, potentially equivalent candidate term pairs are extracted from translation memories and manual annotation of the results shows that good equivalents are found for most candidate terms. Accompanying the release of the demo is an updated version of the ACTER Annotated Corpora for Term Extraction Research (version 1.5).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,452
inproceedings
gaspari-etal-2022-introducing
Introducing the Digital Language Equality Metric: Technological Factors
Aldabe, Itziar and Altuna, Bego{\~n}a and Farwell, Aritz and Rigau, German
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.tdle-1.1/
Gaspari, Federico and Gallagher, Owen and Rehm, Georg and Giagkou, Maria and Piperidis, Stelios and Dunne, Jane and Way, Andy
Proceedings of the Workshop Towards Digital Language Equality within the 13th Language Resources and Evaluation Conference
1--12
This paper introduces the concept of Digital Language Equality (DLE) developed by the EU-funded European Language Equality (ELE) project, and describes the associated DLE Metric with a focus on its technological factors (TFs), which are complemented by situational contextual factors. This work aims at objectively describing the level of technological support of all European languages and lays the foundation to implement a large-scale EU-wide programme to ensure that these languages can continue to exist and prosper in the digital age, to serve the present and future needs of their speakers. The paper situates this ongoing work with a strong European focus in the broader context of related efforts, and explains how the DLE Metric can help track the progress towards DLE for all languages of Europe, focusing in particular on the role played by the TFs. These are derived from the European Language Grid (ELG) Catalogue, that provides the empirical basis to measure the level of digital readiness of all European languages. The DLE Metric scores can be consulted through an online interactive dashboard to show the level of technological support of each European language and track the overall progress toward DLE.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,454
inproceedings
grutzner-zahn-rehm-2022-introducing
Introducing the Digital Language Equality Metric: Contextual Factors
Aldabe, Itziar and Altuna, Bego{\~n}a and Farwell, Aritz and Rigau, German
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.tdle-1.2/
Gr{\"u}tzner-Zahn, Annika and Rehm, Georg
Proceedings of the Workshop Towards Digital Language Equality within the 13th Language Resources and Evaluation Conference
13--26
In our digital age, digital language equality is an important goal to enable participation in society for all citizens, independent of the language they speak. To assess the current state of play with regard to Europe's languages, we developed, in the project European Language Equality, a metric for digital language equality that consists of two parts, technological and contextual (i.e., non-technological) factors. We present a metric for calculating the contextual factors for over 80 European languages. For each language, a score is calculated that reflects the broader context or socio-economic ecosystem of a language, which has a direct impact on technology and resource development for that language; it is important to note, though, that aspects related to Language Technologies and Resources are reflected by the technological factors. To reduce the vast number of potential contextual factors to an adequate number, five different configurations were calculated and evaluated with a panel of experts. The best results were achieved by a configuration in which 12 manually curated factors were included. In the factor selection process, attention was paid to data quality, automatic updatability, inclusion of data from different domains, and a balance between different data types. The evaluation shows that this specific configuration is stable for the official EU languages; while for regional and minority languages, as well as national non-official EU languages, there is room for improvement.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,455
inproceedings
giagkou-etal-2022-collaborative
Collaborative Metadata Aggregation and Curation in Support of Digital Language Equality Monitoring
Aldabe, Itziar and Altuna, Bego{\~n}a and Farwell, Aritz and Rigau, German
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.tdle-1.3/
Giagkou, Maria and Piperidis, Stelios and Labropoulou, Penny and Deligiannis, Miltos and Kolovou, Athanasia and Voukoutis, Leon
Proceedings of the Workshop Towards Digital Language Equality within the 13th Language Resources and Evaluation Conference
27--35
The European Language Equality (ELE) project develops a strategic research, innovation and implementation agenda (SRIA) and a roadmap for achieving full digital language equality in Europe by 2030. A key component of the SRIA development is an accurate estimation of the current standing of languages with respect to their technological readiness. In this paper we present the empirical basis on which such estimation is grounded, its starting point, and in particular the automatic and collaborative methods used for extending it. We focus on the collaborative expert activities, the challenges posed, and the solutions adopted. We also briefly present the dashboard application developed for querying and visualising the empirical data as well as monitoring and comparing the evolution of technological support within and across languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,456
inproceedings
artola-rigau-2022-measuring
Measuring {HLT} Research Equality of {E}uropean Languages
Aldabe, Itziar and Altuna, Bego{\~n}a and Farwell, Aritz and Rigau, German
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.tdle-1.4/
Artola, Gorka and Rigau, German
Proceedings of the Workshop Towards Digital Language Equality within the 13th Language Resources and Evaluation Conference
36--45
This work explores quantitative indicators that could potentially measure the equality and inequality research levels among the languages of the European Union in the field of human language technologies (HLT research equality). Our ultimate goal is to investigate European language equality in HLT research considering the number of papers published on several HLT research venues that mention each language with respect to their estimated number of speakers. This way, inequalities affecting HLT research in Europe will depend on other factors such as history, political status, GDP, level of social or technological development, etc. We have identified several groups of EU languages in the proposed measurement of HLT research equality, each group comprising languages with large differences in the number of speakers. We have discovered a relative equality among surprisingly different languages in terms of number of speakers and also relevant inequalities within the most spoken languages. All data and code will be released upon acceptance.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,457
inproceedings
tadic-etal-2022-national
National Language Technology Platform for Public Administration
Aldabe, Itziar and Altuna, Bego{\~n}a and Farwell, Aritz and Rigau, German
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.tdle-1.5/
Tadi{\'c}, Marko and Farka{\v{s}}, Da{\v{s}}a and Filko, Matea and Vasi{\c{l}}evskis, Art{\={u}}rs and Vasi{\c{l}}jevs, Andrejs and Ziedi{\c{n}}{\v{s}}, J{\={a}}nis and Motika, {\v{Z}}eljka and Fishel, Mark and Loftsson, Hrafn and Gu{\dh}nason, J{\'o}n and Borg, Claudia and Cortis, Keith and Attard, Judie and Spiteri, Donatienne
Proceedings of the Workshop Towards Digital Language Equality within the 13th Language Resources and Evaluation Conference
46--51
This article presents the work in progress on the collaborative project of several European countries to develop National Language Technology Platform (NLTP). The project aims at combining the most advanced Language Technology tools and solutions in a new, state-of-the-art, Artificial Intelligence driven, National Language Technology Platform for five EU/EEA official and lower-resourced languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,458
inproceedings
de-dios-flores-etal-2022-nos
The N{\'o}s Project: Opening routes for the {G}alician language in the field of language technologies
Aldabe, Itziar and Altuna, Bego{\~n}a and Farwell, Aritz and Rigau, German
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.tdle-1.6/
de-Dios-Flores, Iria and Magari{\~n}os, Carmen and Vladu, Adina Ioana and Ortega, John E. and Pichel, Jos{\'e} Ramom and Garc{\'i}a, Marcos and Gamallo, Pablo and Fern{\'a}ndez Rei, Elisa and Bugar{\'i}n-Diz, Alberto and Gonz{\'a}lez Gonz{\'a}lez, Manuel and Barro, Sen{\'e}n and Regueira, Xos{\'e} Luis
Proceedings of the Workshop Towards Digital Language Equality within the 13th Language Resources and Evaluation Conference
52--61
The development of language technologies (LTs) such as machine translation, text analytics, and dialogue systems is essential in the current digital society, culture and economy. These LTs, widely supported in languages in high demand worldwide, such as English, are also necessary for smaller and less economically powerful languages, as they are a driving force in the democratization of the communities that use them due to their great social and cultural impact. As an example, dialogue systems allow us to communicate with machines in our own language; machine translation increases access to contents in different languages, thus facilitating intercultural relations; and text-to-speech and speech-to-text systems broaden different categories of users' access to technology. In the case of Galician (co-official language, together with Spanish, in the autonomous region of Galicia, located in northwestern Spain), incorporating the language into state-of-the-art AI applications can not only significantly favor its prestige (a decisive factor in language normalization), but also guarantee citizens' language rights, reduce social inequality, and narrow the digital divide. This is the main motivation behind the N{\'o}s Project (Proxecto N{\'o}s), which aims to have a significant contribution to the development of LTs in Galician (currently considered a low-resource language) by providing openly licensed resources, tools, and demonstrators in the area of intelligent technologies.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,459
article
chang-bergen-2022-word
Word Acquisition in Neural Language Models
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.1/
Chang, Tyler A. and Bergen, Benjamin K.
null
1--16
We investigate how neural language models acquire individual words during training, extracting learning curves and ages of acquisition for over 600 words on the MacArthur-Bates Communicative Development Inventory (Fenson et al., 2007). Drawing on studies of word acquisition in children, we evaluate multiple predictors for words' ages of acquisition in LSTMs, BERT, and GPT-2. We find that the effects of concreteness, word length, and lexical class are pointedly different in children and language models, reinforcing the importance of interaction and sensorimotor experience in child language acquisition. Language models rely far more on word frequency than children, but, like children, they exhibit slower learning of words in longer utterances. Interestingly, models follow consistent patterns during training for both unidirectional and bidirectional models, and for both LSTM and Transformer architectures. Models predict based on unigram token frequencies early in training, before transitioning loosely to bigram probabilities, eventually converging on more nuanced predictions. These results shed light on the role of distributional learning mechanisms in children, while also providing insights for more human-like language acquisition in language models.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00444
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,474
article
gantt-etal-2022-decomposing
Decomposing and Recomposing Event Structure
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.2/
Gantt, William and Glass, Lelia and White, Aaron Steven
null
17--34
We present an event structure classification empirically derived from inferential properties annotated on sentence- and document-level Universal Decompositional Semantics (UDS) graphs. We induce this classification jointly with semantic role, entity, and event-event relation classifications using a document-level generative model structured by these graphs. To support this induction, we augment existing annotations found in the UDS1.0 dataset, which covers the entirety of the English Web Treebank, with an array of inferential properties capturing fine-grained aspects of the temporal and aspectual structure of events. The resulting dataset (available at decomp.io) is the largest annotation of event structure and (partial) event coreference to date.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00445
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,475
article
nan-etal-2022-fetaqa
{F}e{T}a{QA}: Free-form Table Question Answering
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.3/
Nan, Linyong and Hsieh, Chiachun and Mao, Ziming and Lin, Xi Victoria and Verma, Neha and Zhang, Rui and Kry{\'s}ci{\'n}ski, Wojciech and Schoelkopf, Hailey and Kong, Riley and Tang, Xiangru and Mutuma, Mutethia and Rosand, Ben and Trindade, Isabel and Bandaru, Renusree and Cunningham, Jacob and Xiong, Caiming and Radev, Dragomir
null
35--49
Existing table question answering datasets contain abundant factual questions that primarily evaluate a QA system's comprehension of query and tabular data. However, restricted by their short-form answers, these datasets fail to include question{--}answer interactions that represent more advanced and naturally occurring information needs: questions that ask for reasoning and integration of information pieces retrieved from a structured knowledge source. To complement the existing datasets and to reveal the challenging nature of the table-based question answering task, we introduce FeTaQA, a new dataset with 10K Wikipedia-based {table, question, free-form answer, supporting table cells} pairs. FeTaQA is collected from noteworthy descriptions of Wikipedia tables that contain information people tend to seek; generation of these descriptions requires advanced processing that humans perform on a daily basis: Understand the question and table, retrieve, integrate, infer, and conduct text planning and surface realization to generate an answer. We provide two benchmark methods for the proposed task: a pipeline method based on semantic parsing-based QA systems and an end-to-end method based on large pretrained text generation models, and show that FeTaQA poses a challenge for both methods.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00446
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,476
article
kreutzer-etal-2022-quality
Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.4/
Kreutzer, Julia and Caswell, Isaac and Wang, Lisa and Wahab, Ahsan and van Esch, Daan and Ulzii-Orshikh, Nasanbayar and Tapo, Allahsera and Subramani, Nishant and Sokolov, Artem and Sikasote, Claytone and Setyawan, Monang and Sarin, Supheakmungkol and Samb, Sokhar and Sagot, Beno{\^i}t and Rivera, Clara and Rios, Annette and Papadimitriou, Isabel and Osei, Salomey and Suarez, Pedro Ortiz and Orife, Iroro and Ogueji, Kelechi and Rubungo, Andre Niyongabo and Nguyen, Toan Q. and M{\"u}ller, Mathias and M{\"u}ller, Andr{\'e} and Muhammad, Shamsuddeen Hassan and Muhammad, Nanda and Mnyakeni, Ayanda and Mirzakhalov, Jamshidbek and Matangira, Tapiwanashe and Leong, Colin and Lawson, Nze and Kudugunta, Sneha and Jernite, Yacine and Jenny, Mathias and Firat, Orhan and Dossou, Bonaventure F. P. and Dlamini, Sakhile and de Silva, Nisansa and {\c{C}}abuk Ball{\i}, Sakine and Biderman, Stella and Battisti, Alessia and Baruwa, Ahmed and Bapna, Ankur and Baljekar, Pallavi and Azime, Israel Abebe and Awokoya, Ayodele and Ataman, Duygu and Ahia, Orevaoghene and Ahia, Oghenefego and Agrawal, Sweta and Adeyemi, Mofetoluwa
null
50--72
With the success of large-scale pre-training and multilingual modeling in Natural Language Processing (NLP), recent years have seen a proliferation of large, Web-mined text datasets covering hundreds of languages. We manually audit the quality of 205 language-specific corpora released with five major public datasets (CCAligned, ParaCrawl, WikiMatrix, OSCAR, mC4). Lower-resource corpora have systematic issues: At least 15 corpora have no usable text, and a significant fraction contains less than 50{\%} sentences of acceptable quality. In addition, many are mislabeled or use nonstandard/ambiguous language codes. We demonstrate that these issues are easy to detect even for non-proficient speakers, and supplement the human audit with automatic analyses. Finally, we recommend techniques to evaluate and improve multilingual corpora and discuss potential risks that come with low-quality data releases.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00447
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,477
article
clark-etal-2022-canine
Canine: Pre-training an Efficient Tokenization-Free Encoder for Language Representation
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.5/
Clark, Jonathan H. and Garrette, Dan and Turc, Iulia and Wieting, John
null
73--91
Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly used models still require an explicit tokenization step. While recent tokenization approaches based on data-derived subword lexicons are less brittle than manually engineered tokenizers, these techniques are not equally suited to all languages, and the use of any fixed vocabulary may limit a model's ability to adapt. In this paper, we present Canine, a neural encoder that operates directly on character sequences{---}without explicit tokenization or vocabulary{---}and a pre-training strategy that operates either directly on characters or optionally uses subwords as a soft inductive bias. To use its finer-grained input effectively and efficiently, Canine combines downsampling, which reduces the input sequence length, with a deep transformer stack, which encodes context. Canine outperforms a comparable mBert model by 5.7 F1 on TyDi QA, a challenging multilingual benchmark, despite having fewer model parameters.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00448
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,478
article
davani-etal-2022-dealing
Dealing with Disagreements: Looking Beyond the Majority Vote in Subjective Annotations
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.6/
Mostafazadeh Davani, Aida and D{\'i}az, Mark and Prabhakaran, Vinodkumar
null
92--110
Majority voting and averaging are common approaches used to resolve annotator disagreements and derive single ground truth labels from multiple annotations. However, annotators may systematically disagree with one another, often reflecting their individual biases and values, especially in the case of subjective tasks such as detecting affect, aggression, and hate speech. Annotator disagreements may capture important nuances in such tasks that are often ignored while aggregating annotations to a single ground truth. In order to address this, we investigate the efficacy of multi-annotator models. In particular, our multi-task based approach treats predicting each annotators' judgements as separate subtasks, while sharing a common learned representation of the task. We show that this approach yields same or better performance than aggregating labels in the data prior to training across seven different binary classification tasks. Our approach also provides a way to estimate uncertainty in predictions, which we demonstrate better correlate with annotation disagreements than traditional methods. Being able to model uncertainty is especially useful in deployment scenarios where knowing when not to make a prediction is important.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00449
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,479
article
geva-etal-2022-break
Break, Perturb, Build: Automatic Perturbation of Reasoning Paths Through Question Decomposition
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.7/
Geva, Mor and Wolfson, Tomer and Berant, Jonathan
null
111--126
Recent efforts to create challenge benchmarks that test the abilities of natural language understanding models have largely depended on human annotations. In this work, we introduce the {\textquotedblleft}Break, Perturb, Build{\textquotedblright} (BPB) framework for automatic reasoning-oriented perturbation of question-answer pairs. BPB represents a question by decomposing it into the reasoning steps that are required to answer it, symbolically perturbs the decomposition, and then generates new question-answer pairs. We demonstrate the effectiveness of BPB by creating evaluation sets for three reading comprehension (RC) benchmarks, generating thousands of high-quality examples without human intervention. We evaluate a range of RC models on our evaluation sets, which reveals large performance gaps on generated examples compared to the original data. Moreover, symbolic perturbations enable fine-grained analysis of the strengths and limitations of models. Last, augmenting the training data with examples generated by BPB helps close the performance gaps, without any drop on the original data distribution.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00450
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,480
article
nishida-matsumoto-2022-domain
Out-of-Domain Discourse Dependency Parsing via Bootstrapping: An Empirical Analysis on Its Effectiveness and Limitation
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.8/
Nishida, Noriki and Matsumoto, Yuji
null
127--144
Discourse parsing has been studied for decades. However, it still remains challenging to utilize discourse parsing for real-world applications because the parsing accuracy degrades significantly on out-of-domain text. In this paper, we report and discuss the effectiveness and limitations of bootstrapping methods for adapting modern BERT-based discourse dependency parsers to out-of-domain text without relying on additional human supervision. Specifically, we investigate self-training, co-training, tri-training, and asymmetric tri-training of graph-based and transition-based discourse dependency parsing models, as well as confidence measures and sample selection criteria in two adaptation scenarios: monologue adaptation between scientific disciplines and dialogue genre adaptation. We also release COVID-19 Discourse Dependency Treebank (COVID19-DTB), a new manually annotated resource for discourse dependency parsing of biomedical paper abstracts. The experimental results show that bootstrapping is significantly and consistently effective for unsupervised domain adaptation of discourse dependency parsing, but the low coverage of accurately predicted pseudo labels is a bottleneck for further improvement. We show that active learning can mitigate this limitation.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00451
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,481
article
ramesh-etal-2022-samanantar
Samanantar: The Largest Publicly Available Parallel Corpora Collection for 11 {I}ndic Languages
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.9/
Ramesh, Gowtham and Doddapaneni, Sumanth and Bheemaraj, Aravinth and Jobanputra, Mayank and AK, Raghavan and Sharma, Ajitesh and Sahoo, Sujit and Diddee, Harshita and J, Mahalakshmi and Kakwani, Divyanshu and Kumar, Navneet and Pradeep, Aswin and Nagaraj, Srihari and Deepak, Kumar and Raghavan, Vivek and Kunchukuttan, Anoop and Kumar, Pratyush and Khapra, Mitesh Shantadevi
null
145--162
We present Samanantar, the largest publicly available parallel corpora collection for Indic languages. The collection contains a total of 49.7 million sentence pairs between English and 11 Indic languages (from two language families). Specifically, we compile 12.4 million sentence pairs from existing, publicly available parallel corpora, and additionally mine 37.4 million sentence pairs from the Web, resulting in a 4{\texttimes} increase. We mine the parallel sentences from the Web by combining many corpora, tools, and methods: (a) Web-crawled monolingual corpora, (b) document OCR for extracting sentences from scanned documents, (c) multilingual representation models for aligning sentences, and (d) approximate nearest neighbor search for searching in a large collection of sentences. Human evaluation of samples from the newly mined corpora validate the high quality of the parallel sentences across 11 languages. Further, we extract 83.4 million sentence pairs between all 55 Indic language pairs from the English-centric parallel corpus using English as the pivot language. We trained multilingual NMT models spanning all these languages on Samanantar which outperform existing models and baselines on publicly available benchmarks, such as FLORES, establishing the utility of Samanantar. Our data and models are available publicly at Samanantar and we hope they will help advance research in NMT and multilingual NLP for Indic languages.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00452
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,482
article
laban-etal-2022-summac
{S}umma{C}: Re-Visiting {NLI}-based Models for Inconsistency Detection in Summarization
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.10/
Laban, Philippe and Schnabel, Tobias and Bennett, Paul N. and Hearst, Marti A.
null
163--177
In the summarization domain, a key requirement for summaries is to be factually consistent with the input document. Previous work has found that natural language inference (NLI) models do not perform competitively when applied to inconsistency detection. In this work, we revisit the use of NLI for inconsistency detection, finding that past work suffered from a mismatch in input granularity between NLI datasets (sentence-level), and inconsistency detection (document level). We provide a highly effective and light-weight method called SummaCConv that enables NLI models to be successfully used for this task by segmenting documents into sentence units and aggregating scores between pairs of sentences. We furthermore introduce a new benchmark called SummaC (Summary Consistency) which consists of six large inconsistency detection datasets. On this dataset, SummaCConv obtains state-of-the-art results with a balanced accuracy of 74.4{\%}, a 5{\%} improvement compared with prior work.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00453
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,483
article
guo-etal-2022-survey
A Survey on Automated Fact-Checking
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.11/
Guo, Zhijiang and Schlichtkrull, Michael and Vlachos, Andreas
null
178--206
Fact-checking has become increasingly important due to the speed with which both information and misinformation can spread in the modern media ecosystem. Therefore, researchers have been exploring how fact-checking can be automated, using techniques based on natural language processing, machine learning, knowledge representation, and databases to automatically predict the veracity of claims. In this paper, we survey automated fact-checking stemming from natural language processing, and discuss its connections to related tasks and disciplines. In this process, we present an overview of existing datasets and models, aiming to unify the various definitions given and identify common concepts. Finally, we highlight challenges for future research.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00454
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,484
article
singhania-etal-2022-predicting
Predicting Document Coverage for Relation Extraction
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.12/
Singhania, Sneha and Razniewski, Simon and Weikum, Gerhard
null
207--223
This paper presents a new task of predicting the coverage of a text document for relation extraction (RE): Does the document contain many relational tuples for a given entity? Coverage predictions are useful in selecting the best documents for knowledge base construction with large input corpora. To study this problem, we present a dataset of 31,366 diverse documents for 520 entities. We analyze the correlation of document coverage with features like length, entity mention frequency, Alexa rank, language complexity, and information retrieval scores. Each of these features has only moderate predictive power. We employ methods combining features with statistical models like TF-IDF and language models like BERT. The model combining features and BERT, HERB, achieves an F1 score of up to 46{\%}. We demonstrate the utility of coverage predictions on two use cases: KB construction and claim refutation.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00456
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,485
article
macavaney-etal-2022-abnirml
{ABNIRML}: Analyzing the Behavior of Neural {IR} Models
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.13/
MacAvaney, Sean and Feldman, Sergey and Goharian, Nazli and Downey, Doug and Cohan, Arman
null
224--239
Pretrained contextualized language models such as BERT and T5 have established a new state-of-the-art for ad-hoc search. However, it is not yet well understood why these methods are so effective, what makes some variants more effective than others, and what pitfalls they may have. We present a new comprehensive framework for Analyzing the Behavior of Neural IR ModeLs (ABNIRML), which includes new types of diagnostic probes that allow us to test several characteristics{---}such as writing styles, factuality, sensitivity to paraphrasing and word order{---}that are not addressed by previous techniques. To demonstrate the value of the framework, we conduct an extensive empirical study that yields insights into the factors that contribute to the neural model's gains, and identify potential unintended biases the models exhibit. Some of our results confirm conventional wisdom, for example, that recent neural ranking models rely less on exact term overlap with the query, and instead leverage richer linguistic information, evidenced by their higher sensitivity to word and sentence order. Other results are more surprising, such as that some models (e.g., T5 and ColBERT) are biased towards factually correct (rather than simply relevant) texts. Further, some characteristics vary even for the same base language model, and other characteristics can appear due to random variations during model training.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00457
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,486
article
feng-etal-2022-neuro
Neuro-symbolic Natural Logic with Introspective Revision for Natural Language Inference
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.14/
Feng, Yufei and Yang, Xiaoyu and Zhu, Xiaodan and Greenspan, Michael
null
240--256
We introduce a neuro-symbolic natural logic framework based on reinforcement learning with introspective revision. The model samples and rewards specific reasoning paths through policy gradient, in which the introspective revision algorithm modifies intermediate symbolic reasoning steps to discover reward-earning operations as well as leverages external knowledge to alleviate spurious reasoning and training inefficiency. The framework is supported by properly designed local relation models to avoid input entangling, which helps ensure the interpretability of the proof paths. The proposed model has built-in interpretability and shows superior capability in monotonicity inference, systematic generalization, and interpretability, compared with previous models on the existing datasets.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00458
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,487
article
dhingra-etal-2022-time
Time-Aware Language Models as Temporal Knowledge Bases
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.15/
Dhingra, Bhuwan and Cole, Jeremy R. and Eisenschlos, Julian Martin and Gillick, Daniel and Eisenstein, Jacob and Cohen, William W.
null
257--273
Many facts come with an expiration date, from the name of the President to the basketball team Lebron James plays for. However, most language models (LMs) are trained on snapshots of data collected at a specific moment in time. This can limit their utility, especially in the closed-book setting where the pretraining corpus must contain the facts the model should memorize. We introduce a diagnostic dataset aimed at probing LMs for factual knowledge that changes over time and highlight problems with LMs at either end of the spectrum{---}those trained on specific slices of temporal data, as well as those trained on a wide range of temporal data. To mitigate these problems, we propose a simple technique for jointly modeling text with its timestamp. This improves memorization of seen facts from the training time period, as well as calibration on predictions about unseen facts from future time periods. We also show that models trained with temporal context can be efficiently {\textquotedblleft}refreshed{\textquotedblright} as new data arrives, without the need for retraining from scratch.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00459
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,488
article
de-cao-etal-2022-multilingual
Multilingual Autoregressive Entity Linking
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.16/
De Cao, Nicola and Wu, Ledell and Popat, Kashyap and Artetxe, Mikel and Goyal, Naman and Plekhanov, Mikhail and Zettlemoyer, Luke and Cancedda, Nicola and Riedel, Sebastian and Petroni, Fabio
null
274--290
We present mGENRE, a sequence-to- sequence system for the Multilingual Entity Linking (MEL) problem{---}the task of resolving language-specific mentions to a multilingual Knowledge Base (KB). For a mention in a given language, mGENRE predicts the name of the target entity left-to-right, token-by-token in an autoregressive fashion. The autoregressive formulation allows us to effectively cross-encode mention string and entity names to capture more interactions than the standard dot product between mention and entity vectors. It also enables fast search within a large KB even for mentions that do not appear in mention tables and with no need for large-scale vector indices. While prior MEL works use a single representation for each entity, we match against entity names of as many languages as possible, which allows exploiting language connections between source input and target name. Moreover, in a zero-shot setting on languages with no training data at all, mGENRE treats the target language as a latent variable that is marginalized at prediction time. This leads to over 50{\%} improvements in average accuracy. We show the efficacy of our approach through extensive evaluation including experiments on three popular MEL benchmarks where we establish new state-of-the-art results. Source code available at \url{https://github.com/facebookresearch/GENRE}.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00460
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,489
article
xue-etal-2022-byt5
{B}y{T}5: Towards a Token-Free Future with Pre-trained Byte-to-Byte Models
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.17/
Xue, Linting and Barua, Aditya and Constant, Noah and Al-Rfou, Rami and Narang, Sharan and Kale, Mihir and Roberts, Adam and Raffel, Colin
null
291--306
Most widely used pre-trained language models operate on sequences of tokens corresponding to word or subword units. By comparison, token-free models that operate directly on raw text (bytes or characters) have many benefits: They can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Because byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00461
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,490
article
raifer-etal-2022-designing
Designing an Automatic Agent for Repeated Language{--}based Persuasion Games
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.18/
Raifer, Maya and Rotman, Guy and Apel, Reut and Tennenholtz, Moshe and Reichart, Roi
null
307--324
Persuasion games are fundamental in economics and AI research and serve as the basis for important applications. However, work on this setup assumes communication with stylized messages that do not consist of rich human language. In this paper we consider a repeated sender (expert) {--} receiver (decision maker) game, where the sender is fully informed about the state of the world and aims to persuade the receiver to accept a deal by sending one of several possible natural language reviews. We design an automatic expert that plays this repeated game, aiming to achieve the maximal payoff. Our expert is implemented within the Monte Carlo Tree Search (MCTS) algorithm, with deep learning models that exploit behavioral and linguistic signals in order to predict the next action of the decision maker, and the future payoff of the expert given the state of the game and a candidate review. We demonstrate the superiority of our expert over strong baselines and its adaptability to different decision makers and potential proposed deals.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00462
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,491
article
saparov-mitchell-2022-towards
Towards General Natural Language Understanding with Probabilistic Worldbuilding
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.19/
Saparov, Abulhair and Mitchell, Tom M.
null
325--342
We introduce the Probabilistic Worldbuilding Model (PWM), a new fully symbolic Bayesian model of semantic parsing and reasoning, as a first step in a research program toward more domain- and task-general NLU and AI. Humans create internal mental models of their observations that greatly aid in their ability to understand and reason about a large variety of problems. In PWM, the meanings of sentences, acquired facts about the world, and intermediate steps in reasoning are all expressed in a human-readable formal language, with the design goal of interpretability. PWM is Bayesian, designed specifically to be able to generalize to new domains and new tasks. We derive and implement an inference algorithm that reads sentences by parsing and abducing updates to its latent world model that capture the semantics of those sentences, and evaluate it on two out-of-domain question-answering datasets: (1) ProofWriter and (2) a new dataset we call FictionalGeoQA, designed to be more representative of real language but still simple enough to focus on evaluating reasoning ability, while being robust against heuristics. Our method outperforms baselines on both, thereby demonstrating its value as a proof-of-concept.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00463
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,492
article
somayajula-etal-2022-multi
A Multi-Level Optimization Framework for End-to-End Text Augmentation
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.20/
Somayajula, Sai Ashish and Song, Linfeng and Xie, Pengtao
null
343--358
Text augmentation is an effective technique in alleviating overfitting in NLP tasks. In existing methods, text augmentation and downstream tasks are mostly performed separately. As a result, the augmented texts may not be optimal to train the downstream model. To address this problem, we propose a three-level optimization framework to perform text augmentation and the downstream task end-to-end. The augmentation model is trained in a way tailored to the downstream task. Our framework consists of three learning stages. A text summarization model is trained to perform data augmentation at the first stage. Each summarization example is associated with a weight to account for its domain difference with the text classification data. At the second stage, we use the model trained at the first stage to perform text augmentation and train a text classification model on the augmented texts. At the third stage, we evaluate the text classification model trained at the second stage and update weights of summarization examples by minimizing the validation loss. These three stages are performed end-to-end. We evaluate our method on several text classification datasets where the results demonstrate the effectiveness of our method. Code is available at \url{https://github.com/Sai-Ashish/End-to-End-Text-Augmentation}.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00464
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,493
article
pruthi-etal-2022-evaluating
Evaluating Explanations: How Much Do Explanations from the Teacher Aid Students?
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.21/
Pruthi, Danish and Bansal, Rachit and Dhingra, Bhuwan and Baldini Soares, Livio and Collins, Michael and Lipton, Zachary C. and Neubig, Graham and Cohen, William W.
null
359--375
While many methods purport to explain predictions by highlighting salient features, what aims these explanations serve and how they ought to be evaluated often go unstated. In this work, we introduce a framework to quantify the value of explanations via the accuracy gains that they confer on a student model trained to simulate a teacher model. Crucially, the explanations are available to the student during training, but are not available at test time. Compared with prior proposals, our approach is less easily gamed, enabling principled, automatic, model-agnostic evaluation of attributions. Using our framework, we compare numerous attribution methods for text classification and question answering, and observe quantitative differences that are consistent (to a moderate to high degree) across different student model architectures and learning strategies.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00465
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,494
article
shen-etal-2022-vila
{VILA}: Improving Structured Content Extraction from Scientific {PDF}s Using Visual Layout Groups
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.22/
Shen, Zejiang and Lo, Kyle and Wang, Lucy Lu and Kuehl, Bailey and Weld, Daniel S. and Downey, Doug
null
376--392
Accurately extracting structured content from PDFs is a critical first step for NLP over scientific papers. Recent work has improved extraction accuracy by incorporating elementary layout information, for example, each token's 2D position on the page, into language model pretraining. We introduce new methods that explicitly model VIsual LAyout (VILA) groups, that is, text lines or text blocks, to further improve performance. In our I-VILA approach, we show that simply inserting special tokens denoting layout group boundaries into model inputs can lead to a 1.9{\%} Macro F1 improvement in token classification. In the H-VILA approach, we show that hierarchical encoding of layout-groups can result in up to 47{\%} inference time reduction with less than 0.8{\%} Macro F1 loss. Unlike prior layout-aware approaches, our methods do not require expensive additional pretraining, only fine-tuning, which we show can reduce training cost by up to 95{\%}. Experiments are conducted on a newly curated evaluation suite, S2-VLUE, that unifies existing automatically labeled datasets and includes a new dataset of manual annotations covering diverse papers from 19 scientific disciplines. Pre-trained weights, benchmark datasets, and source code are available at \url{https://github.com/allenai/VILA}.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00466
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,495
article
liu-prudhommeaux-2022-data
Data-driven Model Generalizability in Crosslinguistic Low-resource Morphological Segmentation
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.23/
Liu, Zoey and Prud{'}hommeaux, Emily
null
393--413
Common designs of model evaluation typically focus on monolingual settings, where different models are compared according to their performance on a single data set that is assumed to be representative of all possible data for the task at hand. While this may be reasonable for a large data set, this assumption is difficult to maintain in low-resource scenarios, where artifacts of the data collection can yield data sets that are outliers, potentially making conclusions about model performance coincidental. To address these concerns, we investigate model generalizability in crosslinguistic low-resource scenarios. Using morphological segmentation as the test case, we compare three broad classes of models with different parameterizations, taking data from 11 languages across 6 language families. In each experimental setting, we evaluate all models on a first data set, then examine their performance consistency when introducing new randomly sampled data sets with the same size and when applying the trained models to unseen test sets of varying sizes. The results demonstrate that the extent of model generalization depends on the characteristics of the data set, and does not necessarily rely heavily on the data set size. Among the characteristics that we studied, the ratio of morpheme overlap and that of the average number of morphemes per word between the training and test sets are the two most prominent factors. Our findings suggest that future work should adopt random sampling to construct data sets with different sizes in order to make more responsible claims about model evaluation.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00467
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,496
article
ben-david-etal-2022-pada
{PADA}: Example-based Prompt Learning for on-the-fly Adaptation to Unseen Domains
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.24/
Ben-David, Eyal and Oved, Nadav and Reichart, Roi
null
414--433
Natural Language Processing algorithms have made incredible progress, but they still struggle when applied to out-of-distribution examples. We address a challenging and underexplored version of this domain adaptation problem, where an algorithm is trained on several source domains, and then applied to examples from unseen domains that are unknown at training time. Particularly, no examples, labeled or unlabeled, or any other knowledge about the target domain are available to the algorithm at training time. We present PADA: An example-based autoregressive Prompt learning algorithm for on-the-fly Any-Domain Adaptation, based on the T5 language model. Given a test example, PADA first generates a unique prompt for it and then, conditioned on this prompt, labels the example with respect to the NLP prediction task. PADA is trained to generate a prompt that is a token sequence of unrestricted length, consisting of Domain Related Features (DRFs) that characterize each of the source domains. Intuitively, the generated prompt is a unique signature that maps the test example to a semantic space spanned by the source domains. In experiments with 3 tasks (text classification and sequence tagging), for a total of 14 multi-source adaptation scenarios, PADA substantially outperforms strong baselines.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00468
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,497
article
guan-etal-2022-lot
{LOT}: A Story-Centric Benchmark for Evaluating {C}hinese Long Text Understanding and Generation
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.25/
Guan, Jian and Feng, Zhuoer and Chen, Yamei and He, Ruilin and Mao, Xiaoxi and Fan, Changjie and Huang, Minlie
null
434--451
Standard multi-task benchmarks are essential for developing pretraining models that can generalize to various downstream tasks. Existing benchmarks for natural language processing (NLP) usually focus only on understanding or generating short texts. However, long text modeling requires many distinct abilities in contrast to short texts, such as the modeling of long-range discourse and commonsense relations, and the coherence and controllability of generation. The lack of standardized benchmarks makes it difficult to assess these abilities of a model and fairly compare different models, especially Chinese models. Therefore, we propose a story-centric benchmark named LOT for evaluating Chinese long text modeling, which aggregates two understanding tasks and two generation tasks. We construct new datasets for these tasks based on human-written Chinese stories with hundreds of words. Furthermore, we release an encoder-decoder-based Chinese long text pretraining model named LongLM with up to 1 billion parameters. We pretrain LongLM on 120G Chinese novels with two generative tasks including text infilling and conditional continuation. Extensive experiments show that LongLM outperforms similar-sized pretraining models substantially on both the understanding and generation tasks in LOT.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00469
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,498
article
naplava-etal-2022-czech
{C}zech Grammar Error Correction with a Large and Diverse Corpus
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.26/
N{\'a}plava, Jakub and Straka, Milan and Strakov{\'a}, Jana and Rosen, Alexandr
null
452--467
We introduce a large and diverse Czech corpus annotated for grammatical error correction (GEC) with the aim to contribute to the still scarce data resources in this domain for languages other than English. The Grammar Error Correction Corpus for Czech (GECCC) offers a variety of four domains, covering error distributions ranging from high error density essays written by non-native speakers, to website texts, where errors are expected to be much less common. We compare several Czech GEC systems, including several Transformer-based ones, setting a strong baseline to future research. Finally, we meta-evaluate common GEC metrics against human judgments on our data. We make the new Czech GEC corpus publicly available under the CC BY-SA 4.0 license at \url{http://hdl.handle.net/11234/1-4639}.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00470
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,499
article
adlakha-etal-2022-topiocqa
{T}opi{OCQA}: Open-domain Conversational Question Answering with Topic Switching
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.27/
Adlakha, Vaibhav and Dhuliawala, Shehzaad and Suleman, Kaheer and de Vries, Harm and Reddy, Siva
null
468--483
In a conversational question answering scenario, a questioner seeks to extract information about a topic through a series of interdependent questions and answers. As the conversation progresses, they may switch to related topics, a phenomenon commonly observed in information-seeking search sessions. However, current datasets for conversational question answering are limiting in two ways: 1) they do not contain topic switches; and 2) they assume the reference text for the conversation is given, that is, the setting is not open-domain. We introduce TopiOCQA (pronounced Tapioca), an open-domain conversational dataset with topic switches based on Wikipedia. TopiOCQA contains 3,920 conversations with information-seeking questions and free-form answers. On average, a conversation in our dataset spans 13 question-answer turns and involves four topics (documents). TopiOCQA poses a challenging test-bed for models, where efficient retrieval is required on multiple turns of the same conversation, in conjunction with constructing valid responses using conversational history. We evaluate several baselines, by combining state-of-the-art document retrieval methods with neural reader models. Our best model achieves F1 of 55.8, falling short of human performance by 14.2 points, indicating the difficulty of our dataset. Our dataset and code are available at \url{https://mcgill-nlp.github.io/topiocqa}.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00471
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,500
article
sarwar-etal-2022-neighborhood
A Neighborhood Framework for Resource-Lean Content Flagging
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.28/
Sarwar, Sheikh Muhammad and Zlatkova, Dimitrina and Hardalov, Momchil and Dinkov, Yoan and Augenstein, Isabelle and Nakov, Preslav
null
484--502
We propose a novel framework for cross-lingual content flagging with limited target-language data, which significantly outperforms prior work in terms of predictive performance. The framework is based on a nearest-neighbor architecture. It is a modern instantiation of the vanilla k-nearest neighbor model, as we use Transformer representations in all its components. Our framework can adapt to new source-language instances, without the need to be retrained from scratch. Unlike prior work on neighborhood-based approaches, we encode the neighborhood information based on query{--}neighbor interactions. We propose two encoding schemes and we show their effectiveness using both qualitative and quantitative analysis. Our evaluation results on eight languages from two different datasets for abusive language detection show sizable improvements of up to 9.5 F1 points absolute (for Italian) over strong baselines. On average, we achieve 3.6 absolute F1 points of improvement for the three languages in the Jigsaw Multilingual dataset and 2.14 points for the WUL dataset.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00472
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,501
article
geigle-etal-2022-retrieve
Retrieve Fast, Rerank Smart: Cooperative and Joint Approaches for Improved Cross-Modal Retrieval
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.29/
Geigle, Gregor and Pfeiffer, Jonas and Reimers, Nils and Vuli{\'c}, Ivan and Gurevych, Iryna
null
503--521
Current state-of-the-art approaches to cross-modal retrieval process text and visual input jointly, relying on Transformer-based architectures with cross-attention mechanisms that attend over all words and objects in an image. While offering unmatched retrieval performance, such models: 1) are typically pretrained from scratch and thus less scalable, 2) suffer from huge retrieval latency and inefficiency issues, which makes them impractical in realistic applications. To address these crucial gaps towards both improved and efficient cross-modal retrieval, we propose a novel fine-tuning framework that turns any pretrained text-image multi-modal model into an efficient retrieval model. The framework is based on a cooperative retrieve-and-rerank approach that combines: 1) twin networks (i.e., a bi-encoder) to separately encode all items of a corpus, enabling efficient initial retrieval, and 2) a cross-encoder component for a more nuanced (i.e., smarter) ranking of the retrieved small set of items. We also propose to jointly fine-tune the two components with shared weights, yielding a more parameter-efficient model. Our experiments on a series of standard cross-modal retrieval benchmarks in monolingual, multilingual, and zero-shot setups, demonstrate improved accuracy and huge efficiency benefits over the state-of-the-art cross-encoders.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00473
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,502
article
goyal-etal-2022-flores
The {F}lores-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.30/
Goyal, Naman and Gao, Cynthia and Chaudhary, Vishrav and Chen, Peng-Jen and Wenzek, Guillaume and Ju, Da and Krishnan, Sanjana and Ranzato, Marc{'}Aurelio and Guzm{\'a}n, Francisco and Fan, Angela
null
522--538
One of the biggest challenges hindering progress in low-resource and multilingual machine translation is the lack of good evaluation benchmarks. Current evaluation benchmarks either lack good coverage of low-resource languages, consider only restricted domains, or are low quality because they are constructed using semi-automatic procedures. In this work, we introduce the Flores-101 evaluation benchmark, consisting of 3001 sentences extracted from English Wikipedia and covering a variety of different topics and domains. These sentences have been translated in 101 languages by professional translators through a carefully controlled process. The resulting dataset enables better assessment of model quality on the long tail of low-resource languages, including the evaluation of many-to-many multilingual translation systems, as all translations are fully aligned. By publicly releasing such a high-quality and high-coverage dataset, we hope to foster progress in the machine translation community and beyond.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00474
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,503
article
trivedi-etal-2022-musique
♫ {M}u{S}i{Q}ue: Multihop Questions via Single-hop Question Composition
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.31/
Trivedi, Harsh and Balasubramanian, Niranjan and Khot, Tushar and Sabharwal, Ashish
null
539--554
Multihop reasoning remains an elusive goal as existing multihop benchmarks are known to be largely solvable via shortcuts. Can we create a question answering (QA) dataset that, by construction, requires proper multihop reasoning? To this end, we introduce a bottom{--}up approach that systematically selects composable pairs of single-hop questions that are connected, that is, where one reasoning step critically relies on information from another. This bottom{--}up methodology lets us explore a vast space of questions and add stringent filters as well as other mechanisms targeting connected reasoning. It provides fine-grained control over the construction process and the properties of the resulting k-hop questions. We use this methodology to create MuSiQue-Ans, a new multihop QA dataset with 25K 2{--}4 hop questions. Relative to existing datasets, MuSiQue-Ans is more difficult overall (3{\texttimes} increase in human{--}machine gap), and harder to cheat via disconnected reasoning (e.g., a single-hop model has a 30-point drop in F1). We further add unanswerable contrast questions to produce a more stringent dataset, MuSiQue-Full. We hope our datasets will help the NLP community develop models that perform genuine multihop reasoning.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00475
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,504
article
liu-etal-2022-relational
Relational Memory-Augmented Language Models
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.32/
Liu, Qi and Yogatama, Dani and Blunsom, Phil
null
555--572
We present a memory-augmented approach to condition an autoregressive language model on a knowledge graph. We represent the graph as a collection of relation triples and retrieve relevant relations for a given context to improve text generation. Experiments on WikiText-103, WMT19, and enwik8 English datasets demonstrate that our approach produces a better language model in terms of perplexity and bits per character. We also show that relational memory improves coherence, is complementary to token-based memory, and enables causal interventions. Our model provides a simple yet effective way to combine an autoregressive language model and a knowledge graph for more coherent and logical generation.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00476
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,505
article
sun-etal-2022-sentence
Sentence Similarity Based on Contexts
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.33/
Sun, Xiaofei and Meng, Yuxian and Ao, Xiang and Wu, Fei and Zhang, Tianwei and Li, Jiwei and Fan, Chun
null
573--588
Existing methods to measure sentence similarity are faced with two challenges: (1) labeled datasets are usually limited in size, making them insufficient to train supervised neural models; and (2) there is a training-test gap for unsupervised language modeling (LM) based models to compute semantic scores between sentences, since sentence-level semantics are not explicitly modeled at training. This results in inferior performances in this task. In this work, we propose a new framework to address these two issues. The proposed framework is based on the core idea that the meaning of a sentence should be defined by its contexts, and that sentence similarity can be measured by comparing the probabilities of generating two sentences given the same context. The proposed framework is able to generate high-quality, large-scale dataset with semantic similarity scores between two sentences in an unsupervised manner, with which the train-test gap can be largely bridged. Extensive experiments show that the proposed framework achieves significant performance boosts over existing baselines under both the supervised and unsupervised settings across different datasets.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00477
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,506
article
chakrabarty-etal-2022-rocket
It's not Rocket Science: Interpreting Figurative Language in Narratives
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.34/
Chakrabarty, Tuhin and Choi, Yejin and Shwartz, Vered
null
589--606
Figurative language is ubiquitous in English. Yet, the vast majority of NLP research focuses on literal language. Existing text representations by design rely on compositionality, while figurative language is often non-compositional. In this paper, we study the interpretation of two non-compositional figurative languages (idioms and similes). We collected datasets of fictional narratives containing a figurative expression along with crowd-sourced plausible and implausible continuations relying on the correct interpretation of the expression. We then trained models to choose or generate the plausible continuation. Our experiments show that models based solely on pre-trained language models perform substantially worse than humans on these tasks. We additionally propose knowledge-enhanced models, adopting human strategies for interpreting figurative language types: inferring meaning from the context and relying on the constituent words' literal meanings. The knowledge-enhanced models improve the performance on both the discriminative and generative tasks, further bridging the gap from human performance.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00478
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,507
article
li-etal-2022-ultra
Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.35/
Li, Bangzheng and Yin, Wenpeng and Chen, Muhao
null
607--622
The task of ultra-fine entity typing (UFET) seeks to predict diverse and free-form words or phrases that describe the appropriate types of entities mentioned in sentences. A key challenge for this task lies in the large number of types and the scarcity of annotated data per type. Existing systems formulate the task as a multi-way classification problem and train directly or distantly supervised classifiers. This causes two issues: (i) the classifiers do not capture the type semantics because types are often converted into indices; (ii) systems developed in this way are limited to predicting within a pre-defined type set, and often fall short of generalizing to types that are rarely seen or unseen in training. This work presents LITE🍻, a new approach that formulates entity typing as a natural language inference (NLI) problem, making use of (i) the indirect supervision from NLI to infer type information meaningfully represented as textual hypotheses and alleviate the data scarcity issue, as well as (ii) a learning-to-rank objective to avoid the pre-defining of a type set. Experiments show that, with limited training data, LITE obtains state-of-the-art performance on the UFET task. In addition, LITE demonstrates its strong generalizability by not only yielding best results on other fine-grained entity typing benchmarks, more importantly, a pre-trained LITE system works well on new data containing unseen types.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00479
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,508
article
xu-lapata-2022-document
Document Summarization with Latent Queries
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.36/
Xu, Yumo and Lapata, Mirella
null
623--638
The availability of large-scale datasets has driven the development of neural models that create generic summaries for single or multiple documents. For query-focused summarization (QFS), labeled training data in the form of queries, documents, and summaries is not readily available. We provide a unified modeling framework for any kind of summarization, under the assumption that all summaries are a response to a query, which is observed in the case of QFS and latent in the case of generic summarization. We model queries as discrete latent variables over document tokens, and learn representations compatible with observed and unobserved query verbalizations. Our framework formulates summarization as a generative process, and jointly optimizes a latent query model and a conditional language model. Despite learning from generic summarization data only, our approach outperforms strong comparison systems across benchmarks, query types, document settings, and target domains.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00480
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,509
article
morio-etal-2022-end
End-to-end Argument Mining with Cross-corpora Multi-task Learning
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.37/
Morio, Gaku and Ozaki, Hiroaki and Morishita, Terufumi and Yanai, Kohsuke
null
639--658
Mining an argument structure from text is an important step for tasks such as argument search and summarization. While studies on argument(ation) mining have proposed promising neural network models, they usually suffer from a shortage of training data. To address this issue, we expand the training data with various auxiliary argument mining corpora and propose an end-to-end cross-corpus training method called Multi-Task Argument Mining (MT-AM). To evaluate our approach, we conducted experiments for the main argument mining tasks on several well-established argument mining corpora. The results demonstrate that MT-AM generally outperformed the models trained on a single corpus. Also, the smaller the target corpus was, the better the MT-AM performed. Our extensive analyses suggest that the improvement of MT-AM depends on several factors of transferability among auxiliary and target corpora.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00481
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,510
article
gupta-etal-2022-model
Is My Model Using the Right Evidence? Systematic Probes for Examining Evidence-Based Tabular Reasoning
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.38/
Gupta, Vivek and Bhat, Riyaz A. and Ghosal, Atreya and Shrivastava, Manish and Singh, Maneesh and Srikumar, Vivek
null
659--679
Neural models command state-of-the-art performance across NLP tasks, including ones involving {\textquotedblleft}reasoning{\textquotedblright}. Models claiming to reason about the evidence presented to them should attend to the correct parts of the input while avoiding spurious patterns therein, be self-consistent in their predictions across inputs, and be immune to biases derived from their pre-training in a nuanced, context-sensitive fashion. Do the prevalent *BERT-family of models do so? In this paper, we study this question using the problem of reasoning on tabular data. Tabular inputs are especially well-suited for the study{---}they admit systematic probes targeting the properties listed above. Our experiments demonstrate that a RoBERTa-based model, representative of the current state-of-the-art, fails at reasoning on the following counts: it (a) ignores relevant parts of the evidence, (b) is over-sensitive to annotation artifacts, and (c) relies on the knowledge encoded in the pre-trained language model rather than the evidence presented in its tabular inputs. Finally, through inoculation experiments, we show that fine-tuning the model on perturbed data does not help it overcome the above challenges.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00482
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,511
article
wang-etal-2022-uncertainty
Uncertainty Estimation and Reduction of Pre-trained Models for Text Regression
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.39/
Wang, Yuxia and Beck, Daniel and Baldwin, Timothy and Verspoor, Karin
null
680--696
State-of-the-art classification and regression models are often not well calibrated, and cannot reliably provide uncertainty estimates, limiting their utility in safety-critical applications such as clinical decision-making. While recent work has focused on calibration of classifiers, there is almost no work in NLP on calibration in a regression setting. In this paper, we quantify the calibration of pre-trained language models for text regression, both intrinsically and extrinsically. We further apply uncertainty estimates to augment training data in low-resource domains. Our experiments on three regression tasks in both self-training and active-learning settings show that uncertainty estimation can be used to increase overall performance and enhance model generalization.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00483
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,512
article
puduppully-etal-2022-data
Data-to-text Generation with Variational Sequential Planning
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.40/
Puduppully, Ratish and Fu, Yao and Lapata, Mirella
null
697--715
We consider the task of data-to-text generation, which aims to create textual output from non-linguistic input. We focus on generating long-form text, that is, documents with multiple paragraphs, and propose a neural model enhanced with a planning component responsible for organizing high-level information in a coherent and meaningful way. We infer latent plans sequentially with a structured variational model, while interleaving the steps of planning and generation. Text is generated by conditioning on previous variational decisions and previously generated text. Experiments on two data-to-text benchmarks (RotoWire and MLB) show that our model outperforms strong baselines and is sample-efficient in the face of limited training data (e.g., a few hundred instances).
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00484
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,513
article
schick-schutze-2022-true
True Few-Shot Learning with {P}rompts{---}{A} Real-World Perspective
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.41/
Schick, Timo and Sch{\"u}tze, Hinrich
null
716--731
Prompt-based approaches excel at few-shot learning. However, Perez et al. (2021) recently cast doubt on their performance as they had difficulty getting good results in a {\textquotedblleft}true{\textquotedblright} few-shot setting in which prompts and hyperparameters cannot be tuned on a dev set. In view of this, we conduct an extensive study of Pet, a method that combines textual instructions with example-based finetuning. We show that, if correctly configured, Pet performs strongly in true few-shot settings without a dev set. Crucial for this strong performance is a number of design choices, including Pet's ability to intelligently handle multiple prompts. We put our findings to a real-world test by running Pet on RAFT, a benchmark of tasks taken from realistic NLP applications for which no labeled dev or test sets are available. Pet achieves a new state of the art on RAFT and performs close to non-expert humans for 7 out of 11 tasks. These results demonstrate that prompt-based learners can successfully be applied in true few-shot settings and underpin our belief that learning from instructions will play an important role on the path towards human-like few-shot learning capabilities.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00485
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,514
article
sridhar-etal-2022-heterogeneous
Heterogeneous Supervised Topic Models
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.42/
Sridhar, Dhanya and Daum{\'e} III, Hal and Blei, David
null
732--745
Researchers in the social sciences are often interested in the relationship between text and an outcome of interest, where the goal is to both uncover latent patterns in the text and predict outcomes for unseen texts. To this end, this paper develops the heterogeneous supervised topic model (HSTM), a probabilistic approach to text analysis and prediction. HSTMs posit a joint model of text and outcomes to find heterogeneous patterns that help with both text analysis and prediction. The main benefit of HSTMs is that they capture heterogeneity in the relationship between text and the outcome across latent topics. To fit HSTMs, we develop a variational inference algorithm based on the auto-encoding variational Bayes framework. We study the performance of HSTMs on eight datasets and find that they consistently outperform related methods, including fine-tuned black-box models. Finally, we apply HSTMs to analyze news articles labeled with pro- or anti-tone. We find evidence of differing language used to signal a pro- and anti-tone.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00487
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,515
article
atanasova-etal-2022-fact
Fact Checking with Insufficient Evidence
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.43/
Atanasova, Pepa and Simonsen, Jakob Grue and Lioma, Christina and Augenstein, Isabelle
null
746--763
Automating the fact checking (FC) process relies on information obtained from external sources. In this work, we posit that it is crucial for FC models to make veracity predictions only when there is sufficient evidence and otherwise indicate when it is not enough. To this end, we are the first to study what information FC models consider sufficient by introducing a novel task and advancing it with three main contributions. First, we conduct an in-depth empirical analysis of the task with a new fluency-preserving method for omitting information from the evidence at the constituent and sentence level. We identify when models consider the remaining evidence (in)sufficient for FC, based on three trained models with different Transformer architectures and three FC datasets. Second, we ask annotators whether the omitted evidence was important for FC, resulting in a novel diagnostic dataset, SufficientFacts, for FC with omitted evidence. We find that models are least successful in detecting missing evidence when adverbial modifiers are omitted (21{\%} accuracy), whereas it is easiest for omitted date modifiers (63{\%} accuracy). Finally, we propose a novel data augmentation strategy for contrastive self-learning of missing evidence by employing the proposed omission method combined with tri-training. It improves performance for Evidence Sufficiency Prediction by up to 17.8 F1 score, which in turn improves FC performance by up to 2.6 F1 score.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00486
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,516
article
elazar-etal-2022-text
Text-based {NP} Enrichment
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.44/
Elazar, Yanai and Basmov, Victoria and Goldberg, Yoav and Tsarfaty, Reut
null
764--784
Understanding the relations between entities denoted by NPs in a text is a critical part of human-like natural language understanding. However, only a fraction of such relations is covered by standard NLP tasks and benchmarks nowadays. In this work, we propose a novel task termed text-based NP enrichment (TNE), in which we aim to enrich each NP in a text with all the preposition-mediated relations{---}either explicit or implicit{---}that hold between it and other NPs in the text. The relations are represented as triplets, each denoted by two NPs related via a preposition. Humans recover such relations seamlessly, while current state-of-the-art models struggle with them due to the implicit nature of the problem. We build the first large-scale dataset for the problem, provide the formal framing and scope of annotation, analyze the data, and report the results of fine-tuned language models on the task, demonstrating the challenge it poses to current technology. A webpage with a data-exploration UI, a demo, and links to the code, models, and leaderboard, to foster further research into this challenging problem can be found at: yanaiela.github.io/TNE/.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00488
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,517
article
lan-etal-2022-minimum
Minimum Description Length Recurrent Neural Networks
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.45/
Lan, Nur and Geyer, Michal and Chemla, Emmanuel and Katzir, Roni
null
785--799
We train neural networks to optimize a Minimum Description Length score, that is, to balance between the complexity of the network and its accuracy at a task. We show that networks optimizing this objective function master tasks involving memory challenges and go beyond context-free languages. These learners master languages such as a^nb^n, a^nb^nc^n, a^nb^{2n}, a^nb^mc^{n+m}, and they perform addition. Moreover, they often do so with 100{\%} accuracy. The networks are small, and their inner workings are transparent. We thus provide formal proofs that their perfect accuracy holds not only on a given test set, but for any input sequence. To our knowledge, no other connectionist model has been shown to capture the underlying grammars for these languages in full generality.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00489
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,518
article
hao-etal-2022-formal
Formal Language Recognition by Hard Attention Transformers: Perspectives from Circuit Complexity
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.46/
Hao, Yiding and Angluin, Dana and Frank, Robert
null
800--810
This paper analyzes three formal models of Transformer encoders that differ in the form of their self-attention mechanism: unique hard attention (UHAT); generalized unique hard attention (GUHAT), which generalizes UHAT; and averaging hard attention (AHAT). We show that UHAT and GUHAT Transformers, viewed as string acceptors, can only recognize formal languages in the complexity class AC0, the class of languages recognizable by families of Boolean circuits of constant depth and polynomial size. This upper bound subsumes Hahn's (2020) results that GUHAT cannot recognize the DYCK languages or the PARITY language, since those languages are outside AC0 (Furst et al., 1984). In contrast, the non-AC0 languages MAJORITY and DYCK-1 are recognizable by AHAT networks, implying that AHAT can recognize languages that UHAT and GUHAT cannot.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00490
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,519
article
freitag-etal-2022-high
High Quality Rather than High Model Probability: Minimum {B}ayes Risk Decoding with Neural Metrics
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.47/
Freitag, Markus and Grangier, David and Tan, Qijun and Liang, Bowen
null
811--825
In Neural Machine Translation, it is typically assumed that the sentence with the highest estimated probability should also be the translation with the highest quality as measured by humans. In this work, we question this assumption and show that model estimates and translation quality only vaguely correlate. We apply Minimum Bayes Risk (MBR) decoding on unbiased samples to optimize diverse automated metrics of translation quality as an alternative inference strategy to beam search. Instead of targeting the hypotheses with the highest model probability, MBR decoding extracts the hypotheses with the highest estimated quality. Our experiments show that the combination of a neural translation model with a neural reference-based metric, Bleurt, results in significant improvement in human evaluations. This improvement is obtained with translations different from classical beam-search output: These translations have much lower model likelihood and are less favored by surface metrics like Bleu.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00491
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,520
article
he-etal-2022-generate
Generate, Annotate, and Learn: {NLP} with Synthetic Text
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.48/
He, Xuanli and Nassar, Islam and Kiros, Jamie and Haffari, Gholamreza and Norouzi, Mohammad
null
826--842
This paper studies the use of language models as a source of synthetic unlabeled text for NLP. We formulate a general framework called {\textquotedblleft}generate, annotate, and learn (GAL){\textquotedblright} to take advantage of synthetic text within knowledge distillation, self-training, and few-shot learning applications. To generate high-quality task-specific text, we either fine-tune LMs on inputs from the task of interest, or prompt large LMs with few examples. We use the best available classifier to annotate synthetic text with soft pseudo labels for knowledge distillation and self-training, and use LMs to obtain hard labels for few-shot learning. We train new supervised models on the combination of labeled and pseudo-labeled data, which results in significant gains across several applications. We investigate key components of GAL and present theoretical and empirical arguments against the use of class-conditional LMs to generate synthetic labeled text instead of unlabeled text. GAL achieves new state-of-the-art knowledge distillation results for 6-layer transformers on the GLUE leaderboard.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00492
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,521
article
merrill-etal-2022-saturated
Saturated Transformers are Constant-Depth Threshold Circuits
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.49/
Merrill, William and Sabharwal, Ashish and Smith, Noah A.
null
843--856
Transformers have become a standard neural network architecture for many NLP problems, motivating theoretical analysis of their power in terms of formal languages. Recent work has shown that transformers with hard attention are quite limited in power (Hahn, 2020), as they can be simulated by constant-depth AND/OR circuits (Hao et al., 2022). However, hard attention is a strong assumption, which may complicate the relevance of these results in practice. In this work, we analyze the circuit complexity of transformers with saturated attention: a generalization of hard attention that more closely captures the attention patterns learnable in practical transformers. We first show that saturated transformers transcend the known limitations of hard-attention transformers. We then prove saturated transformers with floating-point values can be simulated by constant-depth threshold circuits, giving the class TC0 as an upper bound on the formal languages they recognize.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00493
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,522
article
mielke-etal-2022-reducing
Reducing Conversational Agents' Overconfidence Through Linguistic Calibration
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.50/
Mielke, Sabrina J. and Szlam, Arthur and Dinan, Emily and Boureau, Y-Lan
null
857--872
While improving neural dialogue agents' factual accuracy is the object of much research, another important aspect of communication, less studied in the setting of neural dialogue, is transparency about ignorance. In this work, we analyze to what extent state-of-the-art chit-chat models are linguistically calibrated in the sense that their verbalized expression of doubt (or confidence) matches the likelihood that the model's responses are factually incorrect (or correct). We find that these models are poorly calibrated, yet we show that likelihood of correctness can accurately be predicted. By incorporating such metacognitive features into the training of a controllable generation model, we obtain a dialogue agent with greatly improved linguistic calibration.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00494
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,523
article
osborne-etal-2022-survey
A Survey of Text Games for Reinforcement Learning Informed by Natural Language
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.51/
Osborne, Philip and N{\~o}mm, Heido and Freitas, Andr{\'e}
null
873--887
Reinforcement Learning has shown success in a number of complex virtual environments. However, many challenges still exist towards solving problems with natural language as a core component. Interactive Fiction Games (or Text Games) are one such problem type that offer a set of safe, partially observable environments where natural language is required as part of the Reinforcement Learning solution. Therefore, this survey's aim is to assist in the development of new Text Game problem settings and solutions for Reinforcement Learning informed by natural language. Specifically, this survey: 1) introduces the challenges in Text Game Reinforcement Learning problems, 2) outlines the generation tools for rendering Text Games and the subsequent environments generated, and 3) compares the agent architectures currently applied to provide a systematic review of benchmark methodologies and opportunities for future researchers.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00495
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,524
article
dary-etal-2022-dependency
Dependency Parsing with Backtracking using Deep Reinforcement Learning
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.52/
Dary, Franck and Petit, Maxime and Nasr, Alexis
null
888--903
Greedy algorithms for NLP such as transition-based parsing are prone to error propagation. One way to overcome this problem is to allow the algorithm to backtrack and explore an alternative solution in cases where new evidence contradicts the solution explored so far. In order to implement such a behavior, we use reinforcement learning and let the algorithm backtrack in cases where such an action gets a better reward than continuing to explore the current solution. We test this idea on both POS tagging and dependency parsing and show that backtracking is an effective means to fight against error propagation.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00496
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,525
article
agarwal-nenkova-2022-temporal
Temporal Effects on Pre-trained Models for Language Processing Tasks
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.53/
Agarwal, Oshin and Nenkova, Ani
null
904--921
Keeping the performance of language technologies optimal as time passes is of great practical interest. We study temporal effects on model performance on downstream language tasks, establishing a nuanced terminology for such discussion and identifying factors essential to conduct a robust study. We present experiments for several tasks in English where the label correctness is not dependent on time and demonstrate the importance of distinguishing between temporal model deterioration and temporal domain adaptation for systems using pre-trained representations. We find that, depending on the task, temporal model deterioration is not necessarily a concern. Temporal domain adaptation, however, is beneficial in all cases, with better performance for a given time period possible when the system is trained on temporally more recent data. Therefore, we also examine the efficacy of two approaches for temporal domain adaptation without human annotations on new data. Self-labeling shows consistent improvement and notably, for named entity recognition, leads to better temporal adaptation than even human annotations.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00497
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,526
article
nikolaus-etal-2022-learning
Learning {E}nglish with {P}eppa {P}ig
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.54/
Nikolaus, Mitja and Alishahi, Afra and Chrupa{\l}a, Grzegorz
null
922--936
Recent computational models of the acquisition of spoken language via grounding in perception exploit associations between spoken and visual modalities and learn to represent speech and visual data in a joint vector space. A major unresolved issue from the point of ecological validity is the training data, typically consisting of images or videos paired with spoken descriptions of what is depicted. Such a setup guarantees an unrealistically strong correlation between speech and the visual data. In the real world the coupling between the linguistic and the visual modality is loose, and often confounded by correlations with non-semantic aspects of the speech signal. Here we address this shortcoming by using a dataset based on the children's cartoon Peppa Pig. We train a simple bi-modal architecture on the portion of the data consisting of dialog between characters, and evaluate on segments containing descriptive narrations. Despite the weak and confounded signal in this training data, our model succeeds at learning aspects of the visual semantics of spoken language.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00498
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,527
article
cui-etal-2022-compositional
Compositional Generalization in Multilingual Semantic Parsing over {W}ikidata
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.55/
Cui, Ruixiang and Aralikatte, Rahul and Lent, Heather and Hershcovich, Daniel
null
937--955
Semantic parsing (SP) allows humans to leverage vast knowledge resources through natural interaction. However, parsers are mostly designed for and evaluated on English resources, such as CFQ (Keysers et al., 2020), the current standard benchmark based on English data generated from grammar rules and oriented towards Freebase, an outdated knowledge base. We propose a method for creating a multilingual, parallel dataset of question-query pairs, grounded in Wikidata. We introduce such a dataset, which we call Multilingual Compositional Wikidata Questions (MCWQ), and use it to analyze the compositional generalization of semantic parsers in Hebrew, Kannada, Chinese, and English. While within-language generalization is comparable across languages, experiments on zero-shot cross-lingual transfer demonstrate that cross-lingual compositional generalization fails, even with state-of-the-art pretrained multilingual encoders. Furthermore, our methodology, dataset, and results will facilitate future research on SP in more realistic and diverse settings than has been possible with existing resources.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00499
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,528
article
naik-etal-2022-adapting
Adapting to the Long Tail: A Meta-Analysis of Transfer Learning Research for Language Understanding Tasks
Roark, Brian and Nenkova, Ani
null
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.tacl-1.56/
Naik, Aakanksha and Lehman, Jill and Ros{\'e}, Carolyn
null
956--980
Natural language understanding (NLU) has made massive progress driven by large benchmarks, but benchmarks often leave a long tail of infrequent phenomena underrepresented. We reflect on the question: Have transfer learning methods sufficiently addressed the poor performance of benchmark-trained models on the long tail? We conceptualize the long tail using macro-level dimensions (underrepresented genres, topics, etc.), and perform a qualitative meta-analysis of 100 representative papers on transfer learning research for NLU. Our analysis asks three questions: (i) Which long tail dimensions do transfer learning studies target? (ii) Which properties of adaptation methods help improve performance on the long tail? (iii) Which methodological gaps have greatest negative impact on long tail performance? Our answers highlight major avenues for future research in transfer learning for the long tail. Lastly, using our meta-analysis framework, we perform a case study comparing the performance of various adaptation methods on clinical narratives, which provides interesting insights that may enable us to make progress along these future avenues.
Transactions of the Association for Computational Linguistics
10
10.1162/tacl_a_00500
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,529