Dataset columns (name, dtype, summary statistics):

column              dtype          summary
------------------  -------------  ------------------------------
entry_type          stringclasses  4 values
citation_key        stringlengths  10–110 characters
title               stringlengths  6–276 characters
editor              stringclasses  723 values
month               stringclasses  69 values
year                stringdate     1963-01-01 – 2022-01-01
address             stringclasses  202 values
publisher           stringclasses  41 values
url                 stringlengths  34–62 characters
author              stringlengths  6–2.07k characters
booktitle           stringclasses  861 values
pages               stringlengths  1–12 characters
abstract            stringlengths  302–2.4k characters
journal             stringclasses  5 values
volume              stringclasses  24 values
doi                 stringlengths  20–39 characters
n                   stringclasses  3 values
wer                 stringclasses  1 value
uas                 null
language            stringclasses  3 values
isbn                stringclasses  34 values
recall              null
number              stringclasses  8 values
a                   null
b                   null
c                   null
k                   null
f1                  stringclasses  4 values
r                   stringclasses  2 values
mci                 stringclasses  1 value
p                   stringclasses  2 values
sd                  stringclasses  1 value
female              stringclasses  0 values
m                   stringclasses  0 values
food                stringclasses  1 value
f                   stringclasses  1 value
note                stringclasses  20 values
__index_level_0__   int64          22k–106k
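A split with this schema can be inspected with the Hugging Face `datasets` library. The sketch below is illustrative only: the dataset ID is a placeholder (the listing above does not name the repository), and it assumes the split is called `train`.

```python
# Minimal sketch, assuming a Hugging Face dataset with the schema above.
from datasets import load_dataset

ds = load_dataset("someuser/acl-anthology-bib", split="train")  # hypothetical ID

# Only the bibliographic core is densely populated; most other columns are null.
CORE = ["entry_type", "citation_key", "title", "author", "editor", "booktitle",
        "month", "year", "address", "publisher", "url", "doi", "pages", "abstract"]

row = ds[0]  # each row is a plain dict keyed by column name
for field in CORE:
    if row.get(field) is not None:
        print(f"{field}: {row[field]}")
```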
@inproceedings{wang-etal-2022-extracting,
    title = {Extracting and Inferring Personal Attributes from Dialogue},
    author = {Wang, Zhilin and Zhou, Xuhui and Koncel-Kedziorski, Rik and Marin, Alex and Xia, Fei},
    editor = {Liu, Bing and Papangelis, Alexandros and Ultes, Stefan and Rastogi, Abhinav and Chen, Yun-Nung and Spithourakis, Georgios and Nouri, Elnaz and Shi, Weiyan},
    booktitle = {Proceedings of the 4th Workshop on NLP for Conversational AI},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nlp4convai-1.6/},
    doi = {10.18653/v1/2022.nlp4convai-1.6},
    pages = {58--69},
    abstract = {Personal attributes represent structured information about a person, such as their hobbies, pets, family, likes and dislikes. We introduce the tasks of extracting and inferring personal attributes from human-human dialogue, and analyze the linguistic demands of these tasks. To meet these challenges, we introduce a simple and extensible model that combines an autoregressive language model utilizing constrained attribute generation with a discriminative reranker. Our model outperforms strong baselines on extracting personal attributes as well as on inferring personal attributes that are not contained verbatim in utterances and instead require commonsense reasoning and lexical inferences, which occur frequently in everyday conversation. Finally, we demonstrate the benefit of incorporating personal attributes in social chit-chat and task-oriented dialogue settings.}
}
% __index_level_0__: 23,606
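Rows with this schema map directly back to BibTeX, as in the entry above. A small helper sketch follows, under the assumption that each row is a dict keyed by the columns in the schema; `row_to_bibtex` is not part of any library.

```python
# Sketch of the row-to-BibTeX mapping behind the entries in this section.
FIELD_ORDER = ["title", "author", "editor", "booktitle", "month", "year",
               "address", "publisher", "url", "doi", "pages", "abstract"]

def row_to_bibtex(row: dict) -> str:
    lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
    for field in FIELD_ORDER:
        value = row.get(field)
        if value is None:  # null columns are simply omitted
            continue
        # BibTeX convention: three-letter month macros (may, dec) stay unbraced.
        rhs = value if field == "month" else f"{{{value}}}"
        lines.append(f"    {field} = {rhs},")
    lines.append("}")
    return "\n".join(lines)
```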
@inproceedings{tredici-etal-2022-rewriting,
    title = {From Rewriting to Remembering: Common Ground for Conversational {QA} Models},
    author = {Del Tredici, Marco and Shen, Xiaoyu and Barlacchi, Gianni and Byrne, Bill and de Gispert, Adri{\`a}},
    editor = {Liu, Bing and Papangelis, Alexandros and Ultes, Stefan and Rastogi, Abhinav and Chen, Yun-Nung and Spithourakis, Georgios and Nouri, Elnaz and Shi, Weiyan},
    booktitle = {Proceedings of the 4th Workshop on NLP for Conversational AI},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nlp4convai-1.7/},
    doi = {10.18653/v1/2022.nlp4convai-1.7},
    pages = {70--76},
    abstract = {In conversational QA, models have to leverage information in previous turns to answer upcoming questions. Current approaches, such as Question Rewriting, struggle to extract relevant information as the conversation unwinds. We introduce the Common Ground (CG), an approach to accumulate conversational information as it emerges and select the relevant information at every turn. We show that CG offers a more efficient and human-like way to exploit conversational information compared to existing approaches, leading to improvements on Open Domain Conversational QA.}
}
% __index_level_0__: 23,607
@inproceedings{smith-etal-2022-human,
    title = {Human Evaluation of Conversations is an Open Problem: comparing the sensitivity of various methods for evaluating dialogue agents},
    author = {Smith, Eric and Hsu, Orion and Qian, Rebecca and Roller, Stephen and Boureau, Y-Lan and Weston, Jason},
    editor = {Liu, Bing and Papangelis, Alexandros and Ultes, Stefan and Rastogi, Abhinav and Chen, Yun-Nung and Spithourakis, Georgios and Nouri, Elnaz and Shi, Weiyan},
    booktitle = {Proceedings of the 4th Workshop on NLP for Conversational AI},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nlp4convai-1.8/},
    doi = {10.18653/v1/2022.nlp4convai-1.8},
    pages = {77--97},
    abstract = {At the heart of improving conversational AI is the open problem of how to evaluate conversations. Issues with automatic metrics are well known (Liu et al., 2016), with human evaluations still considered the gold standard. Unfortunately, how to perform human evaluations is also an open problem: differing data collection methods have varying levels of human agreement and statistical sensitivity, resulting in differing amounts of human annotation hours and labor costs. In this work we compare five different crowdworker-based human evaluation methods and find that different methods are best depending on the types of models compared, with no clear winner across the board. While this highlights the open problems in the area, our analysis leads to advice on when to use which one, and possible future directions.}
}
% __index_level_0__: 23,608
@inproceedings{sarkar-etal-2022-kg,
    title = {{KG}-{CR}u{SE}: Recurrent Walks over Knowledge Graph for Explainable Conversation Reasoning using Semantic Embeddings},
    author = {Sarkar, Rajdeep and Arcan, Mihael and McCrae, John},
    editor = {Liu, Bing and Papangelis, Alexandros and Ultes, Stefan and Rastogi, Abhinav and Chen, Yun-Nung and Spithourakis, Georgios and Nouri, Elnaz and Shi, Weiyan},
    booktitle = {Proceedings of the 4th Workshop on NLP for Conversational AI},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nlp4convai-1.9/},
    doi = {10.18653/v1/2022.nlp4convai-1.9},
    pages = {98--107},
    abstract = {Knowledge-grounded dialogue systems utilise external knowledge such as knowledge graphs to generate informative and appropriate responses. A crucial challenge of such systems is to select facts from a knowledge graph pertinent to the dialogue context for response generation. This fact selection can be formulated as path traversal over a knowledge graph conditioned on the dialogue context. Such paths can originate from facts mentioned in the dialogue history and terminate at the facts to be mentioned in the response. These walks, in turn, provide an explanation of the flow of the conversation. This work proposes KG-CRuSE, a simple, yet effective LSTM based decoder that utilises the semantic information in the dialogue history and the knowledge graph elements to generate such paths for effective conversation explanation. Extensive evaluations showed that our model outperforms the state-of-the-art models on the OpenDialKG dataset on multiple metrics.}
}
% __index_level_0__: 23,609
@inproceedings{sauer-etal-2022-knowledge,
    title = {Knowledge Distillation Meets Few-Shot Learning: An Approach for Few-Shot Intent Classification Within and Across Domains},
    author = {Sauer, Anna and Asaadi, Shima and K{\"u}ch, Fabian},
    editor = {Liu, Bing and Papangelis, Alexandros and Ultes, Stefan and Rastogi, Abhinav and Chen, Yun-Nung and Spithourakis, Georgios and Nouri, Elnaz and Shi, Weiyan},
    booktitle = {Proceedings of the 4th Workshop on NLP for Conversational AI},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nlp4convai-1.10/},
    doi = {10.18653/v1/2022.nlp4convai-1.10},
    pages = {108--119},
    abstract = {Large Transformer-based natural language understanding models have achieved state-of-the-art performance in dialogue systems. However, scarce labeled data for training, the large model size, and low inference speed hinder their deployment in low-resource scenarios. Few-shot learning and knowledge distillation techniques have been introduced to reduce the need for labeled data and computational resources, respectively. However, these techniques are incompatible because few-shot learning trains models using few data, whereas knowledge distillation requires sufficient data to train smaller, yet competitive models that run on limited computational resources. In this paper, we address the problem of distilling generalizable small models under the few-shot setting for the intent classification task. Considering in-domain and cross-domain few-shot learning scenarios, we introduce an approach for distilling small models that generalize to new intent classes and domains using only a handful of labeled examples. We conduct experiments on public intent classification benchmarks, and observe a slight performance gap between small models and large Transformer-based models. Overall, our results in both few-shot scenarios confirm the generalization ability of the small distilled models while having lower computational costs.}
}
% __index_level_0__: 23,610
@inproceedings{huang-etal-2022-mtl,
    title = {{MTL}-{SLT}: Multi-Task Learning for Spoken Language Tasks},
    author = {Huang, Zhiqi and Rao, Milind and Raju, Anirudh and Zhang, Zhe and Bui, Bach and Lee, Chul},
    editor = {Liu, Bing and Papangelis, Alexandros and Ultes, Stefan and Rastogi, Abhinav and Chen, Yun-Nung and Spithourakis, Georgios and Nouri, Elnaz and Shi, Weiyan},
    booktitle = {Proceedings of the 4th Workshop on NLP for Conversational AI},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nlp4convai-1.11/},
    doi = {10.18653/v1/2022.nlp4convai-1.11},
    pages = {120--130},
    abstract = {Language understanding in speech-based systems has attracted extensive interest from both academic and industrial communities in recent years with the growing demand for voice-based applications. Prior works focus on independent research by the automatic speech recognition (ASR) and natural language processing (NLP) communities, or on jointly modeling the speech and NLP problems focusing on a single dataset or single NLP task. To facilitate the development of spoken language research, we introduce MTL-SLT, a multi-task learning framework for spoken language tasks. MTL-SLT takes speech as input, and outputs transcription, intent, named entities, summaries, and answers to text queries, supporting the tasks of spoken language understanding, spoken summarization and spoken question answering respectively. The proposed framework benefits from three key aspects: 1) pre-trained sub-networks of ASR model and language model; 2) multi-task learning objective to exploit shared knowledge from different tasks; 3) end-to-end training of ASR and downstream NLP task based on sequence loss. We obtain state-of-the-art results on spoken language understanding tasks such as SLURP and ATIS. Spoken summarization results are reported on a new dataset: Spoken-Gigaword.}
}
% __index_level_0__: 23,611
@inproceedings{sundar-heck-2022-multimodal,
    title = {Multimodal Conversational {AI}: A Survey of Datasets and Approaches},
    author = {Sundar, Anirudh and Heck, Larry},
    editor = {Liu, Bing and Papangelis, Alexandros and Ultes, Stefan and Rastogi, Abhinav and Chen, Yun-Nung and Spithourakis, Georgios and Nouri, Elnaz and Shi, Weiyan},
    booktitle = {Proceedings of the 4th Workshop on NLP for Conversational AI},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nlp4convai-1.12/},
    doi = {10.18653/v1/2022.nlp4convai-1.12},
    pages = {131--147},
    abstract = {As humans, we experience the world with all our senses or modalities (sound, sight, touch, smell, and taste). We use these modalities, particularly sight and touch, to convey and interpret specific meanings. Multimodal expressions are central to conversations; a rich set of modalities amplify and often compensate for each other. A multimodal conversational AI system answers questions, fulfills tasks, and emulates human conversations by understanding and expressing itself via multiple modalities. This paper motivates, defines, and mathematically formulates the multimodal conversational research objective. We provide a taxonomy of research required to solve the objective: multimodal representation, fusion, alignment, translation, and co-learning. We survey state-of-the-art datasets and approaches for each research area and highlight their limiting assumptions. Finally, we identify multimodal co-learning as a promising direction for multimodal conversational AI research.}
}
% __index_level_0__: 23,612
@inproceedings{kann-etal-2022-open,
    title = {Open-domain Dialogue Generation: What We Can Do, Cannot Do, And Should Do Next},
    author = {Kann, Katharina and Ebrahimi, Abteen and Koh, Joewie and Dudy, Shiran and Roncone, Alessandro},
    editor = {Liu, Bing and Papangelis, Alexandros and Ultes, Stefan and Rastogi, Abhinav and Chen, Yun-Nung and Spithourakis, Georgios and Nouri, Elnaz and Shi, Weiyan},
    booktitle = {Proceedings of the 4th Workshop on NLP for Conversational AI},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nlp4convai-1.13/},
    doi = {10.18653/v1/2022.nlp4convai-1.13},
    pages = {148--165},
    abstract = {Human{--}computer conversation has long been an interest of artificial intelligence and natural language processing research. Recent years have seen a dramatic improvement in quality for both task-oriented and open-domain dialogue systems, and an increasing amount of research in the area. The goal of this work is threefold: (1) to provide an overview of recent advances in the field of open-domain dialogue, (2) to summarize issues related to ethics, bias, and fairness that the field has identified as well as typical errors of dialogue systems, and (3) to outline important future challenges. We hope that this work will be of interest to both new and experienced researchers in the area.}
}
% __index_level_0__: 23,613
@inproceedings{berlot-attwell-rudzicz-2022-relevance,
    title = {Relevance in Dialogue: Is Less More? An Empirical Comparison of Existing Metrics, and a Novel Simple Metric},
    author = {Berlot-Attwell, Ian and Rudzicz, Frank},
    editor = {Liu, Bing and Papangelis, Alexandros and Ultes, Stefan and Rastogi, Abhinav and Chen, Yun-Nung and Spithourakis, Georgios and Nouri, Elnaz and Shi, Weiyan},
    booktitle = {Proceedings of the 4th Workshop on NLP for Conversational AI},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nlp4convai-1.14/},
    doi = {10.18653/v1/2022.nlp4convai-1.14},
    pages = {166--183},
    abstract = {In this work, we evaluate various existing dialogue relevance metrics, find strong dependency on the dataset, often with poor correlation with human scores of relevance, and propose modifications to reduce data requirements and domain sensitivity while improving correlation. Our proposed metric achieves state-of-the-art performance on the HUMOD dataset while reducing measured sensitivity to dataset by 37{\%}-66{\%}. We achieve this without fine-tuning a pretrained language model, and using only 3,750 unannotated human dialogues and a single negative example. Despite these limitations, we demonstrate competitive performance on four datasets from different domains. Our code, including our metric and experiments, is open sourced.}
}
% __index_level_0__: 23,614
@inproceedings{gupta-etal-2022-retronlu,
    title = {{R}etro{NLU}: Retrieval Augmented Task-Oriented Semantic Parsing},
    author = {Gupta, Vivek and Shrivastava, Akshat and Sagar, Adithya and Aghajanyan, Armen and Savenkov, Denis},
    editor = {Liu, Bing and Papangelis, Alexandros and Ultes, Stefan and Rastogi, Abhinav and Chen, Yun-Nung and Spithourakis, Georgios and Nouri, Elnaz and Shi, Weiyan},
    booktitle = {Proceedings of the 4th Workshop on NLP for Conversational AI},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nlp4convai-1.15/},
    doi = {10.18653/v1/2022.nlp4convai-1.15},
    pages = {184--196},
    abstract = {While large pre-trained language models accumulate a lot of knowledge in their parameters, it has been demonstrated that augmenting them with non-parametric retrieval-based memory has a number of benefits ranging from improved accuracy to data efficiency for knowledge-focused tasks such as question answering. In this work, we apply retrieval-based modeling ideas to the challenging complex task of multi-domain task-oriented semantic parsing for conversational assistants. Our technique, RetroNLU, extends a sequence-to-sequence model architecture with a retrieval component, which is used to retrieve existing similar samples and present them as an additional context to the model. In particular, we analyze two settings, where we augment an input with (a) retrieved nearest neighbor utterances (utterance-nn), and (b) ground-truth semantic parses of nearest neighbor utterances (semparse-nn). Our technique outperforms the baseline method by 1.5{\%} absolute macro-F1, especially in the low-resource setting, matching the baseline model accuracy with only 40{\%} of the complete data. Furthermore, we analyse the quality, model sensitivity, and performance of the nearest neighbor retrieval component for semantic parses of varied utterance complexity.}
}
% __index_level_0__: 23,615
@inproceedings{saha-etal-2022-stylistic,
    title = {Stylistic Response Generation by Controlling Personality Traits and Intent},
    author = {Saha, Sougata and Das, Souvik and Srihari, Rohini},
    editor = {Liu, Bing and Papangelis, Alexandros and Ultes, Stefan and Rastogi, Abhinav and Chen, Yun-Nung and Spithourakis, Georgios and Nouri, Elnaz and Shi, Weiyan},
    booktitle = {Proceedings of the 4th Workshop on NLP for Conversational AI},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nlp4convai-1.16/},
    doi = {10.18653/v1/2022.nlp4convai-1.16},
    pages = {197--211},
    abstract = {Personality traits influence human actions and thoughts, which is manifested in day-to-day conversations. Although glimpses of personality traits are observable in existing open domain conversation corpora, leveraging generic language modelling for response generation overlooks the interlocutor idiosyncrasies, resulting in non-customizable personality agnostic responses. With the motivation of enabling stylistically configurable response generators, in this paper we experiment with end-to-end mechanisms to ground neural response generators based on both (i) interlocutor Big-5 personality traits, and (ii) discourse intent as stylistic control codes. Since most of the existing large-scale open domain chat corpora do not include Big-5 personality traits and discourse intent, we employ automatic annotation schemes to enrich the corpora with noisy estimates of personality and intent annotations, and further assess the impact of using such features as control codes for response generation using automatic evaluation metrics, ablation studies and human judgement. Our experiments illustrate the effectiveness of this strategy resulting in improvements to existing benchmarks. Additionally, we produce two silver-standard corpora annotated with intents and personality traits, which can be of use to the research community.}
}
% __index_level_0__: 23,616
@inproceedings{zhang-etal-2022-toward,
    title = {Toward Knowledge-Enriched Conversational Recommendation Systems},
    author = {Zhang, Tong and Liu, Yong and Li, Boyang and Zhong, Peixiang and Zhang, Chen and Wang, Hao and Miao, Chunyan},
    editor = {Liu, Bing and Papangelis, Alexandros and Ultes, Stefan and Rastogi, Abhinav and Chen, Yun-Nung and Spithourakis, Georgios and Nouri, Elnaz and Shi, Weiyan},
    booktitle = {Proceedings of the 4th Workshop on NLP for Conversational AI},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nlp4convai-1.17/},
    doi = {10.18653/v1/2022.nlp4convai-1.17},
    pages = {212--217},
    abstract = {Conversational Recommendation Systems recommend items through language-based interactions with users. In order to generate naturalistic conversations and effectively utilize knowledge graphs (KGs) containing background information, we propose a novel Bag-of-Entities loss, which encourages the generated utterances to mention concepts related to the item being recommended, such as the genre or director of a movie. We also propose an alignment loss to further integrate KG entities into the response generation network. Experiments on the large-scale REDIAL dataset demonstrate that the proposed system consistently outperforms state-of-the-art baselines.}
}
% __index_level_0__: 23,617
@inproceedings{han-etal-2022-understanding,
    title = {Understanding and Improving the Exemplar-based Generation for Open-domain Conversation},
    author = {Han, Seungju and Kim, Beomsu and Seo, Seokjun and Erdenee, Enkhbayar and Chang, Buru},
    editor = {Liu, Bing and Papangelis, Alexandros and Ultes, Stefan and Rastogi, Abhinav and Chen, Yun-Nung and Spithourakis, Georgios and Nouri, Elnaz and Shi, Weiyan},
    booktitle = {Proceedings of the 4th Workshop on NLP for Conversational AI},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nlp4convai-1.18/},
    doi = {10.18653/v1/2022.nlp4convai-1.18},
    pages = {218--230},
    abstract = {Exemplar-based generative models for open-domain conversation produce responses based on the exemplars provided by the retriever, taking advantage of generative models and retrieval models. However, due to the one-to-many problem of the open-domain conversation, they often ignore the retrieved exemplars while generating responses or produce responses over-fitted to the retrieved exemplars. To address these drawbacks, we introduce a training method selecting exemplars that are semantically relevant to the gold response but lexically distanced from the gold response. In the training phase, our training method first uses the gold response instead of dialogue context as a query to select exemplars that are semantically relevant to the gold response. Then, it eliminates the exemplars that lexically resemble the gold responses to alleviate the dependency of the generative models on those exemplars. The remaining exemplars could be irrelevant to the given context since they are searched depending on the gold response. Thus, our training method further utilizes the relevance scores between the given context and the exemplars to penalize the irrelevant exemplars. Extensive experiments demonstrate that our proposed training method alleviates the drawbacks of the existing exemplar-based generative models and significantly improves the performance in terms of appropriateness and informativeness.}
}
% __index_level_0__: 23,618
@inproceedings{nordquist-meyers-2022-breadth,
    title = {On Breadth Alone: Improving the Precision of Terminology Extraction Systems on Patent Corpora},
    author = {Nordquist, Sean and Meyers, Adam},
    editor = {Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel},
    booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2022},
    month = dec,
    year = {2022},
    address = {Abu Dhabi, United Arab Emirates (Hybrid)},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nllp-1.1/},
    doi = {10.18653/v1/2022.nllp-1.1},
    pages = {1--11},
    abstract = {Automatic Terminology Extraction (ATE) methods are a class of linguistic, statistical, machine learning or hybrid techniques for identifying terminology in a set of documents. Most modern ATE methods use a statistical measure of how important or characteristic a potential term is to a foreground corpus by using a second background corpus as a baseline. While many variables with ATE methods have been carefully evaluated and tuned in the literature, the effects of choosing a particular background corpus over another are not obvious. In this paper, we propose a methodology that allows us to adjust the relative breadth of the foreground and background corpora in patent documents by taking advantage of the Cooperative Patent Classification (CPC) scheme. Our results show that for every foreground corpus, the broadest background corpus gave the worst performance, in the worst case that difference is 17{\%}. Similarly, the least broad background corpus gave suboptimal performance in all three experiments. We also demonstrate qualitative differences between background corpora {--} narrower background corpora tend towards more technical output. We expect our results to generalize to terminology extraction for other legal and technical documents and, generally, to the foreground/background approach to ATE.}
}
% __index_level_0__: 23,634
@inproceedings{gubelmann-etal-2022-means,
    title = {On What it Means to Pay Your Fair Share: Towards Automatically Mapping Different Conceptions of Tax Justice in Legal Research Literature},
    author = {Gubelmann, Reto and Hongler, Peter and Margadant, Elina and Handschuh, Siegfried},
    editor = {Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel},
    booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2022},
    month = dec,
    year = {2022},
    address = {Abu Dhabi, United Arab Emirates (Hybrid)},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nllp-1.2/},
    doi = {10.18653/v1/2022.nllp-1.2},
    pages = {12--30},
    abstract = {In this article, we explore the potential and challenges of applying transformer-based pre-trained language models (PLMs) and statistical methods to a particularly challenging, yet highly important and largely uncharted domain: normative discussions in tax law research. In our conviction, the role of NLP in this essentially contested territory is to make explicit implicit normative assumptions, and to foster debates across ideological divides. To this end, we propose the first steps towards a method that automatically labels normative statements in tax law research, and that suggests the normative background of these statements. Our results are encouraging, but it is clear that there is still room for improvement.}
}
% __index_level_0__: 23,635
@inproceedings{semo-etal-2022-classactionprediction,
    title = {{C}lass{A}ction{P}rediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the {US}},
    author = {Semo, Gil and Bernsohn, Dor and Hagag, Ben and Hayat, Gila and Niklaus, Joel},
    editor = {Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel},
    booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2022},
    month = dec,
    year = {2022},
    address = {Abu Dhabi, United Arab Emirates (Hybrid)},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nllp-1.3/},
    doi = {10.18653/v1/2022.nllp-1.3},
    pages = {31--46},
    abstract = {The research field of Legal Natural Language Processing (NLP) has been very active recently, with Legal Judgment Prediction (LJP) becoming one of the most extensively studied tasks. To date, most publicly released LJP datasets originate from countries with civil law. In this work, we release, for the first time, a challenging LJP dataset focused on class action cases in the US. It is the first dataset in the common law system that focuses on the harder and more realistic task involving the complaints as input instead of the often used facts summary written by the court. Additionally, we study the difficulty of the task by collecting expert human predictions, showing that even human experts can only reach 53{\%} accuracy on this dataset. Our Longformer model clearly outperforms the human baseline (63{\%}), despite only considering the first 2,048 tokens. Furthermore, we perform a detailed error analysis and find that the Longformer model is significantly better calibrated than the human experts. Finally, we publicly release the dataset and the code used for the experiments.}
}
% __index_level_0__: 23,636
@inproceedings{percin-etal-2022-combining,
    title = {Combining {W}ord{N}et and Word Embeddings in Data Augmentation for Legal Texts},
    author = {Per{\c{c}}in, Sezen and Galassi, Andrea and Lagioia, Francesca and Ruggeri, Federico and Santin, Piera and Sartor, Giovanni and Torroni, Paolo},
    editor = {Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel},
    booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2022},
    month = dec,
    year = {2022},
    address = {Abu Dhabi, United Arab Emirates (Hybrid)},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nllp-1.4/},
    doi = {10.18653/v1/2022.nllp-1.4},
    pages = {47--52},
    abstract = {Creating balanced labeled textual corpora for complex tasks, like legal analysis, is a challenging and expensive process that often requires the collaboration of domain experts. To address this problem, we propose a data augmentation method based on the combination of GloVe word embeddings and the WordNet ontology. We present an example of application in the legal domain, specifically on decisions of the Court of Justice of the European Union. Our evaluation with human experts confirms that our method is more robust than the alternatives.}
}
% __index_level_0__: 23,637
@inproceedings{zufall-etal-2022-legal,
    title = {A Legal Approach to Hate Speech {--} Operationalizing the {EU}'s Legal Framework against the Expression of Hatred as an {NLP} Task},
    author = {Zufall, Frederike and Hamacher, Marius and Kloppenborg, Katharina and Zesch, Torsten},
    editor = {Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel},
    booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2022},
    month = dec,
    year = {2022},
    address = {Abu Dhabi, United Arab Emirates (Hybrid)},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nllp-1.5/},
    doi = {10.18653/v1/2022.nllp-1.5},
    pages = {53--64},
    abstract = {We propose a {\textquoteleft}legal approach' to hate speech detection by operationalization of the decision as to whether a post is subject to criminal law into an NLP task. Comparing existing regulatory regimes for hate speech, we base our investigation on the European Union's framework as it provides a widely applicable legal minimum standard. Accurately deciding whether a post is punishable or not usually requires legal education. We show that, by breaking the legal assessment down into a series of simpler sub-decisions, even laypersons can annotate consistently. Based on a newly annotated dataset, our experiments show that directly learning an automated model of punishable content is challenging. However, learning the two sub-tasks of {\textquoteleft}target group' and {\textquoteleft}targeting conduct' instead of a holistic, end-to-end approach to the legal assessment yields better results. Overall, our method also provides decisions that are more transparent than those of end-to-end models, which is a crucial point in legal decision-making.}
}
% __index_level_0__: 23,638
@inproceedings{lukose-etal-2022-privacy,
    title = {Privacy Pitfalls of Online Service Terms and Conditions: a Hybrid Approach for Classification and Summarization},
    author = {Lukose, Emilia and De, Suparna and Johnson, Jon},
    editor = {Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel},
    booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2022},
    month = dec,
    year = {2022},
    address = {Abu Dhabi, United Arab Emirates (Hybrid)},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nllp-1.6/},
    doi = {10.18653/v1/2022.nllp-1.6},
    pages = {65--75},
    abstract = {Verbose and complicated legal terminology in online service terms and conditions (T{\&}C) means that users typically don't read these documents before accepting the terms of such unilateral service contracts. With such services becoming part of mainstream digital life, highlighting Terms of Service (ToS) clauses that impact the collection and use of user data and privacy is an important concern. Advances in text summarization can help to create informative and concise summaries of the terms, but existing approaches geared towards news and microblogging corpora are not directly applicable to the ToS domain, which is hindered by a lack of T{\&}C-relevant resources for training and evaluation. This paper presents a ToS model, developing a hybrid extractive-classifier-abstractive pipeline that highlights the privacy and data collection/use-related sections in a ToS document and paraphrases these into concise and informative sentences. Relying on significantly less training data (4313 training pairs) than previous representative works (287,226 pairs), our model outperforms extractive baselines by at least 50{\%} in ROUGE-1 score and 54{\%} in METEOR score. The paper also contributes to existing community efforts by curating a dataset of online service T{\&}C, through a developed web scraping tool.}
}
% __index_level_0__: 23,639
@inproceedings{schraagen-etal-2022-abstractive,
    title = {Abstractive Summarization of {D}utch Court Verdicts Using Sequence-to-sequence Models},
    author = {Schraagen, Marijn and Bex, Floris and Van De Luijtgaarden, Nick and Prijs, Dani{\"e}l},
    editor = {Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel},
    booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2022},
    month = dec,
    year = {2022},
    address = {Abu Dhabi, United Arab Emirates (Hybrid)},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nllp-1.7/},
    doi = {10.18653/v1/2022.nllp-1.7},
    pages = {76--87},
    abstract = {With the legal sector embracing digitization, the increasing availability of information has led to a need for systems that can automatically summarize legal documents. Most existing research on legal text summarization has so far focused on extractive models, which can result in awkward summaries, as sentences in legal documents can be very long and detailed. In this study, we apply two abstractive summarization models on a Dutch legal domain dataset. The results show that existing models transfer quite well across domains and languages: the ROUGE scores of our experiments are comparable to state-of-the-art studies on English news article texts. Examining one of the models showed the capability of rewriting long legal sentences to much shorter ones, using mostly vocabulary from the source document. Human evaluation shows that for both models, hand-made summaries are still perceived as more relevant and readable, and automatic summaries do not always capture elements such as background, considerations and judgement. Still, generated summaries are valuable if only a keyword summary or no summary at all is present.}
}
% __index_level_0__: 23,640
@inproceedings{maroudas-etal-2022-legal,
    title = {Legal-Tech Open Diaries: Lesson learned on how to develop and deploy light-weight models in the era of humongous Language Models},
    author = {Maroudas, Stelios and Legkas, Sotiris and Malakasiotis, Prodromos and Chalkidis, Ilias},
    editor = {Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel},
    booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2022},
    month = dec,
    year = {2022},
    address = {Abu Dhabi, United Arab Emirates (Hybrid)},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nllp-1.8/},
    doi = {10.18653/v1/2022.nllp-1.8},
    pages = {88--110},
    abstract = {In the era of billion-parameter-sized Language Models (LMs), start-ups have to follow trends and adapt their technology accordingly. Nonetheless, there are open challenges since the development and deployment of large models comes with a need for high computational resources and has economic consequences. In this work, we follow the steps of the R{\&}D group of a modern legal-tech start-up and present important insights on model development and deployment. We start from ground zero by pre-training multiple domain-specific multi-lingual LMs which are a better fit to contractual and regulatory text compared to the available alternatives (XLM-R). We present benchmark results of such models in a half-public half-private legal benchmark comprising 5 downstream tasks, showing the impact of larger model size. Lastly, we examine the impact of a full-scale pipeline for model compression which includes: a) Parameter Pruning, b) Knowledge Distillation, and c) Quantization: The resulting models are much more efficient without sacrificing performance at large.}
}
% __index_level_0__: 23,641
@inproceedings{bannihatti-kumar-etal-2022-towards,
    title = {Towards Cross-Domain Transferability of Text Generation Models for Legal Text},
    author = {Bannihatti Kumar, Vinayshekhar and Bhattacharjee, Kasturi and Gangadharaiah, Rashmi},
    editor = {Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel},
    booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2022},
    month = dec,
    year = {2022},
    address = {Abu Dhabi, United Arab Emirates (Hybrid)},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nllp-1.9/},
    doi = {10.18653/v1/2022.nllp-1.9},
    pages = {111--118},
    abstract = {Legalese can often be filled with verbose domain-specific jargon which can make it challenging to understand and use for non-experts. Creating succinct summaries of legal documents often makes it easier for user comprehension. However, obtaining labeled data for every domain of legal text is challenging, which makes cross-domain transferability of text generation models for legal text, an important area of research. In this paper, we explore the ability of existing state-of-the-art T5 {\&} BART-based summarization models to transfer across legal domains. We leverage publicly available datasets across four domains for this task, one of which is a new resource for summarizing privacy policies, that we curate and release for academic research. Our experiments demonstrate the low cross-domain transferability of these models, while also highlighting the benefits of combining different domains. Further, we compare the effectiveness of standard metrics for this task and illustrate the vast differences in their performance.}
}
% __index_level_0__: 23,642
@inproceedings{li-etal-2022-parameter,
    title = {Parameter-Efficient Legal Domain Adaptation},
    author = {Li, Jonathan and Bhambhoria, Rohan and Zhu, Xiaodan},
    editor = {Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel},
    booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2022},
    month = dec,
    year = {2022},
    address = {Abu Dhabi, United Arab Emirates (Hybrid)},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nllp-1.10/},
    doi = {10.18653/v1/2022.nllp-1.10},
    pages = {119--129},
    abstract = {Seeking legal advice is often expensive. Recent advancements in machine learning for solving complex problems can be leveraged to help make legal services more accessible to the public. However, real-life applications encounter significant challenges. State-of-the-art language models are growing increasingly large, making parameter-efficient learning increasingly important. Unfortunately, parameter-efficient methods perform poorly with small amounts of data, which are common in the legal domain (where data labelling costs are high). To address these challenges, we propose parameter-efficient legal domain adaptation, which uses vast unsupervised legal data from public legal forums to perform legal pre-training. This method exceeds or matches the few-shot performance of existing models such as LEGAL-BERT on various legal tasks while tuning only approximately 0.1{\%} of model parameters. Additionally, we show that our method can achieve calibration comparable to existing methods across several tasks. To the best of our knowledge, this work is among the first to explore parameter-efficient methods of tuning language models in the legal domain.}
}
% __index_level_0__: 23,643
@inproceedings{mamakas-etal-2022-processing,
    title = {Processing Long Legal Documents with Pre-trained Transformers: Modding {L}egal{BERT} and Longformer},
    author = {Mamakas, Dimitris and Tsotsi, Petros and Androutsopoulos, Ion and Chalkidis, Ilias},
    editor = {Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel},
    booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2022},
    month = dec,
    year = {2022},
    address = {Abu Dhabi, United Arab Emirates (Hybrid)},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nllp-1.11/},
    doi = {10.18653/v1/2022.nllp-1.11},
    pages = {130--142},
    abstract = {Pre-trained Transformers currently dominate most NLP tasks. They impose, however, limits on the maximum input length (512 sub-words in BERT), which are too restrictive in the legal domain. Even sparse-attention models, such as Longformer and BigBird, which increase the maximum input length to 4,096 sub-words, severely truncate texts in three of the six datasets of LexGLUE. Simpler linear classifiers with TF-IDF features can handle texts of any length, require far less resources to train and deploy, but are usually outperformed by pre-trained Transformers. We explore two directions to cope with long legal texts: (i) modifying a Longformer warm-started from LegalBERT to handle even longer texts (up to 8,192 sub-words), and (ii) modifying LegalBERT to use TF-IDF representations. The first approach is the best in terms of performance, surpassing a hierarchical version of LegalBERT, which was the previous state of the art in LexGLUE. The second approach leads to computationally more efficient models at the expense of lower performance, but the resulting models still outperform overall a linear SVM with TF-IDF features in long legal document classification.}
}
% __index_level_0__: 23,644
@inproceedings{hwang-etal-2022-data,
    title = {Data-efficient end-to-end Information Extraction for Statistical Legal Analysis},
    author = {Hwang, Wonseok and Eom, Saehee and Lee, Hanuhl and Park, Hai Jin and Seo, Minjoon},
    editor = {Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel},
    booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2022},
    month = dec,
    year = {2022},
    address = {Abu Dhabi, United Arab Emirates (Hybrid)},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nllp-1.12/},
    doi = {10.18653/v1/2022.nllp-1.12},
    pages = {143--152},
    abstract = {Legal practitioners often face a vast amount of documents. Lawyers, for instance, search for appropriate precedents favorable to their clients, while the number of legal precedents is ever-growing. Although legal search engines can assist finding individual target documents and narrowing down the number of candidates, retrieved information is often presented as unstructured text and users have to examine each document thoroughly which could lead to information overloading. This also makes their statistical analysis challenging. Here, we present an end-to-end information extraction (IE) system for legal documents. By formulating IE as a generation task, our system can be easily applied to various tasks without domain-specific engineering effort. The experimental results of four IE tasks on Korean precedents show that our IE system can achieve competent scores (-2.3 on average) compared to the rule-based baseline with as few as 50 training examples per task and higher scores (+5.4 on average) with 200 examples. Finally, our statistical analysis on two case categories {---} drunk driving and fraud {---} with 35k precedents reveals that the resulting structured information from our IE system faithfully reflects the macroscopic features of the Korean legal system.}
}
% __index_level_0__: 23,645
@inproceedings{malik-etal-2022-semantic,
    title = {Semantic Segmentation of Legal Documents via Rhetorical Roles},
    author = {Malik, Vijit and Sanjay, Rishabh and Guha, Shouvik Kumar and Hazarika, Angshuman and Nigam, Shubham Kumar and Bhattacharya, Arnab and Modi, Ashutosh},
    editor = {Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel},
    booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2022},
    month = dec,
    year = {2022},
    address = {Abu Dhabi, United Arab Emirates (Hybrid)},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nllp-1.13/},
    doi = {10.18653/v1/2022.nllp-1.13},
    pages = {153--171},
    abstract = {Legal documents are unstructured, use legal jargon, and have considerable length, making them difficult to process automatically via conventional text processing techniques. A legal document processing system would benefit substantially if the documents could be segmented into coherent information units. This paper proposes a new corpus of legal documents annotated (with the help of legal experts) with a set of 13 semantically coherent unit labels (referred to as Rhetorical Roles), e.g., facts, arguments, statute, issue, precedent, ruling, and ratio. We perform a thorough analysis of the corpus and the annotations. For automatically segmenting the legal documents, we experiment with the task of rhetorical role prediction: given a document, predict the text segments corresponding to various roles. Using the created corpus, we experiment extensively with various deep learning-based baseline models for the task. Further, we develop a multitask learning (MTL) based deep model with document rhetorical role label shift as an auxiliary task for segmenting a legal document. The proposed model shows superior performance over the existing models. We also experiment with domain transfer and model distillation techniques to assess model performance in limited-data conditions.}
}
% __index_level_0__: 23,646
@inproceedings{yin-habernal-2022-privacy,
    title = {Privacy-Preserving Models for Legal Natural Language Processing},
    author = {Yin, Ying and Habernal, Ivan},
    editor = {Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel},
    booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2022},
    month = dec,
    year = {2022},
    address = {Abu Dhabi, United Arab Emirates (Hybrid)},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nllp-1.14/},
    doi = {10.18653/v1/2022.nllp-1.14},
    pages = {172--183},
    abstract = {Pre-training large transformer models with in-domain data improves domain adaptation and helps gain performance on the domain-specific downstream tasks. However, sharing models pre-trained on potentially sensitive data is prone to adversarial privacy attacks. In this paper, we ask to what extent we can guarantee the privacy of pre-training data and, at the same time, achieve better downstream performance on legal tasks without the need of additional labeled data. We extensively experiment with scalable self-supervised learning of transformer models under the formal paradigm of differential privacy and show that under specific training configurations we can improve downstream performance without sacrificing privacy protection for the in-domain data. Our main contribution is utilizing differential privacy for large-scale pre-training of transformer language models in the legal NLP domain, which, to the best of our knowledge, has not been addressed before.}
}
% __index_level_0__: 23,647
@inproceedings{kalamkar-etal-2022-named,
    title = {Named Entity Recognition in {I}ndian court judgments},
    author = {Kalamkar, Prathamesh and Agarwal, Astha and Tiwari, Aman and Gupta, Smita and Karn, Saurabh and Raghavan, Vivek},
    editor = {Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel},
    booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2022},
    month = dec,
    year = {2022},
    address = {Abu Dhabi, United Arab Emirates (Hybrid)},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nllp-1.15/},
    doi = {10.18653/v1/2022.nllp-1.15},
    pages = {184--193},
    abstract = {Identification of named entities from legal texts is an essential building block for developing other legal Artificial Intelligence applications. Named Entities in legal texts are slightly different and more fine-grained than commonly used named entities like Person, Organization, Location etc. In this paper, we introduce a new corpus of 46,545 annotated legal named entities mapped to 14 legal entity types. A baseline model for extracting legal named entities from judgment text is also developed.}
}
% __index_level_0__: 23,648
@inproceedings{bongard-etal-2022-legal,
    title = {The Legal Argument Reasoning Task in Civil Procedure},
    author = {Bongard, Leonard and Held, Lena and Habernal, Ivan},
    editor = {Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel},
    booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2022},
    month = dec,
    year = {2022},
    address = {Abu Dhabi, United Arab Emirates (Hybrid)},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nllp-1.17/},
    doi = {10.18653/v1/2022.nllp-1.17},
    pages = {194--207},
    abstract = {We present a new NLP task and dataset from the domain of the U.S. civil procedure. Each instance of the dataset consists of a general introduction to the case, a particular question, and a possible solution argument, accompanied by a detailed analysis of why the argument applies in that case. Since the dataset is based on a book aimed at law students, we believe that it represents a truly complex task for benchmarking modern legal language models. Our baseline evaluation shows that fine-tuning a legal transformer provides some advantage over random baseline models, but our analysis reveals that the actual ability to infer legal arguments remains a challenging open research question.}
}
% __index_level_0__: 23,649
@inproceedings{sheik-etal-2022-efficient,
    title = {Efficient Deep Learning-based Sentence Boundary Detection in Legal Text},
    author = {Sheik, Reshma and T, Gokul and Nirmala, S},
    editor = {Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel},
    booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2022},
    month = dec,
    year = {2022},
    address = {Abu Dhabi, United Arab Emirates (Hybrid)},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nllp-1.18/},
    doi = {10.18653/v1/2022.nllp-1.18},
    pages = {208--217},
    abstract = {A key component of the Natural Language Processing (NLP) pipeline is Sentence Boundary Detection (SBD). Erroneous SBD could affect other processing steps and reduce performance. A few criteria based on punctuation and capitalization are necessary to identify sentence borders in well-defined corpora. However, due to several grammatical ambiguities, the complex structure of legal data poses difficulties for SBD. In this paper, we have trained a neural network framework for identifying the end of the sentence in legal text. We used several state-of-the-art deep learning models, analyzed their performance, and identified that Convolutional Neural Network (CNN) outperformed other deep learning frameworks. We compared the results with rule-based, statistical, and transformer-based frameworks. The best neural network model outscored the popular rule-based framework with an improvement of 8{\%} in the F1 score. Although domain-specific statistical models have slightly improved performance, the trained CNN is 80 times faster in run-time and doesn't require much feature engineering. Furthermore, after extensive pretraining, the transformer models fall short in overall performance compared to the best deep learning model.}
}
% __index_level_0__: 23,650
@inproceedings{braun-2022-tracking,
    title = {Tracking Semantic Shifts in {G}erman Court Decisions with Diachronic Word Embeddings},
    author = {Braun, Daniel},
    editor = {Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel},
    booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2022},
    month = dec,
    year = {2022},
    address = {Abu Dhabi, United Arab Emirates (Hybrid)},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nllp-1.19/},
    doi = {10.18653/v1/2022.nllp-1.19},
    pages = {218--227},
    abstract = {Language and its usage change over time. While legal language is arguably more stable than everyday language, it is still subject to change. Sometimes it changes gradually and slowly, sometimes almost instantaneously, for example through legislative changes. This paper presents an application of diachronic word embeddings to track changes in the usage of language by German courts triggered by changing legislation, based on a corpus of more than 200,000 documents. The results show the swift and lasting effect that changes in legislation can have on the usage of language by courts and suggest that using time-restricted word embedding models could be beneficial for downstream NLP tasks.}
}
% __index_level_0__: 23,651
@inproceedings{m-benatti-etal-2022-disclose,
    title = {Should {I} disclose my dataset? Caveats between reproducibility and individual data rights},
    author = {Benatti, Raysa M. and Villarroel, Camila M. L. and Avila, Sandra and Colombini, Esther L. and Severi, Fabiana},
    editor = {Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel},
    booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2022},
    month = dec,
    year = {2022},
    address = {Abu Dhabi, United Arab Emirates (Hybrid)},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nllp-1.20/},
    doi = {10.18653/v1/2022.nllp-1.20},
    pages = {228--237},
    abstract = {Natural language processing techniques have helped domain experts solve legal problems. Digital availability of court documents increases possibilities for researchers, who can access them as a source for building datasets {---} whose disclosure is aligned with good reproducibility practices in computational research. Large and digitized court systems, such as the Brazilian one, are prone to be explored in that sense. However, personal data protection laws impose restrictions on data exposure and state principles about which researchers should be mindful. Special caution must be taken in cases with human rights violations, such as gender discrimination, over which we elaborate as an example of interest. We present legal and ethical considerations on the issue, as well as guidelines for researchers dealing with this kind of data and deciding whether to disclose it.}
}
% __index_level_0__: 23,652
@inproceedings{xu-etal-2022-attack,
    title = {Attack on Unfair {T}o{S} Clause Detection: A Case Study using Universal Adversarial Triggers},
    author = {Xu, Shanshan and Broda, Irina and Haddad, Rashid and Negrini, Marco and Grabmair, Matthias},
    editor = {Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel},
    booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2022},
    month = dec,
    year = {2022},
    address = {Abu Dhabi, United Arab Emirates (Hybrid)},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.nllp-1.21/},
    doi = {10.18653/v1/2022.nllp-1.21},
    pages = {238--245},
    abstract = {Recent work has demonstrated that natural language processing techniques can support consumer protection by automatically detecting unfair clauses in the Terms of Service (ToS) Agreement. This work demonstrates that transformer-based ToS analysis systems are vulnerable to adversarial attacks. We conduct experiments attacking an unfair-clause detector with universal adversarial triggers. Experiments show that a minor perturbation of the text can considerably reduce the detection performance. Moreover, to measure the detectability of the triggers, we conduct a detailed human evaluation study by collecting both answer accuracy and response time from the participants. The results show that the naturalness of the triggers remains key to tricking readers.}
}
% __index_level_0__: 23,653
inproceedings
au-etal-2022-e
{E}-{NER} {---} An Annotated Named Entity Recognition Corpus of Legal Text
Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nllp-1.22/
Au, Ting Wai Terence and Lampos, Vasileios and Cox, Ingemar
Proceedings of the Natural Legal Language Processing Workshop 2022
246--255
Identifying named entities such as a person, location or organization, in documents can highlight key information to readers. Training Named Entity Recognition (NER) models requires an annotated data set, which can be a time-consuming, labour-intensive task. Nevertheless, there are publicly available NER data sets for general English. Recently there has been interest in developing NER for legal text. However, prior work and experimental results reported here indicate that there is a significant degradation in performance when NER methods trained on a general English data set are applied to legal text. We describe a publicly available legal NER data set, called E-NER, based on legal company filings available from the US Securities and Exchange Commission's EDGAR data set. Training a number of different NER algorithms on the general English CoNLL-2003 corpus but testing on our test collection confirmed significant degradations in accuracy, as measured by the F1-score, of between 29.4{\%} and 60.4{\%}, compared to training and testing on the E-NER collection.
null
null
10.18653/v1/2022.nllp-1.22
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,654
inproceedings
li-etal-2022-detecting
Detecting Relevant Differences Between Similar Legal Texts
Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nllp-1.24/
Li, Xiang and Gao, Jiaxun and Inkpen, Diana and Alschner, Wolfgang
Proceedings of the Natural Legal Language Processing Workshop 2022
256--264
Given two similar legal texts, it is useful to be able to focus only on the parts that contain relevant differences. However, because of variation in linguistic structure and terminology, it is not easy to identify true semantic differences. An accurate difference detection model between similar legal texts is therefore in demand, in order to increase the efficiency of legal research and document analysis. In this paper, we automatically label a training dataset of sentence pairs using an existing legal resource of international investment treaties that were already manually annotated with metadata. Then we propose models based on state-of-the-art deep learning techniques for the novel task of detecting relevant differences. In addition to providing solutions for this task, we include models for automatically producing metadata for the treaties that do not have it.
null
null
10.18653/v1/2022.nllp-1.24
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,655
inproceedings
bergam-etal-2022-legal
Legal and Political Stance Detection of {SCOTUS} Language
Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nllp-1.25/
Bergam, Noah and Allaway, Emily and Mckeown, Kathleen
Proceedings of the Natural Legal Language Processing Workshop 2022
265--275
We analyze publicly available US Supreme Court documents using automated stance detection. In the first phase of our work, we investigate the extent to which the Court's public-facing language is political. We propose and calculate two distinct ideology metrics of SCOTUS justices using oral argument transcripts. We then compare these language-based metrics to existing social scientific measures of the ideology of the Supreme Court and the public. Through this cross-disciplinary analysis, we find that justices who are more responsive to public opinion tend to express their ideology during oral arguments. This observation provides a new kind of evidence in favor of the attitudinal change hypothesis of Supreme Court justice behavior. As a natural extension of this political stance detection, we propose the more specialized task of legal stance detection with our new dataset SC-stance, which matches written opinions to legal questions. We find competitive performance on this dataset using language adapters trained on legal documents.
null
null
10.18653/v1/2022.nllp-1.25
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,656
inproceedings
joshi-etal-2022-graph
Graph-based Keyword Planning for Legal Clause Generation from Topics
Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nllp-1.26/
Joshi, Sagar and Balaji, Sumanth and Garimella, Aparna and Varma, Vasudeva
Proceedings of the Natural Legal Language Processing Workshop 2022
276--286
Generating domain-specific content such as legal clauses based on minimal user-provided information can be of significant benefit in automating legal contract generation. In this paper, we propose a controllable graph-based mechanism that can generate legal clauses using only the topic or type of the legal clauses. Our pipeline consists of two stages involving a graph-based planner followed by a clause generator. The planner outlines the content of a legal clause as a sequence of keywords in the order of generic to more specific clause information based on the input topic using a controllable graph-based mechanism. The generation stage takes in a given plan and generates a clause. We illustrate the effectiveness of our proposed two-stage approach on a broad set of clause topics in contracts.
null
null
10.18653/v1/2022.nllp-1.26
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,657
inproceedings
van-hofslot-etal-2022-automatic
Automatic Classification of Legal Violations in Cookie Banner Texts
Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nllp-1.27/
Van Hofslot, Marieke and Akdag Salah, Almila and Gatt, Albert and Santos, Cristiana
Proceedings of the Natural Legal Language Processing Workshop 2022
287--295
Cookie banners are designed to request consent from website visitors for their personal data. Recent research suggests that a high percentage of cookie banners violate legal regulations as defined by the General Data Protection Regulation (GDPR) and the ePrivacy Directive. In this paper, we focus on the language used in these cookie banners and whether these violations can be detected automatically. We make use of a small cookie banner dataset that is annotated by five experts for legal violations and test it with state-of-the-art classification models, namely BERT, LEGAL-BERT, BART in a zero-shot setting, and BERT with LIWC embeddings. Our results show that none of the models outperform the others in all classes, but in general, BERT and LEGAL-BERT provide the highest accuracy results (70{\%}-97{\%}). However, they are influenced by the small size and the unbalanced distributions in the dataset.
null
null
10.18653/v1/2022.nllp-1.27
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,658
inproceedings
garimella-etal-2022-text
Text Simplification for Legal Domain: {I}nsights and Challenges
Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nllp-1.28/
Garimella, Aparna and Sancheti, Abhilasha and Aggarwal, Vinay and Ganesh, Ananya and Chhaya, Niyati and Kambhatla, Nandakishore
Proceedings of the Natural Legal Language Processing Workshop 2022
296--304
Legal documents such as contracts contain complex and domain-specific jargon, long and nested sentences, and often present several details that may be difficult for laypeople without domain expertise to understand. In this paper, we explore the problem of text simplification (TS) in the legal domain. The main challenge is the lack of complex-simple parallel datasets for the legal domain. We investigate some of the existing datasets, methods, and metrics in the TS literature for simplifying legal texts, and perform human evaluation to analyze the gaps. We present some of the challenges involved, and outline a few open questions that need to be addressed for future research in this direction.
null
null
10.18653/v1/2022.nllp-1.28
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,659
inproceedings
smadu-etal-2022-legal
Legal Named Entity Recognition with Multi-Task Domain Adaptation
Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nllp-1.29/
Sm{\u{a}}du, R{\u{a}}zvan-Alexandru and Dinic{\u{a}}, Ion-Robert and Avram, Andrei-Marius and Cercel, Dumitru-Clementin and Pop, Florin and Cercel, Mihaela-Claudia
Proceedings of the Natural Legal Language Processing Workshop 2022
305--321
Named Entity Recognition (NER) is a well-explored area from Information Retrieval and Natural Language Processing with an extensive research community. Despite that, few languages, such as English and German, are well-resourced, whereas many other languages, such as Romanian, have scarce resources, especially in domain-specific applications. In this work, we address the NER problem in the legal domain for both the Romanian and German languages and evaluate the performance of our proposed method based on domain adaptation. We employ multi-task learning to jointly train a neural network on two legal and general domains and perform adaptation among them. The results show that domain adaptation increases performance by a small amount, under 1{\%}, while considerable improvements are observed in the recall metric.
null
null
10.18653/v1/2022.nllp-1.29
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,660
inproceedings
zhong-litman-2022-computing
Computing and Exploiting Document Structure to Improve Unsupervised Extractive Summarization of Legal Case Decisions
Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nllp-1.30/
Zhong, Yang and Litman, Diane
Proceedings of the Natural Legal Language Processing Workshop 2022
322--337
Though many algorithms can be used to automatically summarize legal case decisions, most fail to incorporate domain knowledge about how important sentences in a legal decision relate to a representation of its document structure. For example, analysis of a legal case summarization dataset demonstrates that sentences serving different types of argumentative roles in the decision appear in different sections of the document. In this work, we propose an unsupervised graph-based ranking model that uses a reweighting algorithm to exploit properties of the document structure of legal case decisions. We also explore the impact of using different methods to compute the document structure. Results on the Canadian Legal Case Law dataset show that our proposed method outperforms several strong baselines.
null
null
10.18653/v1/2022.nllp-1.30
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,661
inproceedings
al-qurishi-etal-2022-aralegal
{A}ra{L}egal-{BERT}: A pretrained language model for {A}rabic Legal text
Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nllp-1.31/
Al-qurishi, Muhammad and Alqaseemi, Sarah and Souissi, Riad
Proceedings of the Natural Legal Language Processing Workshop 2022
338--344
The effectiveness of the bidirectional encoder representations from transformers (BERT) model for multiple linguistic tasks is well documented. However, its potential for a narrow and specific domain, such as the legal domain, has not been fully explored. In this study, we examine the use of BERT in the Arabic legal domain and customize this language model for several downstream tasks using different domain-relevant training and test datasets to train BERT from scratch. We introduce AraLegal-BERT, a bidirectional encoder transformer-based model that has been thoroughly tested and carefully optimized with the goal of amplifying the impact of natural language processing-driven solutions on jurisprudence, legal documents, and legal practice. We fine-tuned AraLegal-BERT and evaluated it against three BERT variants for the Arabic language on three natural language understanding tasks. The results showed that the base version of AraLegal-BERT achieved better accuracy than the typical and original BERT model on legal texts.
null
null
10.18653/v1/2022.nllp-1.31
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,662
inproceedings
mamooler-etal-2022-efficient
An Efficient Active Learning Pipeline for Legal Text Classification
Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goanț{\u{a}}, C{\u{a}}t{\u{a}}lina and Preoțiuc-Pietro, Daniel
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nllp-1.32/
Mamooler, Sepideh and Lebret, R{\'e}mi and Massonnet, Stephane and Aberer, Karl
Proceedings of the Natural Legal Language Processing Workshop 2022
345--358
Active Learning (AL) is a powerful tool for learning with less labeled data, in particular, for specialized domains, like legal documents, where unlabeled data is abundant, but the annotation requires domain expertise and is thus expensive. Recent works have shown the effectiveness of AL strategies for pre-trained language models. However, most AL strategies require a set of labeled samples to start with, which is expensive to acquire. In addition, pre-trained language models have been shown to be unstable during fine-tuning with small datasets, and their embeddings are not semantically meaningful. In this work, we propose a pipeline for effectively using active learning with pre-trained language models in the legal domain. To this end, we leverage the available \textit{unlabeled} data in three phases. First, we continue pre-training the model to adapt it to the downstream task. Second, we use knowledge distillation to guide the model's embeddings to a semantically meaningful space. Finally, we propose a simple, yet effective, strategy to find the initial set of labeled samples with fewer actions compared to existing methods. Our experiments on Contract-NLI, adapted to the classification task, and LEDGAR benchmarks show that our approach outperforms standard AL strategies, and is more efficient. Furthermore, our pipeline reaches comparable results to the fully-supervised approach with a small performance gap, and a dramatically reduced annotation cost. Code and the adapted data will be made available.
null
null
10.18653/v1/2022.nllp-1.32
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,663
inproceedings
baig-etal-2022-drivingbeacon
{D}riving{B}eacon: Driving Behaviour Change Support System Considering Mobile Use and Geo-information
Krahmer, Emiel and McCoy, Kathy and Reiter, Ehud
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.nlg4health-1.1/
Baig, Jawwad and Chen, Guanyi and Lin, Chenghua and Reiter, Ehud
Proceedings of the First Workshop on Natural Language Generation in Healthcare
1--8
Natural Language Generation has proved to be effective and efficient in constructing health behaviour change support systems. We are working on DrivingBeacon, a behaviour change support system that uses telematics data from mobile phone sensors to generate weekly data-to-text feedback reports to vehicle drivers. The system makes use of a wealth of information such as mobile phone use while driving, geo-information, speeding, and rush hour driving to generate the feedback. We present results from a real-world evaluation where 8 drivers in the UK used DrivingBeacon for 4 weeks. Results are promising but not conclusive.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,665
inproceedings
grambow-etal-2022-domain
In-Domain Pre-Training Improves Clinical Note Generation from Doctor-Patient Conversations
Krahmer, Emiel and McCoy, Kathy and Reiter, Ehud
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.nlg4health-1.2/
Grambow, Colin and Zhang, Longxiang and Schaaf, Thomas
Proceedings of the First Workshop on Natural Language Generation in Healthcare
9--22
Summarization of doctor-patient conversations into clinical notes by medical scribes is an essential process for effective clinical care. Pre-trained transformer models have shown a great amount of success in this area, but the domain shift from standard NLP tasks to the medical domain continues to present challenges. We build upon several recent works to show that additional pre-training with in-domain medical conversations leads to performance gains for clinical summarization. In addition to conventional evaluation metrics, we also explore a clinical named entity recognition model for concept-based evaluation. Finally, we contrast long-sequence transformers with a common transformer model, BART. Overall, our findings corroborate research in non-medical domains and suggest that in-domain pre-training combined with transformers for long sequences are effective strategies for summarizing clinical encounters.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,666
inproceedings
bhattacharya-etal-2022-lchqa
{LCHQA}-Summ: Multi-perspective Summarization of Publicly Sourced Consumer Health Answers
Krahmer, Emiel and McCoy, Kathy and Reiter, Ehud
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.nlg4health-1.3/
Bhattacharya, Abari and Chaturvedi, Rochana and Yadav, Shweta
Proceedings of the First Workshop on Natural Language Generation in Healthcare
23--26
Community question answering forums provide a convenient platform for people to source answers to their questions, including those related to healthcare, from the general public. The answers to user queries are generally long and contain multiple different perspectives, redundancy, or irrelevant answers. This presents a novel challenge of domain-specific, concise, and correct multi-answer summarization, which we propose in this paper.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,667
inproceedings
hsu-etal-2022-towards
Towards Development of an Automated Health Coach
Krahmer, Emiel and McCoy, Kathy and Reiter, Ehud
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.nlg4health-1.4/
Hsu, Leighanne and Marquez Hernandez, Rommy and McCoy, Kathleen and Decker, Keith and Vemuri, Ajith and Dominick, Greg and Heintzelman, Megan
Proceedings of the First Workshop on Natural Language Generation in Healthcare
27--39
Human health coaching has been established as an effective intervention for improving clients' health, but it is restricted in scale due to the availability of coaches and finances of the clients. We aim to build a scalable, automated system for physical activity coaching that is similarly grounded in behavior change theories. In this paper, we present our initial steps toward building a flexible system that is capable of carrying out a natural dialogue for goal setting as a health coach would while also offering additional support through just-in-time adaptive interventions. We outline our modular system design and approach to gathering and analyzing data to incrementally implement such a system.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,668
inproceedings
monfroglio-etal-2022-personalizing
Personalizing Weekly Diet Reports
Krahmer, Emiel and McCoy, Kathy and Reiter, Ehud
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.nlg4health-1.5/
Monfroglio, Elena and Anselma, Lucas and Mazzei, Alessandro
Proceedings of the First Workshop on Natural Language Generation in Healthcare
40--45
In this paper we present the main components of a weekly diet report generator (DRG) in natural language. The idea is to produce a text that contains information on the adherence of the dishes eaten during a week to the Mediterranean diet. The system is based on a user model, a database of the dishes eaten during the week and on the automatic computation of the Mediterranean Diet Score. All these sources of information are exploited to produce a highly personalized text. The system has two main goals, related to two different kinds of users: on the one hand, when used by dietitians, the main goal is to highlight the most salient medical information of the patient diet and, on the other hand, when used by final users, the main goal is to educate them toward a Mediterranean style of eating.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,669
inproceedings
fiumara-etal-2022-nieuw
The {NIEUW} Project: Developing Language Resources through Novel Incentives
Callison-Burch, Chris and Cieri, Christopher and Fiumara, James and Liberman, Mark
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nidcp-1.1/
Fiumara, James and Cieri, Christopher and Liberman, Mark and Callison-Burch, Chris and Wright, Jonathan and Parker, Robert
Proceedings of the 2nd Workshop on Novel Incentives in Data Collection from People: models, implementations, challenges and results within LREC 2022
1--7
This paper provides an overview and update on the Linguistic Data Consortium's (LDC) NIEUW (Novel Incentives and Workflows) project supported by the National Science Foundation and part of LDC's larger goal of improving the cost, variety, scale, and quality of language resources available for education, research, and technology development. NIEUW leverages the power of novel incentives to elicit linguistic data and annotations from a wide variety of contributors including citizen scientists, game players, and language students and professionals. In order to align appropriate incentives with the various contributors, LDC has created three distinct web portals to bring together researchers and other language professionals with participants best suited to their project needs. These portals include LanguageARC designed for citizen scientists, Machina Pro Linguistica designed for students and language professionals, and LingoBoingo designed for game players. The design, interface, and underlying tools for each web portal were developed to appeal to the different incentives and motivations of their respective target audiences.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,672
inproceedings
fort-etal-2022-use
Use of a Citizen Science Platform for the Creation of a Language Resource to Study Bias in Language Models for {F}rench: A Case Study
Callison-Burch, Chris and Cieri, Christopher and Fiumara, James and Liberman, Mark
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nidcp-1.2/
Fort, Kar{\"en and N{\'ev{\'eol, Aur{\'elie and Dupont, Yoann and Bezan{\c{con, Julien
Proceedings of the 2nd Workshop on Novel Incentives in Data Collection from People: models, implementations, challenges and results within LREC 2022
8--13
There is a growing interest in the evaluation of bias, fairness and social impact of Natural Language Processing models and tools. However, few resources are available for this task in languages other than English. Translation of resources originally developed for English is a promising research direction. However, there is also a need to complement translated resources with newly sourced resources in the original languages and social contexts studied. In order to collect a language resource for the study of biases in Language Models for French, we decided to resort to citizen science. We created three tasks on the LanguageARC citizen science platform to assist with the translation of an existing resource from English into French as well as the collection of complementary resources in native French. We successfully collected data for all three tasks from a total of 102 volunteer participants. Participants from different parts of the world contributed, and we noted that although calls sent to mailing lists had a positive impact on participation, some participants pointed out barriers to contribution due to the collection platform.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,673
inproceedings
hansen-etal-2022-fearless
Fearless Steps {APOLLO}: Advanced Naturalistic Corpora Development
Callison-Burch, Chris and Cieri, Christopher and Fiumara, James and Liberman, Mark
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nidcp-1.3/
Hansen, John H.L. and Joglekar, Aditya and Chen, Szu-Jui and Chandra Shekar, Meena and Belitz, Chelzy
Proceedings of the 2nd Workshop on Novel Incentives in Data Collection from People: models, implementations, challenges and results within LREC 2022
14--19
In this study, we present the Fearless Steps APOLLO Community Resource, a collection of audio and corresponding meta-data diarized from the NASA Apollo Missions. Massive naturalistic speech data that is time-synchronized and free of human-subject privacy constraints is very rare and difficult to organize, collect, and deploy. The Apollo Missions Audio is the largest collection of multi-speaker multi-channel data, where over 600 personnel are communicating over multiple missions to achieve strategic space exploration goals. A total of 12 manned missions over a six-year period produced extensive 30-track 1-inch analog tapes containing over 150,000 hours of audio. This presents the wider research community with a unique opportunity to extract multi-modal knowledge in speech science, team cohesion and group dynamics, and historical archive preservation. We aim to make this entire resource and supporting speech technology meta-data creation publicly available as a Community Resource for the development of speech and behavioral science. Here we present the development of this community resource, our outreach efforts, and technological developments resulting from this data. We finally discuss the planned future directions for this community resource.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,674
inproceedings
hernandez-mena-meza-ruiz-2022-creating
Creating {M}exican {S}panish Language Resources through the Social Service Program
Callison-Burch, Chris and Cieri, Christopher and Fiumara, James and Liberman, Mark
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nidcp-1.4/
Hernandez Mena, Carlos Daniel and Meza Ruiz, Ivan Vladimir
Proceedings of the 2nd Workshop on Novel Incentives in Data Collection from People: models, implementations, challenges and results within LREC 2022
20--24
This work presents the path toward the creation of eight Spoken Language Resources under the umbrella of the Mexican Social Service national program. This program asks undergraduate students to donate time and work for the benefit of their society as a requirement to receive their degree. The program has thousands of options for the students who enroll. We show how we created a program which has resulted in the creation of open language resources which are now freely available in different repositories. We estimate that this exercise is equivalent to a budget of more than half a million US dollars. However, since the program is based on students giving back to their communities, no financial budget has been necessary.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,675
inproceedings
fridriksdottir-einarsson-2022-fictionary
Fictionary-Based Games for Language Resource Creation
Callison-Burch, Chris and Cieri, Christopher and Fiumara, James and Liberman, Mark
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nidcp-1.5/
Fri{\dh}riksd{\'o}ttir, Steinunn Rut and Einarsson, Hafsteinn
Proceedings of the 2nd Workshop on Novel Incentives in Data Collection from People: models, implementations, challenges and results within LREC 2022
25--31
In this paper, we present a novel approach to data collection for natural language processing (NLP), linguistic research and lexicographic work. Using the parlor game Fictionary as a framework, data can be crowd-sourced in a gamified manner, which carries the potential of faster, cheaper and better data when compared to traditional methods due to the engaging and competitive nature of the game. To improve data quality, the game includes a built-in review process where players review each other's data and evaluate its quality. The paper proposes several games that can be used within this framework, and explains the value of the data generated by their use. These proposals include games that collect named entities along with their corresponding type tags, question-answer pairs, translation pairs and neologisms, to name only a few. We are currently working on a digital platform that will host these games in Icelandic but wish to open the discussion around this topic and encourage other researchers to explore their own versions of the proposed games, all of which are language-independent.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,676
inproceedings
zhan-etal-2022-using
Using Mixed Incentives to Document Xi'an Guanzhong
Callison-Burch, Chris and Cieri, Christopher and Fiumara, James and Liberman, Mark
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nidcp-1.6/
Zhan, Juhong and Jiang, Yue and Cieri, Christopher and Liberman, Mark and Yuan, Jiahong and Chen, Yiya and Scharenborg, Odette
Proceedings of the 2nd Workshop on Novel Incentives in Data Collection from People: models, implementations, challenges and results within LREC 2022
32--37
This paper describes our use of mixed incentives and the citizen science portal LanguageARC to prepare, collect and quality control a large corpus of object namings for the purpose of providing speech data to document the under-represented Guanzhong dialect of Chinese spoken in the Shaanxi province in the environs of Xi'an.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,677
inproceedings
cole-2022-crowdsourced
Crowdsourced Participants' Accuracy at Identifying the Social Class of Speakers from {S}outh {E}ast {E}ngland
Callison-Burch, Chris and Cieri, Christopher and Fiumara, James and Liberman, Mark
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nidcp-1.7/
Cole, Amanda
Proceedings of the 2nd Workshop on Novel Incentives in Data Collection from People: models, implementations, challenges and results within LREC 2022
38--45
Five participants, each located in distinct locations (USA, Canada, South Africa, Scotland and (South East) England), identified the self-determined social class of a corpus of 227 speakers (born 1986{--}2001; from South East England) based on 10-second passage readings. This pilot study demonstrates the potential for using crowdsourcing to collect sociolinguistic data, specifically using LanguageARC, especially when geographic spread of participants is desirable but not easily possible using traditional fieldwork methods. Results show that, firstly, accuracy at identifying social class is relatively low when compared to other factors, including when the same speech stimuli were used (e.g., ethnicity: Cole 2020). Secondly, participants identified speakers' social class significantly better than chance for a three-class distinction (working, middle, upper) but not for a six-class distinction. Thirdly, despite some differences in performance, the participant located in South East England did not perform significantly better than other participants, suggesting that the participant's presumed greater familiarity with sociolinguistic variation in the region may not have been advantageous. Finally, there is a distinction to be made between participants' ability to pinpoint a speaker's exact social class membership and their ability to identify the speaker's relative class position. This paper discusses the role of social identification tasks in illuminating how speech is categorised and interpreted.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,678
inproceedings
lyding-etal-2022-applicability
About the Applicability of Combining Implicit Crowdsourcing and Language Learning for the Collection of {NLP} Datasets
Callison-Burch, Chris and Cieri, Christopher and Fiumara, James and Liberman, Mark
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nidcp-1.8/
Lyding, Verena and Nicolas, Lionel and K{\"o}nig, Alexander
Proceedings of the 2nd Workshop on Novel Incentives in Data Collection from People: models, implementations, challenges and results within LREC 2022
46--57
In this article, we present a recent trend of approaches, hereafter referred to as Collect4NLP, and discuss its applicability. Collect4NLP-based approaches collect inputs from language learners through learning exercises and aggregate the collected data to derive linguistic knowledge of expert quality. The primary purpose of these approaches is to improve NLP resources; however, sincere concern with the needs of learners is crucial for making Collect4NLP work. We discuss the applicability of Collect4NLP approaches in relation to two perspectives. On the one hand, we compare Collect4NLP approaches to the two crowdsourcing trends currently most prevalent in NLP, namely Crowdsourcing Platforms (CPs) and Games-With-A-Purpose (GWAPs), and identify strengths and weaknesses of each trend. By doing so we aim to highlight particularities of each trend and to identify in which kind of settings one trend should be favored over the other two. On the other hand, we analyze the applicability of Collect4NLP approaches to the production of different types of NLP resources. We first list the types of NLP resources most used within the NLP community and second propose a set of blueprints for mapping these resources to well-established language learning exercises as found in standard language learning textbooks.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,679
inproceedings
heinisch-2022-influence
The Influence of Intrinsic and Extrinsic Motivation on the Creation of Language Resources in a Citizen Linguistics Project about Lexicography
Callison-Burch, Chris and Cieri, Christopher and Fiumara, James and Liberman, Mark
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nidcp-1.9/
Heinisch, Barbara
Proceedings of the 2nd Workshop on Novel Incentives in Data Collection from People: models, implementations, challenges and results within LREC 2022
58--63
In the field of citizen linguistics, various initiatives are aimed at the creation of language resources by members of the public. To recruit and retain these participants, different incentives, informed by different extrinsic and intrinsic motivations, play a role at different project stages. Illustrated by a project in the field of lexicography which draws on the extrinsic and/or intrinsic motivation of participants, the complexity of providing the {\textquoteleft}right{\textquoteright} incentives is addressed. This complexity does not only surface when considering cultural differences and the heterogeneity of the motivations participants might have, but also through the changing motivations over time. Here, identifying target groups may help to guide recruitment, retention and dissemination activities. In addition, continuous adaptations may be required during the course of the project to strike a balance between necessary and feasible incentives.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,680
article
robinson-etal-2022-task
Task-dependent Optimal Weight Combinations for Static Embeddings
Derczynski, Leon
null
2022
Link{\"oping, Sweden
Link{\"oping University Electronic Press
https://aclanthology.org/2022.nejlt-1.2/
Robinson, Nathaniel and Carlson, Nathaniel and Mortensen, David and Vargas, Elizabeth and Fackrell, Thomas and Fulda, Nancy
null
null
A variety of NLP applications use word2vec skip-gram, GloVe, and fastText word embeddings. These models learn two sets of embedding vectors, but most practitioners use only one of them, or alternately an unweighted sum of both. This is the first study to systematically explore a range of linear combinations between the first and second embedding sets. We evaluate these combinations on a set of six NLP benchmarks including IR, POS-tagging, and sentence similarity. We show that the default embedding combinations are often suboptimal and demonstrate 1.0-8.0{\%} improvements. Notably, GloVe's default unweighted sum is its least effective combination across tasks. We provide a theoretical basis for weighting one set of embeddings more than the other according to the algorithm and task. We apply our findings to improve accuracy in applications of cross-lingual alignment and navigational knowledge by up to 15.2{\%}.
Northern European Journal of Language Technology
8
10.3384/nejlt.2000-1533.2022.4438
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,682
article
howell-bender-2022-building
Building Analyses from Syntactic Inference in Local Languages: An {HPSG} Grammar Inference System
Derczynski, Leon
null
2022
Link{\"oping, Sweden
Link{\"oping University Electronic Press
https://aclanthology.org/2022.nejlt-1.3/
Howell, Kristen and Bender, Emily M.
null
null
We present a grammar inference system that leverages linguistic knowledge recorded in the form of annotations in interlinear glossed text (IGT) and in a meta-grammar engineering system (the LinGO Grammar Matrix customization system) to automatically produce machine-readable HPSG grammars. Building on prior work to handle the inference of lexical classes, stems, affixes and position classes, and preliminary work on inferring case systems and word order, we introduce an integrated grammar inference system that covers a wide range of fundamental linguistic phenomena. System development was guided by 27 genealogically and geographically diverse languages, and we test the system's cross-linguistic generalizability on an additional 5 held-out languages, using datasets provided by field linguists. Our system outperforms three baseline systems in increasing coverage while limiting ambiguity and producing richer semantic representations, and it also produces richer representations than previous work in grammar inference.
Northern European Journal of Language Technology
8
10.3384/nejlt.2000-1533.2022.4017
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,683
article
dayanik-etal-2022-analysis
Bias Identification and Attribution in {NLP} Models With Regression and Effect Sizes
Derczynski, Leon
null
2022
Link{\"oping, Sweden
Link{\"oping University Electronic Press
https://aclanthology.org/2022.nejlt-1.4/
Dayanik, Erenay and Vu, Ngoc Thang and Pad{\'o}, Sebastian
null
null
In recent years, there has been an increasing awareness that many NLP systems incorporate biases of various types (e.g., regarding gender or race) which can have significant negative consequences. At the same time, the techniques used to statistically analyze such biases are still relatively simple. Typically, studies test for the presence of a significant difference between two levels of a single bias variable (e.g., male vs. female) without attention to potential confounders, and do not quantify the importance of the bias variable. This article proposes to analyze bias in the output of NLP systems using multivariate regression models. They provide a robust and more informative alternative which (a) generalizes to multiple bias variables, (b) can take covariates into account, (c) can be combined with measures of effect size to quantify the size of bias. Jointly, these effects contribute to a more robust statistical analysis of bias that can be used to diagnose system behavior and extract informative examples. We demonstrate the benefits of our method by analyzing a range of current NLP models on one regression and one classification task (emotion intensity prediction and coreference resolution, respectively).
Northern European Journal of Language Technology
8
10.3384/nejlt.2000-1533.2022.3505
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,684
article
abercrombie-batista-navarro-2022-policy
Policy-focused Stance Detection in Parliamentary Debate Speeches
Derczynski, Leon
null
2022
Link{\"oping, Sweden
Link{\"oping University Electronic Press
https://aclanthology.org/2022.nejlt-1.5/
Abercrombie, Gavin and Batista-Navarro, Riza
null
null
Legislative debate transcripts provide citizens with information about the activities of their elected representatives, but are difficult for people to process. We propose the novel task of policy-focused stance detection, in which both the policy proposals under debate and the position of the speakers towards those proposals are identified. We adapt a previously existing dataset to include manual annotations of policy preferences, an established schema from political science. We evaluate a range of approaches to the automatic classification of policy preferences and speech sentiment polarity, including transformer-based text representations and a multi-task learning paradigm. We find that it is possible to identify the policies under discussion using features derived from the speeches, and that incorporating motion-dependent debate modelling, previously used to classify speech sentiment, also improves performance in the classification of policy preferences. We analyse the output of the best performing system, finding that discriminating features for the task are highly domain-specific, and that speeches that address policy preferences proposed by members of the same party can be among the most difficult to predict.
Northern European Journal of Language Technology
8
10.3384/nejlt.2000-1533.2022.3454
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,685
article
wein-2022-spanish
{S}panish {A}bstract {M}eaning {R}epresentation: Annotation of a General Corpus
Derczynski, Leon
null
2022
Link{\"oping, Sweden
Link{\"oping University Electronic Press
https://aclanthology.org/2022.nejlt-1.6/
Wein, Shira and Donatelli, Lucia and Ricker, Ethan and Engstrom, Calvin and Nelson, Alex and Harter, Leonie and Schneider, Nathan
null
null
Abstract Meaning Representation (AMR), originally designed for English, has been adapted to a number of languages to facilitate cross-lingual semantic representation and analysis. We build on previous work and present the first sizable, general annotation project for Spanish AMR. We release a detailed set of annotation guidelines and a corpus of 486 gold-annotated sentences spanning multiple genres from an existing, cross-lingual AMR corpus. Our work constitutes the second largest non-English gold AMR corpus to date. Fine-tuning an AMR-to-Spanish generation model with our annotations results in a BERTScore improvement of 8.8{\%}, demonstrating the initial utility of our work.
Northern European Journal of Language Technology
8
10.3384/nejlt.2000-1533.2022.4462
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,686
article
tirosh-becker-etal-2022-part
Part-of-Speech and Morphological Tagging of {A}lgerian {J}udeo-{A}rabic
Derczynski, Leon
null
2022
Link{\"oping, Sweden
Link{\"oping University Electronic Press
https://aclanthology.org/2022.nejlt-1.7/
Tirosh-Becker, Ofra and Kessler, Michal and Becker, Oren and Belinkov, Yonatan
null
null
Most linguistic studies of Judeo-Arabic, the ensemble of dialects spoken and written by Jews in Arab lands, are qualitative in nature and rely on laborious manual annotation work, and are therefore limited in scale. In this work, we develop automatic methods for morpho-syntactic tagging of Algerian Judeo-Arabic texts published by Algerian Jews in the 19th{--}20th centuries, based on a linguistically tagged corpus. First, we describe our semi-automatic approach for preprocessing these texts. Then, we experiment with both an off-the-shelf morphological tagger and several specially designed neural network taggers. Finally, we perform a real-world evaluation of new texts that were never tagged before in comparison with human expert annotators. Our experimental results demonstrate that these methods can dramatically speed up and improve the linguistic research pipeline, enabling linguists to study these dialects on a much greater scale.
Northern European Journal of Language Technology
8
10.3384/nejlt.2000-1533.2022.4315
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,687
article
karlgren-2022-lexical
Lexical variation in {E}nglish language podcasts, editorial media, and social media
Derczynski, Leon
null
2022
Link{\"oping, Sweden
Link{\"oping University Electronic Press
https://aclanthology.org/2022.nejlt-1.8/
Karlgren, Jussi
null
null
The study presented in this paper demonstrates how transcribed podcast material differs with respect to lexical content from other collections of English language data: editorial text, social media, both long form and microblogs, dialogue from movie scripts, and transcribed phone conversations. Most of the recorded differences are as might be expected, reflecting known or assumed difference between spoken and written language, between dialogue and soliloquy, and between scripted formal and unscripted informal language use. Most notably, podcast material, compared to the hitherto typical training sets from editorial media, is characterised by being in the present tense, and with a much higher incidence of pronouns, interjections, and negations. These characteristics are, unsurprisingly, largely shared with social media texts. Where podcast material differs from social media material is in its attitudinal content, with many more amplifiers and much less negative attitude than in blog texts. This variation, besides being of philological interest, has ramifications for computational work. Information access for material which is not primarily topical should be designed to be sensitive to such variation that defines the data set itself and discriminates items within it. In general, the training set for a language model is a non-trivial parameter that is likely to produce both expected and unexpected effects when the model is applied to data from other sources, and the characteristics and provenance of the data used to train a model should be listed on the label as a minimal form of downstream consumer protection.
Northern European Journal of Language Technology
8
10.3384/nejlt.2000-1533.2022.3566
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,688
article
kutuzov-etal-2022-contextualized
Contextualized embeddings for semantic change detection: Lessons learned
Derczynski, Leon
null
2022
Link{\"oping, Sweden
Link{\"oping University Electronic Press
https://aclanthology.org/2022.nejlt-1.9/
Kutuzov, Andrey and Velldal, Erik and {\O}vrelid, Lilja
null
null
We present a qualitative analysis of the (potentially erroneous) outputs of contextualized embedding-based methods for detecting diachronic semantic change. First, we introduce an ensemble method outperforming previously described contextualized approaches. This method is used as a basis for an in-depth analysis of the degrees of semantic change predicted for English words across 5 decades. Our findings show that contextualized methods can often predict high change scores for words which are not undergoing any real diachronic semantic shift in the lexicographic sense of the term (or at least the status of these shifts is questionable). Such challenging cases are discussed in detail with examples, and their linguistic categorization is proposed. Our conclusion is that pre-trained contextualized language models are prone to confound changes in lexicographic senses with changes in contextual variance; this behavior naturally stems from their distributional nature but is different from the types of issues observed in methods based on static embeddings. Additionally, they often merge together syntactic and semantic aspects of lexical entities. We propose a range of possible future solutions to these issues.
Northern European Journal of Language Technology
8
10.3384/nejlt.2000-1533.2022.3478
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,689
inproceedings
bhattacharya-2022-rice
Rice Cultivation in {I}ndia {--} Challenges and Environmental Effects
Sinha, Manjira and Dasgupta, Tirthankar and Chatterjee, Sanjay
dec
2022
IIIT Delhi, New Delhi, India
Association for Computational Linguistics
https://aclanthology.org/2022.nalm-1.1/
Bhattacharya, Ushasi
Proceedings of the First Workshop on NLP in Agriculture and Livestock Management
1--4
Rice is one of the most cultivated grain crops in India as well as in other Asian countries. It is a staple food in India, which is the second-largest producer of rice after China. The crop is grown mainly in tropical and rain-fed areas. In this paper, the major types of rice crops cultivated in India, the major challenges of cultivating rice in India, and its adverse effects on the environment are discussed. We also discuss how Computer Vision, Natural Language Processing, mobile applications, and other technologies and research can help to overcome these issues.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,697
inproceedings
pal-etal-2022-custom
A custom {CNN} model for detection of rice disease under complex environment
Sinha, Manjira and Dasgupta, Tirthankar and Chatterjee, Sanjay
dec
2022
IIIT Delhi, New Delhi, India
Association for Computational Linguistics
https://aclanthology.org/2022.nalm-1.2/
Pal, Chiranjit and Pratihar, Sanjoy and Mukherjee, Imon
Proceedings of the First Workshop on NLP in Agriculture and Livestock Management
5--8
The work in this paper designs an image-based rice disease detection framework that takes a rice plant image as input and identifies the presence of BrownSpot disease in the image fed into the system. A CNN-based disease detection scheme performs the binary classification task on our custom dataset containing 2223 images of healthy and unhealthy classes under complex environments. Experimental results show that our system is able to achieve consistently satisfactory results in performing disease detection tasks. Furthermore, the CNN disease detection model is compared with state-of-the-art works and achieves an accuracy of 96.8{\%}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,698
inproceedings
abrams-scheutz-2022-social
Social Norms Guide Reference Resolution
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.1/
Abrams, Mitchell and Scheutz, Matthias
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
1--11
Humans use natural language, vision, and context to resolve referents in their environment. While some situated reference resolution is trivial, ambiguous cases arise when the language is underspecified or there are multiple candidate referents. This study investigates how pragmatic modulators external to the linguistic content are critical for the correct interpretation of referents in these scenarios. In particular, we demonstrate in a human subjects experiment how the social norms applicable in the given context influence the interpretation of referring expressions. Additionally, we highlight how current coreference tools in natural language processing fail to handle these ambiguous cases. We also briefly discuss the implications of this work for assistive robots which will routinely need to resolve referents in their environment.
null
null
10.18653/v1/2022.naacl-main.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,700
inproceedings
martin-etal-2022-learning
Learning Natural Language Generation with Truncated Reinforcement Learning
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.2/
Martin, Alice and Quispe, Guillaume and Ollion, Charles and Le Corff, Sylvain and Strub, Florian and Pietquin, Olivier
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
12--37
This paper introduces TRUncated ReinForcement Learning for Language (TrufLL), an original approach to train conditional language models without a supervised learning phase, by only using reinforcement learning (RL). As RL methods scale poorly to large action spaces, we dynamically truncate the vocabulary space using a generic language model. TrufLL thus makes it possible to train a language agent by solely interacting with its environment, without any task-specific prior knowledge; it is only guided by a task-agnostic language model. Interestingly, this approach avoids the dependency on labelled datasets and inherently reduces pretrained policy flaws such as language or exposure biases. We evaluate TrufLL on two visual question generation tasks, for which we report positive results over performance and language metrics, which we then corroborate with a human evaluation. To our knowledge, it is the first approach that successfully learns a language generation policy without pre-training, using only reinforcement learning.
null
null
10.18653/v1/2022.naacl-main.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,701
inproceedings
indurthi-etal-2022-language
Language Model Augmented Monotonic Attention for Simultaneous Translation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.3/
Indurthi, Sathish Reddy and Zaidi, Mohd Abbas and Lee, Beomseok and Lakumarapu, Nikhil Kumar and Kim, Sangha
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
38--45
The state-of-the-art adaptive policies for Simultaneous Neural Machine Translation (SNMT) use monotonic attention to perform read/write decisions based on the partial source and target sequences. The lack of sufficient information might cause the monotonic attention to make poor read/write decisions, which in turn negatively affects the performance of the SNMT model. On the other hand, human translators make better read/write decisions since they can anticipate the immediate future words using linguistic information and domain knowledge. In this work, we propose a framework to aid monotonic attention with an external language model to improve its decisions. Experiments on MuST-C English-German and English-French speech-to-text translation tasks show that the future information from the language model further improves the state-of-the-art monotonic multi-head attention model.
null
null
10.18653/v1/2022.naacl-main.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,702
inproceedings
ter-hoeve-etal-2022-makes
What Makes a Good and Useful Summary? {I}ncorporating Users in Automatic Summarization Research
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.4/
Ter Hoeve, Maartje and Kiseleva, Julia and de Rijke, Maarten
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
46--75
Automatic text summarization has enjoyed great progress over the years and is used in numerous applications, impacting the lives of many. Despite this development, there is little research that meaningfully investigates how the current research focus in automatic summarization aligns with users' needs. To bridge this gap, we propose a survey methodology that can be used to investigate the needs of users of automatically generated summaries. Importantly, these needs are dependent on the target group. Hence, we design our survey in such a way that it can be easily adjusted to investigate different user groups. In this work we focus on university students, who make extensive use of summaries during their studies. We find that the current research directions of the automatic summarization community do not fully align with students' needs. Motivated by our findings, we present ways to mitigate this mismatch in future research on automatic summarization: we propose research directions that impact the design, the development and the evaluation of automatically generated summaries.
null
null
10.18653/v1/2022.naacl-main.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,703
inproceedings
yuan-etal-2022-eracond
{E}r{AC}on{D}: Error Annotated Conversational Dialog Dataset for Grammatical Error Correction
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.5/
Yuan, Xun and Pham, Derek and Davidson, Sam and Yu, Zhou
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
76--84
Currently available grammatical error correction (GEC) datasets are compiled using essays or other long-form text written by language learners, limiting the applicability of these datasets to other domains such as informal writing and conversational dialog. In this paper, we present a novel GEC dataset consisting of parallel original and corrected utterances drawn from open-domain chatbot conversations; this dataset is, to our knowledge, the first GEC dataset targeted to a human-machine conversational setting. We also present a detailed annotation scheme which ranks errors by perceived impact on comprehension, making our dataset more representative of real-world language learning applications. To demonstrate the utility of the dataset, we use our annotated data to fine-tune a state-of-the-art GEC model. Experimental results show the effectiveness of our data in improving GEC model performance in a conversational scenario.
null
null
10.18653/v1/2022.naacl-main.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,704
inproceedings
stasaski-hearst-2022-semantic
Semantic Diversity in Dialogue with Natural Language Inference
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.6/
Stasaski, Katherine and Hearst, Marti
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
85--98
Generating diverse, interesting responses to chitchat conversations is a problem for neural conversational agents. This paper makes two substantial contributions to improving diversity in dialogue generation. First, we propose a novel metric which uses Natural Language Inference (NLI) to measure the semantic diversity of a set of model responses for a conversation. We evaluate this metric using an established framework (Tevet and Berant, 2021) and find strong evidence indicating NLI Diversity is correlated with semantic diversity. Specifically, we show that the contradiction relation is more useful than the neutral relation for measuring this diversity and that incorporating the NLI model's confidence achieves state-of-the-art results. Second, we demonstrate how to iteratively improve the semantic diversity of a sampled set of responses via a new generation procedure called Diversity Threshold Generation, which results in an average 137{\%} increase in NLI Diversity compared to standard generation procedures.
null
null
10.18653/v1/2022.naacl-main.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,705
inproceedings
hong-jang-2022-lea
{LEA}: Meta Knowledge-Driven Self-Attentive Document Embedding for Few-Shot Text Classification
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.7/
Hong, S. K. and Jang, Tae Young
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
99--106
Text classification has achieved great success with the prosperity of deep learning and pre-trained language models. However, we often encounter labeled-data deficiency problems in real-world text classification tasks. To overcome such challenging scenarios, interest in few-shot learning has increased, although most few-shot text classification studies struggle to utilize pre-trained language models. In this study, we propose LEA, a novel method for learning how to attend, through which meta-level attention aspects are derived based on our meta-learning strategy. This enables the generation of task-specific document embeddings that leverage pre-trained language models even when only a few labeled data instances are given. We evaluate the proposed method on five benchmark datasets. The results show that it robustly provides competitive performance compared to recent few-shot learning methods on all the datasets.
null
null
10.18653/v1/2022.naacl-main.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,706
inproceedings
bai-etal-2022-enhancing
Enhancing Self-Attention with Knowledge-Assisted Attention Maps
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.8/
Bai, Jiangang and Wang, Yujing and Sun, Hong and Wu, Ruonan and Yang, Tianmeng and Tang, Pengfei and Cao, Defu and Zhang, Mingliang and Tong, Yunhai and Yang, Yaming and Bai, Jing and Zhang, Ruofei and Sun, Hao and Shen, Wei
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
107--115
Large-scale pre-trained language models have attracted extensive attention in the research community and shown promising results on various tasks of natural language processing. However, the attention maps, which record the attention scores between tokens in the self-attention mechanism, are sometimes ineffective as they are learned implicitly without the guidance of explicit semantic knowledge. Thus, we aim to infuse explicit external knowledge into pre-trained language models to further boost their performance. Existing works of knowledge infusion largely depend on multi-task learning frameworks, which are inefficient and require large-scale re-training when new knowledge is considered. In this paper, we propose a novel and generic solution, KAM-BERT, which directly incorporates knowledge-generated attention maps into the self-attention mechanism. It requires only a few extra parameters and supports efficient fine-tuning once new knowledge is added. KAM-BERT achieves consistent improvements on various academic datasets for natural language understanding. It also outperforms other state-of-the-art methods which conduct knowledge infusion into transformer-based architectures. Moreover, we apply our model to an industry-scale ad relevance application and show its advantages in the real-world scenario.
null
null
10.18653/v1/2022.naacl-main.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,707
inproceedings
chernyavskiy-etal-2022-batch
Batch-Softmax Contrastive Loss for Pairwise Sentence Scoring Tasks
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.9/
Chernyavskiy, Anton and Ilvovsky, Dmitry and Kalinin, Pavel and Nakov, Preslav
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
116--126
The use of contrastive loss for representation learning has become prominent in computer vision, and it is now getting attention in Natural Language Processing (NLP). Here, we explore the idea of using a batch-softmax contrastive loss when fine-tuning large-scale pre-trained transformer models to learn better task-specific sentence embeddings for pairwise sentence scoring tasks. We introduce and study a number of variations in the calculation of the loss as well as in the overall training procedure; in particular, we find that a special data shuffling can be quite important. Our experimental results show sizable improvements on a number of datasets and pairwise sentence scoring tasks including classification, ranking, and regression. Finally, we offer detailed analysis and discussion, which should be useful for researchers aiming to explore the utility of contrastive loss in NLP.
null
null
10.18653/v1/2022.naacl-main.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,708
inproceedings
spangher-etal-2022-newsedits
{N}ews{E}dits: A News Article Revision Dataset and a Novel Document-Level Reasoning Challenge
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.10/
Spangher, Alexander and Ren, Xiang and May, Jonathan and Peng, Nanyun
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
127--157
News article revision histories provide clues to narrative and factual evolution in news articles. To facilitate analysis of this evolution, we present the first publicly available dataset of news revision histories, NewsEdits. Our dataset is large-scale and multilingual; it contains 1.2 million articles with 4.6 million versions from over 22 English- and French-language newspaper sources based in three countries, spanning 15 years of coverage (2006-2021). We define article-level edit actions: Addition, Deletion, Edit and Refactor, and develop a high-accuracy extraction algorithm to identify these actions. To underscore the factual nature of many edit actions, we conduct analyses showing that added and deleted sentences are more likely to contain updating events, main content and quotes than unchanged sentences. Finally, to explore whether edit actions are predictable, we introduce three novel tasks aimed at predicting actions performed during version updates. We show that these tasks are possible for expert humans but are challenging for large NLP models. We hope this can spur research in narrative framing and help provide predictive tools for journalists chasing breaking news.
null
null
10.18653/v1/2022.naacl-main.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,709
inproceedings
ibraheem-etal-2022-putting
Putting the Con in Context: Identifying Deceptive Actors in the Game of Mafia
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.11/
Ibraheem, Samee and Zhou, Gaoyue and DeNero, John
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
158--168
While neural networks demonstrate a remarkable ability to model linguistic content, capturing contextual information related to a speaker's conversational role is an open area of research. In this work, we analyze the effect of speaker role on language use through the game of Mafia, in which participants are assigned either an honest or a deceptive role. In addition to building a framework to collect a dataset of Mafia game records, we demonstrate that there are differences in the language produced by players with different roles. We confirm that classification models are able to rank deceptive players as more suspicious than honest ones based only on their use of language. Furthermore, we show that training models on two auxiliary tasks outperforms a standard BERT-based text classification approach. We also present methods for using our trained models to identify features that distinguish between player roles, which could be used to assist players during the Mafia game.
null
null
10.18653/v1/2022.naacl-main.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,710
inproceedings
yang-etal-2022-subs
{SUBS}: Subtree Substitution for Compositional Semantic Parsing
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.12/
Yang, Jingfeng and Zhang, Le and Yang, Diyi
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
169--174
Although sequence-to-sequence models often achieve good performance in semantic parsing for i.i.d. data, their performance is still inferior in compositional generalization. Several data augmentation methods have been proposed to alleviate this problem. However, prior work only leveraged superficial grammar or rules for data augmentation, which resulted in limited improvement. We propose to use subtree substitution for compositional data augmentation, where we consider subtrees with similar semantic functions as exchangeable. Our experiments showed that such augmented data led to significantly better performance on Scan and GeoQuery, and reached a new SOTA on the compositional split of GeoQuery.
null
null
10.18653/v1/2022.naacl-main.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,711
inproceedings
rottger-etal-2022-two
Two Contrasting Data Annotation Paradigms for Subjective {NLP} Tasks
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.13/
R{\"ottger, Paul and Vidgen, Bertie and Hovy, Dirk and Pierrehumbert, Janet
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
175--190
Labelled data is the foundation of most natural language processing tasks. However, labelling data is difficult and there often are diverse valid beliefs about what the correct data labels should be. So far, dataset creators have acknowledged annotator subjectivity, but rarely actively managed it in the annotation process. This has led to partly-subjective datasets that fail to serve a clear downstream use. To address this issue, we propose two contrasting paradigms for data annotation. The descriptive paradigm encourages annotator subjectivity, whereas the prescriptive paradigm discourages it. Descriptive annotation allows for the surveying and modelling of different beliefs, whereas prescriptive annotation enables the training of models that consistently apply one belief. We discuss benefits and challenges in implementing both paradigms, and argue that dataset creators should explicitly aim for one or the other to facilitate the intended use of their dataset. Lastly, we conduct an annotation experiment using hate speech data that illustrates the contrast between the two paradigms.
null
null
10.18653/v1/2022.naacl-main.13
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,712
inproceedings
zeng-etal-2022-deep
Do Deep Neural Nets Display Human-like Attention in Short Answer Scoring?
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.14/
Zeng, Zijie and Li, Xinyu and Gasevic, Dragan and Chen, Guanliang
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
191--205
Deep Learning (DL) techniques have been increasingly adopted for Automatic Text Scoring in education. However, these techniques often suffer from their inabilities to explain and justify how a prediction is made, which, unavoidably, decreases their trustworthiness and hinders educators from embracing them in practice. This study aimed to investigate whether (and to what extent) DL-based graders align with human graders regarding the important words they identify when marking short answer questions. To this end, we first conducted a user study to ask human graders to manually annotate important words in assessing answer quality and then measured the overlap between these human-annotated words and those identified by DL-based graders (i.e., those receiving large attention weights). Furthermore, we ran a randomized controlled experiment to explore the impact of highlighting important words detected by DL-based graders on human grading. The results showed that: (i) DL-based graders, to a certain degree, displayed alignment with human graders no matter whether DL-based graders and human graders agreed on the quality of an answer; and (ii) it is possible to facilitate human grading by highlighting those DL-detected important words, though further investigations are necessary to understand how human graders exploit such highlighted words.
null
null
10.18653/v1/2022.naacl-main.14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,713
inproceedings
li-etal-2022-knowledge
Knowledge-Grounded Dialogue Generation with a Unified Knowledge Representation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.15/
Li, Yu and Peng, Baolin and Shen, Yelong and Mao, Yi and Liden, Lars and Yu, Zhou and Gao, Jianfeng
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
206--218
Knowledge-grounded dialogue systems are challenging to build due to the lack of training data and heterogeneous knowledge sources. Existing systems perform poorly on unseen topics due to limited topics covered in the training data. In addition, it is challenging to generalize to the domains that require different types of knowledge sources. To address the above challenges, we present PLUG, a language model that homogenizes different knowledge sources to a unified knowledge representation for knowledge-grounded dialogue generation tasks. We first retrieve relevant information from heterogeneous knowledge sources (e.g., wiki, dictionary, or knowledge graph); Then the retrieved knowledge is transformed into text and concatenated with dialogue history to feed into the language model for generating responses. PLUG is pre-trained on a large-scale knowledge-grounded dialogue corpus. The empirical evaluation on two benchmarks shows that PLUG generalizes well across different knowledge-grounded dialogue tasks. It achieves comparable performance with state-of-the-art methods in the fully-supervised setting and significantly outperforms other approaches in zero-shot and few-shot settings.
null
null
10.18653/v1/2022.naacl-main.15
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,714
inproceedings
feng-etal-2022-ceres
{CERES}: Pretraining of Graph-Conditioned Transformer for Semi-Structured Session Data
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.16/
Feng, Rui and Luo, Chen and Yin, Qingyu and Yin, Bing and Zhao, Tuo and Zhang, Chao
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
219--230
User sessions empower many search and recommendation tasks on a daily basis. Such session data are semi-structured: they encode heterogeneous relations between queries and products, and each item is described by unstructured text. Despite recent advances in self-supervised learning for text and graphs, there is a lack of self-supervised learning models that can effectively capture both intra-item semantics and inter-item interactions for semi-structured sessions. To fill this gap, we propose CERES, a graph-based transformer model for semi-structured session data. CERES learns representations that capture both inter- and intra-item semantics with (1) a graph-conditioned masked language pretraining task that jointly learns from item text and item-item relations; and (2) a graph-conditioned transformer architecture that propagates inter-item contexts to item-level representations. We pretrained CERES using {\textasciitilde}468 million Amazon sessions and found that CERES outperforms strong pretraining baselines by up to 9{\%} on three session search and entity linking tasks.
null
null
10.18653/v1/2022.naacl-main.16
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,715
inproceedings
sinno-etal-2022-political
Political Ideology and Polarization: A Multi-dimensional Approach
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.17/
Sinno, Barea and Oviedo, Bernardo and Atwell, Katherine and Alikhani, Malihe and Li, Junyi Jessy
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
231--243
Analyzing ideology and polarization is of critical importance in advancing our grasp of modern politics. Recent research has made great strides towards understanding the ideological bias (i.e., stance) of news media along the left-right spectrum. In this work, we instead take a novel and more nuanced approach for the study of ideology based on its left or right positions on the issue being discussed. Aligned with the theoretical accounts in political science, we treat ideology as a multi-dimensional construct, and introduce the first diachronic dataset of news articles whose ideological positions are annotated by trained political scientists and linguists at the paragraph level. We showcase that, by controlling for the author's stance, our method allows for the quantitative and temporal measurement and analysis of polarization as a multidimensional ideological distance. We further present baseline models for ideology prediction, outlining a challenging task distinct from stance detection.
null
null
10.18653/v1/2022.naacl-main.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,716
inproceedings
luo-etal-2022-cooperative
Cooperative Self-training of Machine Reading Comprehension
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.18/
Luo, Hongyin and Li, Shang-Wen and Gao, Mingye and Yu, Seunghak and Glass, James
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
244--257
Pretrained language models have significantly improved the performance of downstream language understanding tasks, including extractive question answering, by providing high-quality contextualized word embeddings. However, training question answering models still requires large amounts of annotated data for specific domains. In this work, we propose a cooperative self-training framework, RGX, for automatically generating more non-trivial question-answer pairs to improve model performance. RGX is built upon a masked answer extraction task with an interactive learning environment containing an answer entity Recognizer, a question Generator, and an answer eXtractor. Given a passage with a masked entity, the generator generates a question around the entity, and the extractor is trained to extract the masked entity with the generated question and raw texts. The framework allows the training of question generation and answering models on any text corpora without annotation. We further leverage a self-training technique to improve the performance of both question generation and answer extraction models. Experiment results show that RGX outperforms the state-of-the-art (SOTA) pretrained language models and transfer learning approaches on standard question-answering benchmarks, and yields the new SOTA performance under given model size and transfer learning settings.
null
null
10.18653/v1/2022.naacl-main.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,717
inproceedings
modarressi-etal-2022-globenc
{G}lob{E}nc: Quantifying Global Token Attribution by Incorporating the Whole Encoder Layer in Transformers
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.19/
Modarressi, Ali and Fayyaz, Mohsen and Yaghoobzadeh, Yadollah and Pilehvar, Mohammad Taher
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
258--271
There has been a growing interest in interpreting the underlying dynamics of Transformers. While self-attention patterns were initially deemed as the primary option, recent studies have shown that integrating other components can yield more accurate explanations. This paper introduces a novel token attribution analysis method that incorporates all the components in the encoder block and aggregates this throughout layers. Through extensive quantitative and qualitative experiments, we demonstrate that our method can produce faithful and meaningful global token attributions. Our experiments reveal that incorporating almost every encoder component results in increasingly more accurate analysis in both local (single layer) and global (the whole model) settings. Our global attribution analysis significantly outperforms previous methods on various tasks regarding correlation with gradient-based saliency scores. Our code is freely available at \url{https://github.com/mohsenfayyaz/GlobEnc}.
null
null
10.18653/v1/2022.naacl-main.19
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,718
inproceedings
liu-etal-2022-robustly
A Robustly Optimized {BMRC} for Aspect Sentiment Triplet Extraction
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.20/
Liu, Shu and Li, Kaiwen and Li, Zuhe
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
272--278
Aspect sentiment triplet extraction (ASTE) is a challenging subtask in aspect-based sentiment analysis. It aims to extract triplets of aspects, opinions and sentiments with complex correspondences from the context. Bidirectional machine reading comprehension (BMRC) can effectively deal with the ASTE task, but several problems remain, such as query conflict and unilateral probability decrease. Therefore, this paper presents a robustly optimized BMRC method incorporating four improvements. Word segmentation is applied to facilitate semantic learning. Exclusive classifiers are designed to avoid interference between different queries. A span matching rule is proposed to select the aspects and opinions that better represent the expectations of the model. A probability generation strategy is also introduced to obtain predicted probabilities for aspects, opinions and aspect-opinion pairs. We have conducted extensive experiments on multiple benchmark datasets, where our model achieves state-of-the-art performance.
null
null
10.18653/v1/2022.naacl-main.20
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,719
inproceedings
zhang-etal-2022-seed
Seed-Guided Topic Discovery with Out-of-Vocabulary Seeds
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.21/
Zhang, Yu and Meng, Yu and Wang, Xuan and Wang, Sheng and Han, Jiawei
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
279--290
Discovering latent topics from text corpora has been studied for decades. Many existing topic models adopt a fully unsupervised setting, and their discovered topics may not cater to users' particular interests due to their inability of leveraging user guidance. Although there exist seed-guided topic discovery approaches that leverage user-provided seeds to discover topic-representative terms, they are less concerned with two factors: (1) the existence of out-of-vocabulary seeds and (2) the power of pre-trained language models (PLMs). In this paper, we generalize the task of seed-guided topic discovery to allow out-of-vocabulary seeds. We propose a novel framework, named SeeTopic, wherein the general knowledge of PLMs and the local semantics learned from the input corpus can mutually benefit each other. Experiments on three real datasets from different domains demonstrate the effectiveness of SeeTopic in terms of topic coherence, accuracy, and diversity.
null
null
10.18653/v1/2022.naacl-main.21
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,720
inproceedings
wang-etal-2022-towards
Towards Process-Oriented, Modular, and Versatile Question Generation that Meets Educational Needs
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.22/
Wang, Xu and Fan, Simin and Houghton, Jessica and Wang, Lu
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
291--302
NLP-powered automatic question generation (QG) techniques carry great pedagogical potential for saving educators' time and benefiting student learning. Yet, QG systems have not been widely adopted in classrooms to date. In this work, we aim to pinpoint key impediments and investigate how to improve the usability of automatic QG techniques for educational purposes by understanding how instructors construct questions and identifying touch points to enhance the underlying NLP models. We perform an in-depth need-finding study with 11 instructors across 7 different universities, and summarize their thought processes and needs when creating questions. While instructors show great interest in using NLP systems to support question design, none of them has used such tools in practice. They resort to multiple sources of information, ranging from domain knowledge to students' misconceptions, all of which are missing from today's QG systems. We argue that building effective human-NLP collaborative QG systems that emphasize instructor control and explainability is imperative for real-world adoption. We call for QG systems to provide process-oriented support, use modular design, and handle diverse sources of input.
null
null
10.18653/v1/2022.naacl-main.22
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,721
inproceedings
martin-etal-2022-swahbert
{S}wah{BERT}: Language Model of {S}wahili
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.23/
Martin, Gati and Mswahili, Medard Edmund and Jeong, Young-Seob and Woo, Jiyoung
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
303--313
The rapid development of social networks, electronic commerce, mobile Internet, and other technologies, has influenced the growth of Web data. Social media and Internet forums are valuable sources of citizens' opinions, which can be analyzed for community development and user behavior analysis. Unfortunately, the scarcity of resources (i.e., datasets or language models) becomes a barrier to the development of natural language processing applications in low-resource languages. Thanks to the recent growth of online forums and news platforms in Swahili, we introduce two datasets of Swahili in this paper: a pre-training dataset of approximately 105MB with 16M words and an annotated dataset of 13K instances for the emotion classification task. The emotion classification dataset is manually annotated by two native Swahili speakers. We pre-trained a new monolingual language model for Swahili, namely SwahBERT, using our collected pre-training data, and tested it with four downstream tasks including emotion classification. We found that SwahBERT outperforms multilingual BERT, a well-known existing language model, in almost all downstream tasks.
null
null
10.18653/v1/2022.naacl-main.23
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,722
inproceedings
zhou-etal-2022-deconstructing
Deconstructing {NLG} Evaluation: Evaluation Practices, Assumptions, and Their Implications
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.24/
Zhou, Kaitlyn and Blodgett, Su Lin and Trischler, Adam and Daum{\'e} III, Hal and Suleman, Kaheer and Olteanu, Alexandra
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
314--324
There are many ways to express similar things in text, which makes evaluating natural language generation (NLG) systems difficult. Compounding this difficulty is the need to assess varying quality criteria depending on the deployment setting. While the landscape of NLG evaluation has been well-mapped, practitioners' goals, assumptions, and constraints{---}which inform decisions about what, when, and how to evaluate{---}are often partially or implicitly stated, or not stated at all. Combining a formative semi-structured interview study of NLG practitioners (N=18) with a survey study of a broader sample of practitioners (N=61), we surface goals, community practices, assumptions, and constraints that shape NLG evaluations, examining their implications and how they embody ethical considerations.
null
null
10.18653/v1/2022.naacl-main.24
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,723
inproceedings
sotudeh-goharian-2022-tstr
{TSTR}: Too Short to Represent, Summarize with Details! Intro-Guided Extended Summary Generation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.25/
Sotudeh, Sajad and Goharian, Nazli
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
325--335
Many scientific papers such as those in arXiv and PubMed data collections have abstracts with varying lengths of 50-1000 words and an average length of approximately 200 words, where longer abstracts typically convey more information about the source paper. Until recently, scientific summarization research has typically focused on generating short, abstract-like summaries following the existing datasets used for scientific summarization. In domains where the source text is relatively long-form, such as in scientific documents, such summaries cannot go beyond a general, coarse overview to provide salient information from the source document. Recent interest in tackling this problem has motivated the curation of scientific datasets, arXiv-Long and PubMed-Long, containing human-written summaries of 400-600 words, hence providing a venue for research in generating long/extended summaries. Extended summaries facilitate a faster read while providing details beyond coarse information. In this paper, we propose TSTR, an extractive summarizer that utilizes the introductory information of documents as pointers to their salient information. The evaluations on two existing large-scale extended summarization datasets indicate statistically significant improvement in terms of Rouge and average Rouge (F1) scores (except in one case) as compared to strong baselines and state-of-the-art. Comprehensive human evaluations favor our generated extended summaries in terms of cohesion and completeness.
null
null
10.18653/v1/2022.naacl-main.25
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,724
inproceedings
kosgi-etal-2022-empathic
Empathic Machines: Using Intermediate Features as Levers to Emulate Emotions in Text-To-Speech Systems
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.26/
Kosgi, Saiteja and Sivaprasad, Sarath and Pedanekar, Niranjan and Nelakanti, Anil and Gandhi, Vineet
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
336--347
We present a method to control the emotional prosody of Text to Speech (TTS) systems by using phoneme-level intermediate features (pitch, energy, and duration) as levers. As a key idea, we propose Differential Scaling (DS) to disentangle features relating to affective prosody from those arising due to acoustics conditions and speaker identity. With thorough experimental studies, we show that the proposed method improves over the prior art in accurately emulating the desired emotions while retaining the naturalness of speech. We extend the traditional evaluation of using individual sentences for a more complete evaluation of HCI systems. We present a novel experimental setup by replacing an actor with a TTS system in offline and live conversations. The emotion to be rendered is either predicted or manually assigned. The results show that the proposed method is strongly preferred over the state-of-the-art TTS system and adds the much-coveted {\textquotedblleft}human touch{\textquotedblright} in machine dialogue. Audio samples from our experiments and the code are available at: \url{https://emtts.github.io/tts-demo/}
null
null
10.18653/v1/2022.naacl-main.26
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,725
inproceedings
voigt-etal-2022-survey
The Why and The How: A Survey on Natural Language Interaction in Visualization
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.27/
Voigt, Henrik and Alacam, Ozge and Meuschke, Monique and Lawonn, Kai and Zarrie{\ss}, Sina
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
348--374
Natural language as a modality of interaction is becoming increasingly popular in the field of visualization. In addition to popular query interfaces, other language-based interactions, such as annotations, recommendations, explanations, and documentation, are experiencing growing interest. In this survey, we provide an overview of natural language-based interaction in the research area of visualization. We discuss a renowned taxonomy of visualization tasks and classify 119 related works to illustrate the state-of-the-art of how current natural language interfaces support their performance. We examine applied NLP methods and discuss human-machine dialogue structures with a focus on initiative, duration, and communicative functions in recent visualization-oriented dialogue interfaces. Based on this overview, we point out interesting areas for the future application of NLP methods in the field of visualization.
null
null
10.18653/v1/2022.naacl-main.27
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,726
inproceedings
huang-etal-2022-understand
Understand before Answer: Improve Temporal Reading Comprehension via Precise Question Understanding
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.28/
Huang, Hao and Geng, Xiubo and Long, Guodong and Jiang, Daxin
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
375--384
This work studies temporal reading comprehension (TRC), which reads a free-text passage and answers temporal ordering questions. Precise question understanding is critical for temporal reading comprehension. For example, the question {\textquotedblleft}What happened before the victory{\textquotedblright} and {\textquotedblleft}What happened after the victory{\textquotedblright} share almost all words except one, while their answers are totally different. Moreover, even if two questions query about similar temporal relations, different varieties might also lead to various answers. For example, although both the question {\textquotedblleft}What usually happened during the press release?{\textquotedblright} and {\textquotedblleft}What might happen during the press release{\textquotedblright} query events which happen after {\textquotedblleft}the press release{\textquotedblright}, they convey divergent semantics. To this end, we propose a novel reading comprehension approach with precise question understanding. Specifically, a temporal ordering question is embedded into two vectors to capture the referred event and the temporal relation. Then we evaluate the temporal relation between candidate events and the referred event based on that. Such fine-grained representations offer two benefits. First, it enables a better understanding of the question by focusing on different elements of a question. Second, it provides good interpretability when evaluating temporal relations. Furthermore, we also harness an auxiliary contrastive loss for representation learning of temporal relations, which aims to distinguish relations with subtle but critical changes. The proposed approach outperforms strong baselines and achieves state-of-the-art performance on the TORQUE dataset. It also increases the accuracy of four pre-trained language models (BERT base, BERT large, RoBERTa base, and RoBERTa large), demonstrating its generic effectiveness on divergent models.
null
null
10.18653/v1/2022.naacl-main.28
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,727
inproceedings
knoll-etal-2022-user
User-Driven Research of Medical Note Generation Software
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.29/
Knoll, Tom and Moramarco, Francesco and Papadopoulos Korfiatis, Alex and Young, Rachel and Ruffini, Claudia and Perera, Mark and Perstl, Christian and Reiter, Ehud and Belz, Anya and Savkov, Aleksandar
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
385--394
A growing body of work uses Natural Language Processing (NLP) methods to automatically generate medical notes from audio recordings of doctor-patient consultations. However, there are very few studies on how such systems could be used in clinical practice, how clinicians would adjust to using them, or how system design should be influenced by such considerations. In this paper, we present three rounds of user studies, carried out in the context of developing a medical note generation system. We present, analyse and discuss the participating clinicians' impressions and views of how the system ought to be adapted to be of value to them. Next, we describe a three-week test run of the system in a live telehealth clinical practice. Major findings include (i) the emergence of five different note-taking behaviours; (ii) the importance of the system generating notes in real time during the consultation; and (iii) the identification of a number of clinical use cases that could prove challenging for automatic note generation systems.
null
null
10.18653/v1/2022.naacl-main.29
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,728
inproceedings
sorokin-etal-2022-ask
Ask Me Anything in Your Native Language
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.30/
Sorokin, Nikita and Abulkhanov, Dmitry and Piontkovskaya, Irina and Malykh, Valentin
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
395--406
Cross-lingual question answering is a thriving field in the modern world, helping people to search for information on the web more efficiently. One of the important scenarios is to give an answer even when there is no answer in the language in which the question was asked. We present a novel approach based on a single encoder for query and passage for retrieval from a multi-lingual collection, together with a cross-lingual generative reader. It achieves a new state of the art in both retrieval and end-to-end tasks on the XOR TyDi dataset, outperforming the previous results by up to 10{\%} on several languages. We find that our approach generalizes to more than 20 languages in a zero-shot setting and outperforms all previous models by 12{\%}.
null
null
10.18653/v1/2022.naacl-main.30
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,729
inproceedings
li-etal-2022-diversifying
Diversifying Neural Dialogue Generation via Negative Distillation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.31/
Li, Yiwei and Feng, Shaoxiong and Sun, Bin and Li, Kan
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
407--418
Generative dialogue models suffer badly from the generic response problem, limiting their applications to a few toy scenarios. Recently, an interesting approach, namely negative training, has been proposed to alleviate this problem by reminding the model not to generate high-frequency responses during training. However, its performance is hindered by two issues: ignoring low-frequency but generic responses and introducing low-frequency but meaningless responses. In this paper, we propose a novel negative training paradigm, called negative distillation, to keep the model away from the undesirable generic responses while avoiding the above problems. First, we introduce a negative teacher model that can produce query-wise generic responses, and then the student model is required to maximize the distance with multi-level negative knowledge. Empirical results show that our method outperforms previous negative training methods significantly.
null
null
10.18653/v1/2022.naacl-main.31
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,730
inproceedings
xu-etal-2022-synthetic
On Synthetic Data for Back Translation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.32/
Xu, Jiahao and Ruan, Yubin and Bi, Wei and Huang, Guoping and Shi, Shuming and Chen, Lihui and Liu, Lemao
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
419--430
Back translation (BT) is one of the most significant technologies in NMT research. Existing attempts at BT share a common characteristic: they employ either beam search or random sampling to generate synthetic data with a backward model, but little work has studied the role of synthetic data in the performance of BT. This motivates us to ask a fundamental question: what kind of synthetic data contributes to BT performance? Through both theoretical and empirical studies, we identify two key factors of synthetic data that control back-translation NMT performance: quality and importance. Furthermore, based on our findings, we propose a simple yet effective method to generate synthetic data that better trades off both factors so as to yield better BT performance. We run extensive experiments on the WMT14 DE-EN, EN-DE, and RU-EN benchmark tasks. By employing our proposed method to generate synthetic data, our BT model significantly outperforms the standard BT baselines (i.e., beam- and sampling-based methods for data generation), which proves the effectiveness of the proposed method.
null
null
10.18653/v1/2022.naacl-main.32
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,731
inproceedings
cheng-etal-2022-mapping
Mapping the Design Space of Human-{AI} Interaction in Text Summarization
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.33/
Cheng, Ruijia and Smith-Renner, Alison and Zhang, Ke and Tetreault, Joel and Jaimes-Larrarte, Alejandro
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
431--455
Automatic text summarization systems commonly involve humans for preparing data or evaluating model performance, yet there is no systematic understanding of humans' roles, experiences, and needs when interacting with or being assisted by AI. From a human-centered perspective, we map the design opportunities and considerations for human-AI interaction in text summarization and broader text generation tasks. We first conducted a systematic literature review of 70 papers, developing a taxonomy of five interactions in AI-assisted text generation and relevant design dimensions. We designed text summarization prototypes for each interaction. We then interviewed 16 users, aided by the prototypes, to understand their expectations, experience, and needs regarding efficiency, control, and trust with AI in text summarization and propose design considerations accordingly.
null
null
10.18653/v1/2022.naacl-main.33
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,732