Dataset schema (38 columns; stringlengths = min to max length, stringclasses = number of distinct values, null = all values null):

entry_type           stringclasses   4 values
citation_key         stringlengths   10 to 110
title                stringlengths   6 to 276
editor               stringclasses   723 values
month                stringclasses   69 values
year                 stringdate      1963-01-01 to 2022-01-01
address              stringclasses   202 values
publisher            stringclasses   41 values
url                  stringlengths   34 to 62
author               stringlengths   6 to 2.07k
booktitle            stringclasses   861 values
pages                stringlengths   1 to 12
abstract             stringlengths   302 to 2.4k
journal              stringclasses   5 values
volume               stringclasses   24 values
doi                  stringlengths   20 to 39
n                    stringclasses   3 values
wer                  stringclasses   1 value
uas                  null
language             stringclasses   3 values
isbn                 stringclasses   34 values
recall               null
number               stringclasses   8 values
a                    null
b                    null
c                    null
k                    null
f1                   stringclasses   4 values
r                    stringclasses   2 values
mci                  stringclasses   1 value
p                    stringclasses   2 values
sd                   stringclasses   1 value
female               stringclasses   0 values
m                    stringclasses   0 values
food                 stringclasses   1 value
f                    stringclasses   1 value
note                 stringclasses   20 values
__index_level_0__    int64           22k to 106k
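The records that follow are rows flattened from this schema, with null fields simply omitted in standard BibTeX output. As a minimal illustrative sketch (the field names come from the schema above, the values from the first row below; the `to_bibtex` helper is hypothetical and not part of any dataset API), one such row can be rendered back into a BibTeX entry:

```python
# Render one flattened record (a dict of schema columns) as a BibTeX entry,
# skipping null-valued fields. Values are taken from the first row below;
# only a subset of the 38 columns is shown for brevity.
record = {
    "entry_type": "inproceedings",
    "citation_key": "russell-etal-2022-bu",
    "title": "{BU}-{TTS}: An Open-Source, Bilingual {W}elsh-{E}nglish, "
             "Text-to-Speech Corpus",
    "author": "Russell, Stephen and Jones, Dewi and Prys, Delyth",
    "year": "2022",
    "pages": "104--109",
    "doi": None,  # null in the source row, so it is omitted from the output
}

def to_bibtex(rec):
    """Build a BibTeX entry string, dropping None-valued fields."""
    head = f"@{rec['entry_type']}{{{rec['citation_key']},\n"
    body = ",\n".join(
        f'    {key} = "{value}"'
        for key, value in rec.items()
        if key not in ("entry_type", "citation_key") and value is not None
    )
    return head + body + "\n}"

print(to_bibtex(record))
```

The same mapping applies to every row: `entry_type` becomes the `@...` type, `citation_key` the entry key, and each non-null column a `field = "value"` pair.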
@inproceedings{russell-etal-2022-bu,
    title = "{BU}-{TTS}: An Open-Source, Bilingual {W}elsh-{E}nglish, Text-to-Speech Corpus",
    author = "Russell, Stephen and Jones, Dewi and Prys, Delyth",
    editor = "Fransen, Theodorus and Lamb, William and Prys, Delyth",
    booktitle = "Proceedings of the 4th Celtic Language Technology Workshop within LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.cltw-1.15/",
    pages = "104--109",
    abstract = "This paper presents the design, collection and verification of a bilingual text-to-speech synthesis corpus for Welsh and English. The ever-expanding voice collection currently contains almost 10 hours of recordings from a bilingual, phonetically balanced text corpus. The speakers consist of a professional voice actor and three amateur contributors, with male and female accents from north and south Wales. This corpus provides audio-text pairs for building and training high-quality bilingual Welsh-English neural-based TTS systems. We describe the process by which we created a phonetically balanced prompt set and the challenges of attempting to collate such a dataset during the COVID-19 pandemic. Our initial findings in validating the corpus via the implementation of state-of-the-art TTS models are presented. This corpus represents the first open-source Welsh language corpus large enough to capitalise on neural TTS architectures."
}
% __index_level_0__: 29,145
@inproceedings{evans-etal-2022-developing,
    title = "Developing Automatic Speech Recognition for {S}cottish {G}aelic",
    author = "Evans, Lucy and Lamb, William and Sinclair, Mark and Alex, Beatrice",
    editor = "Fransen, Theodorus and Lamb, William and Prys, Delyth",
    booktitle = "Proceedings of the 4th Celtic Language Technology Workshop within LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.cltw-1.16/",
    pages = "110--120",
    abstract = "This paper discusses our efforts to develop a full automatic speech recognition (ASR) system for Scottish Gaelic, starting from a point of limited resource. Building ASR technology is important for documenting and revitalising endangered languages; it enables existing resources to be enhanced with automatic subtitles and transcriptions, improves accessibility for users, and, in turn, encourages continued use of the language. In this paper, we explain the many difficulties faced when collecting minority language data for speech recognition. A novel cross-lingual approach to the alignment of training data is used to overcome one such difficulty, and in this way we demonstrate how majority language resources can bootstrap the development of lower-resourced language technology. We use the Kaldi speech recognition toolkit to develop several Gaelic ASR systems, and report a final WER of 26.30{\%}. This is a 9.50{\%} improvement on our original model."
}
% __index_level_0__: 29,146
@inproceedings{o-raghallaigh-etal-2022-handwritten,
    title = "Handwritten Text Recognition ({HTR}) for {I}rish-Language Folklore",
    author = "{\'O} Raghallaigh, Brian and Palandri, Andrea and Mac C{\'a}rthaigh, Cr{\'i}ost{\'o}ir",
    editor = "Fransen, Theodorus and Lamb, William and Prys, Delyth",
    booktitle = "Proceedings of the 4th Celtic Language Technology Workshop within LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.cltw-1.17/",
    pages = "121--126",
    abstract = "In this paper we present our method for digitising a large collection of handwritten Irish-language texts as part of a project to mine information from a large corpus of Irish and Scottish Gaelic folktales. The handwritten texts form part of the Main Manuscript Collection of the National Folklore Collection of Ireland and contain handwritten transcriptions of oral folklore collected in Ireland in the 20th century. With the goal of creating a large text corpus of the Irish-language folktales contained within this collection, our method involves scanning the pages of the physical volumes and digitising the text on these pages using Transkribus, a platform for the recognition of historical documents. Given the nature of the collection, the approach we have taken involves the creation of individual text recognition models for multiple collectors' hands. This was motivated by the fact that a relatively small number of collectors contributed the bulk of the material, while the differences between each collector in terms of style, layout and orthography were difficult to reconcile within a single handwriting model. We present our preliminary results along with a discussion on the viability of using crowdsourced correction to improve our HTR models."
}
% __index_level_0__: 29,147
@inproceedings{barnes-etal-2022-aac,
    title = "{AAC} don Ghaeilge: the Prototype Development of Speech-Generating Assistive Technology for {I}rish",
    author = "Barnes, Emily and Morrin, Ois{\'i}n and N{\'i} Chasaide, Ailbhe and Cummins, Julia and Berthelsen, Harald and Murphy, Andy and Nic Corcr{\'a}in, Muireann and O{'}Neill, Claire and Gobl, Christer and N{\'i} Chiar{\'a}in, Neasa",
    editor = "Fransen, Theodorus and Lamb, William and Prys, Delyth",
    booktitle = "Proceedings of the 4th Celtic Language Technology Workshop within LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.cltw-1.18/",
    pages = "127--132",
    abstract = "This paper describes the prototype development of an Alternative and Augmentative Communication (AAC) system for the Irish language. This system allows users to communicate using the ABAIR synthetic voices, by selecting a series of words or images. Similar systems are widely available in English and are often used by autistic people, as well as by people with Cerebral Palsy, Alzheimer's and Parkinson's disease. A dual-pronged approach to development has been adopted: this involves (i) the initial short-term prototype development that targets the immediate needs of specific users, as well as considerations for (ii) the longer term development of a bilingual AAC system which will suit a broader range of users with varying linguistic backgrounds, age ranges and needs. This paper describes the design considerations and the implementation steps in the current system. Given the substantial differences in linguistic structures in Irish and English, the development of a bilingual system raises many research questions and avenues for future development."
}
% __index_level_0__: 29,148
@inproceedings{tasnim-etal-2022-depac,
    title = "{DEPAC}: a Corpus for Depression and Anxiety Detection from Speech",
    author = "Tasnim, Mashrura and Ehghaghi, Malikeh and Diep, Brian and Novikova, Jekaterina",
    editor = "Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew",
    booktitle = "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology",
    month = jul,
    year = "2022",
    address = "Seattle, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.clpsych-1.1/",
    doi = "10.18653/v1/2022.clpsych-1.1",
    pages = "1--16",
    abstract = "Mental distress like depression and anxiety contributes to the largest proportion of the global burden of diseases. Automated diagnosis systems for such disorders, empowered by recent innovations in Artificial Intelligence, can pave the way to reducing the suffering of the affected individuals. Development of such systems requires information-rich and balanced corpora. In this work, we introduce a novel mental distress analysis audio dataset DEPAC, labelled based on established thresholds on depression and anxiety standard screening tools. This large dataset comprises multiple speech tasks per individual, as well as relevant demographic information. Alongside, we present a feature set consisting of hand-curated acoustic and linguistic features, which were found effective in identifying signs of mental illnesses in human speech. Finally, we justify the quality and effectiveness of our proposed audio corpus and feature set in predicting depression severity by comparing the performance of baseline machine learning models built on this dataset with baseline models trained on other well-known depression corpora."
}
% __index_level_0__: 29,150
@inproceedings{orr-etal-2022-ethical,
    title = "The ethical role of computational linguistics in digital psychological formulation and suicide prevention.",
    author = "Orr, Martin and Van Kessel, Kirsten and Parry, Dave",
    editor = "Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew",
    booktitle = "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology",
    month = jul,
    year = "2022",
    address = "Seattle, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.clpsych-1.2/",
    doi = "10.18653/v1/2022.clpsych-1.2",
    pages = "17--29",
    abstract = "Formulation is central to clinical practice. Formulation has a factor weighing, pattern recognition and explanatory hypothesis modelling focus. Formulation attempts to make sense of why a person presents in a certain state at a certain time and context, and how that state may be best managed to enhance mental health, safety and optimal change. Inherent to the clinical need for formulation is an appreciation of the complexities, uncertainty and limits of applying theoretical concepts and symptom, diagnostic and risk categories to human experience; or attaching meaning or weight to any particular factor in an individual's history or mental state without considering the broader biopsychosocial and cultural context. With specific reference to suicide prevention, this paper considers the need and potential for the computer linguistic community to be both cognisant of and ethically contribute to the clinical formulation process."
}
% __index_level_0__: 29,151
@inproceedings{zirikly-dredze-2022-explaining,
    title = "Explaining Models of Mental Health via Clinically Grounded Auxiliary Tasks",
    author = "Zirikly, Ayah and Dredze, Mark",
    editor = "Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew",
    booktitle = "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology",
    month = jul,
    year = "2022",
    address = "Seattle, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.clpsych-1.3/",
    doi = "10.18653/v1/2022.clpsych-1.3",
    pages = "30--39",
    abstract = "Models of mental health based on natural language processing can uncover latent signals of mental health from language. Models that indicate whether an individual is depressed, or has other mental health conditions, can aid in diagnosis and treatment. A critical aspect of integration of these models into the clinical setting relies on explaining their behavior to domain experts. In the case of mental health diagnosis, clinicians already rely on an assessment framework to make these decisions; that framework can help a model generate meaningful explanations. In this work we propose to use PHQ-9 categories as an auxiliary task for explaining a social media based model of depression. We develop a multi-task learning framework that predicts both depression and PHQ-9 categories as auxiliary tasks. We compare the quality of explanations generated based on the depression task only, versus those that use the predicted PHQ-9 categories. We find that by relying on clinically meaningful auxiliary tasks, we produce more meaningful explanations."
}
% __index_level_0__: 29,152
@inproceedings{cho-etal-2022-identifying,
    title = "Identifying stable speech-language markers of autism in children: Preliminary evidence from a longitudinal telephony-based study",
    author = "Cho, Sunghye and Fusaroli, Riccardo and Pelella, Maggie Rose and Tena, Kimberly and Knox, Azia and Hauptmann, Aili and Covello, Maxine and Russell, Alison and Miller, Judith and Hulink, Alison and Uzokwe, Jennifer and Walker, Kevin and Fiumara, James and Pandey, Juhi and Chatham, Christopher and Cieri, Christopher and Schultz, Robert and Liberman, Mark and Parish-morris, Julia",
    editor = "Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew",
    booktitle = "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology",
    month = jul,
    year = "2022",
    address = "Seattle, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.clpsych-1.4/",
    doi = "10.18653/v1/2022.clpsych-1.4",
    pages = "40--46",
    abstract = "This study examined differences in linguistic features produced by autistic and neurotypical (NT) children during brief picture descriptions, and assessed feature stability over time. Weekly speech samples from well-characterized participants were collected using a telephony system designed to improve access for geographically isolated and historically marginalized communities. Results showed stable group differences in certain acoustic features, some of which may potentially serve as key outcome measures in future treatment studies. These results highlight the importance of eliciting semi-structured speech samples in a variety of contexts over time, and add to a growing body of research showing that fine-grained naturalistic communication features hold promise for intervention research."
}
% __index_level_0__: 29,153
@inproceedings{mehta-etal-2022-psychotherapy,
    title = "Psychotherapy is Not One Thing: Simultaneous Modeling of Different Therapeutic Approaches",
    author = "Mehta, Maitrey and Caperton, Derek and Axford, Katherine and Weitzman, Lauren and Atkins, David and Srikumar, Vivek and Imel, Zac",
    editor = "Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew",
    booktitle = "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology",
    month = jul,
    year = "2022",
    address = "Seattle, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.clpsych-1.5/",
    doi = "10.18653/v1/2022.clpsych-1.5",
    pages = "47--58",
    abstract = "There are many different forms of psychotherapy. Itemized inventories of psychotherapeutic interventions provide a mechanism for evaluating the quality of care received by clients and for conducting research on how psychotherapy helps. However, evaluations such as these are slow, expensive, and are rarely used outside of well-funded research studies. Natural language processing research has progressed to allow automating such tasks. Yet, NLP work in this area has been restricted to evaluating a single approach to treatment, when prior research indicates therapists used a wide variety of interventions with their clients, often in the same session. In this paper, we frame this scenario as a multi-label classification task, and develop a group of models aimed at predicting a wide variety of therapist talk-turn level orientations. Our models achieve F1 macro scores of 0.5, with the class F1 ranging from 0.36 to 0.67. We present analyses which offer insights into the capability of such models to capture psychotherapy approaches, and which may complement human judgment."
}
% __index_level_0__: 29,154
@inproceedings{harrigian-dredze-2022-now,
    title = "Then and Now: Quantifying the Longitudinal Validity of Self-Disclosed Depression Diagnoses",
    author = "Harrigian, Keith and Dredze, Mark",
    editor = "Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew",
    booktitle = "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology",
    month = jul,
    year = "2022",
    address = "Seattle, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.clpsych-1.6/",
    doi = "10.18653/v1/2022.clpsych-1.6",
    pages = "59--75",
    abstract = "Self-disclosed mental health diagnoses, which serve as ground truth annotations of mental health status in the absence of clinical measures, underpin the conclusions behind most computational studies of mental health language from the last decade. However, psychiatric conditions are dynamic; a prior depression diagnosis may no longer be indicative of an individual's mental health, either due to treatment or other mitigating factors. We ask: to what extent are self-disclosures of mental health diagnoses actually relevant over time? We analyze recent activity from individuals who disclosed a depression diagnosis on social media over five years ago and, in turn, acquire a new understanding of how presentations of mental health status on social media manifest longitudinally. We also provide expanded evidence for the presence of personality-related biases in datasets curated using self-disclosed diagnoses. Our findings motivate three practical recommendations for improving mental health datasets curated using self-disclosed diagnoses: (1) annotate diagnosis dates and psychiatric comorbidities; (2) sample control groups using propensity score matching; (3) identify and remove spurious correlations introduced by selection bias."
}
% __index_level_0__: 29,155
@inproceedings{ireland-etal-2022-tracking,
    title = "Tracking Mental Health Risks and Coping Strategies in Healthcare Workers' Online Conversations Across the {COVID}-19 Pandemic",
    author = "Ireland, Molly and Adams, Kaitlin and Farrell, Sean",
    editor = "Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew",
    booktitle = "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology",
    month = jul,
    year = "2022",
    address = "Seattle, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.clpsych-1.7/",
    doi = "10.18653/v1/2022.clpsych-1.7",
    pages = "76--88",
    abstract = "The mental health risks of the COVID-19 pandemic are magnified for medical professionals, such as doctors and nurses. To track conversational markers of psychological distress and coping strategies, we analyzed 67.25 million words written by self-identified healthcare workers (N = 5,409; 60.5{\%} nurses, 40.5{\%} physicians) on Reddit beginning in June 2019. Dictionary-based measures revealed increasing emotionality (including more positive and negative emotion and more swearing), social withdrawal (less affiliation and empathy, more {\textquotedblleft}they{\textquotedblright} pronouns), and self-distancing (fewer {\textquotedblleft}I{\textquotedblright} pronouns) over time. Several effects were strongest for conversations that were least health-focused and self-relevant, suggesting that long-term changes in social and emotional behavior are general and not limited to personal or work-related experiences. Understanding protective and risky coping strategies used by healthcare workers during the pandemic is fundamental for maintaining mental health among front-line workers during periods of chronic stress, such as the COVID-19 pandemic."
}
% __index_level_0__: 29,156
@inproceedings{aich-parde-2022-really,
    title = "Are You Really Okay? A Transfer Learning-based Approach for Identification of Underlying Mental Illnesses",
    author = "Aich, Ankit and Parde, Natalie",
    editor = "Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew",
    booktitle = "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology",
    month = jul,
    year = "2022",
    address = "Seattle, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.clpsych-1.8/",
    doi = "10.18653/v1/2022.clpsych-1.8",
    pages = "89--104",
    abstract = "Evidence has demonstrated the presence of similarities in language use across people with various mental health conditions. In this work, we investigate these correlations both in terms of literature and as a data analysis problem. We also introduce a novel state-of-the-art transfer learning-based approach that learns from linguistic feature spaces of previous conditions and predicts unknown ones. Our model achieves strong performance, with F1 scores of 0.75, 0.80, and 0.76 at detecting depression, stress, and suicidal ideation in a first-of-its-kind transfer task, offering promising evidence that language models can harness learned patterns from known mental health conditions to aid in their prediction of others that may lie latent."
}
% __index_level_0__: 29,157
@inproceedings{burkhardt-etal-2022-comparing,
    title = "Comparing emotion feature extraction approaches for predicting depression and anxiety",
    author = "Burkhardt, Hannah and Pullmann, Michael and Hull, Thomas and Are{\'a}n, Patricia and Cohen, Trevor",
    editor = "Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew",
    booktitle = "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology",
    month = jul,
    year = "2022",
    address = "Seattle, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.clpsych-1.9/",
    doi = "10.18653/v1/2022.clpsych-1.9",
    pages = "105--115",
    abstract = "The increasing adoption of message-based behavioral therapy enables new approaches to assessing mental health using linguistic analysis of patient-generated text. Word counting approaches have demonstrated utility for linguistic feature extraction, but deep learning methods hold additional promise given recent advances in this area. We evaluated the utility of emotion features extracted using a BERT-based model in comparison to emotions extracted using word counts as predictors of symptom severity in a large set of messages from text-based therapy sessions involving over 6,500 unique patients, accompanied by data from repeatedly administered symptom scale measurements. BERT-based emotion features explained more variance in regression models of symptom severity, and improved predictive modeling of scale-derived diagnostic categories. However, LIWC categories that are not directly related to emotions provided valuable and complementary information for modeling of symptom severity, indicating a role for both approaches in inferring the mental states underlying patient-generated language."
}
% __index_level_0__: 29,158
@inproceedings{lee-etal-2022-detecting,
    title = "Detecting Suicidality with a Contextual Graph Neural Network",
    author = "Lee, Daeun and Kang, Migyeong and Kim, Minji and Han, Jinyoung",
    editor = "Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew",
    booktitle = "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology",
    month = jul,
    year = "2022",
    address = "Seattle, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.clpsych-1.10/",
    doi = "10.18653/v1/2022.clpsych-1.10",
    pages = "116--125",
    abstract = "Discovering individuals' suicidality on social media has become increasingly important. Many researchers have studied how to detect suicidality by using a suicide dictionary. However, while prior work focused on matching a word in a post with a suicide dictionary without considering contexts, little attention has been paid to how the word can be associated with the suicide-related context. To address this problem, we propose a suicidality detection model based on a graph neural network to grasp the dynamic semantic information of the suicide vocabulary by learning the relations between a given post and words. The extensive evaluation demonstrates that the proposed model achieves higher performance than the state-of-the-art methods. We believe the proposed model has great utility in identifying the suicidality of individuals and hence preventing individuals from potential suicide risks at an early stage."
}
% __index_level_0__: 29,159
@inproceedings{lybarger-etal-2022-identifying,
    title = "Identifying Distorted Thinking in Patient-Therapist Text Message Exchanges by Leveraging Dynamic Multi-Turn Context",
    author = "Lybarger, Kevin and Tauscher, Justin and Ding, Xiruo and Ben-zeev, Dror and Cohen, Trevor",
    editor = "Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew",
    booktitle = "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology",
    month = jul,
    year = "2022",
    address = "Seattle, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.clpsych-1.11/",
    doi = "10.18653/v1/2022.clpsych-1.11",
    pages = "126--136",
    abstract = "There is growing evidence that mobile text message exchanges between patients and therapists can augment traditional cognitive behavioral therapy. The automatic characterization of patient thinking patterns in this asynchronous text communication may guide treatment and assist in therapist training. In this work, we automatically identify distorted thinking in text-based patient-therapist exchanges, investigating the role of conversation history (context) in distortion prediction. We identify six unique types of cognitive distortions and utilize BERT-based architectures to represent text messages within the context of the conversation. We propose two approaches for leveraging dynamic conversation context in model training. By representing the text messages within the context of the broader patient-therapist conversation, the models better emulate the therapist's task of recognizing distorted thoughts. This multi-turn classification approach also leverages the clustering of distorted thinking in the conversation timeline. We demonstrate that including conversation context, including the proposed dynamic context methods, improves distortion prediction performance. The proposed architectures and conversation encoding approaches achieve performance comparable to inter-rater agreement. The presence of any distorted thinking is identified with relatively high performance at 0.73 F1, significantly outperforming the best context-agnostic models (0.68 F1)."
}
% __index_level_0__: 29,160
@inproceedings{gupta-etal-2022-learning,
    title = "Learning to Automate Follow-up Question Generation using Process Knowledge for Depression Triage on {R}eddit Posts",
    author = "Gupta, Shrey and Agarwal, Anmol and Gaur, Manas and Roy, Kaushik and Narayanan, Vignesh and Kumaraguru, Ponnurangam and Sheth, Amit",
    editor = "Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew",
    booktitle = "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology",
    month = jul,
    year = "2022",
    address = "Seattle, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.clpsych-1.12/",
    doi = "10.18653/v1/2022.clpsych-1.12",
    pages = "137--147",
    abstract = "Conversational Agents (CAs) powered with deep language models (DLMs) have shown tremendous promise in the domain of mental health. Prominently, the CAs have been used to provide informational or therapeutic services (e.g., cognitive behavioral therapy) to patients. However, the utility of CAs to assist in mental health triaging has not been explored in the existing work as it requires a controlled generation of follow-up questions (FQs), which are often initiated and guided by the mental health professionals (MHPs) in clinical settings. In the context of {\textquoteleft}depression', our experiments show that DLMs coupled with process knowledge in a mental health questionnaire generate 12.54{\%} and 9.37{\%} better FQs based on similarity and longest common subsequence matches to questions in the PHQ-9 dataset respectively, when compared with DLMs without process knowledge support. Despite coupling with process knowledge, we find that DLMs are still prone to hallucination, i.e., generating redundant, irrelevant, and unsafe FQs. We demonstrate the challenge of using existing datasets to train a DLM for generating FQs that adhere to clinical process knowledge. To address this limitation, we prepared an extended PHQ-9 based dataset, PRIMATE, in collaboration with MHPs. PRIMATE contains annotations regarding whether a particular question in the PHQ-9 dataset has already been answered in the user's initial description of the mental health condition. We used PRIMATE to train a DLM in a supervised setting to identify which of the PHQ-9 questions can be answered directly from the user's post and which ones would require more information from the user. Using performance analysis based on MCC scores, we show that PRIMATE is appropriate for identifying questions in PHQ-9 that could guide generative DLMs towards controlled FQ generation (with minimal hallucination) suitable for aiding triaging. The dataset created as a part of this research can be obtained from \url{https://github.com/primate-mh/Primate2022}"
}
% __index_level_0__: 29,161
@inproceedings{shriki-etal-2022-masking,
    title = "Masking Morphosyntactic Categories to Evaluate Salience for Schizophrenia Diagnosis",
    author = "Shriki, Yaara and Ziv, Ido and Dershowitz, Nachum and Harel, Eiran and Bar, Kfir",
    editor = "Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew",
    booktitle = "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology",
    month = jul,
    year = "2022",
    address = "Seattle, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.clpsych-1.13/",
    doi = "10.18653/v1/2022.clpsych-1.13",
    pages = "148--157",
    abstract = "Natural language processing tools have been shown to be effective for detecting symptoms of schizophrenia in transcribed speech. We analyze and assess the contribution of the various syntactic and morphological categories towards successful machine classification of texts produced by subjects with schizophrenia and by others. Specifically, we fine-tune a language model for the classification task, and mask all words that are attributed with each category of interest. The speech samples were generated in a controlled way by interviewing inpatients who were officially diagnosed with schizophrenia, and a corresponding group of healthy controls. All participants are native Hebrew speakers. Our results show that nouns are the most significant category for classification performance."
}
% __index_level_0__: 29,162
inproceedings
shapira-etal-2022-measuring
Measuring Linguistic Synchrony in Psychotherapy
Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.clpsych-1.14/
Shapira, Natalie and Atzil-Slonim, Dana and Tuval Mashiach, Rivka and Shapira, Ori
Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology
158--176
We study the phenomenon of linguistic synchrony between clients and therapists in a psychotherapy process. Linguistic Synchrony (LS) can be viewed as any observed interdependence or association between more than one person's linguistic behavior. Accordingly, we establish LS as a methodological task. We suggest a LS function that applies a linguistic similarity measure based on the Jensen-Shannon distance across the observed part-of-speech tag distributions (JSDuPos) of the speakers in different time frames. We perform a study over a unique corpus of 872 transcribed sessions, covering 68 clients and 59 therapists. After establishing the presence of client-therapist LS, we verify its association with therapeutic alliance and treatment outcome (measured using WAI and ORS), and additionally analyse the behavior of JSDuPos throughout treatment. Results indicate that (1) higher linguistic similarity at the session level associates with higher therapeutic alliance as reported by the client and therapist at the end of the session, (2) higher linguistic similarity at the session level associates with higher level of treatment outcome as reported by the client at the beginnings of the next sessions, (3) there is a significant linear increase in linguistic similarity throughout treatment, (4) surprisingly, higher LS associates with lower treatment outcome. Finally, we demonstrate how the LS function can be used to interpret and explore the mechanism for synchrony.
null
null
10.18653/v1/2022.clpsych-1.14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,163
inproceedings
giorgi-etal-2022-nonsuicidal
Nonsuicidal Self-Injury and Substance Use Disorders: A Shared Language of Addiction
Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.clpsych-1.15/
Giorgi, Salvatore and Himelein-wachowiak, Mckenzie and Habib, Daniel and Ungar, Lyle and Curtis, Brenda
Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology
177--183
Nonsuicidal self-injury (NSSI), or the deliberate injuring of one's body without intending to die, has been shown to exhibit many similarities to substance use disorders (SUDs), including population-level characteristics, impulsivity traits, and comorbidity with other mental disorders. Research has further shown that people who self-injure adopt language common in SUD recovery communities (e.g., {\textquotedblleft}clean{\textquotedblright}, {\textquotedblleft}relapse{\textquotedblright}, {\textquotedblleft}addiction,{\textquotedblright} and celebratory language about sobriety milestones). In this study, we investigate the shared language of NSSI and SUD by comparing discussions on public Reddit forums related to self-injury and drug addiction. To this end, we build a set of LDA topics across both NSSI and SUD Reddit users and show that shared language across the two domains includes SUD recovery language in addition to other themes common to support forums (e.g., requests for help and gratitude). Next, we examine Reddit-wide posting activity and note that users posting in \textit{r/selfharm} also post in many mental health-related subreddits, while users of drug addiction related subreddits do not, despite high comorbidity between NSSI and SUDs. These results show that while people who self-injure may contextualize their disorder as an addiction, their posting habits demonstrate comorbidities with other mental disorders more so than their counterparts in recovery from SUDs. These observations have clinical implications for people who self-injure and seek support by sharing their experiences online.
null
null
10.18653/v1/2022.clpsych-1.15
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,164
inproceedings
tsakalidis-etal-2022-overview
Overview of the {CLP}sych 2022 Shared Task: Capturing Moments of Change in Longitudinal User Posts
Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.clpsych-1.16/
Tsakalidis, Adam and Chim, Jenny and Bilal, Iman Munire and Zirikly, Ayah and Atzil-Slonim, Dana and Nanni, Federico and Resnik, Philip and Gaur, Manas and Roy, Kaushik and Inkster, Becky and Leintz, Jeff and Liakata, Maria
Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology
184--198
We provide an overview of the CLPsych 2022 Shared Task, which focusses on the automatic identification of {\textquoteleft}Moments of Change' in longitudinal posts by individuals on social media and its connection with information regarding mental health. This year's task introduced the notion of longitudinal modelling of the text generated by an individual online over time, along with appropriate temporally sensitive evaluation metrics. The Shared Task consisted of two subtasks: (a) the main task of capturing changes in an individual's mood (drastic changes {--} {\textquoteleft}Switches' {--} and gradual changes {--} {\textquoteleft}Escalations' {--}) on the basis of textual content shared online; and subsequently (b) the subtask of identifying the suicide risk level of an individual {--} a continuation of the CLPsych 2019 Shared Task {--} where participants were encouraged to explore how the identification of changes in mood in task (a) can help with assessing suicidality risk in task (b).
null
null
10.18653/v1/2022.clpsych-1.16
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,165
inproceedings
fabregat-marcos-etal-2022-approximate
Approximate Nearest Neighbour Extraction Techniques and Neural Networks for Suicide Risk Prediction in the {CLP}sych 2022 Shared Task
Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.clpsych-1.17/
Fabregat Marcos, Hermenegildo and Cejudo, Ander and Martinez-romo, Juan and Perez, Alicia and Araujo, Lourdes and Lebea, Nuria and Oronoz, Maite and Casillas, Arantza
Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology
199--204
This paper describes the participation of our group on the CLPsych 2022 shared task. For task A, which tries to capture changes in mood over time, we have applied an Approximate Nearest Neighbour (ANN) extraction technique with the aim of relabelling the user messages according to their proximity, based on the representation of these messages in a vector space. Regarding the subtask B, we have used the output of the subtask A to train a Recurrent Neural Network (RNN) to predict the risk of suicide at the user level. The results obtained are very competitive considering that our team was one of the few that made use of the organisers' proposed virtual environment and also made use of the Task A output to predict the Task B results.
null
null
10.18653/v1/2022.clpsych-1.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,166
inproceedings
bucur-etal-2022-capturing
Capturing Changes in Mood Over Time in Longitudinal Data Using Ensemble Methodologies
Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.clpsych-1.18/
Bucur, Ana-Maria and Jang, Hyewon and Liza, Farhana Ferdousi
Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology
205--212
This paper presents the system description of team BLUE for Task A of the CLPsych 2022 Shared Task on identifying changes in mood and behaviour in longitudinal textual data. These moments of change are signals that can be used to screen and prevent suicide attempts. To detect these changes, we experimented with several text representation methods, such as TF-IDF, sentence embeddings, emotion-informed embeddings and several classical machine learning classifiers. We chose to submit three runs of ensemble systems based on maximum voting on the predictions from the best performing models. Of the nine participating teams in Task A, our team ranked second in the Precision-oriented Coverage-based Evaluation, with a score of 0.499. Our best system was an ensemble of Support Vector Machine, Logistic Regression, and Adaptive Boosting classifiers using emotion-informed embeddings as input representation that can model both the linguistic and emotional information found in users' posts.
null
null
10.18653/v1/2022.clpsych-1.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,167
inproceedings
azim-etal-2022-detecting
Detecting Moments of Change and Suicidal Risks in Longitudinal User Texts Using Multi-task Learning
Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.clpsych-1.19/
Azim, Tayyaba and Gyanendro Singh, Loitongbam and Middleton, Stuart E.
Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology
213--218
This work describes the classification system proposed for the Computational Linguistics and Clinical Psychology (CLPsych) Shared Task 2022. We propose the use of multitask learning approach with bidirectional long-short term memory (Bi-LSTM) model for predicting changes in user's mood and their suicidal risk level. The two classification tasks have been solved independently or in an augmented way previously, where the output of one task is leveraged for learning another task, however this work proposes an {\textquoteleft}all-in-one' framework that jointly learns the related mental health tasks. The experimental results suggest that the proposed multi-task framework outperforms the remaining single-task frameworks submitted to the challenge and evaluated via timeline based and coverage based performance metrics shared by the organisers. We also assess the potential of using various types of feature embedding schemes that could prove useful in initialising the Bi-LSTM model for better multitask learning in the mental health domain.
null
null
10.18653/v1/2022.clpsych-1.19
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,168
inproceedings
bayram-benhiba-2022-emotionally
Emotionally-Informed Models for Detecting Moments of Change and Suicide Risk Levels in Longitudinal Social Media Data
Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.clpsych-1.20/
Bayram, Ulya and Benhiba, Lamia
Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology
219--225
In this shared task, we focus on detecting mental health signals in Reddit users' posts through two main challenges: A) capturing mood changes (anomalies) from the longitudinal set of posts (called timelines), and B) assessing the users' suicide risk-levels. Our approaches leverage emotion recognition on linguistic content by computing emotion/sentiment scores using pre-trained BERTs on users' posts and feeding them to machine learning models, including XGBoost, Bi-LSTM, and logistic regression. For Task-A, we detect longitudinal anomalies using a sequence-to-sequence (seq2seq) autoencoder and capture regions of mood deviations. For Task-B, our two models utilize the BERT emotion/sentiment scores. The first computes emotion bandwidths and merges them with n-gram features, and employs logistic regression to detect users' suicide risk levels. The second model predicts suicide risk on the timeline level using a Bi-LSTM on Task-A results and sentiment scores. Our results outperformed most participating teams and ranked in the top three in Task-A. In Task-B, our methods surpass all others and return the best macro and micro F1 scores.
null
null
10.18653/v1/2022.clpsych-1.20
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,169
inproceedings
culnan-etal-2022-exploring
Exploring transformers and time lag features for predicting changes in mood over time
Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.clpsych-1.21/
Culnan, John and Romero Diaz, Damian and Bethard, Steven
Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology
226--231
This paper presents transformer-based models created for the CLPsych 2022 shared task. Using posts from Reddit users over a period of time, we aim to predict changes in mood from post to post. We test models that preserve timeline information through explicit ordering of posts as well as those that do not order posts but preserve features on the length of time between a user's posts. We find that a model with temporal information may provide slight benefits over the same model without such information, although a RoBERTa transformer model provides enough information to make similar predictions without custom-encoded time information.
null
null
10.18653/v1/2022.clpsych-1.21
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,170
inproceedings
kirinde-gamaarachchige-etal-2022-multi
Multi-Task Learning to Capture Changes in Mood Over Time
Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.clpsych-1.22/
Kirinde Gamaarachchige, Prasadith and Husseini Orabi, Ahmed and Husseini Orabi, Mahmoud and Inkpen, Diana
Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology
232--238
This paper investigates the impact of using Multi-Task Learning (MTL) to predict mood changes over time for each individual (social media user). The presented models were developed as a part of the Computational Linguistics and Clinical Psychology (CLPsych) 2022 shared task. Given the limited number of Reddit social media users, as well as their posts, we decided to experiment with different multi-task learning architectures to identify to what extent knowledge can be shared among similar tasks. Due to class imbalance at both post and user levels and to accommodate task alignment, we randomly sampled an equal number of instances from the respective classes and performed ensemble learning to reduce prediction variance. Faced with several constraints, we managed to produce competitive results that could provide insights into the use of multi-task learning to identify mood changes over time and suicide ideation risk.
null
null
10.18653/v1/2022.clpsych-1.22
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,171
inproceedings
alhamed-etal-2022-predicting
Predicting Moments of Mood Changes Overtime from Imbalanced Social Media Data
Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.clpsych-1.23/
Alhamed, Falwah and Ive, Julia and Specia, Lucia
Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology
239--244
Social media data have been used in research for many years to understand users' mental health. In this paper, using user-generated content we aim to achieve two goals: the first is detecting moments of mood change over time using timelines of users from Reddit. The second is predicting the degree of suicide risk as a user-level classification task. We used different approaches to address longitudinal modelling as well as the problem of the severely imbalanced dataset. Using BERT with undersampling techniques performed the best among other LSTM and basic random forests models for the first task. For the second task, extracting some features related to suicide from posts' text contributed to the overall performance improvement. Specifically, a number of suicide-related words in a post as a feature improved the accuracy by 17{\%}.
null
null
10.18653/v1/2022.clpsych-1.23
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,172
inproceedings
boinepelli-etal-2022-towards
Towards Capturing Changes in Mood and Identifying Suicidality Risk
Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.clpsych-1.24/
Boinepelli, Sravani and Subramanian, Shivansh and Singam, Abhijeeth and Raha, Tathagata and Varma, Vasudeva
Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology
245--250
This paper describes our systems for CLPsych's 2022 Shared Task. Subtask A involves capturing moments of change in an individual's mood over time, while Subtask B asked us to identify the suicidality risk of a user. We explore multiple machine learning and deep learning methods for the same, taking real-life applicability into account while considering the design of the architecture. Our team achieved top results in different categories for both subtasks. Task A was evaluated on a post-level (using macro averaged F1) and on a window-based timeline level (using macro-averaged precision and recall). We scored a post-level F1 of 0.520 and ranked second with a timeline-level recall of 0.646. Task B was a user-level task where we also came in second with a micro F1 of 0.520 and scored third place on the leaderboard with a macro F1 of 0.380.
null
null
10.18653/v1/2022.clpsych-1.24
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,173
inproceedings
v-ganesan-etal-2022-wwbp
{WWBP}-{SQT}-lite: Multi-level Models and Difference Embeddings for Moments of Change Identification in Mental Health Forums
Zirikly, Ayah and Atzil-Slonim, Dana and Liakata, Maria and Bedrick, Steven and Desmet, Bart and Ireland, Molly and Lee, Andrew and MacAvaney, Sean and Purver, Matthew and Resnik, Rebecca and Yates, Andrew
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.clpsych-1.25/
V Ganesan, Adithya and Varadarajan, Vasudha and Mittal, Juhi and Subrahmanya, Shashanka and Matero, Matthew and Soni, Nikita and Guntuku, Sharath Chandra and Eichstaedt, Johannes and Schwartz, H. Andrew
Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology
251--258
Psychological states unfold dynamically; to understand and measure mental health at scale we need to detect and measure these changes from sequences of online posts. We evaluate two approaches to capturing psychological changes in text: the first relies on computing the difference between the embedding of a message with the one that precedes it, the second relies on a {\textquotedblleft}human-aware{\textquotedblright} multi-level recurrent transformer (HaRT). The mood changes of timeline posts of users were annotated into three classes, {\textquoteleft}ordinary,' {\textquoteleft}switching' (positive to negative or vice versa) and {\textquoteleft}escalations' (increasing in intensity). For classifying these mood changes, the difference-between-embeddings technique {--} applied to RoBERTa embeddings {--} showed the highest overall F1 score (0.61) across the three different classes on the test set. The technique particularly outperformed the HaRT transformer (and other baselines) in the detection of switches (F1 = .33) and escalations (F1 = .61). Consistent with the literature, the language use patterns associated with mental-health related constructs in prior work (including depression, stress, anger and anxiety) predicted both mood switches and escalations.
null
null
10.18653/v1/2022.clpsych-1.25
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,174
inproceedings
krishnamoorthy-etal-2022-clpt
{CLPT}: A Universal Annotation Scheme and Toolkit for Clinical Language Processing
Naumann, Tristan and Bethard, Steven and Roberts, Kirk and Rumshisky, Anna
jul
2022
Seattle, WA
Association for Computational Linguistics
https://aclanthology.org/2022.clinicalnlp-1.1/
Krishnamoorthy, Saranya and Jiang, Yanyi and Buchanan, William and Singh, Ayush and Ortega, John
Proceedings of the 4th Clinical Natural Language Processing Workshop
1--9
With the abundance of natural language processing (NLP) frameworks and toolkits being used in the clinical arena, a new challenge has arisen - how do technologists collaborate across several projects in an easy way? Private sector companies are usually not willing to share their work due to intellectual property rights and profit-bearing decisions. Therefore, the annotation schemes and toolkits that they use are rarely shared with the wider community. We present the clinical language pipeline toolkit (CLPT) and its corresponding annotation scheme called the CLAO (Clinical Language Annotation Object) with the aim of creating a way to share research results and other efforts through a software solution. The CLAO is a unified annotation scheme for clinical technology processing (CTP) projects that forms part of the CLPT and is more reliable than previous standards such as UIMA, BioC, and cTakes for annotation searches, insertions, and deletions. Additionally, it offers a standardized object that can be exchanged through an API that the authors release publicly for CTP project inclusion.
null
null
10.18653/v1/2022.clinicalnlp-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,176
inproceedings
huang-etal-2022-plm
{PLM}-{ICD}: Automatic {ICD} Coding with Pretrained Language Models
Naumann, Tristan and Bethard, Steven and Roberts, Kirk and Rumshisky, Anna
jul
2022
Seattle, WA
Association for Computational Linguistics
https://aclanthology.org/2022.clinicalnlp-1.2/
Huang, Chao-Wei and Tsai, Shang-Chi and Chen, Yun-Nung
Proceedings of the 4th Clinical Natural Language Processing Workshop
10--20
Automatically classifying electronic health records (EHRs) into diagnostic codes has been challenging to the NLP community. State-of-the-art methods treated this problem as a multi-label classification problem and proposed various architectures to model this problem. However, these systems did not leverage the superb performance of pretrained language models, which achieved superb performance on natural language understanding tasks. Prior work has shown that pretrained language models underperformed on this task with the regular fine-tuning scheme. Therefore, this paper aims at analyzing the causes of the underperformance and developing a framework for automatic ICD coding with pretrained language models. We spotted three main issues through the experiments: 1) large label space, 2) long input sequences, and 3) domain mismatch between pretraining and fine-tuning. We propose PLM-ICD, a framework that tackles the challenges with various strategies. The experimental results show that our proposed framework can overcome the challenges and achieves state-of-the-art performance in terms of multiple metrics on the benchmark MIMIC data. Our source code is available at \url{https://github.com/MiuLab/PLM-ICD}.
null
null
10.18653/v1/2022.clinicalnlp-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,177
inproceedings
seneviratne-etal-2022-networks
$m$-Networks: Adapting the Triplet Networks for Acronym Disambiguation
Naumann, Tristan and Bethard, Steven and Roberts, Kirk and Rumshisky, Anna
jul
2022
Seattle, WA
Association for Computational Linguistics
https://aclanthology.org/2022.clinicalnlp-1.3/
Seneviratne, Sandaru and Daskalaki, Elena and Lenskiy, Artem and Suominen, Hanna
Proceedings of the 4th Clinical Natural Language Processing Workshop
21--29
Acronym disambiguation (AD) is the process of identifying the correct expansion of the acronyms in text. AD is crucial in natural language understanding of scientific and medical documents due to the high prevalence of technical acronyms and the possible expansions. Given that natural language is often ambiguous with more than one meaning for words, identifying the correct expansion for acronyms requires learning of effective representations for words, phrases, acronyms, and abbreviations based on their context. In this paper, we proposed an approach to leverage the triplet networks and triplet loss which learns better representations of text through distance comparisons of embeddings. We tested both the triplet network-based method and the modified triplet network-based method with $m$ networks on the AD dataset from the SDU@AAAI-21 AD task, CASI dataset, and MeDAL dataset. F scores of 87.31{\%}, 70.67{\%}, and 75.75{\%} were achieved by the $m$ network-based approach for SDU, CASI, and MeDAL datasets respectively indicating that triplet network-based methods have comparable performance but with only 12{\%} of the number of parameters in the baseline method. This effective implementation is available at \url{https://github.com/sandaruSen/m_networks} under the MIT license.
null
null
10.18653/v1/2022.clinicalnlp-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,178
inproceedings
liang-etal-2022-fine
Fine-tuning {BERT} Models for Summarizing {G}erman Radiology Findings
Naumann, Tristan and Bethard, Steven and Roberts, Kirk and Rumshisky, Anna
jul
2022
Seattle, WA
Association for Computational Linguistics
https://aclanthology.org/2022.clinicalnlp-1.4/
Liang, Siting and Kades, Klaus and Fink, Matthias and Full, Peter and Weber, Tim and Kleesiek, Jens and Strube, Michael and Maier-Hein, Klaus
Proceedings of the 4th Clinical Natural Language Processing Workshop
30--40
Writing the conclusion section of radiology reports is essential for communicating the radiology findings and its assessment to physician in a condensed form. In this work, we employ a transformer-based Seq2Seq model for generating the conclusion section of German radiology reports. The model is initialized with the pretrained parameters of a German BERT model and fine-tuned in our downstream task on our domain data. We proposed two strategies to improve the factual correctness of the model. In the first method, next to the abstractive learning objective, we introduce an extraction learning objective to train the decoder in the model to both generate one summary sequence and extract the key findings from the source input. The second approach is to integrate the pointer mechanism into the transformer-based Seq2Seq model. The pointer network helps the Seq2Seq model to choose between generating tokens from the vocabulary or copying parts from the source input during generation. The results of the automatic and human evaluations show that the enhanced Seq2Seq model is capable of generating human-like radiology conclusions and that the improved models effectively reduce the factual errors in the generations despite the small amount of training data.
null
null
10.18653/v1/2022.clinicalnlp-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,179
inproceedings
min-etal-2022-rred
{RRED} : A Radiology Report Error Detector based on Deep Learning Framework
Naumann, Tristan and Bethard, Steven and Roberts, Kirk and Rumshisky, Anna
jul
2022
Seattle, WA
Association for Computational Linguistics
https://aclanthology.org/2022.clinicalnlp-1.5/
Min, Dabin and Kim, Kaeun and Lee, Jong Hyuk and Kim, Yisak and Park, Chang Min
Proceedings of the 4th Clinical Natural Language Processing Workshop
41--52
Radiology report is an official record of radiologists' interpretation of patients' radiographs and it's a crucial component in the overall medical diagnostic process. However, it can contain various types of errors that can lead to inadequate treatment or delay in diagnosis. To address this problem, we propose a deep learning framework to detect errors in radiology reports. Specifically, our method detects errors between findings and conclusion of chest X-ray reports based on a supervised learning framework. To compensate for the lack of data availability of radiology reports with errors, we develop an error generator to systematically create artificial errors in existing reports. In addition, we introduce a Medical Knowledge-enhancing Pre-training to further utilize the knowledge of abbreviations and key phrases frequently used in the medical domain. We believe that this is the first work to propose a deep learning framework for detecting errors in radiology reports based on a rich contextual and medical understanding. Validation on our radiologist-synthesized dataset, based on MIMIC-CXR, shows 0.80 and 0.95 of the area under precision-recall curve (AUPRC) and the area under the ROC curve (AUROC) respectively, indicating that our framework can effectively detect errors in the real-world radiology reports.
null
null
10.18653/v1/2022.clinicalnlp-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,180
inproceedings
schafer-etal-2022-cross
Cross-Language Transfer of High-Quality Annotations: Combining Neural Machine Translation with Cross-Linguistic Span Alignment to Apply {NER} to Clinical Texts in a Low-Resource Language
Naumann, Tristan and Bethard, Steven and Roberts, Kirk and Rumshisky, Anna
jul
2022
Seattle, WA
Association for Computational Linguistics
https://aclanthology.org/2022.clinicalnlp-1.6/
Sch{\"a}fer, Henning and Idrissi-Yaghir, Ahmad and Horn, Peter and Friedrich, Christoph
Proceedings of the 4th Clinical Natural Language Processing Workshop
53--62
In this work, cross-linguistic span prediction based on contextualized word embedding models is used together with neural machine translation (NMT) to transfer and apply the state-of-the-art models in natural language processing (NLP) to a low-resource language clinical corpus. Two directions are evaluated: (a) English models can be applied to translated texts to subsequently transfer the predicted annotations to the source language and (b) existing high-quality annotations can be transferred beyond translation and then used to train NLP models in the target language. Effectiveness and loss of transmission is evaluated using the German Berlin-T{\"u}bingen-Oncology Corpus (BRONCO) dataset with transferred external data from NCBI disease, SemEval-2013 drug-drug interaction (DDI) and i2b2/VA 2010 data. The use of English models for translated clinical texts has always involved attempts to take full advantage of the benefits associated with them (large pre-trained biomedical word embeddings). To improve advances in this area, we provide a general-purpose pipeline to transfer any annotated BRAT or CoNLL format to various target languages. For the entity class medication, good results were obtained with 0.806 $F1$-score after re-alignment. Limited success occurred in the diagnosis and treatment class with results just below 0.5 $F1$-score due to differences in annotation guidelines.
null
null
10.18653/v1/2022.clinicalnlp-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,181
inproceedings
van-aken-etal-2022-see
What Do You See in this Patient? Behavioral Testing of Clinical {NLP} Models
Naumann, Tristan and Bethard, Steven and Roberts, Kirk and Rumshisky, Anna
jul
2022
Seattle, WA
Association for Computational Linguistics
https://aclanthology.org/2022.clinicalnlp-1.7/
Van Aken, Betty and Herrmann, Sebastian and L{\"o}ser, Alexander
Proceedings of the 4th Clinical Natural Language Processing Workshop
63--73
Decision support systems based on clinical notes have the potential to improve patient care by pointing doctors towards overlooked risks. Predicting a patient's outcome is an essential part of such systems, for which the use of deep neural networks has shown promising results. However, the patterns learned by these networks are mostly opaque and previous work revealed both reproduction of systemic biases and unexpected behavior for out-of-distribution patients. For application in clinical practice it is crucial to be aware of such behavior. We thus introduce a testing framework that evaluates clinical models regarding certain changes in the input. The framework helps to understand learned patterns and their influence on model decisions. In this work, we apply it to analyse the change in behavior with regard to the patient characteristics gender, age and ethnicity. Our evaluation of three current clinical NLP models demonstrates the concrete effects of these characteristics on the models' decisions. They show that model behavior varies drastically even when fine-tuned on the same data with similar AUROC score. These results exemplify the need for a broader communication of model behavior in the clinical domain.
null
null
10.18653/v1/2022.clinicalnlp-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,182
inproceedings
lehman-etal-2022-learning
Learning to Ask Like a Physician
Naumann, Tristan and Bethard, Steven and Roberts, Kirk and Rumshisky, Anna
jul
2022
Seattle, WA
Association for Computational Linguistics
https://aclanthology.org/2022.clinicalnlp-1.8/
Lehman, Eric and Lialin, Vladislav and Legaspi, Katelyn Edelwina and Sy, Anne Janelle and Pile, Patricia Therese and Alberto, Nicole Rose and Ragasa, Richard Raymund and Puyat, Corinna Victoria and Tali{\~n}o, Marianne Katharina and Alberto, Isabelle Rose and Alfonso, Pia Gabrielle and Moukheiber, Dana and Wallace, Byron and Rumshisky, Anna and Liang, Jennifer and Raghavan, Preethi and Celi, Leo Anthony and Szolovits, Peter
Proceedings of the 4th Clinical Natural Language Processing Workshop
74--86
Existing question answering (QA) datasets derived from electronic health records (EHR) are artificially generated and consequently fail to capture realistic physician information needs. We present Discharge Summary Clinical Questions (DiSCQ), a newly curated question dataset composed of 2,000+ questions paired with the snippets of text (triggers) that prompted each question. The questions are generated by medical experts from 100+ MIMIC-III discharge summaries. We analyze this dataset to characterize the types of information sought by medical experts. We also train baseline models for trigger detection and question generation (QG), paired with unsupervised answer retrieval over EHRs. Our baseline model is able to generate high quality questions in over 62{\%} of cases when prompted with human selected triggers. We release this dataset (and all code to reproduce baseline model results) to facilitate further research into realistic clinical QA and QG: \url{https://github.com/elehman16/discq}.
null
null
10.18653/v1/2022.clinicalnlp-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,183
inproceedings
rojas-etal-2022-clinical
Clinical Flair: A Pre-Trained Language Model for {S}panish Clinical Natural Language Processing
Naumann, Tristan and Bethard, Steven and Roberts, Kirk and Rumshisky, Anna
jul
2022
Seattle, WA
Association for Computational Linguistics
https://aclanthology.org/2022.clinicalnlp-1.9/
Rojas, Mat{\'i}as and Dunstan, Jocelyn and Villena, Fabi{\'a}n
Proceedings of the 4th Clinical Natural Language Processing Workshop
87--92
Word embeddings have been widely used in Natural Language Processing (NLP) tasks. Although these representations can capture the semantic information of words, they cannot learn the sequence-level semantics. This problem can be handled using contextual word embeddings derived from pre-trained language models, which have contributed to significant improvements in several NLP tasks. Further improvements are achieved when pre-training these models on domain-specific corpora. In this paper, we introduce Clinical Flair, a domain-specific language model trained on Spanish clinical narratives. To validate the quality of the contextual representations retrieved from our model, we tested them on four named entity recognition datasets belonging to the clinical and biomedical domains. Our experiments confirm that incorporating domain-specific embeddings into classical sequence labeling architectures improves model performance dramatically compared to general-domain embeddings, demonstrating the importance of having these resources available.
null
null
10.18653/v1/2022.clinicalnlp-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,184
inproceedings
shim-etal-2022-exploratory
An exploratory data analysis: the performance differences of a medical code prediction system on different demographic groups
Naumann, Tristan and Bethard, Steven and Roberts, Kirk and Rumshisky, Anna
jul
2022
Seattle, WA
Association for Computational Linguistics
https://aclanthology.org/2022.clinicalnlp-1.10/
Shim, Heereen and Lowet, Dietwig and Luca, Stijn and Vanrumste, Bart
Proceedings of the 4th Clinical Natural Language Processing Workshop
93--102
Recent studies show that neural natural language processing models for medical code prediction suffer from a label imbalance issue. This study aims to investigate further imbalance in a medical code prediction dataset in terms of demographic variables and analyse performance differences in demographic groups. We use sample-based metrics to correctly evaluate the performance in terms of the data subject. Also, a simple label distance metric is proposed to quantify the difference in the label distribution between a group and the entire data. Our analysis results reveal that the model performs differently towards different demographic groups: significant differences between age groups and between insurance types are observed. Interestingly, we found a weak positive correlation between the number of training data of the group and the performance of the group. However, a strong negative correlation between the label distance of the group and the performance of the group is observed. This result suggests that the model tends to perform poorly in the group whose label distribution is different from the global label distribution of the training data set. Further analysis of the model performance is required to identify the cause of these differences and to improve the model building.
null
null
10.18653/v1/2022.clinicalnlp-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,185
inproceedings
wang-etal-2022-ensemble
Ensemble-based Fine-Tuning Strategy for Temporal Relation Extraction from the Clinical Narrative
Naumann, Tristan and Bethard, Steven and Roberts, Kirk and Rumshisky, Anna
jul
2022
Seattle, WA
Association for Computational Linguistics
https://aclanthology.org/2022.clinicalnlp-1.11/
Wang, Lijing and Miller, Timothy and Bethard, Steven and Savova, Guergana
Proceedings of the 4th Clinical Natural Language Processing Workshop
103--108
In this paper, we investigate ensemble methods for fine-tuning transformer-based pretrained models for clinical natural language processing tasks, specifically temporal relation extraction from the clinical narrative. Our experimental results on the THYME data show that ensembling as a fine-tuning strategy can further boost model performance over single learners optimized for hyperparameters. Dynamic snapshot ensembling is particularly beneficial as it fine-tunes a wide array of parameters and results in a 2.8{\%} absolute improvement in F1 over the base single learner.
null
null
10.18653/v1/2022.clinicalnlp-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,186
inproceedings
dligach-etal-2022-exploring
Exploring Text Representations for Generative Temporal Relation Extraction
Naumann, Tristan and Bethard, Steven and Roberts, Kirk and Rumshisky, Anna
jul
2022
Seattle, WA
Association for Computational Linguistics
https://aclanthology.org/2022.clinicalnlp-1.12/
Dligach, Dmitriy and Bethard, Steven and Miller, Timothy and Savova, Guergana
Proceedings of the 4th Clinical Natural Language Processing Workshop
109--113
Sequence-to-sequence models are appealing because they allow both encoder and decoder to be shared across many tasks by formulating those tasks as text-to-text problems. Despite recently reported successes of such models, we find that engineering input/output representations for such text-to-text models is challenging. On the Clinical TempEval 2016 relation extraction task, the most natural choice of output representations, where relations are spelled out in simple predicate logic statements, did not lead to good performance. We explore a variety of input/output representations, with the most successful prompting one event at a time, and achieving results competitive with standard pairwise temporal relation extraction systems.
null
null
10.18653/v1/2022.clinicalnlp-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,187
inproceedings
tanev-2022-ontopopulis
{O}nto{P}opulis, a System for Learning Semantic Classes
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.1/
Tanev, Hristo
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
8--12
Ontopopulis is a multilingual weakly supervised terminology learning algorithm which takes on its input a set of seed terms for a semantic category and an unannotated text corpus. The algorithm learns additional terms, which belong to this category. For example, for the category {\textquotedblleft}environmental disasters{\textquotedblright} the input seed set in English is environmental disaster, water pollution, climate change. Among the highest ranked new terms which the system learns for this semantic class are deforestation, global warming and so on.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,189
inproceedings
callegari-xhura-2022-corpus
A corpus for Automatic Article Analysis
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.2/
Callegari, Elena and Xhura, Desara
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
13--21
We describe the structure and creation of the SageWrite corpus. This is a manually annotated corpus created to support automatic language generation and automatic quality assessment of academic articles. The corpus currently contains annotations for 100 excerpts taken from various scientific articles. For each of these excerpts, the corpus contains (i) a draft version of the excerpt (ii) annotations that reflect the stylistic and linguistic merits of the excerpt, such as whether or not the text is clearly structured. The SageWrite corpus is the first corpus for the fine-tuning of text-generation algorithms that specifically addresses academic writing.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,190
inproceedings
atnashev-etal-2022-razmecheno
Razmecheno: Named Entity Recognition from Digital Archive of Diaries {\textquotedblleft}Prozhito{\textquotedblright}
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.3/
Atnashev, Timofey and Ganeeva, Veronika and Kazakov, Roman and Matyash, Daria and Sonkin, Michael and Voloshina, Ekaterina and Serikov, Oleg and Artemova, Ekaterina
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
22--38
The vast majority of existing datasets for Named Entity Recognition (NER) are built primarily on news, research papers and Wikipedia with a few exceptions, created from historical and literary texts. What is more, English is the main source for data for further labelling. This paper aims to fill in multiple gaps by creating a novel dataset {\textquotedblleft}Razmecheno{\textquotedblright}, gathered from the diary texts of the project {\textquotedblleft}Prozhito{\textquotedblright} in Russian. Our dataset is of interest for multiple research lines: literary studies of diary texts, transfer learning from other domains, low-resource or cross-lingual named entity recognition. Razmecheno comprises 1331 sentences and 14119 tokens, sampled from diaries, written during the Perestroika. The annotation schema consists of five commonly used entity tags: person, characteristics, location, organisation, and facility. The labelling is carried out on the crowdsourcing platform Yandex.Toloka in two stages. First, workers selected sentences, which contain an entity of particular type. Second, they marked up entity spans. As a result 1113 entities were obtained. Empirical evaluation of Razmecheno is carried out with off-the-shelf NER tools and by fine-tuning pre-trained contextualized encoders. We release the annotated dataset for open access.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,191
inproceedings
nikolova-stoupak-etal-2022-filtering
Filtering of Noisy Web-Crawled Parallel Corpus: the {J}apanese-{B}ulgarian Language Pair
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.4/
Nikolova-Stoupak, Iglika and Shimizu, Shuichiro and Chu, Chenhui and Kurohashi, Sadao
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
39--48
One of the main challenges within the rapidly developing field of neural machine translation is its application to low-resource languages. Recent attempts to provide large parallel corpora in rare language pairs include the generation of web-crawled corpora, which may be vast but are, unfortunately, excessively noisy. The corpus utilised to train machine translation models in the study is CCMatrix, provided by OPUS. Firstly, the corpus is cleaned based on a number of heuristic rules. Then, parts of it are selected in three discrete ways: at random, based on the {\textquotedblleft}margin distance{\textquotedblright} metric that is native to the CCMatrix dataset, and based on scores derived through the application of a state-of-the-art classifier model (Acarcicek et al., 2020) utilised in a thematic WMT shared task. The performance of the resulting models is evaluated and compared. The classifier-based model does not reach high performance as compared with its margin-based counterpart, opening a discussion of ways for further improvement. Still, BLEU scores surpass those of Acarcicek et al.'s (2020) paper by over 15 points.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,192
inproceedings
ralev-pfeffer-2022-hate
Hate Speech Classification in {B}ulgarian
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.5/
Ralev, Radoslav and Pfeffer, J{\"u}rgen
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
49--58
In recent years, we have seen a surge in the propagation of online hate speech on social media platforms. According to a multitude of sources such as the European Council, hate speech can lead to acts of violence and conflict on a broader scale. That has led to increased awareness by governments, companies, and the scientific community, and although the field is relatively new, there have been considerable advancements in the field as a result of the collective effort. Despite the increasingly better results, most of the research focuses on the more popular languages (i.e., English, German, or Arabic), whereas less popular languages such as Bulgarian and other Balkan languages have been neglected. We have aggregated a real-world dataset from Bulgarian online forums and manually annotated 108,142 sentences. About 1.74{\%} of which can be described with the categories racism, sexism, rudeness, and profanity. We then developed and evaluated various classifiers on the dataset and found that a support vector machine with a linear kernel trained on character-level TF-IDF features is the best model. Our work can be seen as another piece in the puzzle to building a strong foundation for future work on hate speech classification in Bulgarian.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,193
inproceedings
lozanova-stoyanova-2022-wordnet
{W}ord{N}et-Based {B}ulgarian {S}ign {L}anguage Dictionary of Crisis Management Terminology
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.6/
Lozanova, Slavina and Stoyanova, Ivelina
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
59--67
This paper presents an online Bulgarian sign language dictionary covering terminology related to crisis management. The pressing need for such a resource became evident during the COVID pandemic when critical information regarding government measures was delivered on a regular basis to the public including Deaf citizens. The dictionary is freely available on the internet and is aimed at the Deaf, sign language interpreters, learners of sign language, social workers and the wide public. Each dictionary entry is supplied with synonyms in spoken Bulgarian, a definition, one or more signs corresponding to the concept in Bulgarian sign language, additional information about derivationally related words and similar signs with different meaning, as well as links to translations in other languages, including American sign language.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,194
inproceedings
osenova-2022-raising
Raising and Control Constructions in a {B}ulgarian {UD} Parsebank of Parliament Sessions
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.7/
Osenova, Petya
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
68--74
The paper discusses the raising and control syntactic structures (marked as {\textquoteleft}xcomp{\textquoteright}) in a UD parsed corpus of Bulgarian Parliamentary Sessions. The idea is: to investigate the linguistic status of this phenomenon in an automatically parsed corpus, with a focus on verbal constructions of a head and its dependant together with the shared subject; to detect the errors and get insights on how to improve the annotation scheme and the automatic detection of this phenomenon realizations in Bulgarian.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,195
inproceedings
tisheva-dzhonova-2022-syntactic
Syntactic characteristics of emotive predicates in {B}ulgarian: A corpus-based study
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.8/
Tisheva, Yovka and Dzhonova, Marina
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
75--80
The paper presents a corpus-based study of emotive predicates (verbs and predicative constructions with adjectival, adverbial or noun phrases) in Bulgarian with respect to their syntactic characteristics. The sources of empirical data analyzed here are Bulgarian National Corpus, Corpus of Bulgarian Political and Journalistic Speech and Bulgarian part of Multilingual Comparable Corpora of Parliamentary Debates ParlaMint. The analyses are organized in terms of morpho-syntactic features of emotive predicates, transitivity, syntactic functions and theta-roles of their arguments. Emotive predicates denote a state or an event involving an affective experience. As part of the special semantic class of psychological/Experiencer verbs, they have been studied in relation to the interaction between lexical semantics and argument realization. Bulgarian data confirm the well-established division of Psych predicates into three classes: Subject Experiencer (fear type verbs), Object Experiencer (frighten type verbs), Dative Experiencer. The third class is mostly represented by adverbial predicates.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,196
inproceedings
tarpomanova-aleksova-2022-evidential
Evidential strategies and grammatical marking in clauses governed by verba dicendi in {B}ulgarian
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.9/
Tarpomanova, Ekaterina and Aleksova, Krasimira
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
81--88
The study explores the interaction between the participants in the communication process with respect to their knowledge about the situation presented in the utterance when transforming direct into indirect speech using a verbum dicendi. The speaker has a choice between firsthand (indicative tenses) which by definition denotes a witnessed situation and non-firsthand which presents the situation as non-witnessed. The interplay between the grammatical marking and the speaker's evidential strategy is analyzed by applying a corpus method. The data of the Bulgarian National Corpus are used to detect the preferences for a given strategy considering also the grammatical person which indicates the level of knowledge of the communicants about the situation: the 1st person shows the strong knowledge of the speaker, the 2nd person is related to the strong knowledge of the listener, and the 3rd person is associated with a weak knowledge of both participants. Illustrative examples representative for a given situation are extracted from the corpus and subjected to a context analysis.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,197
inproceedings
morita-2022-corpus
Corpus-Based Research into Verb-Forming Suffixes in {E}nglish: Its Empirical and Theoretical Consequences
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.10/
Morita, Junya
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
89--97
The present study explores the semantic and structural aspects of word formation processes in English, focusing on how verbs are derived by the suffixes -ize, -ify, -en, and -ate. Based on relevant derivatives extracted from the British National Corpus, their detailed observation is made from semantic and formal viewpoints. Then their theoretical analysis is carried out in the framework of generative theory. The BNC survey demonstrates that (i) the meanings of derived verbs are largely divided into five types and the submeanings are closely related to each other, (ii) the well-formedness of derived verbs is primarily determined by the semantic and formal features of their bases, and (iii) -ize suffixation is creative enough to provide a constant supply for new labels. To account for these empirical observations, the mechanism for forming -ize derivatives is proposed in which the semantic properties and creativity of -ize derivation stem solely from the underlying structure and the formal properties of the bases derive from the lexical entry of -ize.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,198
inproceedings
derzhanski-siruk-2022-notes
Some Notes on p(e)re-Reduplication in {B}ulgarian and {U}krainian: A Corpus-based Study
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.11/
Derzhanski, Ivan and Siruk, Olena
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
98--104
We present a comparative study of p(e)re-reduplication in Bulgarian and Ukrainian, based on material from a parallel corpus of bilingual texts. We analyse all occurrences found in the corpus of close sequences and conjunctions of two cognate words, the second of which features the intensive and recursive prefix pre- (Bulgarian) or pere- (Ukrainian). We find that in Bulgarian this construction occurs more frequently with finite verb forms, and in Ukrainian with participles and nouns. There is also a correlation with the mode of action denoted by the prefix: in its intensive meaning it turns up more often in Bulgarian, in its recursive meaning in the two languages equally, and in Ukrainian there are more occasions where it cannot be identified as either intensive or recursive. Finally, in both languages instances of p(e)re-reduplication are most common, by a wide margin, in texts with Ukrainian originals.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,199
inproceedings
ion-etal-2022-open
An Open-Domain {QA} System for e-Governance
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.12/
Ion, Radu and Avram, Andrei-Marius and P{\u{a}}is, Vasile and Mitrofan, Maria and Mititelu, Verginica Barbu and Irimia, Elena and Badea, Valentin
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
105--112
The paper presents an open-domain Question Answering system for Romanian, answering COVID-19 related questions. The QA system pipeline involves automatic question processing, automatic query generation, web searching for the top 10 most relevant documents and answer extraction using a fine-tuned BERT model for Extractive QA, trained on a COVID-19 data set that we have manually created. The paper will present the QA system and its integration with the Romanian language technologies portal RELATE, the COVID-19 data set and different evaluations of the QA performance.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,200
inproceedings
liakhovets-schlarb-2022-zero
Zero-shot Event Causality Identification with Question Answering
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.13/
Liakhovets, Daria and Schlarb, Sven
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
113--119
Extraction of event causality and especially implicit causality from text data is a challenging task. Causality is often treated as a specific relation type and can be considered as a part of relation extraction or relation classification task. Many causality identification-related tasks are designed to select the most plausible alternative of a set of possible causes and consider multiple-choice classification settings. Since there are powerful Question Answering (QA) systems pretrained on large text corpora, we investigated a zero-shot QA-based approach for event causality extraction using a Wikipedia-based dataset containing event descriptions (articles) and annotated causes. We aimed to evaluate to what extent reading comprehension ability of the QA-pipeline can be used for event-related causality extraction from plain text without any additional training. Some evaluation challenges and limitations of the data were discussed. We compared the performance of a two-step pipeline consisting of passage retrieval and extractive QA with QA-only pipeline on event-associated articles and mixed ones. Our systems achieved average cosine semantic similarity scores of 44 {--} 45{\%} in different settings.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,201
inproceedings
koeva-2022-ontology
Ontology of Visual Objects
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.14/
Koeva, Svetla
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
120--129
The focus of the paper is the Ontology of Visual Objects based on WordNet noun hierarchies. In particular, we present a methodology for bidirectional ontology engineering, which integrates the pre-existing knowledge resources and the selection of visual objects within the images representing particular thematic domains. The Ontology of Visual Objects organizes concepts labeled by corresponding classes (dominant classes, classes that are attributes to dominant classes, and classes that serve only as parents to dominant classes), relations between concepts and axioms defining the properties of the relations. The Ontology contains 851 classes (706 dominant and attribute classes), 15 relations and a number of axioms built upon them. The definition of relations between dominant and attribute classes and formulations of axioms based on the properties of the relations offers a reliable means for automatic object or image classification and description.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,202
inproceedings
kirillovich-etal-2022-sense
Sense-Annotated Corpus for {R}ussian
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.15/
Kirillovich, Alexander and Loukachevitch, Natalia and Kulaev, Maksim and Bolshina, Angelina and Ilvovsky, Dmitry
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
130--136
We present a sense-annotated corpus for Russian. The resource was obtained by manually annotating texts from OpenCorpora, an open corpus for the Russian language, with senses from the Russian wordnet RuWordNet. The annotation was used as a test collection for comparing unsupervised (Personalized PageRank) and pseudo-labeling methods for Russian word sense disambiguation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,203
inproceedings
barbu-mititelu-etal-2022-romanian
A {R}omanian Treebank Annotated with Verbal Multiword Expressions
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.16/
Barbu Mititelu, Verginica and Cristescu, Mihaela and Mitrofan, Maria and Zgreab{\u{a}}n, Bianca-M{\u{a}}d{\u{a}}lina and B{\u{a}}rbulescu, Elena-Andreea
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
137--145
In this paper we present a new version of the Romanian journalistic treebank annotated with verbal multiword expressions of four types: idioms, light verb constructions, reflexive verbs and inherently adpositional verbs, the last type being recently added to the corpus. These types have been defined and characterized in a multilingual setting (the PARSEME guidelines for annotating verbal multiword expressions). We present the annotation methodologies and offer quantitative data about the expressions occurring in the corpus. We discuss the characteristics of these expressions, with special reference to the difficulties they raise for the automatic processing of Romanian text, as well as for human usage. Special attention is paid to the challenges in the annotation of the inherently adpositional verbs. The corpus is freely available in two formats (CUPT and RDF), as well as queryable using a SPARQL endpoint.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,204
inproceedings
petrovski-2022-parallel
A Parallel {E}nglish - {S}erbian - {B}ulgarian - {M}acedonian Lexicon of Named Entities
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.17/
Petrovski, Aleksandar
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
146--151
This paper describes the creation of a parallel multilingual lexicon of named entities from English to three South Slavic languages: Serbian, Bulgarian and Macedonian, with Wikipedia as a source. The basics of the proposed methodology are well known. This methodology provides a cheap opportunity to build multilingual lexicons, without having expertise in target languages. Wikipedia`s database dump can be freely downloaded in SQL and XML formats. The method presented here has been used to build a Python application that extracts the English {--} Serbian {--} Bulgarian {--} Macedonian parallel titles from Wikipedia and classifies them using the English Wikipedia category system. The extracted named entity sets have been classified into five classes: PERSON, ORGANIZATION, LOCATION, PRODUCT, and MISC (miscellaneous). It has been achieved using Wikipedia metadata. The quality of classification has been checked manually on 1,000 randomly chosen named entities. The following are the results obtained: 97{\%} for precision and 90{\%} for recall.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,205
inproceedings
gargova-etal-2022-evaluation
Evaluation of Off-the-Shelf Language Identification Tools on {B}ulgarian Social Media Posts
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.18/
Gargova, Silvia and Temnikova, Irina and Dzhumerov, Ivo and Nikolaeva, Hristiana
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
152--161
Automatic Language Identification (LI) is a widely addressed task, but not all users (for example linguists) have the means or interest to develop their own tool or to train the existing ones with their own data. There are several off-the-shelf LI tools, but for some languages, it is unclear which tool is the best for specific types of text. This article presents a comparison of the performance of several off-the-shelf language identification tools on Bulgarian social media data. The LI tools are tested on a multilingual Twitter dataset (composed of 2966 tweets) and an existing Bulgarian Twitter dataset on the topic of fake content detection (3350 tweets). The article presents the manual annotation procedure of the first dataset, a discussion of the decisions of the two annotators, and the results from testing the 7 off-the-shelf LI tools on both datasets. Our findings show that the tool which is the easiest for users with no programming skills achieves the highest F1-Score on Bulgarian social media data, while other tools have very useful functionalities for Bulgarian social media texts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,206
inproceedings
smaili-etal-2022-language
Language rehabilitation of people with {BROCA} aphasia using deep neural machine translation
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.19/
Smaili, Kamel and Langlois, David and Pribil, Peter
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
162--170
More than 13 million people suffer a stroke each year. Aphasia is a language disorder usually caused by a stroke that damages a specific area of the brain controlling the expression and understanding of language. Aphasia is characterized by a disturbance of the linguistic code affecting encoding and/or decoding of the language. Our project aims to propose a method that helps a person suffering from aphasia to communicate better with those around them. For this, we will propose a machine translation system capable of correcting aphasic errors and helping the patient to communicate more easily. To build such a system, we need a parallel corpus; to our knowledge, this corpus does not exist, especially for French. Therefore, the main challenge and the objective of this task is to build a parallel corpus composed of sentences with aphasic errors and their corresponding corrections. We will show how we create a pseudo-aphasia corpus from real data, and then we will show the feasibility of our project to translate from aphasia data to natural language. The preliminary results show that the deep learning methods we used achieve correct translations corresponding to a BLEU of 38.6.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,207
inproceedings
sorenson-2022-current
Current Shortcomings of Machine Translation in {S}panish and {B}ulgarian Vis-{\`a}-vis {E}nglish
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.20/
Sorenson, Travis
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
171--180
In late 2016, Google Translate (GT), widely considered a machine translation leader, replaced its statistical machine translation (SMT) functions with a neural machine translation (NMT) model for many large languages, including Spanish, with other languages following thereafter. Whereas the capabilities of GT had previously advanced incrementally, this switch to NMT resulted in seemingly exponential improvement. However, half a dozen years later, while recognizing GT`s usefulness, it is also imperative to systematically evaluate ongoing shortcomings, including determining which challenges may reasonably be presumed as superable over time and those which, following a multiyear tracking study, prove unlikely ever to be fully resolved. While the research in question principally explores Spanish-English-Spanish machine translation, this paper examines similar problems with Bulgarian-English-Bulgarian GT renditions. Better understanding both the strengths and weaknesses of current machine translation applications is fundamental to knowing when such non-human natural language processing (NLP) technology is capable of performing all or most of a given task, and when heavy, perhaps even exclusive human intervention is still required.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,208
inproceedings
krstev-vitas-2022-myriad
A Myriad of Ways to Say: {\textquotedblleft}Wear a mask!{\textquotedblright}
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.21/
Krstev, Cvetana and Vitas, Du{\v{s}}ko
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
181--189
This paper presents a small corpus of notices displayed at entrances of various Belgrade public premises asking those who enter to wear a mask. We analyze various aspects of these notices: their physical appearance, script, lexica, syntax and style. Special attention is paid to the various obligatory and optional parts of these notices. Obligatory parts deal with wearing masks, keeping the distance, limiting the number of persons on premises and using disinfection. We developed local grammars for modelling phrases that require wearing masks, which can be used both for recognition and for generation of paraphrases.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,209
inproceedings
kralev-koeva-2022-image
Image Models for large-scale Object Detection and Classification
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.22/
Kralev, Jordan and Koeva, Svetla
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
190--201
Recent developments in computer vision applications that are based on machine learning models allow real-time object detection, segmentation and captioning in image or video streams. The paper presents the development of an extension of the 80 COCO categories into a novel ontology with more than 700 classes covering 130 thematic subdomains related to Sport, Transport, Arts and Security. The development of an image dataset of object segmentation was accelerated by machine learning for automatic generation of objects' boundaries and classes. The Multilingual image dataset contains over 20,000 images and 200,000 annotations. It was used to pre-train 130 models for object detection and classification. We show the established approach for the development of the new models and their integration into an application and evaluation framework.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,210
inproceedings
koeva-doychev-2022-ontology
Ontology Supported Frame Classification
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.23/
Koeva, Svetla and Doychev, Emil
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
203--213
We present BulFrame {--} a web-based system designed for creating, editing, validating and viewing conceptual frames. A unified theoretical model for the formal presentation of Conceptual frames is offered, which predetermines the architecture of the system with which the data is processed. A Conceptual frame defines a unique set of syntagmatic relations between verb synsets representing the frame and noun synsets expressing the frame elements. Thereby, the notion of Conceptual frame combines semantic knowledge presented in WordNet and FrameNet and builds upon it. The main difference with FrameNet semantic frames is the definition of the sets of nouns that can be combined with a given verb. This is achieved by an ontological representation of noun semantic classes. The framework is built and evaluated with Conceptual frames for Bulgarian verbs.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,211
inproceedings
birtic-etal-2022-croatian
{C}roatian repository for the argument/adjunct distinction {--} {SARGADA}
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.25/
Birti{\'c}, Matea and Bra{\v{c}}, Ivana and Runjai{\'c}, Sini{\v{s}}a
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
225--233
The distinction between arguments and adjuncts is a relevant topic in many linguistic theories (Tesni{\`e}re, 1959; Chomsky, 1981; Langacker, 1987; Van Valin, 2001; Herbst, 2014, etc.). Even though theories provide similar definitions of arguments and adjuncts, it is sometimes difficult to draw a clear line between them. In order to determine ambiguous syntactic parts as arguments or adjuncts, various tests have been proposed, but they often give contradictory results and are not fully reliable. Nevertheless, they can be used as an auxiliary tool. The project Syntactic and Semantic Analysis of Arguments and Adjuncts in Croatian {--} SARGADA was launched with the aim of thoroughly investigating the distinction between arguments and adjuncts in Croatian, and of applying the theoretical results in a syntactic repository which would be a valuable resource for improving NLP tools and for researching and teaching Croatian. In this paper, we will present diagnostic tests chosen as a tool to distinguish between arguments and adjuncts in the Croatian language. The repository containing sentences with ambiguous syntactic phrases and our workflow will also be described.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,213
inproceedings
rizov-tinchev-2022-towards
Towards Dynamic {W}ordnet: Time Flow Hydra
null
sep
2022
Sofia, Bulgaria
Department of Computational Linguistics, IBL -- BAS
https://aclanthology.org/2022.clib-1.26/
Rizov, Borislav and Tinchev, Tinko
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
234--238
Hydra is a Wordnet management system in which the synsets from different languages live in a common relational structure (Kripke frame), with a user-friendly GUI for searching, editing and alignment of the objects from the different languages. The data is retrieved by means of a modal logic query language. Despite its many merits, the system stores only the current state of the wordnet data. Wordnet editing and development raises questions about wordnet data, structure and their consistency over time. The new Time Flow Hydra uses a dynamic wordnet model with discrete time embedded, where all the states of all the objects are stored and accessed simultaneously. This provides the ability to track the changes and to detect the desired and undesired results of the data evolution. For example, we can ask which objects had 2 hyponyms 10 days ago, and have 3 five days later.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,214
inproceedings
greco-etal-2022-small
A Small but Informed and Diverse Model: The Case of the Multimodal {G}uess{W}hat!? Guessing Game
Dobnik, Simon and Grove, Julian and Sayeed, Asad
sep
2022
Gothenburg, Sweden
Association for Computational Linguistics
https://aclanthology.org/2022.clasp-1.1/
Greco, Claudio and Testoni, Alberto and Bernardi, Raffaella and Frank, Stella
Proceedings of the 2022 CLASP Conference on (Dis)embodiment
1--10
Pre-trained Vision and Language Transformers achieve high performance on downstream tasks due to their ability to transfer representational knowledge accumulated during pretraining on substantial amounts of data. In this paper, we ask whether it is possible to compete with such models using features based on transferred (pre-trained, frozen) representations combined with a lightweight architecture. We take a multimodal guessing task as our testbed, GuessWhat?!. An ensemble of our lightweight model matches the performance of the finetuned pre-trained transformer (LXMERT). An uncertainty analysis of our ensemble shows that the lightweight transferred representations close the data uncertainty gap with LXMERT, while retaining model diversity leading to ensemble boost. We further demonstrate that LXMERT`s performance gain is due solely to its extra V{\&}L pretraining rather than because of architectural improvements. These results argue for flexible integration of multiple features and lightweight models as a viable alternative to large, cumbersome, pre-trained models.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,216
inproceedings
morger-etal-2022-cross
A Cross-lingual Comparison of Human and Model Relative Word Importance
Dobnik, Simon and Grove, Julian and Sayeed, Asad
sep
2022
Gothenburg, Sweden
Association for Computational Linguistics
https://aclanthology.org/2022.clasp-1.2/
Morger, Felix and Brandl, Stephanie and Beinborn, Lisa and Hollenstein, Nora
Proceedings of the 2022 CLASP Conference on (Dis)embodiment
11--23
Relative word importance is a key metric for natural language processing. In this work, we compare human and model relative word importance to investigate if pretrained neural language models focus on the same words as humans cross-lingually. We perform an extensive study using several importance metrics (gradient-based saliency and attention-based) in monolingual and multilingual models, including eye-tracking corpora from four languages (German, Dutch, English, and Russian). We find that gradient-based saliency, first-layer attention, and attention flow correlate strongly with human eye-tracking data across all four languages. We further analyze the role of word length and word frequency in determining relative importance and find that it strongly correlates with length and frequency, however, the mechanisms behind these non-linear relations remain elusive. We obtain a cross-lingual approximation of the similarity between human and computational language processing and insights into the usability of several importance metrics.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,217
inproceedings
cetoli-2022-dispatcher
Dispatcher: A Message-Passing Approach to Language Modelling
Dobnik, Simon and Grove, Julian and Sayeed, Asad
sep
2022
Gothenburg, Sweden
Association for Computational Linguistics
https://aclanthology.org/2022.clasp-1.3/
Cetoli, Alberto
Proceedings of the 2022 CLASP Conference on (Dis)embodiment
24--29
This paper proposes a message-passing mechanism to address language modelling. A new layer type is introduced that aims to substitute self-attention for unidirectional sequence generation tasks. The system is shown to be competitive with existing methods: Given N tokens, the computational complexity is O(N logN) and the memory complexity is O(N) under reasonable assumptions. In the end, the Dispatcher layer is seen to achieve comparable perplexity to self-attention while being more efficient.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,218
inproceedings
dobnik-etal-2022-search
In Search of Meaning and Its Representations for Computational Linguistics
Dobnik, Simon and Grove, Julian and Sayeed, Asad
sep
2022
Gothenburg, Sweden
Association for Computational Linguistics
https://aclanthology.org/2022.clasp-1.4/
Dobnik, Simon and Cooper, Robin and Ek, Adam and Noble, Bill and Larsson, Staffan and Ilinykh, Nikolai and Maraev, Vladislav and Somashekarappa, Vidya
Proceedings of the 2022 CLASP Conference on (Dis)embodiment
30--44
In this paper we examine different meaning representations that are commonly used in different natural language applications today and discuss their limits, both in terms of the aspects of the natural language meaning they are modelling and in terms of the aspects of the application for which they are used.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,219
inproceedings
hagstrom-etal-2022-use
Can We Use Small Models to Investigate Multimodal Fusion Methods?
Dobnik, Simon and Grove, Julian and Sayeed, Asad
sep
2022
Gothenburg, Sweden
Association for Computational Linguistics
https://aclanthology.org/2022.clasp-1.5/
Hagstr{\"o}m, Lovisa and Norlund, Tobias and Johansson, Richard
Proceedings of the 2022 CLASP Conference on (Dis)embodiment
45--50
Many successful methods for fusing language with information from the visual modality have recently been proposed and the topic of multimodal training is ever evolving. However, it is still largely not known what makes different vision-and-language models successful. Investigations into this are made difficult by the large sizes of the models used, requiring large training datasets and causing long train and compute times. Therefore, we propose the idea of studying multimodal fusion methods in a smaller setting with small models and datasets. In this setting, we can experiment with different approaches for fusing multimodal information with language in a controlled fashion, while allowing for fast experimentation. We illustrate this idea with the math arithmetics sandbox. This is a setting in which we fuse language with information from the math modality and strive to replicate some fusion methods from the vision-and-language domain. We find that some results for fusion methods from the larger domain translate to the math arithmetics sandbox, indicating a promising future avenue for multimodal model prototyping.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,220
inproceedings
law-etal-2022-embodied
Embodied Interaction in Mental Health Consultations: Some Observations on Grounding and Repair
Dobnik, Simon and Grove, Julian and Sayeed, Asad
sep
2022
Gothenburg, Sweden
Association for Computational Linguistics
https://aclanthology.org/2022.clasp-1.6/
Law, Jing Hui and Healey, Patrick and Galindo Esparza, Rosella
Proceedings of the 2022 CLASP Conference on (Dis)embodiment
51--61
Shared physical space is an important resource for face-to-face interaction. People use the position and orientation of their bodies{---}relative to each other and relative to the physical environment{---}to determine who is part of a conversation, to manage conversational roles (e.g. speaker, addressee, side-participant) and to help co-ordinate turn-taking. These embodied uses of shared space also extend to more fine-grained aspects of interaction, such as gestures and body movements, to support topic management, orchestration of turns and grounding. This paper explores the role of embodied resources in (mis)communication in a corpus of mental health consultations. We illustrate some of the specific ways in which clinicians and patients can exploit embodiment and the position of objects in shared space to diagnose and manage moments of misunderstanding.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,221
inproceedings
schlangen-2022-norm
Norm Participation Grounds Language
Dobnik, Simon and Grove, Julian and Sayeed, Asad
sep
2022
Gothenburg, Sweden
Association for Computational Linguistics
https://aclanthology.org/2022.clasp-1.7/
Schlangen, David
Proceedings of the 2022 CLASP Conference on (Dis)embodiment
62--69
The striking recent advances in eliciting seemingly meaningful language behaviour from language-only machine learning models have only made more apparent, through the surfacing of clear limitations, the need to go beyond the language-only mode and to ground these models {\textquotedblleft}in the world{\textquotedblright}. Proposals for doing so vary in the details, but what unites them is that the solution is sought in the addition of non-linguistic data types such as images or video streams, while largely keeping the mode of learning constant. I propose a different, and more wide-ranging conception of how grounding should be understood: What grounds language is its normative nature. There are standards for doing things right, these standards are public and authoritative, while at the same time acceptance of authority can and must be disputed and negotiated, in interactions in which only bearers of normative status can rightfully participate. What grounds language, then, is the determined use that language users make of it, and what it is grounded in is the community of language users. I sketch this idea, and draw some conclusions for work on computational modelling of meaningful language use.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,222
inproceedings
mannan-krishnaswamy-2022-go
Where Am {I} and Where Should {I} Go? Grounding Positional and Directional Labels in a Disoriented Human Balancing Task
Dobnik, Simon and Grove, Julian and Sayeed, Asad
sep
2022
Gothenburg, Sweden
Association for Computational Linguistics
https://aclanthology.org/2022.clasp-1.8/
Mannan, Sheikh and Krishnaswamy, Nikhil
Proceedings of the 2022 CLASP Conference on (Dis)embodiment
70--79
In this paper, we present an approach toward grounding linguistic positional and directional labels directly to human motions in the course of a disoriented balancing task in a multi-axis rotational device. We use deep neural models to predict human subjects' joystick motions as well as the subjects' proficiency in the task, combined with BERT embedding vectors for positional and directional labels extracted from annotations into an embodied direction classifier. We find that combining contextualized BERT embeddings with embeddings describing human motion and proficiency can successfully predict the direction a hypothetical human participant should move to achieve better balance with accuracy that is comparable to a moderately-proficient balancing task subject, and that our combined embodied model may actually make decisions that are objectively better than decisions made by some humans.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,223
inproceedings
cerini-etal-2022-speed
From Speed to Car and Back: An Exploratory Study about Associations between Abstract Nouns and Images
Dobnik, Simon and Grove, Julian and Sayeed, Asad
sep
2022
Gothenburg, Sweden
Association for Computational Linguistics
https://aclanthology.org/2022.clasp-1.9/
Cerini, Ludovica and Di Palma, Eliana and Lenci, Alessandro
Proceedings of the 2022 CLASP Conference on (Dis)embodiment
80--88
Abstract concepts, notwithstanding their lack of physical referents in real world, are grounded in sensorimotor experience. In fact, images depicting concrete entities may be associated to abstract concepts, both via direct and indirect grounding processes. However, what are the links connecting the concrete concepts represented by images and abstract ones is still unclear. To investigate these links, we conducted a preliminary study collecting word association data and image-abstract word pair ratings, to identify whether the associations between visual and verbal systems rely on the same conceptual mappings. The goal of this research is to understand to what extent linguistic associations could be confirmed with visual stimuli, in order to have a starting point for multimodal analysis of abstract and concrete concepts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,224
article
sahin-2022-augment
To Augment or Not to Augment? A Comparative Study on Text Augmentation Techniques for Low-Resource {NLP}
null
mar
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.cl-1.2/
{\c{S}}ahin, G{\"o}zde G{\"u}l
null
5--42
Data-hungry deep neural networks have established themselves as the de facto standard for many NLP tasks, including the traditional sequence tagging ones. Despite their state-of-the-art performance on high-resource languages, they still fall behind their statistical counterparts in low-resource scenarios. One methodology to counterattack this problem is text augmentation, that is, generating new synthetic training data points from existing data. Although NLP has recently witnessed several new textual augmentation techniques, the field still lacks a systematic performance analysis on a diverse set of languages and sequence tagging tasks. To fill this gap, we investigate three categories of text augmentation methodologies that perform changes on the syntax (e.g., cropping sub-sentences), token (e.g., random word insertion), and character (e.g., character swapping) levels. We systematically compare the methods on part-of-speech tagging, dependency parsing, and semantic role labeling for a diverse set of language families using various models, including the architectures that rely on pretrained multilingual contextualized language models such as mBERT. Augmentation most significantly improves dependency parsing, followed by part-of-speech tagging and semantic role labeling. We find the experimented techniques to be effective on morphologically rich languages in general rather than analytic languages such as Vietnamese. Our results suggest that the augmentation techniques can further improve over strong baselines based on mBERT, especially for dependency parsing. We identify the character-level methods as the most consistent performers, while synonym replacement and syntactic augmenters provide inconsistent improvements. Finally, we discuss that the results most heavily depend on the task, language pair (e.g., syntactic-level techniques mostly benefit higher-level tasks and morphologically richer languages), and model type (e.g., token-level augmentation provides significant improvements for BPE, while character-level ones give generally higher scores for char and mBERT based models).
Computational Linguistics
48
10.1162/coli_a_00425
null
null
null
null
null
null
1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,226
article
bjorklund-etal-2022-improved
Improved N-Best Extraction with an Evaluation on Language Data
null
mar
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.cl-1.4/
Bj{\"o}rklund, Johanna and Drewes, Frank and Jonsson, Anna
null
119--153
We show that a previously proposed algorithm for the N-best trees problem can be made more efficient by changing how it arranges and explores the search space. Given an integer N and a weighted tree automaton (wta) M over the tropical semiring, the algorithm computes N trees of minimal weight with respect to M. Compared with the original algorithm, the modifications increase the laziness of the evaluation strategy, which makes the new algorithm asymptotically more efficient than its predecessor. The algorithm is implemented in the software Betty, and compared to the state-of-the-art algorithm for extracting the N best runs, implemented in the software toolkit Tiburon. The data sets used in the experiments are wtas resulting from real-world natural language processing tasks, as well as artificially created wtas with varying degrees of nondeterminism. We find that Betty outperforms Tiburon on all tested data sets with respect to running time, while Tiburon seems to be the more memory-efficient choice.
Computational Linguistics
48
10.1162/coli_a_00427
null
null
null
null
null
null
1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,228
article
vincze-etal-2022-linguistic
Linguistic Parameters of Spontaneous Speech for Identifying Mild Cognitive Impairment and {A}lzheimer Disease
null
mar
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.cl-1.5/
Vincze, Veronika and Szab{\'o}, Martina Katalin and Hoffmann, Ildik{\'o} and T{\'o}th, L{\'a}szl{\'o} and P{\'a}k{\'a}ski, Magdolna and K{\'a}lm{\'a}n, J{\'a}nos and Gosztolya, G{\'a}bor
null
119--153
In this article, we seek to automatically identify Hungarian patients suffering from mild cognitive impairment (MCI) or mild Alzheimer disease (mAD) based on their speech transcripts, focusing only on linguistic features. In addition to the features examined in our earlier study, we introduce syntactic, semantic, and pragmatic features of spontaneous speech that might affect the detection of dementia. In order to ascertain the most useful features for distinguishing healthy controls, MCI patients, and mAD patients, we carry out a statistical analysis of the data and investigate the significance level of the extracted features among various speaker group pairs and for various speaking tasks. In the second part of the article, we use this rich feature set as a basis for an effective discrimination among the three speaker groups. In our machine learning experiments, we analyze the efficacy of each feature group separately. Our model that uses all the features achieves competitive scores, either with or without demographic information (3-class accuracy values: 68{\%}{--}70{\%}, 2-class accuracy values: 77.3{\%}{--}80{\%}). We also analyze how different data recording scenarios affect linguistic features and how they can be productively used when distinguishing MCI patients from healthy controls.
Computational Linguistics
48
10.1162/coli_a_00428
null
null
null
null
null
null
1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,229
article
jin-etal-2022-deep
Deep Learning for Text Style Transfer: A Survey
null
mar
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.cl-1.6/
Jin, Di and Jin, Zhijing and Hu, Zhiting and Vechtomova, Olga and Mihalcea, Rada
null
155--205
Text style transfer is an important task in natural language generation, which aims to control certain attributes in the generated text, such as politeness, emotion, humor, and many others. It has a long history in the field of natural language processing, and recently has re-gained significant attention thanks to the promising performance brought by deep neural models. In this article, we present a systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017. We discuss the task formulation, existing datasets and subtasks, evaluation, as well as the rich methodologies in the presence of parallel and non-parallel data. We also provide discussions on a variety of important topics regarding the future development of this task.1
Computational Linguistics
48
10.1162/coli_a_00426
null
null
null
null
null
null
1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,230
article
belinkov-2022-probing
Probing Classifiers: Promises, Shortcomings, and Advances
null
mar
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.cl-1.7/
Belinkov, Yonatan
null
207--219
Probing classifiers have emerged as one of the prominent methodologies for interpreting and analyzing deep neural network models of natural language processing. The basic idea is simple{---}a classifier is trained to predict some linguistic property from a model`s representations{---}and has been used to examine a wide variety of models and properties. However, recent studies have demonstrated various methodological limitations of this approach. This squib critically reviews the probing classifiers framework, highlighting their promises, shortcomings, and advances.
Computational Linguistics
48
10.1162/coli_a_00422
null
null
null
null
null
null
1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,231
article
faruqui-hakkani-tur-2022-revisiting
Revisiting the Boundary between {ASR} and {NLU} in the Age of Conversational Dialog Systems
null
mar
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.cl-1.8/
Faruqui, Manaal and Hakkani-T{\"u}r, Dilek
null
221--232
As more users across the world are interacting with dialog agents in their daily life, there is a need for better speech understanding that calls for renewed attention to the dynamics between research in automatic speech recognition (ASR) and natural language understanding (NLU). We briefly review these research areas and lay out the current relationship between them. In light of the observations we make in this article, we argue that (1) NLU should be cognizant of the presence of ASR models being used upstream in a dialog system`s pipeline, (2) ASR should be able to learn from errors found in NLU, (3) there is a need for end-to-end data sets that provide semantic annotations on spoken input, (4) there should be stronger collaboration between ASR and NLU research communities.
Computational Linguistics
48
10.1162/coli_a_00430
null
null
null
null
null
null
1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,232
article
mohammad-2022-ethics-sheet
Ethics Sheet for Automatic Emotion Recognition and Sentiment Analysis
null
jun
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.cl-2.1/
Mohammad, Saif M.
null
239--278
The importance and pervasiveness of emotions in our lives makes affective computing a tremendously important and vibrant line of work. Systems for automatic emotion recognition (AER) and sentiment analysis can be facilitators of enormous progress (e.g., in improving public health and commerce) but also enablers of great harm (e.g., for suppressing dissidents and manipulating voters). Thus, it is imperative that the affective computing community actively engage with the ethical ramifications of their creations. In this article, I have synthesized and organized information from AI Ethics and Emotion Recognition literature to present fifty ethical considerations relevant to AER. Notably, this ethics sheet fleshes out assumptions hidden in how AER is commonly framed, and in the choices often made regarding the data, method, and evaluation. Special attention is paid to the implications of AER on privacy and social groups. Along the way, key recommendations are made for responsible AER. The objective of the ethics sheet is to facilitate and encourage more thoughtfulness on why to automate, how to automate, and how to judge success well before the building of AER systems. Additionally, the ethics sheet acts as a useful introductory document on emotion recognition (complementing survey articles).
Computational Linguistics
48
10.1162/coli_a_00433
null
null
null
null
null
null
2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,235
article
laskar-etal-2022-domain
Domain Adaptation with Pre-trained Transformers for Query-Focused Abstractive Text Summarization
null
jun
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.cl-2.2/
Laskar, Md Tahmid Rahman and Hoque, Enamul and Huang, Jimmy Xiangji
null
279--320
The Query-Focused Text Summarization (QFTS) task aims at building systems that generate the summary of the text document(s) based on the given query. A key challenge in addressing this task is the lack of large labeled data for training the summarization model. In this article, we address this challenge by exploring a series of domain adaptation techniques. Given the recent success of pre-trained transformer models in a wide range of natural language processing tasks, we utilize such models to generate abstractive summaries for the QFTS task for both single-document and multi-document scenarios. For domain adaptation, we apply a variety of techniques using pre-trained transformer-based summarization models including transfer learning, weakly supervised learning, and distant supervision. Extensive experiments on six datasets show that our proposed approach is very effective in generating abstractive summaries for the QFTS task while setting a new state-of-the-art result in several datasets across a set of automatic and human evaluation metrics.
Computational Linguistics
48
10.1162/coli_a_00434
null
null
null
null
null
null
2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,236
article
wan-etal-2022-challenges
Challenges of Neural Machine Translation for Short Texts
null
jun
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.cl-2.3/
Wan, Yu and Yang, Baosong and Wong, Derek Fai and Chao, Lidia Sam and Yao, Liang and Zhang, Haibo and Chen, Boxing
null
321--342
Short texts (STs) present in a variety of scenarios, including query, dialog, and entity names. Most of the exciting studies in neural machine translation (NMT) are focused on tackling open problems concerning long sentences rather than short ones. The intuition behind is that, with respect to human learning and processing, short sequences are generally regarded as easy examples. In this article, we first dispel this speculation via conducting preliminary experiments, showing that the conventional state-of-the-art NMT approach, namely, Transformer (Vaswani et al. 2017), still suffers from over-translation and mistranslation errors over STs. After empirically investigating the rationale behind this, we summarize two challenges in NMT for STs associated with translation error types above, respectively: (1) the imbalanced length distribution in training set intensifies model inference calibration over STs, leading to more over-translation cases on STs; and (2) the lack of contextual information forces NMT to have higher data uncertainty on short sentences, and thus NMT model is troubled by considerable mistranslation errors. Some existing approaches, like balancing data distribution for training (e.g., data upsampling) and complementing contextual information (e.g., introducing translation memory) can alleviate the translation issues in NMT for STs. We encourage researchers to investigate other challenges in NMT for STs, thus reducing ST translation errors and enhancing translation quality.
Computational Linguistics
48
10.1162/coli_a_00435
null
null
null
null
null
null
2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,237
article
lee-etal-2022-annotation
Annotation Curricula to Implicitly Train Non-Expert Annotators
null
jun
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.cl-2.4/
Lee, Ji-Ung and Klie, Jan-Christoph and Gurevych, Iryna
null
343--373
Annotation studies often require annotators to familiarize themselves with the task, its annotation scheme, and the data domain. This can be overwhelming in the beginning, mentally taxing, and induce errors into the resulting annotations; especially in citizen science or crowdsourcing scenarios where domain expertise is not required. To alleviate these issues, this work proposes annotation curricula, a novel approach to implicitly train annotators. The goal is to gradually introduce annotators into the task by ordering instances to be annotated according to a learning curriculum. To do so, this work formalizes annotation curricula for sentence- and paragraph-level annotation tasks, defines an ordering strategy, and identifies well-performing heuristics and interactively trained models on three existing English datasets. Finally, we provide a proof of concept for annotation curricula in a carefully designed user study with 40 voluntary participants who are asked to identify the most fitting misconception for English tweets about the Covid-19 pandemic. The results indicate that using a simple heuristic to order instances can already significantly reduce the total annotation time while preserving a high annotation quality. Annotation curricula thus can be a promising research direction to improve data collection. To facilitate future research{---}for instance, to adapt annotation curricula to specific tasks and expert annotation scenarios{---}all code and data from the user study consisting of 2,400 annotations is made available.1
Computational Linguistics
48
10.1162/coli_a_00436
null
null
null
null
null
null
2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,238
article
yadav-etal-2022-assessing
Assessing Corpus Evidence for Formal and Psycholinguistic Constraints on Nonprojectivity
null
jun
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.cl-2.5/
Yadav, Himanshu and Husain, Samar and Futrell, Richard
null
375--401
Formal constraints on crossing dependencies have played a large role in research on the formal complexity of natural language grammars and parsing. Here we ask whether the apparent evidence for constraints on crossing dependencies in treebanks might arise because of independent constraints on trees, such as low arity and dependency length minimization. We address this question using two sets of experiments. In Experiment 1, we compare the distribution of formal properties of crossing dependencies, such as gap degree, between real trees and baseline trees matched for rate of crossing dependencies and various other properties. In Experiment 2, we model whether two dependencies cross, given certain psycholinguistic properties of the dependencies. We find surprisingly weak evidence for constraints originating from the mild context-sensitivity literature (gap degree and well-nestedness) beyond what can be explained by constraints on rate of crossing dependencies, topological properties of the trees, and dependency length. However, measures that have emerged from the parsing literature (e.g., edge degree, end-point crossings, and heads' depth difference) differ strongly between real and random trees. Modeling results show that cognitive metrics relating to information locality and working-memory limitations affect whether two dependencies cross or not, but they do not fully explain the distribution of crossing dependencies in natural languages. Together these results suggest that crossing constraints are better characterized by processing pressures than by mildly context-sensitive constraints.
Computational Linguistics
48
10.1162/coli_a_00437
null
null
null
null
null
null
2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,239
article
zhang-ma-2022-dual
Dual Attention Model for Citation Recommendation with Analyses on Explainability of Attention Mechanisms and Qualitative Experiments
null
jun
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.cl-2.6/
Zhang, Yang and Ma, Qiang
null
403--470
Based on an exponentially increasing number of academic articles, discovering and citing comprehensive and appropriate resources have become non-trivial tasks. Conventional citation recommendation methods suffer from severe information losses. For example, they do not consider the section header of the paper that the author is writing and for which they need to find a citation, the relatedness between the words in the local context (the text span that describes a citation), or the importance of each word from the local context. These shortcomings make such methods insufficient for recommending adequate citations to academic manuscripts. In this study, we propose a novel embedding-based neural network called dual attention model for citation recommendation (DACR) to recommend citations during manuscript preparation. Our method adapts the embedding of three semantic pieces of information: words in the local context, structural contexts,1 and the section on which the author is working. A neural network model is designed to maximize the similarity between the embedding of the three inputs (local context words, section headers, and structural contexts) and the target citation appearing in the context. The core of the neural network model comprises self-attention and additive attention; the former aims to capture the relatedness between the contextual words and structural context, and the latter aims to learn their importance. Recommendation experiments on real-world datasets demonstrate the effectiveness of the proposed approach. To seek explainability on DACR, particularly the two attention mechanisms, the learned weights from them are investigated to determine how the attention mechanisms interpret {\textquotedblleft}relatedness{\textquotedblright} and {\textquotedblleft}importance{\textquotedblright} through the learned weights. In addition, qualitative analyses were conducted to testify that DACR could find necessary citations that were not noticed by the authors in the past due to the limitations of the keyword-based searching.
Computational Linguistics
48
10.1162/coli_a_00438
null
null
null
null
null
null
2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,240
article
paperno-2022-learning
On Learning Interpreted Languages with Recurrent Models
null
jun
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.cl-2.7/
Paperno, Denis
null
471--482
Can recurrent neural nets, inspired by human sequential data processing, learn to understand language? We construct simplified data sets reflecting core properties of natural language as modeled in formal syntax and semantics: recursive syntactic structure and compositionality. We find LSTM and GRU networks to generalize to compositional interpretation well, but only in the most favorable learning settings, with a well-paced curriculum, extensive training data, and left-to-right (but not right-to-left) composition.
Computational Linguistics
48
10.1162/coli_a_00431
null
null
null
null
null
null
2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,241
article
sproat-2022-boring
Boring Problems Are Sometimes the Most Interesting
null
jun
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.cl-2.8/
Sproat, Richard
null
483--490
In a recent position paper, Turing Award Winners Yoshua Bengio, Geoffrey Hinton, and Yann LeCun make the case that symbolic methods are not needed in AI and that, while there are still many issues to be resolved, AI will be solved using purely neural methods. In this piece I issue a challenge: Demonstrate that a purely neural approach to the problem of text normalization is possible. Various groups have tried, but so far nobody has eliminated the problem of unrecoverable errors, errors where, due to insufficient training data or faulty generalization, the system substitutes some other reading for the correct one. Solutions have been proposed that involve a marriage of traditional finite-state methods with neural models, but thus far nobody has shown that the problem can be solved using neural methods alone. Though text normalization is hardly an {\textquotedblleft}exciting{\textquotedblright} problem, I argue that until one can solve {\textquotedblleft}boring{\textquotedblright} problems like that using purely AI methods, one cannot claim that AI is a success.
Computational Linguistics
48
10.1162/coli_a_00439
null
null
null
null
null
null
2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,242
article
alemany-puig-ferrer-i-cancho-2022-linear
Linear-Time Calculation of the Expected Sum of Edge Lengths in Random Projective Linearizations of Trees
null
sep
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.cl-3.1/
Alemany-Puig, Llu{\'i}s and Ferrer-i-Cancho, Ramon
null
491--516
The syntactic structure of a sentence is often represented using syntactic dependency trees. The sum of the distances between syntactically related words has been in the limelight for the past decades. Research on dependency distances led to the formulation of the principle of dependency distance minimization whereby words in sentences are ordered so as to minimize that sum. Numerous random baselines have been defined to carry out related quantitative studies on languages. The simplest random baseline is the expected value of the sum in unconstrained random permutations of the words in the sentence, namely, when all the shufflings of the words of a sentence are allowed and equally likely. Here we focus on a popular baseline: random projective permutations of the words of the sentence, that is, permutations where the syntactic dependency structure is projective, a formal constraint that sentences satisfy often in languages. Thus far, the expectation of the sum of dependency distances in random projective shufflings of a sentence has been estimated approximately with a Monte Carlo procedure whose cost is of the order of Rn, where n is the number of words of the sentence and R is the number of samples; it is well known that the larger R is, the lower the error of the estimation but the larger the time cost. Here we present formulae to compute that expectation without error in time of the order of n. Furthermore, we show that star trees maximize it, and provide an algorithm to retrieve the trees that minimize it.
Computational Linguistics
48
10.1162/coli_a_00442
null
null
null
null
null
null
3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,243
article
anderson-gomez-rodriguez-2022-impact
The Impact of Edge Displacement {V}aserstein Distance on {UD} Parsing Performance
null
sep
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.cl-3.2/
Anderson, Mark and G{\'o}mez-Rodr{\'i}guez, Carlos
null
517--554
We contribute to the discussion on parsing performance in NLP by introducing a measurement that evaluates the differences between the distributions of edge displacement (the directed distance of edges) seen in training and test data. We hypothesize that this measurement will be related to differences observed in parsing performance across treebanks. We motivate this by building upon previous work and then attempt to falsify this hypothesis by using a number of statistical methods. We establish that there is a statistical correlation between this measurement and parsing performance even when controlling for potential covariants. We then use this to establish a sampling technique that gives us an adversarial and complementary split. This gives an idea of the lower and upper bounds of parsing systems for a given treebank in lieu of freshly sampled data. In a broader sense, the methodology presented here can act as a reference for future correlation-based exploratory work in NLP.
Computational Linguistics
48
10.1162/coli_a_00440
null
null
null
null
null
null
3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,244
article
ustun-etal-2022-udapter
{UD}apter: Typology-based Language Adapters for Multilingual Dependency Parsing and Sequence Labeling
null
sep
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.cl-3.3/
{\"U}st{\"u}n, Ahmet and Bisazza, Arianna and Bouma, Gosse and van Noord, Gertjan
null
555--592
Recent advances in multilingual language modeling have brought the idea of a truly universal parser closer to reality. However, such models are still not immune to the {\textquotedblleft}curse of multilinguality{\textquotedblright}: Cross-language interference and restrained model capacity remain major obstacles. To address this, we propose a novel language adaptation approach by introducing contextual language adapters to a multilingual parser. Contextual language adapters make it possible to learn adapters via language embeddings while sharing model parameters across languages based on contextual parameter generation. Moreover, our method allows for an easy but effective integration of existing linguistic typology features into the parsing model. Because not all typological features are available for every language, we further combine typological feature prediction with parsing in a multi-task model that achieves very competitive parsing performance without the need for an external prediction system for missing features. The resulting parser, UDapter, can be used for dependency parsing as well as sequence labeling tasks such as POS tagging, morphological tagging, and NER. In dependency parsing, it outperforms strong monolingual and multilingual baselines on the majority of both high-resource and low-resource (zero-shot) languages, showing the success of the proposed adaptation approach. In sequence labeling tasks, our parser surpasses the baseline on high resource languages, and performs very competitively in a zero-shot setting. Our in-depth analyses show that adapter generation via typological features of languages is key to this success.1
Computational Linguistics
48
10.1162/coli_a_00443
null
null
null
null
null
null
3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,245
article
schiffer-etal-2022-tractable
Tractable Parsing for {CCG}s of Bounded Degree
null
sep
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.cl-3.4/
Schiffer, Lena Katharina and Kuhlmann, Marco and Satta, Giorgio
null
593--633
Unlike other mildly context-sensitive formalisms, Combinatory Categorial Grammar (CCG) cannot be parsed in polynomial time when the size of the grammar is taken into account. Refining this result, we show that the parsing complexity of CCG is exponential only in the maximum degree of composition. When that degree is fixed, parsing can be carried out in polynomial time. Our finding is interesting from a linguistic perspective because a bounded degree of composition has been suggested as a universal constraint on natural language grammar. Moreover, ours is the first complexity result for a version of CCG that includes substitution rules, which are used in practical grammars but have been ignored in theoretical work.
Computational Linguistics
48
10.1162/coli_a_00441
null
null
null
null
null
null
3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,246
article
choenni-shutova-2022-investigating
Investigating Language Relationships in Multilingual Sentence Encoders Through the Lens of Linguistic Typology
null
sep
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.cl-3.5/
Choenni, Rochelle and Shutova, Ekaterina
null
635--672
Multilingual sentence encoders have seen much success in cross-lingual model transfer for downstream NLP tasks. The success of this transfer is, however, dependent on the model`s ability to encode the patterns of cross-lingual similarity and variation. Yet, we know relatively little about the properties of individual languages or the general patterns of linguistic variation that the models encode. In this article, we investigate these questions by leveraging knowledge from the field of linguistic typology, which studies and documents structural and semantic variation across languages. We propose methods for separating language-specific subspaces within state-of-the-art multilingual sentence encoders (LASER, M-BERT, XLM, and XLM-R) with respect to a range of typological properties pertaining to lexical, morphological, and syntactic structure. Moreover, we investigate how typological information about languages is distributed across all layers of the models. Our results show interesting differences in encoding linguistic variation associated with different pretraining strategies. In addition, we propose a simple method to study how shared typological properties of languages are encoded in two state-of-the-art multilingual models{---}M-BERT and XLM-R. The results provide insight into their information-sharing mechanisms and suggest that these linguistic properties are encoded jointly across typologically similar languages in these models.
Computational Linguistics
48
10.1162/coli_a_00444
null
null
null
null
null
null
3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,247
article
haddow-etal-2022-survey
Survey of Low-Resource Machine Translation
null
sep
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.cl-3.6/
Haddow, Barry and Bawden, Rachel and Miceli Barone, Antonio Valerio and Helcl, Jind{\v{r}}ich and Birch, Alexandra
null
673--732
We present a survey covering the state of the art in low-resource machine translation (MT) research. There are currently around 7,000 languages spoken in the world and almost all language pairs lack significant resources for training machine translation models. There has been increasing interest in research addressing the challenge of producing useful translation models when very little translated training data is available. We present a summary of this topical research field and provide a description of the techniques evaluated by researchers in several recent shared tasks in low-resource MT.
Computational Linguistics
48
10.1162/coli_a_00446
null
null
null
null
null
null
3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,248
article
dufter-etal-2022-position
Position Information in Transformers: An Overview
null
sep
2022
Cambridge, MA
MIT Press
https://aclanthology.org/2022.cl-3.7/
Dufter, Philipp and Schmitt, Martin and Sch{\"u}tze, Hinrich
null
733--763
Transformers are arguably the main workhorse in recent natural language processing research. By definition, a Transformer is invariant with respect to reordering of the input. However, language is inherently sequential and word order is essential to the semantics and syntax of an utterance. In this article, we provide an overview and theoretical comparison of existing methods to incorporate position information into Transformer models. The objectives of this survey are to (1) showcase that position information in Transformer is a vibrant and extensive research area; (2) enable the reader to compare existing methods by providing a unified notation and systematization of different approaches along important model dimensions; (3) indicate what characteristics of an application should be taken into account when selecting a position encoding; and (4) provide stimuli for future research.
Computational Linguistics
48
10.1162/coli_a_00445
null
null
null
null
null
null
3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,249
@article{yu-xu-2022-noun2verb,
    title = "{N}oun2{V}erb: Probabilistic Frame Semantics for Word Class Conversion",
    author = "Yu, Lei and Xu, Yang",
    journal = "Computational Linguistics",
    volume = "48",
    number = "4",
    month = dec,
    year = "2022",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/2022.cl-4.11/",
    doi = "10.1162/coli_a_00447",
    pages = "783--818",
    abstract = "Humans can flexibly extend word usages across different grammatical classes, a phenomenon known as word class conversion. Noun-to-verb conversion, or denominal verb (e.g., to Google a cheap flight), is one of the most prevalent forms of word class conversion. However, existing natural language processing systems are impoverished in interpreting and generating novel denominal verb usages. Previous work has suggested that novel denominal verb usages are comprehensible if the listener can compute the intended meaning based on shared knowledge with the speaker. Here we explore a computational formalism for this proposal couched in frame semantics. We present a formal framework, Noun2Verb, that simulates the production and comprehension of novel denominal verb usages by modeling shared knowledge of speaker and listener in semantic frames. We evaluate an incremental set of probabilistic models that learn to interpret and generate novel denominal verb usages via paraphrasing. We show that a model where the speaker and listener cooperatively learn the joint distribution over semantic frame elements better explains the empirical denominal verb usages than state-of-the-art language models, evaluated against data from (1) contemporary English in both adult and child speech, (2) contemporary Mandarin Chinese, and (3) the historical development of English. Our work grounds word class conversion in probabilistic frame semantics and bridges the gap between natural language processing systems and humans in lexical creativity.",
}
@article{kanwatchara-etal-2022-enhancing,
    title = "Enhancing Lifelong Language Learning by Improving Pseudo-Sample Generation",
    author = "Kanwatchara, Kasidis and Horsuwan, Thanapapas and Lertvittayakumjorn, Piyawat and Kijsirikul, Boonserm and Vateekul, Peerapon",
    journal = "Computational Linguistics",
    volume = "48",
    number = "4",
    month = dec,
    year = "2022",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/2022.cl-4.12/",
    doi = "10.1162/coli_a_00449",
    pages = "819--848",
    abstract = "To achieve lifelong language learning, pseudo-rehearsal methods leverage samples generated from a language model to refresh the knowledge of previously learned tasks. Without proper controls, however, these methods could fail to retain the knowledge of complex tasks with longer texts since most of the generated samples are low in quality. To overcome the problem, we propose three specific contributions. First, we utilize double language models, each of which specializes in a specific part of the input, to produce high-quality pseudo samples. Second, we reduce the number of parameters used by applying adapter modules to enhance training efficiency. Third, we further improve the overall quality of pseudo samples using temporal ensembling and sample regeneration. The results show that our framework achieves significant improvement over baselines on multiple task sequences. Also, our pseudo sample analysis reveals helpful insights for designing even better pseudo-rehearsal methods in the future.",
}
@article{nivre-etal-2022-nucleus,
    title = "Nucleus Composition in Transition-based Dependency Parsing",
    author = "Nivre, Joakim and Basirat, Ali and D{\"u}rlich, Luise and Moss, Adam",
    journal = "Computational Linguistics",
    volume = "48",
    number = "4",
    month = dec,
    year = "2022",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/2022.cl-4.13/",
    doi = "10.1162/coli_a_00450",
    pages = "849--886",
    abstract = "Dependency-based approaches to syntactic analysis assume that syntactic structure can be analyzed in terms of binary asymmetric dependency relations holding between elementary syntactic units. Computational models for dependency parsing almost universally assume that an elementary syntactic unit is a word, while the influential theory of Lucien Tesni{\`e}re instead posits a more abstract notion of nucleus, which may be realized as one or more words. In this article, we investigate the effect of enriching computational parsing models with a concept of nucleus inspired by Tesni{\`e}re. We begin by reviewing how the concept of nucleus can be defined in the framework of Universal Dependencies, which has become the de facto standard for training and evaluating supervised dependency parsers, and explaining how composition functions can be used to make neural transition-based dependency parsers aware of the nuclei thus defined. We then perform an extensive experimental study, using data from 20 languages to assess the impact of nucleus composition across languages with different typological characteristics, and utilizing a variety of analytical tools including ablation, linear mixed-effects models, diagnostic classifiers, and dimensionality reduction. The analysis reveals that nucleus composition gives small but consistent improvements in parsing accuracy for most languages, and that the improvement mainly concerns the analysis of main predicates, nominal dependents, clausal dependents, and coordination structures. Significant factors explaining the rate of improvement across languages include entropy in coordination structures and frequency of certain function words, in particular determiners. Analysis using dimensionality reduction and diagnostic classifiers suggests that nucleus composition increases the similarity of vectors representing nuclei of the same syntactic type.",
}
@article{ren-etal-2022-effective,
    title = "Effective Approaches to Neural Query Language Identification",
    author = "Ren, Xingzhang and Yang, Baosong and Liu, Dayiheng and Zhang, Haibo and Lv, Xiaoyu and Yao, Liang and Xie, Jun",
    journal = "Computational Linguistics",
    volume = "48",
    number = "4",
    month = dec,
    year = "2022",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/2022.cl-4.14/",
    doi = "10.1162/coli_a_00451",
    pages = "887--906",
    abstract = "Query language identification (Q-LID) plays a crucial role in a cross-lingual search engine. There exist two main challenges in Q-LID: (1) insufficient contextual information in queries for disambiguation; and (2) the lack of query-style training examples for low-resource languages. In this article, we propose a neural Q-LID model by alleviating the above problems from both model architecture and data augmentation perspectives. Concretely, we build our model upon the advanced Transformer model. In order to enhance the discrimination of queries, a variety of external features (e.g., character, word, as well as script) are fed into the model and fused by a multi-scale attention mechanism. Moreover, to remedy the low resource challenge in this task, a novel machine translation{--}based strategy is proposed to automatically generate synthetic query-style data for low-resource languages. We contribute the first Q-LID test set called QID-21, which consists of search queries in 21 languages. Experimental results reveal that our model yields better classification accuracy than strong baselines and existing LID systems on both query and traditional LID tasks.",
}