Dataset schema (field / dtype / value summary):

    entry_type         stringclasses   4 values
    citation_key       stringlengths   10-110
    title              stringlengths   6-276
    editor             stringclasses   723 values
    month              stringclasses   69 values
    year               stringdate      1963-01-01 00:00:00 - 2022-01-01 00:00:00
    address            stringclasses   202 values
    publisher          stringclasses   41 values
    url                stringlengths   34-62
    author             stringlengths   6-2.07k
    booktitle          stringclasses   861 values
    pages              stringlengths   1-12
    abstract           stringlengths   302-2.4k
    journal            stringclasses   5 values
    volume             stringclasses   24 values
    doi                stringlengths   20-39
    n                  stringclasses   3 values
    wer                stringclasses   1 value
    uas                null
    language           stringclasses   3 values
    isbn               stringclasses   34 values
    recall             null
    number             stringclasses   8 values
    a                  null
    b                  null
    c                  null
    k                  null
    f1                 stringclasses   4 values
    r                  stringclasses   2 values
    mci                stringclasses   1 value
    p                  stringclasses   2 values
    sd                 stringclasses   1 value
    female             stringclasses   0 values
    m                  stringclasses   0 values
    food               stringclasses   1 value
    f                  stringclasses   1 value
    note               stringclasses   20 values
    __index_level_0__  int64           22k-106k
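Each row of the dataset is one BibTeX record; the entries in this section are shown in that form, with null-valued columns omitted and the dataset's __index_level_0__ kept alongside each entry. Below is a minimal sketch of loading and inspecting the dataset with the Hugging Face `datasets` library; the repository ID is a hypothetical placeholder, since the dataset's actual name is not stated in this section.

    # Minimal sketch: load and inspect the dataset with the Hugging Face
    # `datasets` library. "user/anthology-bibtex" is a placeholder ID;
    # substitute the dataset's real repository name.
    from datasets import load_dataset

    ds = load_dataset("user/anthology-bibtex", split="train")  # hypothetical ID
    print(ds.features)  # column names and dtypes, matching the schema above

    row = ds[0]  # one record as a plain dict
    # The metric-style columns (wer, uas, recall, f1, ...) are null for
    # ordinary proceedings entries; keep only the populated fields.
    populated = {k: v for k, v in row.items() if v is not None}
    print(populated["citation_key"], populated["pages"])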
@inproceedings{niu-etal-2022-composition,
    title = "Composition-based Heterogeneous Graph Multi-channel Attention Network for Multi-aspect Multi-sentiment Classification",
    author = "Niu, Hao and Xiong, Yun and Gao, Jian and Miao, Zhongchen and Wang, Xiaosu and Ren, Hongrun and Zhang, Yao and Zhu, Yangyong",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.594/",
    pages = "6827--6836",
    abstract = "Aspect-based sentiment analysis (ABSA) has drawn more and more attention because of its extensive applications. However, for sentences carrying more than one aspect, most existing works generate an aspect-specific sentence representation for each aspect term to predict sentiment polarity, which neglects the sentiment relationship among aspect terms. Besides, most current ABSA methods focus on sentences containing only one aspect term or multiple aspect terms with the same sentiment polarity, which makes ABSA degenerate into sentence-level sentiment analysis. In this paper, to deal with this problem, we construct a heterogeneous graph to model inter-aspect relationships and aspect-context relationships simultaneously and propose a novel Composition-based Heterogeneous Graph Multi-channel Attention Network (CHGMAN) to encode the constructed heterogeneous graph. Meanwhile, we conduct extensive experiments on three datasets: MAMS-ATSA, Rest14, and Laptop14; experimental results show the effectiveness of our method.",
}
__index_level_0__: 29,038
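Rendering a row back into the ACL Anthology-style BibTeX entry shown above is mechanical. Here is a sketch, under the assumptions that `row` is a record dict (e.g. `ds[0]` from the loading sketch) and that null fields are simply dropped; the helper name `to_bibtex` is ours.

    # Sketch: render one record dict as an ACL Anthology-style BibTeX entry.
    # Field order follows the dataset schema; null fields are skipped.
    BIBTEX_FIELDS = [
        "title", "author", "editor", "booktitle", "month", "year", "address",
        "publisher", "url", "pages", "abstract", "journal", "volume", "doi",
        "language", "isbn", "number", "note",
    ]

    def to_bibtex(row: dict) -> str:
        lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
        for field in BIBTEX_FIELDS:
            value = row.get(field)
            if value is None:
                continue
            # `month` holds a BibTeX macro (e.g. oct), not a quoted string
            rendered = value if field == "month" else f'"{value}"'
            lines.append(f"    {field} = {rendered},")
        lines.append("}")
        return "\n".join(lines)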
@inproceedings{lemmens-etal-2022-contact,
    title = "{C}o{NTACT}: A {D}utch {COVID}-19 Adapted {BERT} for Vaccine Hesitancy and Argumentation Detection",
    author = "Lemmens, Jens and Van Nooten, Jens and Kreutz, Tim and Daelemans, Walter",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.595/",
    pages = "6837--6845",
    abstract = "We present CoNTACT: a Dutch language model adapted to the domain of COVID-19 tweets. The model was developed by continuing the pre-training phase of RobBERT (Delobelle et al., 2020) by using 2.8M Dutch COVID-19 related tweets posted in 2021. In order to test the performance of the model and compare it to RobBERT, the two models were tested on two tasks: (1) binary vaccine hesitancy detection and (2) detection of arguments for vaccine hesitancy. For both tasks, not only Twitter but also Facebook data was used to show cross-genre performance. In our experiments, CoNTACT showed statistically significant gains over RobBERT in all experiments for task 1. For task 2, we observed substantial improvements in virtually all classes in all experiments. An error analysis indicated that the domain adaptation yielded better representations of domain-specific terminology, causing CoNTACT to make more accurate classification decisions.",
}
__index_level_0__: 29,039
@inproceedings{yuan-etal-2022-ssr,
    title = "{SSR}: Utilizing Simplified Stance Reasoning Process for Robust Stance Detection",
    author = "Yuan, Jianhua and Zhao, Yanyan and Lu, Yanyue and Qin, Bing",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.596/",
    pages = "6846--6858",
    abstract = "Dataset bias in stance detection tasks allows models to achieve superior performance without using targets. Most existing debiasing methods are task-agnostic and thus fail to utilize task knowledge to better discriminate between genuine and bias features. Motivated by how humans tackle stance detection tasks, we propose to incorporate the stance reasoning process as task knowledge to assist in learning genuine features and reducing reliance on bias features. The full stance reasoning process usually involves identifying the span of the mentioned target and corresponding opinion expressions; such fine-grained annotations are hard and expensive to obtain. To alleviate this, we simplify the stance reasoning process to relax the granularity of annotations from token-level to sentence-level, where labels for sub-tasks can be easily inferred from existing resources. We further implement those sub-tasks by maximizing mutual information between the texts and the opinioned targets. To evaluate whether stance detection models truly understand the task from various aspects, we collect and construct a series of new test sets. Our proposed model achieves better performance than previous task-agnostic debiasing methods on most of those new test sets while maintaining comparable performance to existing stance detection models.",
}
__index_level_0__: 29,040
@inproceedings{rodrigues-branco-2022-transferring,
    title = "Transferring Confluent Knowledge to Argument Mining",
    author = "Rodrigues, Jo{\~a}o Ant{\'o}nio and Branco, Ant{\'o}nio",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.597/",
    pages = "6859--6874",
    abstract = "Relevant to all application domains where it is important to get at the reasons underlying sentiments and decisions, argument mining seeks to obtain structured arguments from unstructured text and has been addressed by approaches typically involving some feature and/or neural architecture engineering. By adopting a transfer learning methodology, and by means of a systematic study with a wide range of knowledge sources promisingly suitable to leverage argument mining, the aim of this paper is to empirically assess the potential of transferring such knowledge learned with confluent tasks. By adopting a lean approach that dispenses with heavier feature and model engineering, this study permitted both to gain novel empirically based insights into the argument mining task and to establish new state-of-the-art levels of performance for its three main sub-tasks, viz. identification of argument components, classification of the components, and determination of the relation among them.",
}
__index_level_0__: 29,041
@inproceedings{alnajjar-etal-2022-laugh,
    title = "When to Laugh and How Hard? A Multimodal Approach to Detecting Humor and Its Intensity",
    author = "Alnajjar, Khalid and H{\"a}m{\"a}l{\"a}inen, Mika and Tiedemann, J{\"o}rg and Laaksonen, Jorma and Kurimo, Mikko",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.598/",
    pages = "6875--6886",
    abstract = "Prerecorded laughter accompanying dialog in comedy TV shows encourages the audience to laugh by clearly marking humorous moments in the show. We present an approach for automatically detecting humor in the Friends TV show using multimodal data. Our model is capable of recognizing whether an utterance is humorous or not and assessing its intensity. We use the prerecorded laughter in the show as annotation, as it marks humor, and the length of the audience's laughter tells us how funny a given joke is. We evaluate the model on episodes the model has not been exposed to during the training phase. Our results show that the model is capable of correctly detecting whether an utterance is humorous 78{\%} of the time and of predicting how long the audience's laughter reaction should last with a mean absolute error of 600 milliseconds.",
}
__index_level_0__: 29,042
@inproceedings{li-etal-2022-modeling,
    title = "Modeling Aspect Correlation for Aspect-based Sentiment Analysis via Recurrent Inverse Learning Guidance",
    author = "Li, Longfeng and Sun, Haifeng and Qi, Qi and Wang, Jingyu and Wang, Jing and Liao, Jianxin",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.599/",
    pages = "6887--6896",
    abstract = "Aspect-based sentiment analysis (ABSA) aims to distinguish the sentiment polarity of every specific aspect in a given sentence. Previous research has realized the importance of interactive learning with context and aspects. However, these methods struggle to learn complex sentences with multiple aspects due to overlapping polarity features, and they do not consider the correlation between aspects to distinguish overlapping features. In order to solve this problem, we propose a new method called Recurrent Inverse Learning Guided Network (RILGNet). Our RILGNet has two points to improve the modeling of aspect correlation and the selection of aspect features. First, we use a Recurrent Mechanism to improve the joint representation of aspects, which enhances aspect correlation modeling iteratively. Second, we propose Inverse Learning Guidance to improve the selection of aspect features by considering aspect correlation, which provides more useful information to determine polarity. Experimental results on the SemEval 2014 datasets demonstrate the effectiveness of RILGNet, and we further prove that RILGNet is the state-of-the-art method in multi-aspect scenarios.",
}
__index_level_0__: 29,043
@inproceedings{wiegmann-etal-2022-analyzing,
    title = "Analyzing Persuasion Strategies of Debaters on Social Media",
    author = "Wiegmann, Matti and Al Khatib, Khalid and Khanna, Vishal and Stein, Benno",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.600/",
    pages = "6897--6905",
    abstract = "Existing studies on the analysis of persuasion in online discussions focus on investigating the effectiveness of comments in discussions and ignore the analysis of the effectiveness of debaters over multiple discussions. In this paper, we propose to quantify debaters' effectiveness in the online discussion platform {\textquotedblleft}ChangeMyView{\textquotedblright} in order to explore diverse insights into their persuasion strategies. In particular, targeting debaters with different levels of effectiveness (e.g., good vs. bad), various behavioral characteristics (e.g., engagement) and text stylistic features (e.g., used frames) of debaters are carefully examined, leading to several outcomes that can be the backbone of writing assistants and persuasive text generation.",
}
__index_level_0__: 29,044
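The string fields keep the Anthology's LaTeX escape sequences, for example {\textquotedblleft}ChangeMyView{\textquotedblright} in the abstract above. One way to decode them to plain Unicode is the pylatexenc package; the package choice is ours, and any LaTeX-to-text converter would do.

    # Sketch: decode LaTeX escapes in field values to plain Unicode
    # (pip install pylatexenc).
    from pylatexenc.latex2text import LatexNodes2Text

    decoder = LatexNodes2Text()
    text = decoder.latex_to_text(r"{\textquotedblleft}ChangeMyView{\textquotedblright}")
    print(text)  # -> “ChangeMyView”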
@inproceedings{xu-etal-2022-kc,
    title = "{KC}-{ISA}: An Implicit Sentiment Analysis Model Combining Knowledge Enhancement and Context Features",
    author = "Xu, Minghao and Wang, Daling and Feng, Shi and Yang, Zhenfei and Zhang, Yifei",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.601/",
    pages = "6906--6915",
    abstract = "Sentiment analysis has always been an important research direction in natural language processing. The research can be divided into explicit sentiment analysis and implicit sentiment analysis according to whether sentiment words appear in the language expression. There have been many research results in explicit sentiment analysis. However, implicit sentiment analysis is rarely studied. Compared with explicit sentiment expression, implicit sentiment expression usually omits a lot of knowledge and common sense, and context also has an important impact on implicit sentiment expression. In this paper, we use a knowledge graph to supplement implicit sentiment expression and propose a novel Implicit Sentiment Analysis model combining Knowledge enhancement and Context features (dubbed KC-ISA). The KC-ISA model can effectively integrate external knowledge and contextual features via a co-attention mechanism. Finally, we conduct experiments on the SMP2019 implicit sentiment analysis dataset. Moreover, to verify the generality of the model, we also conduct experiments on two common sentiment analysis datasets. The results on three datasets show that our proposed KC-ISA model achieves better results on text sentiment analysis.",
}
__index_level_0__: 29,045
@inproceedings{tan-etal-2022-domain,
    title = "Domain Generalization for Text Classification with Memory-Based Supervised Contrastive Learning",
    author = "Tan, Qingyu and He, Ruidan and Bing, Lidong and Ng, Hwee Tou",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.602/",
    pages = "6916--6926",
    abstract = "While there is much research on cross-domain text classification, most existing approaches focus on one-to-one or many-to-one domain adaptation. In this paper, we tackle the more challenging task of domain generalization, in which domain-invariant representations are learned from multiple source domains, without access to any data from the target domains, and classification decisions are then made on test documents in unseen target domains. We propose a novel framework based on supervised contrastive learning with a memory-saving queue. In this way, we explicitly encourage examples of the same class to be closer and examples of different classes to be further apart in the embedding space. We have conducted extensive experiments on two Amazon review sentiment datasets, and one rumour detection dataset. Experimental results show that our domain generalization method consistently outperforms state-of-the-art domain adaptation methods.",
}
__index_level_0__: 29,046
@inproceedings{gangi-reddy-etal-2022-zero,
    title = "A Zero-Shot Claim Detection Framework Using Question Answering",
    author = "Gangi Reddy, Revanth and Chinthakindi, Sai Chetan and Fung, Yi R. and Small, Kevin and Ji, Heng",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.603/",
    pages = "6927--6933",
    abstract = "In recent years, there has been an increasing interest in claim detection as an important building block for misinformation detection. This involves detecting more fine-grained attributes relating to the claim, such as the claimer, claim topic, claim object pertaining to the topic, etc. Yet, a notable bottleneck of existing claim detection approaches is their portability to emerging events and low-resource training data settings. In this regard, we propose a fine-grained claim detection framework that leverages zero-shot Question Answering (QA) using directed questions to solve a diverse set of sub-tasks such as topic filtering, claim object detection, and claimer detection. We show that our approach significantly outperforms various zero-shot, few-shot and task-specific baselines on the NewsClaims benchmark (Reddy et al., 2021).",
}
__index_level_0__: 29,047
@inproceedings{li-etal-2022-asymmetric,
    title = "Asymmetric Mutual Learning for Multi-source Unsupervised Sentiment Adaptation with Dynamic Feature Network",
    author = "Li, Rui and Liu, Cheng and Jiang, Dazhi",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.604/",
    pages = "6934--6943",
    abstract = "Recently, fine-tuning the pre-trained language model (PrLM) on labeled sentiment datasets has demonstrated impressive performance. However, collecting labeled sentiment datasets is time-consuming, and fine-tuning the whole PrLM brings about much computation cost. To this end, we focus on the multi-source unsupervised sentiment adaptation problem with pre-trained features, which is more practical and challenging. We first design a dynamic feature network to fully exploit the extracted pre-trained features for efficient domain adaptation. Meanwhile, unlike traditional source-target domain alignment methods, we propose a novel asymmetric mutual learning strategy, which can robustly estimate the pseudo-labels of the target domain with the knowledge from all the other source models. Experiments on multiple sentiment benchmarks show that our method outperforms the recent state-of-the-art approaches, and we also conduct extensive ablation studies to verify the effectiveness of each proposed module.",
}
__index_level_0__: 29,048
@inproceedings{liu-etal-2022-target,
    title = "Target Really Matters: Target-aware Contrastive Learning and Consistency Regularization for Few-shot Stance Detection",
    author = "Liu, Rui and Lin, Zheng and Ji, Huishan and Li, Jiangnan and Fu, Peng and Wang, Weiping",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.605/",
    pages = "6944--6954",
    abstract = "Stance detection aims to identify the attitude from an opinion towards a certain target. Despite the significant progress on this task, it is extremely time-consuming and budget-unfriendly to collect sufficient high-quality labeled data for every new target under fully-supervised learning, whereas unlabeled data can be collected more easily. Therefore, this paper is devoted to few-shot stance detection and investigating how to achieve satisfactory results in semi-supervised settings. As a target-oriented task, the core idea of semi-supervised few-shot stance detection is to make better use of target-relevant information from labeled and unlabeled data. Therefore, we develop a novel target-aware semi-supervised framework. Specifically, we propose a target-aware contrastive learning objective to learn more distinguishable representations for different targets. Such an objective can be easily applied with or without unlabeled data. Furthermore, to thoroughly exploit the unlabeled data and facilitate the model to learn target-relevant stance features in the opinion content, we explore a simple but effective target-aware consistency regularization combined with a self-training strategy. The experimental results demonstrate that our approach can achieve state-of-the-art performance on multiple benchmark datasets in the few-shot setting.",
}
__index_level_0__: 29,049
@inproceedings{chen-etal-2022-joint,
    title = "Joint Alignment of Multi-Task Feature and Label Spaces for Emotion Cause Pair Extraction",
    author = "Chen, Shunjie and Shi, Xiaochuan and Li, Jingye and Wu, Shengqiong and Fei, Hao and Li, Fei and Ji, Donghong",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.606/",
    pages = "6955--6965",
    abstract = "Emotion cause pair extraction (ECPE), as one of the derived subtasks of emotion cause analysis (ECA), shares rich inter-related features with emotion extraction (EE) and cause extraction (CE). Therefore, EE and CE are frequently utilized as auxiliary tasks for better feature learning, modeled via a multi-task learning (MTL) framework by prior works to achieve state-of-the-art (SoTA) ECPE results. However, existing MTL-based methods either fail to simultaneously model the specific features and the interactive feature in between, or suffer from inconsistency of label prediction. In this work, we address the above challenges for improving ECPE by performing two alignment mechanisms with a novel A{\textasciicircum}2Net model. We first propose a feature-task alignment to explicitly model the specific emotion-{\&}cause-specific features and the shared interactive feature. Besides, an inter-task alignment is implemented, in which the label distance between the ECPE and the combinations of EE{\&}CE is learned to be narrowed for better label consistency. Evaluations on benchmarks show that our methods outperform current best-performing systems on all ECA subtasks. Further analysis proves the importance of our proposed alignment mechanisms for the task.",
}
__index_level_0__: 29,050
@inproceedings{wang-etal-2022-causal,
    title = "Causal Intervention Improves Implicit Sentiment Analysis",
    author = "Wang, Siyin and Zhou, Jie and Sun, Changzhi and Ye, Junjie and Gui, Tao and Zhang, Qi and Huang, Xuanjing",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.607/",
    pages = "6966--6977",
    abstract = "Despite having achieved great success for sentiment analysis, existing neural models struggle with implicit sentiment analysis. This is because they may latch onto spurious correlations ({\textquotedblleft}shortcuts{\textquotedblright}, e.g., focusing only on explicit sentiment words), undermining the effectiveness and robustness of the learned model. In this work, we propose a CausaL intervention model for implicit sEntiment ANalysis using an instrumental variable (CLEAN). We first review sentiment analysis from a causal perspective and analyze the confounders existing in this task. Then, we introduce an instrumental variable to eliminate the confounding causal effects, thus extracting the pure causal effect between sentence and sentiment. We compare the proposed CLEAN with several strong baselines on both the general implicit sentiment analysis and aspect-based implicit sentiment analysis tasks. The results indicate the great advantages of our model and the efficacy of implicit sentiment reasoning.",
}
__index_level_0__: 29,051
@inproceedings{ghosh-etal-2022-comma,
    title = "{COMMA}-{DEER}: {CO}mmon-sense Aware Multimodal Multitask Approach for Detection of Emotion and Emotional Reasoning in Conversations",
    author = "Ghosh, Soumitra and Singh, Gopendra Vikram and Ekbal, Asif and Bhattacharyya, Pushpak",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.608/",
    pages = "6978--6990",
    abstract = "Mental health is a critical component of the United Nations' Sustainable Development Goals (SDGs), particularly Goal 3, which aims to provide {\textquotedblleft}good health and well-being{\textquotedblright}. The present mental health treatment gap is exacerbated by stigma, lack of human resources, and lack of research capability for implementation and policy reform. We present and discuss a novel task of detecting emotional reasoning (ER) and accompanying emotions in conversations. In particular, we create a first-of-its-kind multimodal mental health conversational corpus that is manually annotated at the utterance level with emotional reasoning and related emotion. We develop a multimodal multitask framework with a novel multimodal feature fusion technique and a contextuality learning module to handle the two tasks. By leveraging multimodal sources of information and commonsense reasoning through a multitask framework, our proposed model produces strong results. We achieve performance gains of 6{\%} accuracy and 4.62{\%} F1 on the emotion detection task and 3.56{\%} accuracy and 3.31{\%} F1 on the ER detection task, when compared to the existing state-of-the-art model.",
}
__index_level_0__: 29,052
@inproceedings{atapattu-etal-2022-emoment,
    title = "{E}mo{M}ent: An Emotion Annotated Mental Health Corpus from Two {S}outh {A}sian Countries",
    author = "Atapattu, Thushari and Herath, Mahen and Elvitigala, Charitha and de Zoysa, Piyanjali and Gunawardana, Kasun and Thilakaratne, Menasha and de Zoysa, Kasun and Falkner, Katrina",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.609/",
    pages = "6991--7001",
    abstract = "People often utilise online media (e.g., Facebook, Reddit) as a platform to express their psychological distress and seek support. State-of-the-art NLP techniques demonstrate strong potential to automatically detect mental health issues from text. Research suggests that mental health issues are reflected in emotions (e.g., sadness) indicated in a person's choice of language. Therefore, we developed a novel emotion-annotated mental health corpus (EmoMent), consisting of 2802 Facebook posts (14845 sentences) extracted from two South Asian countries - Sri Lanka and India. Three clinical psychology postgraduates were involved in annotating these posts into eight categories, including {\textquoteleft}mental illness' (e.g., depression) and emotions (e.g., {\textquoteleft}sadness', {\textquoteleft}anger'). The EmoMent corpus achieved {\textquoteleft}very good' inter-annotator agreement of 98.3{\%} (i.e., {\%} with agreement from two or more annotators) and Fleiss' Kappa of 0.82. Our RoBERTa based models achieved an F1 score of 0.76 and a macro-averaged F1 score of 0.77 for the first task (i.e., predicting a mental health condition from a post) and the second task (i.e., extent of association of relevant posts with the categories defined in our taxonomy), respectively.",
}
__index_level_0__: 29,053
@inproceedings{gao-etal-2022-lego,
    title = "{LEGO}-{ABSA}: A Prompt-based Task Assemblable Unified Generative Framework for Multi-task Aspect-based Sentiment Analysis",
    author = "Gao, Tianhao and Fang, Jun and Liu, Hanyu and Liu, Zhiyuan and Liu, Chao and Liu, Pengzhang and Bao, Yongjun and Yan, Weipeng",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.610/",
    pages = "7002--7012",
    abstract = "Aspect-based sentiment analysis (ABSA) has received increasing attention recently. ABSA can be divided into multiple tasks according to the different extracted elements. Existing generative methods usually treat the output as a whole string rather than the combination of different elements and only focus on a single task at once. This paper proposes a unified generative multi-task framework that can solve multiple ABSA tasks by controlling the type of task prompts consisting of multiple element prompts. Further, the proposed approach can train on simple tasks and transfer to difficult tasks by assembling task prompts, like assembling Lego bricks. We conduct experiments on six ABSA tasks across multiple benchmarks. Our proposed multi-task approach achieves new state-of-the-art results in almost all tasks and competitive results in task transfer scenarios.",
}
__index_level_0__: 29,054
@inproceedings{chen-etal-2022-hierarchical,
    title = "A Hierarchical Interactive Network for Joint Span-based Aspect-Sentiment Analysis",
    author = "Chen, Wei and Du, Jinglong and Zhang, Zhao and Zhuang, Fuzhen and He, Zhongshi",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.611/",
    pages = "7013--7019",
    abstract = "Recently, some span-based methods have achieved encouraging performance for joint aspect-sentiment analysis, which first extract aspects (aspect extraction) by detecting aspect boundaries and then classify the span-level sentiments (sentiment classification). However, most existing approaches either sequentially extract task-specific features, leading to insufficient feature interactions, or they encode aspect features and sentiment features in a parallel manner, implying that feature representation in each task is largely independent of each other except for input sharing. Both of them ignore the internal correlations between aspect extraction and sentiment classification. To solve this problem, we propose a novel hierarchical interactive network (HI-ASA) to model two-way interactions between the two tasks appropriately, where the hierarchical interactions involve two steps: shallow-level interaction and deep-level interaction. First, we utilize a cross-stitch mechanism to combine the different task-specific features selectively as the input to ensure proper two-way interactions. Second, the mutual information technique is applied to mutually constrain learning between the two tasks in the output layer, so that the aspect input and the sentiment input are capable of encoding features of the other task via backpropagation. Extensive experiments on three real-world datasets demonstrate HI-ASA's superiority over baselines.",
}
__index_level_0__: 29,055
@inproceedings{zhao-etal-2022-mucdn,
    title = "{M}u{CDN}: Mutual Conversational Detachment Network for Emotion Recognition in Multi-Party Conversations",
    author = "Zhao, Weixiang and Zhao, Yanyan and Qin, Bing",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.612/",
    pages = "7020--7030",
    abstract = "As an emerging research topic in the natural language processing community, emotion recognition in multi-party conversations has attracted increasing interest. Previous approaches that focus either on dyadic or multi-party scenarios exert much effort to cope with the challenge of emotional dynamics and achieve appealing results. However, since emotional interactions among speakers are often more complicated within entangled multi-party conversations, these works are limited in capturing effective emotional clues in conversational context. In this work, we propose the Mutual Conversational Detachment Network (MuCDN) to clearly and effectively understand the conversational context by separating conversations into detached threads. Specifically, two detachment ways are devised to perform context and speaker-specific modeling within detached threads, and they are bridged through a mutual module. Experimental results on two datasets show that our model achieves better performance over the baseline models.",
}
__index_level_0__: 29,056
@inproceedings{zheng-etal-2022-ueca,
    title = "{UECA}-Prompt: Universal Prompt for Emotion Cause Analysis",
    author = "Zheng, Xiaopeng and Liu, Zhiyue and Zhang, Zizhen and Wang, Zhaoyang and Wang, Jiahai",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.613/",
    pages = "7031--7041",
    abstract = "Emotion cause analysis (ECA) aims to extract emotion clauses and find the corresponding cause of the emotion. Existing methods adopt the fine-tuning paradigm to solve certain types of ECA tasks. These task-specific methods lack universality, and the relations among multiple objectives in one task are not explicitly modeled. Moreover, the relative position information introduced in most existing methods may make the model suffer from dataset bias. To address the first two problems, this paper proposes a universal prompt tuning method that solves different ECA tasks in a unified framework. As for the third problem, this paper designs a directional constraint module and a sequential learning module to ease the bias. Considering the commonalities among different tasks, this paper proposes a cross-task training method to further explore the capability of the model. The experimental results show that our method achieves competitive performance on the ECA datasets.",
}
__index_level_0__: 29,057
@inproceedings{chang-etal-2022-one,
    title = "One-Teacher and Multiple-Student Knowledge Distillation on Sentiment Classification",
    author = "Chang, Xiaoqin and Lee, Sophia Yat Mei and Zhu, Suyang and Li, Shoushan and Zhou, Guodong",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.614/",
    pages = "7042--7052",
    abstract = "Knowledge distillation is an effective method to transfer knowledge from a large pre-trained teacher model to a compact student model. However, in previous studies, the distilled student models are still large and remain impractical in highly speed-sensitive systems (e.g., an IR system). In this study, we aim to distill a deep pre-trained model into an extremely compact shallow model like a CNN. Specifically, we propose a novel one-teacher and multiple-student knowledge distillation approach to distill a deep pre-trained teacher model into multiple shallow student models with ensemble learning. Moreover, we leverage large-scale unlabeled data to improve the performance of the students. Empirical studies on three sentiment classification tasks demonstrate that our approach achieves better results with much fewer parameters (0.9{\%}-18{\%}) and extremely high speedup ratios (100X-1000X).",
}
__index_level_0__: 29,058
@inproceedings{zhou-etal-2022-making,
    title = "Making Parameter-efficient Tuning More Efficient: A Unified Framework for Classification Tasks",
    author = "Zhou, Xin and Ma, Ruotian and Zou, Yicheng and Chen, Xuanting and Gui, Tao and Zhang, Qi and Huang, Xuanjing and Xie, Rui and Wu, Wei",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.615/",
    pages = "7053--7064",
    abstract = "Large pre-trained language models (PLMs) have demonstrated superior performance in industrial applications. Recent studies have explored parameter-efficient PLM tuning, which only updates a small amount of task-specific parameters while achieving both high efficiency and comparable performance against standard fine-tuning. However, all these methods ignore the inefficiency problem caused by the task-specific output layers, which make it inflexible to re-use PLMs and introduce non-negligible extra parameters. In this work, we focus on the text classification task and propose plugin-tuning, a framework that further improves the efficiency of existing parameter-efficient methods with a unified classifier. Specifically, we re-formulate both token and sentence classification tasks into a unified language modeling task, and map the label spaces of different tasks into the same vocabulary space. In this way, we can directly re-use the language modeling heads of PLMs, avoiding introducing extra parameters for different tasks. We conduct experiments on six classification benchmarks. The experimental results show that plugin-tuning can achieve comparable performance against fine-tuned PLMs, while further saving around 50{\%} of parameters on top of other parameter-efficient methods.",
}
__index_level_0__: 29,059
@inproceedings{zhao-etal-2022-multi,
    title = "A Multi-Task Dual-Tree Network for Aspect Sentiment Triplet Extraction",
    author = "Zhao, Yichun and Meng, Kui and Liu, Gongshen and Du, Jintao and Zhu, Huijia",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.616/",
    pages = "7065--7074",
    abstract = "Aspect Sentiment Triplet Extraction (ASTE) aims at extracting triplets from a given sentence, where each triplet includes an aspect, its sentiment polarity, and a corresponding opinion explaining the polarity. Existing methods are poor at detecting complicated relations between aspects and opinions as well as classifying multiple sentiment polarities in a sentence. Detecting unclear boundaries of multi-word aspects and opinions is also a challenge. In this paper, we propose a Multi-Task Dual-Tree Network (MTDTN) to address these issues. We employ a constituency tree and a modified dependency tree in two sub-tasks of Aspect Opinion Co-Extraction (AOCE) and ASTE, respectively. To enhance the information interaction between the two sub-tasks, we further design a Transition-Based Inference Strategy (TBIS) that transfers the boundary information from tags of AOCE to ASTE through a transition matrix. Extensive experiments are conducted on four popular datasets, and the results show the effectiveness of our model.",
}
__index_level_0__: 29,060
@inproceedings{wang-etal-2022-exploiting,
    title = "Exploiting Unlabeled Data for Target-Oriented Opinion Words Extraction",
    author = "Wang, Yidong and Wu, Hao and Liu, Ao and Hou, Wenxin and Wu, Zhen and Wang, Jindong and Shinozaki, Takahiro and Okumura, Manabu and Zhang, Yue",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.617/",
    pages = "7075--7085",
    abstract = "Target-oriented Opinion Words Extraction (TOWE) is a fine-grained sentiment analysis task that aims to extract the corresponding opinion words of a given opinion target from the sentence. Recently, deep learning approaches have made remarkable progress on this task. Nevertheless, the TOWE task still suffers from the scarcity of training data due to the expensive data annotation process. Limited labeled data increase the risk of distribution shift between test data and training data. In this paper, we propose exploiting massive unlabeled data to reduce the risk by increasing the exposure of the model to varying distribution shifts. Specifically, we propose a novel Multi-Grained Consistency Regularization (MGCR) method to make use of unlabeled data and design two filters specifically for TOWE to filter noisy data at different granularity. Extensive experimental results on four TOWE benchmark datasets indicate the superiority of MGCR compared with current state-of-the-art methods. The in-depth analysis also demonstrates the effectiveness of the different-granularity filters.",
}
__index_level_0__: 29,061
@inproceedings{ma-pang-2022-learnable,
    title = "Learnable Dependency-based Double Graph Structure for Aspect-based Sentiment Analysis",
    author = "Ma, Yinglong and Pang, Yunhe",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.618/",
    pages = "7086--7092",
    abstract = "Dependency tree-based methods might be susceptible to the quality of the dependency tree, because they inevitably introduce noisy information and neglect the rich relation information between words. In this paper, we propose a learnable dependency-based double graph (LD2G) model for aspect-based sentiment classification. We use multi-task learning for domain-adaptive pretraining, which combines Biaffine Attention and a Masked Language Model by incorporating features such as structure, relations and linguistic features in the sentiment text. Then we utilize the dependency-enhanced double graph-based MPNN to deeply fuse structure features and relation features that affect each other for ASC. Experiments on four benchmark datasets show that our model is superior to the state-of-the-art approaches.",
}
__index_level_0__: 29,062
@inproceedings{li-etal-2022-structure,
    title = "A Structure-Aware Argument Encoder for Literature Discourse Analysis",
    author = "Li, Yinzi and Chen, Wei and Wei, Zhongyu and Huang, Yujun and Wang, Chujun and Wang, Siyuan and Zhang, Qi and Huang, Xuanjing and Wu, Libo",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.619/",
    pages = "7093--7098",
    abstract = "Existing research for argument representation learning mainly treats tokens in the sentence equally and ignores the implied structure information of argumentative context. In this paper, we propose to separate tokens into two groups, namely framing tokens and topic ones, to capture structural information of arguments. In addition, we consider high-level structure by incorporating paragraph-level position information. A novel structure-aware argument encoder is proposed for literature discourse analysis. Experimental results on both a self-constructed corpus and a public corpus show the effectiveness of our model. Resources are available at \url{https://github.com/lemuria-wchen/SAE}.",
}
__index_level_0__: 29,063
@inproceedings{luo-etal-2022-mere,
    title = "Mere Contrastive Learning for Cross-Domain Sentiment Analysis",
    author = "Luo, Yun and Guo, Fang and Liu, Zihan and Zhang, Yue",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.620/",
    pages = "7099--7111",
    abstract = "Cross-domain sentiment analysis aims to predict the sentiment of texts in the target domain using the model trained on the source domain to cope with the scarcity of labeled data. Previous studies are mostly cross-entropy-based methods for the task, which suffer from instability and poor generalization. In this paper, we explore contrastive learning on the cross-domain sentiment analysis task. We propose a modified contrastive objective with in-batch negative samples so that the sentence representations from the same class can be pushed close while those from the different classes become further apart in the latent space. Experiments on two widely used datasets show that our model can achieve state-of-the-art performance in both cross-domain and multi-domain sentiment analysis tasks. Meanwhile, visualizations demonstrate the effectiveness of transferring knowledge learned in the source domain to the target domain and the adversarial test verifies the robustness of our model.",
}
__index_level_0__: 29,064
@inproceedings{luo-etal-2022-exploiting,
    title = "Exploiting Sentiment and Common Sense for Zero-shot Stance Detection",
    author = "Luo, Yun and Liu, Zihan and Shi, Yuefeng and Li, Stan Z. and Zhang, Yue",
    editor = "Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.621/",
    pages = "7112--7123",
    abstract = "The stance detection task aims to classify the stance toward given documents and topics. Since the topics can be implicit in documents and unseen in training data for zero-shot settings, we propose to boost the transferability of the stance detection model by using sentiment and commonsense knowledge, which are seldom considered in previous studies. Our model includes a graph autoencoder module to obtain commonsense knowledge and a stance detection module with sentiment and commonsense. Experimental results show that our model outperforms the state-of-the-art methods on the zero-shot and few-shot benchmark dataset{--}VAST. Meanwhile, ablation studies prove the significance of each module in our model. Analysis of the relations between sentiment, common sense, and stance indicates the effectiveness of sentiment and common sense.",
}
__index_level_0__: 29,065
inproceedings
lin-etal-2022-modeling
Modeling Intra- and Inter-Modal Relations: Hierarchical Graph Contrastive Learning for Multimodal Sentiment Analysis
Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon
oct
2022
Gyeongju, Republic of Korea
International Committee on Computational Linguistics
https://aclanthology.org/2022.coling-1.622/
Lin, Zijie and Liang, Bin and Long, Yunfei and Dang, Yixue and Yang, Min and Zhang, Min and Xu, Ruifeng
Proceedings of the 29th International Conference on Computational Linguistics
7124--7135
The existing research efforts in Multimodal Sentiment Analysis (MSA) have focused on developing the expressive ability of neural networks to fuse information from different modalities. However, these approaches lack a mechanism to understand the complex relations within and across different modalities, since some sentiments may be scattered in different modalities. To this end, in this paper, we propose a novel hierarchical graph contrastive learning (HGraph-CL) framework for MSA, aiming to explore the intricate relations of intra- and inter-modal representations for sentiment extraction. Specifically, at the intra-modal level, we build a unimodal graph for each modality representation to account for the modality-specific sentiment implications. Based on it, a graph contrastive learning strategy is adopted to explore the potential relations based on unimodal graph augmentations. Furthermore, we construct a multimodal graph for each instance based on the unimodal graphs to grasp the sentiment relations between different modalities. Then, in light of the multimodal augmentation graphs, a graph contrastive learning strategy over the inter-modal level is proposed to further seek the possible graph structures for precisely learning sentiment relations. This essentially allows the framework to understand the appropriate graph structures for learning intricate relations among different modalities. Experimental results on two benchmark datasets show that the proposed framework outperforms the state-of-the-art baselines in MSA.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,066
inproceedings
li-etal-2022-amoa
{AMOA}: Global Acoustic Feature Enhanced Modal-Order-Aware Network for Multimodal Sentiment Analysis
Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon
oct
2022
Gyeongju, Republic of Korea
International Committee on Computational Linguistics
https://aclanthology.org/2022.coling-1.623/
Li, Ziming and Zhou, Yan and Zhang, Weibo and Liu, Yaxin and Yang, Chuanpeng and Lian, Zheng and Hu, Songlin
Proceedings of the 29th International Conference on Computational Linguistics
7136--7146
In recent years, multimodal sentiment analysis (MSA), which aims to predict the sentiment polarity expressed in a video, has attracted more and more interest. Existing methods typically 1) treat the three modal features (textual, acoustic, visual) equally, without distinguishing the importance of different modalities; and 2) split the video into frames, which leads to the loss of global acoustic information. In this paper, we propose a global Acoustic feature enhanced Modal-Order-Aware network (AMOA) to address these problems. Firstly, a modal-order-aware network is designed to obtain the multimodal fusion feature. This network integrates the three modalities in a certain order, which makes the modality at the core position matter more. Then, we introduce the global acoustic feature of the whole video into our model. Since the global acoustic feature and the multimodal fusion feature originally reside in their own spaces, contrastive learning is further employed to align them before concatenation. Experiments on two public datasets show that our model outperforms the state-of-the-art models. In addition, we also generalize our model to sentiment with more complex semantics, such as sarcasm detection. Our model also achieves state-of-the-art performance on a widely used sarcasm dataset.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,067
inproceedings
veyseh-etal-2022-keyphrase
Keyphrase Prediction from Video Transcripts: New Dataset and Directions
Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon
oct
2022
Gyeongju, Republic of Korea
International Committee on Computational Linguistics
https://aclanthology.org/2022.coling-1.624/
Veyseh, Amir Pouran Ben and Tran, Quan Hung and Yoon, Seunghyun and Manjunatha, Varun and Deilamsalehy, Hanieh and Jain, Rajiv and Bui, Trung and Chang, Walter W. and Dernoncourt, Franck and Nguyen, Thien Huu
Proceedings of the 29th International Conference on Computational Linguistics
7147--7155
Keyphrase Prediction (KP) is an established NLP task, aiming to yield representative phrases to summarize the main content of a given document. Despite major progress in recent years, existing works on KP have mainly focused on formal texts such as scientific papers or weblogs. The challenges of KP in informal-text domains are not yet fully studied. To this end, this work studies new challenges of KP in transcripts of videos, an understudied domain for KP that involves informal texts and non-cohesive presentation styles. A bottleneck for KP research in this domain involves the lack of high-quality and large-scale annotated data that hinders the development of advanced KP models. To address this issue, we introduce a large-scale manually-annotated KP dataset in the domain of live-stream video transcripts obtained by automatic speech recognition tools. Concretely, transcripts of 500+ hours of videos streamed on the behance.net platform are manually labeled with important keyphrases. Our analysis of the dataset reveals the challenging nature of KP in transcripts. Moreover, for the first time in KP, we demonstrate the idea of improving KP for long documents (i.e., transcripts) by feeding models with paragraph-level keyphrases, i.e., hierarchical extraction. To foster future research, we will publicly release the dataset and code.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,068
inproceedings
veyseh-etal-2022-event
Event Extraction in Video Transcripts
Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon
oct
2022
Gyeongju, Republic of Korea
International Committee on Computational Linguistics
https://aclanthology.org/2022.coling-1.625/
Veyseh, Amir Pouran Ben and Lai, Viet Dac and Dernoncourt, Franck and Nguyen, Thien Huu
Proceedings of the 29th International Conference on Computational Linguistics
7156--7165
Event extraction (EE) is one of the fundamental tasks for information extraction whose goal is to identify mentions of events and their participants in text. Due to its importance, different methods and datasets have been introduced for EE. However, existing EE datasets are limited to formally written documents such as news articles or scientific papers. As such, the challenges of EE in informal and noisy texts are not adequately studied. In particular, video transcripts constitute an important domain that can benefit tremendously from EE systems (e.g., video retrieval), but has not been studied in EE literature due to the lack of necessary datasets. To address this limitation, we propose the first large-scale EE dataset obtained for transcripts of streamed videos on the video hosting platform Behance to promote future research in this area. In addition, we extensively evaluate existing state-of-the-art EE methods on our new dataset. We demonstrate that such systems cannot achieve adequate performance on the proposed dataset, revealing challenges and opportunities for further research effort.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,069
inproceedings
cambara-etal-2022-recycle
Recycle Your {W}av2{V}ec2 Codebook: A Speech Perceiver for Keyword Spotting
Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon
oct
2022
Gyeongju, Republic of Korea
International Committee on Computational Linguistics
https://aclanthology.org/2022.coling-1.626/
C{\'a}mbara, Guillermo and Luque, Jordi and Farr{\'u}s, Mireia
Proceedings of the 29th International Conference on Computational Linguistics
7166--7170
Speech information in a pretrained wav2vec2.0 model is usually leveraged through its encoder, which has at least 95M parameters, making it not so suitable for small-footprint Keyword Spotting. In this work, we show an efficient way of profiting from wav2vec2.0's linguistic knowledge, by recycling the phonetic information encoded in its latent codebook, which has typically been thrown away after pretraining. We do so by transferring the codebook as weights for the latent bottleneck of a Keyword Spotting Perceiver, thus initializing such a model with phonetic embeddings already. The Perceiver design relies on cross-attention between these embeddings and input data to generate better representations. Our method delivers accuracy gains compared to random initialization, at no latency cost. Plus, we show that the phonetic embeddings can easily be downsampled with k-means clustering, speeding up inference by 3.5 times at only a slight accuracy penalty.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,070
inproceedings
chi-bell-2022-improving
Improving Code-switched {ASR} with Linguistic Information
Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon
oct
2022
Gyeongju, Republic of Korea
International Committee on Computational Linguistics
https://aclanthology.org/2022.coling-1.627/
Chi, Jie and Bell, Peter
Proceedings of the 29th International Conference on Computational Linguistics
7171--7176
This paper seeks to improve the performance of automatic speech recognition (ASR) systems operating on code-switched speech. Code-switching refers to the alternation of languages within a conversation, a phenomenon that is of increasing importance considering the rapid rise in the number of bilingual speakers in the world. It is particularly challenging for ASR owing to the relative scarcity of code-switching speech and text data, even when the individual languages are themselves well-resourced. This paper proposes to overcome this challenge by applying linguistic theories in order to generate more realistic code-switching text, necessary for language modelling in ASR. Working with English-Spanish code-switching, we find that Equivalence Constraint theory and part-of-speech labelling are particularly helpful for text generation, and bring a 2{\%} improvement to ASR performance.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,071
inproceedings
choe-etal-2022-language
Language-specific Effects on Automatic Speech Recognition Errors for World Englishes
Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon
oct
2022
Gyeongju, Republic of Korea
International Committee on Computational Linguistics
https://aclanthology.org/2022.coling-1.628/
Choe, June and Chen, Yiran and Chan, May Pik Yu and Li, Aini and Gao, Xin and Holliday, Nicole
Proceedings of the 29th International Conference on Computational Linguistics
7177--7186
Despite recent advancements in automated speech recognition (ASR) technologies, reports of unequal performance across speakers of different demographic groups abound. At the same time, the focus on performance metrics such as the Word Error Rate (WER) in prior studies limits the specificity and scope of recommendations that can be offered for system engineering to overcome these challenges. The current study bridges this gap by investigating the performance of Otter's automatic captioning system on native and non-native English speakers of different language backgrounds through a linguistic analysis of segment-level errors. By examining language-specific error profiles for vowels and consonants motivated by linguistic theory, we find that certain categories of errors can be predicted from the phonological structure of a speaker's native language.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,072
inproceedings
chen-etal-2022-transformer
A Transformer-based Threshold-Free Framework for Multi-Intent {NLU}
Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon
oct
2022
Gyeongju, Republic of Korea
International Committee on Computational Linguistics
https://aclanthology.org/2022.coling-1.629/
Chen, Lisung and Chen, Nuo and Zou, Yuexian and Wang, Yong and Sun, Xinzhong
Proceedings of the 29th International Conference on Computational Linguistics
7187--7192
Multi-intent natural language understanding (NLU) has recently gained attention. It detects multiple intents in an utterance, which is better suited to real-world scenarios. However, state-of-the-art joint NLU models mainly detect multiple intents with a threshold-based strategy, resulting in one main issue: the model is extremely sensitive to the threshold settings. In this paper, we propose a transformer-based Threshold-Free Multi-intent NLU model (TFMN) with multi-task learning (MTL). Specifically, we first leverage multiple layers of a transformer-based encoder to generate multi-grain representations. Then we exploit the information on the number of intents in each utterance without additional manual annotations and propose an auxiliary detection task: Intent Number detection (IND). Furthermore, we propose a threshold-free multi-intent classifier that utilizes the output of the IND task and detects multiple intents without depending on a threshold. Extensive experiments demonstrate that our proposed model achieves superior results on two public multi-intent datasets.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,073
inproceedings
chen-etal-2022-unsupervised-multi
Unsupervised Multi-scale Expressive Speaking Style Modeling with Hierarchical Context Information for Audiobook Speech Synthesis
Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon
oct
2022
Gyeongju, Republic of Korea
International Committee on Computational Linguistics
https://aclanthology.org/2022.coling-1.630/
Chen, Xueyuan and Lei, Shun and Wu, Zhiyong and Xu, Dong and Zhao, Weifeng and Meng, Helen
Proceedings of the 29th International Conference on Computational Linguistics
7193--7202
Naturalness and expressiveness are crucial for audiobook speech synthesis, but are currently limited by averaged global-scale speaking style representations. In this paper, we propose an unsupervised multi-scale context-sensitive text-to-speech model for audiobooks. A multi-scale hierarchical context encoder is specially designed to predict both a global-scale context style embedding and a local-scale context style embedding from a wider context of the input text in a hierarchical manner. Likewise, a multi-scale reference encoder is introduced to extract reference style embeddings at both global and local scales from the reference speech, which are used to guide the prediction of speaking styles. On top of these, a bi-reference attention mechanism is used to align both the local-scale reference style embedding sequence and the local-scale context style embedding sequence with the corresponding phoneme embedding sequence. Both objective and subjective experimental results on a real-world multi-speaker Mandarin novel audio dataset demonstrate the excellent performance of our proposed method over all baselines in terms of naturalness and expressiveness of the synthesized speech.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,074
inproceedings
wu-etal-2022-incorporating
Incorporating Instructional Prompts into a Unified Generative Framework for Joint Multiple Intent Detection and Slot Filling
Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon
oct
2022
Gyeongju, Republic of Korea
International Committee on Computational Linguistics
https://aclanthology.org/2022.coling-1.631/
Wu, Yangjun and Wang, Han and Zhang, Dongxiang and Chen, Gang and Zhang, Hao
Proceedings of the 29th International Conference on Computational Linguistics
7203--7208
The joint multiple Intent Detection (ID) and Slot Filling (SF) task is a significant challenge in spoken language understanding. Because the slots in an utterance may relate to multiple intents, most existing approaches focus on utilizing task-specific components to capture the relations between intents and slots. The customized networks restrict models from modeling commonalities between tasks and limit generalization for broader applications. To address the above issue, we propose a Unified Generative framework (UGEN) based on a prompt-based paradigm, and formulate the task as a question-answering problem. Specifically, we design five types of templates as instructional prompts; each template includes a question that acts as the driver to teach UGEN to grasp the paradigm, options that list the candidate intents or slots to reduce the answer search space, and the context, i.e., the original utterance. Through the instructional prompts, UGEN is guided to understand intents, slots, and their implicit correlations. On two popular multi-intent benchmark datasets, experimental results demonstrate that UGEN achieves new SOTA performance on full data and surpasses the baselines by a large margin in 5-shot (28.1{\%}) and 10-shot (23{\%}) scenarios, which verifies that UGEN is robust and effective.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,075
inproceedings
wang-etal-2022-adaptive
Adaptive Unsupervised Self-training for Disfluency Detection
Calzolari, Nicoletta and Huang, Chu-Ren and Kim, Hansaem and Pustejovsky, James and Wanner, Leo and Choi, Key-Sun and Ryu, Pum-Mo and Chen, Hsin-Hsi and Donatelli, Lucia and Ji, Heng and Kurohashi, Sadao and Paggio, Patrizia and Xue, Nianwen and Kim, Seokhwan and Hahm, Younggyun and He, Zhong and Lee, Tony Kyungil and Santus, Enrico and Bond, Francis and Na, Seung-Hoon
oct
2022
Gyeongju, Republic of Korea
International Committee on Computational Linguistics
https://aclanthology.org/2022.coling-1.632/
Wang, Zhongyuan and Wang, Yixuan and Wang, Shaolei and Che, Wanxiang
Proceedings of the 29th International Conference on Computational Linguistics
7209--7218
Supervised methods have achieved remarkable results in disfluency detection. However, in real-world scenarios, human-annotated data is difficult to obtain. Recent works try to handle disfluency detection with unsupervised self-training, which can exploit existing large-scale unlabeled data efficiently. However, these self-training-based methods suffer from the problems of selection bias and error accumulation. To tackle these problems, we propose an adaptive unsupervised self-training method for disfluency detection. Specifically, we re-weight the importance of each training example according to its grammatical features and prediction confidence. Experiments on the Switchboard dataset show that our method improves by 2.3 points over the current SOTA unsupervised method. Moreover, our method is competitive with the SOTA supervised method.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,076
inproceedings
hollenstein-etal-2022-patterns
Patterns of Text Readability in Human and Predicted Eye Movements
Zock, Michael and Chersoni, Emmanuele and Hsu, Yu-Yin and Santus, Enrico
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.cogalex-1.1/
Hollenstein, Nora and Gonzalez-Dios, Itziar and Beinborn, Lisa and J{\"a}ger, Lena
Proceedings of the Workshop on Cognitive Aspects of the Lexicon
1--15
It has been shown that multilingual transformer models are able to predict human reading behavior when fine-tuned on small amounts of eye tracking data. As the cumulated prediction results do not provide insights into the linguistic cues that the model acquires to predict reading behavior, we conduct a deeper analysis of the predictions from the perspective of readability. We try to disentangle the three-fold relationship between human eye movements, the capability of language models to predict these eye movement patterns, and sentence-level readability measures for English. We compare a range of model configurations to multiple baselines. We show that the models exhibit difficulties with function words and that pre-training only provides limited advantages for linguistic generalization.
null
null
10.18653/v1/2022.cogalex-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,078
inproceedings
kong-hsu-2022-alienable
(In)Alienable Possession in {M}andarin Relative Clauses
Zock, Michael and Chersoni, Emmanuele and Hsu, Yu-Yin and Santus, Enrico
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.cogalex-1.2/
Kong, Deran and Hsu, Yu-Yin
Proceedings of the Workshop on Cognitive Aspects of the Lexicon
16--24
Inalienable possession differs from alienable possession in that, in the former {--} e.g., kinships and part-whole relations {--} there is an intrinsic semantic dependency between the possessor and possessum. This paper reports two studies that used acceptability-judgment tasks to investigate whether native Mandarin speakers experienced different levels of interpretational costs while resolving different types of possessive relations, i.e., inalienable possessions (kinship terms and body parts) and alienable ones, expressed within relative clauses. The results show that sentences received higher acceptability ratings when body parts were the possessum as compared to sentences with alienable possessum, indicating that the inherent semantic dependency facilitates the resolution. However, inalienable kinship terms received the lowest acceptability ratings. We argue that this was because the kinship terms, which had the [+human] feature and appeared at the beginning of the experimental sentences, tended to be interpreted as the subject in shallow processing; these features contradicted the semantic-syntactic requirements of the experimental sentences.
null
null
10.18653/v1/2022.cogalex-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,079
inproceedings
momenian-2022-age
Do Age of Acquisition and Orthographic Transparency Have the Same Effects in Different Modalities?
Zock, Michael and Chersoni, Emmanuele and Hsu, Yu-Yin and Santus, Enrico
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.cogalex-1.3/
Momenian, Mohammad
Proceedings of the Workshop on Cognitive Aspects of the Lexicon
25--30
This paper is intended to study the effects of age of acquisition (AoA) and orthographic transparency on word retrieval in Persian, which is an understudied language. A naming task (both pictures and words) and a recall task (both pictures and words) were used to explore how lexical retrieval and verbal memory are affected by AoA and transparency. Seventy-two native speakers of Persian were recruited to participate in two experiments. The results showed that early-acquired words are processed faster than late-acquired words only when pictures were used as stimuli. Transparency of the words was not an influential factor. However, in the recall experiment a three-way interaction was observed: early-acquired pictures and words were processed faster than late-acquired stimuli, except for the words in the transparent condition. The findings speak to the fact that language-specific properties are very important.
null
null
10.18653/v1/2022.cogalex-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,080
inproceedings
dominguez-orfila-etal-2022-cat
{CAT} {M}any{N}ames: A New Dataset for Object Naming in {C}atalan
Zock, Michael and Chersoni, Emmanuele and Hsu, Yu-Yin and Santus, Enrico
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.cogalex-1.4/
Dom{\'i}nguez Orfila, Mar and Melero Nogu{\'e}s, Maite and Boleda Torrent, Gemma
Proceedings of the Workshop on Cognitive Aspects of the Lexicon
31--36
Object Naming is an important task within the field of Language and Vision that consists of generating a correct and appropriate name for an object given an image. The ManyNames dataset uses real-world human-annotated images with multiple labels, instead of just one. In this work, we describe the adaptation of this dataset (originally in English) to Catalan, by (i) machine-translating the English labels and (ii) collecting human annotations for a subset of the original corpus, and comparing both resources. Analyses reveal divergences in the lexical variation of the two sets, showing potential problems with directly translated resources, particularly when there is no access to a proper context, which in this case is conveyed by the image. The analysis also points to the impact of cultural factors on the naming task, which should be accounted for in future cross-lingual naming tasks.
null
null
10.18653/v1/2022.cogalex-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,081
inproceedings
lendvai-wick-2022-finetuning
Finetuning {L}atin {BERT} for Word Sense Disambiguation on the Thesaurus Linguae Latinae
Zock, Michael and Chersoni, Emmanuele and Hsu, Yu-Yin and Santus, Enrico
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.cogalex-1.5/
Lendvai, Piroska and Wick, Claudia
Proceedings of the Workshop on Cognitive Aspects of the Lexicon
37--41
The Thesaurus Linguae Latinae (TLL) is a comprehensive monolingual dictionary that records contextualized meanings and usages of Latin words in antique sources at an unprecedented scale. We created a new dataset based on a subset of sense representations in the TLL, with which we finetuned the Latin-BERT neural language model (Bamman and Burns, 2020) on a supervised Word Sense Disambiguation task. We observe that the contextualized BERT representations finetuned on TLL data score better than static embeddings used in a bidirectional LSTM classifier on the same dataset, and that our per-lemma BERT models achieve higher and more robust performance than reported by Bamman and Burns (2020) based on data from a bilingual Latin dictionary. We demonstrate the differences in sense organizational principles between these two lexical resources, and report about our dataset construction and improved evaluation methodology.
null
null
10.18653/v1/2022.cogalex-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,082
inproceedings
almeman-espinosa-anke-2022-putting
Putting {W}ord{N}et's Dictionary Examples in the Context of Definition Modelling: An Empirical Analysis
Zock, Michael and Chersoni, Emmanuele and Hsu, Yu-Yin and Santus, Enrico
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.cogalex-1.6/
Almeman, Fatemah and Espinosa Anke, Luis
Proceedings of the Workshop on Cognitive Aspects of the Lexicon
42--48
Definition modeling is the task of generating a valid definition for a given input term. This relatively novel task has been approached either with no context (i.e., given a word embedding alone) or, more recently, as word-in-context modeling. Despite their success, most works make little to no distinction between resources and their specific features (e.g., type and style of definitions, or quality of examples) when used for training. Given the high diversity lexicographic resources exhibit in terms of topic coverage, style and formal structure, it is desirable for downstream definition modeling to better understand which of them are better suited for the task. In this paper, we propose an empirical evaluation of the well-known lexical database WordNet, and specifically, its dictionary examples. We evaluate them both directly, by matching them against criteria for good dictionary writing, and indirectly, in the task of definition modeling. Our results suggest that WordNet's dictionary examples could be improved by extending them in length, and by incorporating prototypicality.
null
null
10.18653/v1/2022.cogalex-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,083
inproceedings
liu-chersoni-2022-exploring
Exploring Nominal Coercion in Semantic Spaces with Static and Contextualized Word Embeddings
Zock, Michael and Chersoni, Emmanuele and Hsu, Yu-Yin and Santus, Enrico
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.cogalex-1.7/
Liu, Chenxin and Chersoni, Emmanuele
Proceedings of the Workshop on Cognitive Aspects of the Lexicon
49--57
The distinction between mass nouns and count nouns has a long history in formal semantics, and linguists have been trying to identify the semantic properties defining the two classes. However, they also recognized that both can undergo meaning shifts and be used in contexts of a different type, via nominal coercion. In this paper, we present an approach to measure the meaning shift in count-mass coercion in English that makes use of static and contextualized word embedding distance. Our results show that the coercion shifts are detected only by a small subset of the traditional word embedding models, and that the shifts detected by the contextualized embedding of BERT are more pronounced for mass nouns.
null
null
10.18653/v1/2022.cogalex-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,084
inproceedings
long-etal-2022-frame
A Frame-Based Model of Inherent Polysemy, Copredication and Argument Coercion
Zock, Michael and Chersoni, Emmanuele and Hsu, Yu-Yin and Santus, Enrico
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.cogalex-1.8/
Long, Chen and Kallmeyer, Laura and Osswald, Rainer
Proceedings of the Workshop on Cognitive Aspects of the Lexicon
58--67
The paper presents a frame-based model of inherently polysemous nouns (such as {\textquoteleft}book', which denotes both a physical object and an informational content) in which the meaning facets are directly accessible via attributes and which also takes into account the semantic relations between the facets. Predication over meaning facets (as in {\textquoteleft}memorize the book') is then modeled as targeting the value of the corresponding facet attribute while coercion (as in {\textquoteleft}finish the book') is modeled via specific patterns that enrich the predication. We use a compositional framework whose basic components are lexicalized syntactic trees paired with semantic frames and in which frame unification is triggered by tree composition. The approach is applied to a variety of combinations of predications over meaning facets and coercions.
null
null
10.18653/v1/2022.cogalex-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,085
inproceedings
winiwarter-wloka-2022-viscose
{VISCOSE} - a Kanji Dictionary Enriched with {VIS}ual, {CO}mpositional, and {SE}mantic Information
Zock, Michael and Chersoni, Emmanuele and Hsu, Yu-Yin and Santus, Enrico
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.cogalex-1.9/
Winiwarter, Werner and Wloka, Bartholom{\"a}us
Proceedings of the Workshop on Cognitive Aspects of the Lexicon
68--77
In this paper, we present a novel approach for building kanji dictionaries by enriching the lexical data of 3,500 kanji with images, structural decompositions, and semantically based cross-media mappings from the textual to the visual dimension. Our kanji dictionary is part of a Web-based contextual language learning environment based on augmented browsing technology. We display our multimodal kanji information as kanji cards in the Web browser, offering a versatile representation that can be integrated into other advanced creative language learning applications, such as memorization puzzles, creative storytelling assignments, or educational games.
null
null
10.18653/v1/2022.cogalex-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,086
inproceedings
rambelli-etal-2022-compositionality
Compositionality as an Analogical Process: Introducing {ANNE}
Zock, Michael and Chersoni, Emmanuele and Hsu, Yu-Yin and Santus, Enrico
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.cogalex-1.10/
Rambelli, Giulia and Chersoni, Emmanuele and Blache, Philippe and Lenci, Alessandro
Proceedings of the Workshop on Cognitive Aspects of the Lexicon
78--96
Usage-based constructionist approaches consider language a structured inventory of constructions, form-meaning pairings of different schematicity and complexity, and claim that the more a linguistic pattern is encountered, the more it becomes accessible to speakers. However, when an expression is unavailable, what processes underlie the interpretation? While traditional answers rely on the principle of compositionality, for which the meaning is built word-by-word and incrementally, usage-based theories argue that novel utterances are created based on previously experienced ones through analogy, mapping an existing structural pattern onto a novel instance. Starting from this theoretical perspective, we propose here a computational implementation of these assumptions. As the principle of compositionality has been used to generate distributional representations of phrases, we propose a neural network simulating the construction of phrasal embedding as an analogical process. Our framework, inspired by word2vec and computer vision techniques, was evaluated on tasks of generalization from existing vectors.
null
null
10.18653/v1/2022.cogalex-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,087
inproceedings
soubki-etal-2022-kojak
{KOJAK}: A New Corpus for Studying {G}erman Discourse Particle ja
Braud, Chloe and Hardmeier, Christian and Li, Junyi Jessy and Loaiciga, Sharid and Strube, Michael and Zeldes, Amir
oct
2022
Gyeongju, Republic of Korea and Online
International Conference on Computational Linguistics
https://aclanthology.org/2022.codi-1.1/
Soubki, Adil and Rambow, Owen and Kang, Chong
Proceedings of the 3rd Workshop on Computational Approaches to Discourse
1--6
In German, ja can be used as a discourse particle to indicate that a proposition, according to the speaker, is believed by both the speaker and audience. We use this observation to create KoJaK, a distantly-labeled English dataset derived from Europarl for studying when a speaker believes a statement to be common ground. This corpus is then analyzed to identify lexical choices in English that correspond with German ja. Finally, we perform experiments on the dataset to predict if an English clause corresponds to a German clause containing ja and achieve an F-measure of 75.3{\%} on a balanced test corpus.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,089
inproceedings
xing-etal-2022-improving
Improving Topic Segmentation by Injecting Discourse Dependencies
Braud, Chloe and Hardmeier, Christian and Li, Junyi Jessy and Loaiciga, Sharid and Strube, Michael and Zeldes, Amir
oct
2022
Gyeongju, Republic of Korea and Online
International Conference on Computational Linguistics
https://aclanthology.org/2022.codi-1.2/
Xing, Linzi and Huber, Patrick and Carenini, Giuseppe
Proceedings of the 3rd Workshop on Computational Approaches to Discourse
7--18
Recent neural supervised topic segmentation models achieve markedly superior effectiveness over unsupervised methods, owing to the availability of large-scale training corpora sampled from Wikipedia. These models may, however, suffer from limited robustness and transferability caused by exploiting simple linguistic cues for prediction while overlooking more important inter-sentential topical consistency. To address this issue, we present a discourse-aware neural topic segmentation model with the injection of above-sentence discourse dependency structures, to encourage the model to base its topic boundary predictions more on the topical consistency between sentences. Our empirical study on English evaluation datasets shows that injecting above-sentence discourse structures into a neural topic segmenter with our proposed strategy can substantially improve its performance on intra-domain and out-of-domain data, with little increase in model complexity.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,090
inproceedings
cho-etal-2022-evaluating
Evaluating How Users Game and Display Conversation with Human-Like Agents
Braud, Chloe and Hardmeier, Christian and Li, Junyi Jessy and Loaiciga, Sharid and Strube, Michael and Zeldes, Amir
oct
2022
Gyeongju, Republic of Korea and Online
International Conference on Computational Linguistics
https://aclanthology.org/2022.codi-1.3/
Cho, Won Ik and Kim, Soomin and Choi, Eujeong and Jeong, Younghoon
Proceedings of the 3rd Workshop on Computational Approaches to Discourse
19--27
Recently, with the advent of high-performance generative language models, artificial agents that communicate directly with users have become more human-like. This development allows users to perform a diverse range of trials with the agents, and the responses are sometimes displayed online by users who share or show off their experiences. In this study, we explore dialogues with a social chatbot uploaded to an online community, with the aim of understanding how users game human-like agents and display their conversations. Having done this, we assert that user postings can be investigated from two aspects, namely conversation topic and purpose of testing, and suggest a categorization scheme for the analysis. We analyze 639 dialogues to develop an annotation protocol for the evaluation, and measure the agreement to demonstrate its validity. We find that the dialogue content does not necessarily reflect the purpose of testing, and also that users come up with creative strategies to game the agent without being penalized.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,091
inproceedings
he-etal-2022-evaluating
Evaluating Discourse Cohesion in Pre-trained Language Models
Braud, Chloe and Hardmeier, Christian and Li, Junyi Jessy and Loaiciga, Sharid and Strube, Michael and Zeldes, Amir
oct
2022
Gyeongju, Republic of Korea and Online
International Conference on Computational Linguistics
https://aclanthology.org/2022.codi-1.4/
He, Jie and Long, Wanqiu and Xiong, Deyi
Proceedings of the 3rd Workshop on Computational Approaches to Discourse
28--34
Large pre-trained neural models have achieved remarkable success in natural language processing (NLP), inspiring a growing body of research analyzing their abilities from different aspects. In this paper, we propose a test suite to evaluate the cohesive ability of pre-trained language models. The test suite contains multiple cohesion phenomena between adjacent and non-adjacent sentences. We compare different pre-trained language models on these phenomena and analyze the experimental results, hoping that more attention can be given to discourse cohesion in the future. The built discourse cohesion test suite will be publicly available at \url{https://github.com/probe2/discourse_cohesion}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,092
inproceedings
shen-etal-2022-easy
Easy-First Bottom-Up Discourse Parsing via Sequence Labelling
Braud, Chloe and Hardmeier, Christian and Li, Junyi Jessy and Loaiciga, Sharid and Strube, Michael and Zeldes, Amir
oct
2022
Gyeongju, Republic of Korea and Online
International Conference on Computational Linguistics
https://aclanthology.org/2022.codi-1.5/
Shen, Andrew and Koto, Fajri and Lau, Jey Han and Baldwin, Timothy
Proceedings of the 3rd Workshop on Computational Approaches to Discourse
35--41
We propose a novel unconstrained bottom-up approach for rhetorical discourse parsing based on sequence labelling of adjacent pairs of discourse units (DUs), following the framework of Koto et al. (2021). We describe the unique training requirements of an unconstrained parser, and explore two different training procedures: (1) fixed left-to-right; and (2) random order in tree construction. Additionally, we introduce a novel dynamic oracle for unconstrained bottom-up parsing. Our proposed parser achieves competitive results for bottom-up rhetorical discourse parsing.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,093
inproceedings
lapshinova-koltunski-carl-2022-using
Using Translation Process Data to Explore Explicitation and Implicitation through Discourse Connectives
Braud, Chloe and Hardmeier, Christian and Li, Junyi Jessy and Loaiciga, Sharid and Strube, Michael and Zeldes, Amir
oct
2022
Gyeongju, Republic of Korea and Online
International Conference on Computational Linguistics
https://aclanthology.org/2022.codi-1.6/
Lapshinova-Koltunski, Ekaterina and Carl, Michael
Proceedings of the 3rd Workshop on Computational Approaches to Discourse
42--47
We look into English-German translation process data to analyse explicitation and implicitation phenomena involving discourse connectives. For this, we use the CRITT TPR-DB database, which contains translation process data with various features that elicit online translation behaviour. We explore the English-German part of the data for discourse connectives that are either omitted or inserted in the target, as well as cases where a weak signal is changed to a strong one, or the other way around. We determine several features that have an impact on cognitive effort during translation for explicitation and implicitation. Our results show that the cognitive load caused by implicitation and explicitation may depend on the discourse connectives used, as well as on the strength and the type of the relations the connectives convey.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,094
inproceedings
yung-etal-2022-label
Label distributions help implicit discourse relation classification
Braud, Chloe and Hardmeier, Christian and Li, Junyi Jessy and Loaiciga, Sharid and Strube, Michael and Zeldes, Amir
oct
2022
Gyeongju, Republic of Korea and Online
International Conference on Computational Linguistics
https://aclanthology.org/2022.codi-1.7/
Yung, Frances and Anuranjana, Kaveri and Scholman, Merel and Demberg, Vera
Proceedings of the 3rd Workshop on Computational Approaches to Discourse
48--53
Implicit discourse relations can convey more than one relation sense, but much of the research on discourse relations has focused on single relation senses. Recently, DiscoGeM, a novel multi-domain corpus which contains 10 crowd-sourced labels per relational instance, has become available. In this paper, we analyse the co-occurrences of relations in DiscoGeM and show that they are systematic and characteristic of text genre. We then test whether information on multi-label distributions in the data can help implicit relation classifiers. Our results show that incorporating multiple labels in parser training can improve its performance, and yield label distributions which are more similar to human label distributions, compared to a parser that is trained on just a single most frequent label per instance.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,095
inproceedings
kikteva-etal-2022-keystone
The Keystone Role Played by Questions in Debate
Braud, Chloe and Hardmeier, Christian and Li, Junyi Jessy and Loaiciga, Sharid and Strube, Michael and Zeldes, Amir
oct
2022
Gyeongju, Republic of Korea and Online
International Conference on Computational Linguistics
https://aclanthology.org/2022.codi-1.8/
Kikteva, Zlata and Gorska, Kamila and Siskou, Wassiliki and Hautli-Janisz, Annette and Reed, Chris
Proceedings of the 3rd Workshop on Computational Approaches to Discourse
54--63
Building on the recent results of a study into the roles that are played by questions in argumentative dialogue (Hautli-Janisz et al., 2022a), we expand the analysis to investigate a newly released corpus that constitutes the largest extant corpus of closely annotated debate. Questions play a critical role in driving dialogical discourse forward; in combative or critical discursive environments, they not only provide a range of discourse management techniques, they also scaffold the semantic structure of the positions that interlocutors develop. The boundaries, however, between providing substantive answers to questions, merely responding to questions, and evading questions entirely, are fuzzy, and the way in which answers, responses and evasions affect the subsequent development of dialogue and argumentation structure is poorly understood. In this paper, we explore how questions have ramifications on the large-scale structure of a debate, using as our substrate the BBC television programme Question Time, the foremost topical debate show in the UK. Analysis of the data demonstrates not only that questioning plays a particularly prominent role in such debate, but also that its repercussions can reverberate through a discourse.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,096
inproceedings
niklaus-etal-2022-shallow
Shallow Discourse Parsing for Open Information Extraction and Text Simplification
Braud, Chloe and Hardmeier, Christian and Li, Junyi Jessy and Loaiciga, Sharid and Strube, Michael and Zeldes, Amir
oct
2022
Gyeongju, Republic of Korea and Online
International Conference on Computational Linguistics
https://aclanthology.org/2022.codi-1.9/
Niklaus, Christina and Freitas, Andr{\'e} and Handschuh, Siegfried
Proceedings of the 3rd Workshop on Computational Approaches to Discourse
64--76
We present a discourse-aware text simplification (TS) approach that recursively splits and rephrases complex English sentences into a semantic hierarchy of simplified sentences. Using a set of linguistically principled transformation patterns, sentences are converted into a hierarchical representation in the form of core sentences and accompanying contexts that are linked via rhetorical relations. As opposed to previously proposed sentence splitting approaches, which commonly do not take into account discourse-level aspects, our TS approach preserves the semantic relationship of the decomposed constituents in the output. A comparative analysis with the annotations contained in RST-DT shows that we capture the contextual hierarchy between the split sentences with a precision of 89{\%} and reach an average precision of 69{\%} for the classification of the rhetorical relations that hold between them. Moreover, an integration into state-of-the-art Open Information Extraction (IE) systems reveals that when applying our TS approach as a pre-processing step, the generated relational tuples are enriched with additional meta information, resulting in a novel lightweight semantic representation for the task of Open IE.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,097
inproceedings
devatine-etal-2022-predicting
Predicting Political Orientation in News with Latent Discourse Structure to Improve Bias Understanding
Braud, Chloe and Hardmeier, Christian and Li, Junyi Jessy and Loaiciga, Sharid and Strube, Michael and Zeldes, Amir
oct
2022
Gyeongju, Republic of Korea and Online
International Conference on Computational Linguistics
https://aclanthology.org/2022.codi-1.10/
Devatine, Nicolas and Muller, Philippe and Braud, Chlo{\'e}
Proceedings of the 3rd Workshop on Computational Approaches to Discourse
77--85
With the growing number of information sources, the problem of media bias becomes worrying for a democratic society. This paper explores the task of predicting the political orientation of news articles, with the goal of analyzing how bias is expressed. We demonstrate that integrating rhetorical dimensions via latent structures over sub-sentential discourse units allows for large improvements, with a difference of +7.4 points between the base LSTM model and its discourse-based version, and a +3 point improvement over the previous BERT-based state-of-the-art model. We also argue that this gives a new, relevant handle for analyzing political bias in news articles.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,098
inproceedings
veron-etal-2022-attention
Attention Modulation for Zero-Shot Cross-Domain Dialogue State Tracking
Braud, Chloe and Hardmeier, Christian and Li, Junyi Jessy and Loaiciga, Sharid and Strube, Michael and Zeldes, Amir
oct
2022
Gyeongju, Republic of Korea and Online
International Conference on Computational Linguistics
https://aclanthology.org/2022.codi-1.11/
Veron, Mathilde and Galibert, Olivier and Bernard, Guillaume and Rosset, Sophie
Proceedings of the 3rd Workshop on Computational Approaches to Discourse
86--91
Dialog state tracking (DST) is a core step for task-oriented dialogue systems, aiming to track the user's current goal during a dialogue. Recently a special focus has been put on applying existing DST models to new domains, in other words performing zero-shot cross-domain transfer. While recent state-of-the-art models leverage large pre-trained language models, no work has been done on understanding and improving the results of initially developed zero-shot models like SUMBT. In this paper, we thus propose to improve SUMBT zero-shot results on MultiWOZ by using attention modulation during inference. This method improves SUMBT zero-shot results significantly on two domains and does not worsen the initial performance, with the great advantage of needing no additional training.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,099
inproceedings
soni-etal-2022-empirical
An Empirical Study of Topic Transition in Dialogue
Braud, Chloe and Hardmeier, Christian and Li, Junyi Jessy and Loaiciga, Sharid and Strube, Michael and Zeldes, Amir
oct
2022
Gyeongju, Republic of Korea and Online
International Conference on Computational Linguistics
https://aclanthology.org/2022.codi-1.12/
Soni, Mayank and Spillane, Brendan and Muckley, Leo and Cooney, Orla and Gilmartin, Emer and Saam, Christian and Cowan, Benjamin and Wade, Vincent
Proceedings of the 3rd Workshop on Computational Approaches to Discourse
92--99
Although topic transition has been studied in dialogue for decades, only a handful of corpus-based quantitative studies have been conducted to investigate the nature of topic transitions. Towards this end, this study annotates 215 conversations from the Switchboard corpus, performs quantitative analysis, and finds that 1) longer conversations consist of more topic transitions, 2) topic transitions are usually led by one participant, and 3) we found no pattern in the time-series progression of topic transitions. We also model topic transition with a precision of 91{\%}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,100
inproceedings
yu-etal-2022-codi
The {CODI}-{CRAC} 2022 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue
Yu, Juntao and Khosla, Sopan and Manuvinakurike, Ramesh and Levin, Lori and Ng, Vincent and Poesio, Massimo and Strube, Michael and Rose, Carolyn
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.codi-crac.1/
Yu, Juntao and Khosla, Sopan and Manuvinakurike, Ramesh and Levin, Lori and Ng, Vincent and Poesio, Massimo and Strube, Michael and Ros{\'e}, Carolyn
Proceedings of the CODI-CRAC 2022 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue
1--14
The CODI-CRAC 2022 Shared Task on Anaphora Resolution in Dialogues is the second edition of an initiative focused on detecting different types of anaphoric relations in conversations of different kinds. Using five conversational datasets, four of which have been newly annotated with a wide range of anaphoric relations: identity, bridging references and discourse deixis, we defined multiple tasks focusing individually on these key relations. The second edition of the shared task maintained the focus on these relations and used the same datasets as in 2021, but new test data were annotated, the 2021 data were checked, and new subtasks were added. In this paper, we discuss the annotation schemes, the datasets, the evaluation scripts used to assess the system performance on these tasks, and provide a brief summary of the participating systems and the results obtained across 230 runs from three teams, with most submissions achieving significantly better results than our baseline methods.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,102
inproceedings
anikina-etal-2022-anaphora
Anaphora Resolution in Dialogue: System Description ({CODI}-{CRAC} 2022 Shared Task)
Yu, Juntao and Khosla, Sopan and Manuvinakurike, Ramesh and Levin, Lori and Ng, Vincent and Poesio, Massimo and Strube, Michael and Rose, Carolyn
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.codi-crac.2/
Anikina, Tatiana and Skachkova, Natalia and Renner, Joseph and Trivedi, Priyansh
Proceedings of the CODI-CRAC 2022 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue
15--27
We describe three models submitted for the CODI-CRAC 2022 shared task. To perform identity anaphora resolution, we test several combinations of the incremental clustering approach based on the Workspace Coreference System (WCS) with other coreference models. The best result is achieved by adding the {\textquotedblleft}cluster merging{\textquotedblright} version of the coref-hoi model, which brings up to 10.33{\%} improvement over vanilla WCS clustering. Discourse deixis resolution is implemented as multi-task learning: we combine the learning objective of coref-hoi with anaphor type classification. We adapt the higher-order resolution model introduced in Joshi et al. (2019) for bridging resolution given gold mentions and anaphors.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,103
inproceedings
kim-etal-2022-pipeline
Pipeline Coreference Resolution Model for Anaphoric Identity in Dialogues
Yu, Juntao and Khosla, Sopan and Manuvinakurike, Ramesh and Levin, Lori and Ng, Vincent and Poesio, Massimo and Strube, Michael and Rose, Carolyn
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.codi-crac.3/
Kim, Damrin and Park, Seongsik and Han, Mirae and Kim, Harksoo
Proceedings of the CODI-CRAC 2022 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue
28--31
CODI-CRAC 2022 Shared Task in Dialogues consists of three sub-tasks: sub-task 1 is the resolution of anaphoric identity, sub-task 2 is the resolution of bridging references, and sub-task 3 is the resolution of discourse deixis/abstract anaphora. Anaphora resolution is the task of detecting mentions in input documents and clustering mentions of the same entity. End-to-end models prune candidate mentions, and this pruning can remove correct mentions; such models also have high complexity and take a long time to train. Therefore, we approach anaphora resolution as a two-stage pipeline. In the first, mention detection step, the score of each candidate word span is calculated and mentions are predicted without pruning. In the second, anaphora resolution step, pairs of mentions standing in an anaphoric relationship are predicted from the mentions found in the first step. We propose a two-stage anaphora resolution pipeline model that reduces model complexity and training time while maintaining performance similar to end-to-end models. In our experiments, anaphora resolution achieved 68.27{\%} on Light, 48.87{\%} on AMI, 69.06{\%} on Persuasion, and 60.99{\%} on Switchboard. Our final system ranked 3rd on the leaderboard of sub-task 1.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,104
inproceedings
li-etal-2022-neural-anaphora
Neural Anaphora Resolution in Dialogue Revisited
Yu, Juntao and Khosla, Sopan and Manuvinakurike, Ramesh and Levin, Lori and Ng, Vincent and Poesio, Massimo and Strube, Michael and Rose, Carolyn
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.codi-crac.4/
Li, Shengjie and Kobayashi, Hideo and Ng, Vincent
Proceedings of the CODI-CRAC 2022 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue
32--47
We present the systems that we developed for all three tracks of the CODI-CRAC 2022 shared task, namely the anaphora resolution track, the bridging resolution track, and the discourse deixis resolution track. Combining an effective encoding of the input using the SpanBERT$_{\text{Large}}$ encoder with an extensive hyperparameter search process, our systems achieved the highest scores in all phases of all three tracks.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,105
inproceedings
pais-etal-2022-challenges
Challenges in Creating a Representative Corpus of {R}omanian Micro-Blogging Text
Banski, Piotr and Barbaresi, Adrien and Clematide, Simon and Kupietz, Marc and L{\"u}ngen, Harald
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.cmlc-1.1/
Pais, Vasile and Mitrofan, Maria and Barbu Mititelu, Verginica and Irimia, Elena and Micu, Roxana and Gasan, Carol Luca
Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-10)
1--7
Following the successful creation of a national representative corpus of contemporary Romanian language, we turned our attention to social media text, as present in micro-blogging platforms. In this paper, we present the current activities as well as the challenges faced when trying to apply existing tools (for both annotation and indexing) to a Romanian language micro-blogging corpus. These challenges are encountered at all annotation levels, including tokenization, and at the indexing stage. We consider that existing tools for Romanian language processing must be adapted to recognize features such as emoticons, emojis, hashtags, unusual abbreviations, elongated words (commonly used for emphasis in micro-blogging), multiple words joined together (within or outside hashtags), and code-mixed text.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,107
inproceedings
von-korff-2022-exhaustive
Exhaustive Indexing of {P}ub{M}ed Records with Medical Subject Headings
Banski, Piotr and Barbaresi, Adrien and Clematide, Simon and Kupietz, Marc and L{\"u}ngen, Harald
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.cmlc-1.2/
von Korff, Modest
Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-10)
8--15
With fourteen million publication records the PubMed database is one of the largest repositories in medical science. Analysing this database to relate biological targets to diseases is an important task in pharmaceutical research. We developed a software tool, MeSHTreeIndexer, for indexing the PubMed medical literature with disease terms. The disease terms were taken from the Medical Subject Heading (MeSH) Terms compiled by the National Institutes of Health (NIH) of the US. In a first semi-automatic step we identified about 5,900 terms as disease related. The MeSH terms contain so-called entry points that are synonymously used for the terms. We created an inverted index for these 5,900 MeSH terms and their 58,000 entry points. From the PubMed database fourteen million publication records were stored in Lucene. These publication records were tagged by the inverted MeSH term index. In this contribution we demonstrate that our approach provided a significantly higher enrichment in MeSH terms than the indexing of the PubMed records by the NIH themselves. Manual control proved that our enrichment is meaningful. Our software was written in Java and is available as open source.
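The indexing step described above reduces to a dictionary lookup once the term list is inverted. Below is a minimal Python sketch of that idea, assuming a tiny hypothetical `mesh_terms` table; the actual tool is written in Java on top of Lucene, so this is an illustration of the technique, not the authors' implementation.

```python
# Invert a (term -> entry points) table and tag free text with it.
# `mesh_terms` is a hypothetical two-entry stand-in for the ~5,900 terms
# and 58,000 entry points mentioned in the abstract.
mesh_terms = {
    "Diabetes Mellitus": ["diabetes", "diabetes mellitus"],
    "Hypertension": ["hypertension", "high blood pressure"],
}

inverted = {}
for term, entry_points in mesh_terms.items():
    for ep in entry_points:
        inverted[ep.lower()] = term   # every synonym points back to its term

def tag_record(text, max_ngram=4):
    """Return the set of MeSH terms whose entry points occur in `text`."""
    tokens = text.lower().split()
    hits = set()
    for n in range(1, max_ngram + 1):
        for i in range(len(tokens) - n + 1):
            phrase = " ".join(tokens[i:i + n])
            if phrase in inverted:
                hits.add(inverted[phrase])
    return hits

print(tag_record("effects of high blood pressure in diabetes patients"))
# -> {'Hypertension', 'Diabetes Mellitus'}
```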
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,108
inproceedings
diewald-2022-matrix
Matrix and Double-Array Representations for Efficient Finite State Tokenization
Banski, Piotr and Barbaresi, Adrien and Clematide, Simon and Kupietz, Marc and L{\"u}ngen, Harald
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.cmlc-1.4/
Diewald, Nils
Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-10)
20--26
This paper presents an algorithm and implementation for efficient tokenization of space-delimited languages based on a deterministic finite state automaton. Two representations of the underlying data structure are presented and a model implementation for German is compared with state-of-the-art approaches. The presented solution is faster than other tools while maintaining comparable quality.
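The matrix representation mentioned above can be pictured as a plain transition table indexed by state and character class. The following toy Python sketch tokenizes space-delimited text with such a table; the states, character classes, and rules are invented for this example (the paper's German model is far richer, and compact double-array encodings would replace the dense matrix in practice).

```python
# Toy deterministic finite state tokenizer driven by a transition matrix.
# Punctuation is simply skipped in this sketch.
LETTER, DIGIT, SPACE, PUNCT = range(4)

def char_class(c):
    if c.isalpha():
        return LETTER
    if c.isdigit():
        return DIGIT
    if c.isspace():
        return SPACE
    return PUNCT

# transitions[state][char_class] -> next state
# state 0 = outside a token, 1 = inside a word, 2 = inside a number
transitions = [
    [1, 2, 0, 0],
    [1, 1, 0, 0],  # digits may extend a word ("B52")
    [2, 2, 0, 0],  # letters may extend a number ("3rd")
]

def tokenize(text):
    tokens, state, start = [], 0, 0
    for i, c in enumerate(text):
        nxt = transitions[state][char_class(c)]
        if nxt != state:
            if state != 0:
                tokens.append(text[start:i])  # a token ends at this boundary
            start = i
        state = nxt
    if state != 0:
        tokens.append(text[start:])           # flush the final token
    return tokens

print(tokenize("Der Preis ist 3,50 Euro"))
# -> ['Der', 'Preis', 'ist', '3', '50', 'Euro']
```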
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,110
inproceedings
fankhauser-kupietz-2022-count
Count-Based and Predictive Language Models for Exploring {D}e{R}e{K}o
Banski, Piotr and Barbaresi, Adrien and Clematide, Simon and Kupietz, Marc and L{\"u}ngen, Harald
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.cmlc-1.5/
Fankhauser, Peter and Kupietz, Marc
Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-10)
27--31
We present the use of count-based and predictive language models for exploring language use in the German Reference Corpus DeReKo. For collocation analysis along the syntagmatic axis we employ traditional association measures based on co-occurrence counts as well as predictive association measures derived from the output weights of skipgram word embeddings. For inspecting the semantic neighbourhood of words along the paradigmatic axis we visualize the high dimensional word embeddings in two dimensions using t-stochastic neighbourhood embeddings. Together, these visualizations provide a complementary, explorative approach to analysing very large corpora in addition to corpus querying. Moreover, we discuss count-based and predictive models w.r.t. scalability and maintainability in very large corpora.
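For the count-based side of the comparison above, a standard association measure such as pointwise mutual information can be computed directly from co-occurrence counts. A minimal Python sketch on a toy corpus follows; DeReKo-scale analysis would naturally rely on precomputed frequency tables rather than in-memory counting, and the window size here is an illustrative assumption.

```python
# Pointwise mutual information (PMI) as a collocation measure from
# co-occurrence counts within a small right-context window.
import math
from collections import Counter

corpus = [
    "strong coffee tastes good",
    "strong tea tastes good",
    "powerful engine runs fast",
]

window_pairs, unigrams = Counter(), Counter()
for sentence in corpus:
    tokens = sentence.split()
    unigrams.update(tokens)
    for i, w in enumerate(tokens):
        for v in tokens[i + 1:i + 3]:      # window of 2 words to the right
            window_pairs[(w, v)] += 1

N = sum(unigrams.values())                 # total tokens
P = sum(window_pairs.values())             # total co-occurrence pairs

def pmi(w, v):
    joint = window_pairs[(w, v)] / P
    if joint == 0:
        return float("-inf")
    return math.log2(joint / ((unigrams[w] / N) * (unigrams[v] / N)))

print(round(pmi("strong", "coffee"), 2))   # positive: a collocation candidate
```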
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,111
inproceedings
biber-2022-word
{\textquotedblleft}The word expired when that world awoke.{\textquotedblright} New Challenges for Research with Large Text Corpora and Corpus-Based Discourse Studies in Totalitarian Times
Banski, Piotr and Barbaresi, Adrien and Clematide, Simon and Kupietz, Marc and L{\"u}ngen, Harald
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.cmlc-1.6/
Biber, Hanno
Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-10)
32--35
In the following poster proposal a report will be given on the prospects of a promising corpus project initiated around one of the large digital text corpora hosted by the Austrian Academy of Sciences. The resources of the AAC-Austrian Academy Corpus, founded in 2001, one of the very valuable examples of digital diachronic text corpora suitable for corpus-based discourse studies and lexicography based upon historical sources, can be used as a basis for answering new questions concerning the challenges of doing linguistic research with large digital text corpora in the context of studying totalitarian language use. These questions, as well as the chances and limits of such an approach, have obvious references to the historic events unfolding today as well as a clearly historical dimension, precisely because the digital text sources created to analyse the German language use of the Nazi period from 1933 to 1945 can be understood as a model for dealing with related questions of contemporary language use, particularly in the context of Russia's war of extermination in Ukraine in 2022 and how it is represented in contemporary media.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,112
inproceedings
merkx-etal-2022-seeing
Seeing the advantage: visually grounding word embeddings to better capture human semantic knowledge
Chersoni, Emmanuele and Hollenstein, Nora and Jacobs, Cassandra and Oseki, Yohei and Pr{\'e}vot, Laurent and Santus, Enrico
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.cmcl-1.1/
Merkx, Danny and Frank, Stefan and Ernestus, Mirjam
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
1--11
Distributional semantic models capture word-level meaning that is useful in many natural language processing tasks and have even been shown to capture cognitive aspects of word meaning. The majority of these models are purely text based, even though the human sensory experience is much richer. In this paper we create visually grounded word embeddings by combining English text and images and compare them to popular text-based methods, to see if visual information allows our model to better capture cognitive aspects of word meaning. Our analysis shows that visually grounded embedding similarities are more predictive of the human reaction times in a large priming experiment than the purely text-based embeddings. The visually grounded embeddings also correlate well with human word similarity ratings. Importantly, in both experiments we show that the grounded embeddings account for a unique portion of explained variance, even when we include text-based embeddings trained on huge corpora. This shows that visual grounding allows our model to capture information that cannot be extracted using text as the only source of information.
null
null
10.18653/v1/2022.cmcl-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,114
inproceedings
lappin-bernardy-2022-neural
A Neural Model for Compositional Word Embeddings and Sentence Processing
Chersoni, Emmanuele and Hollenstein, Nora and Jacobs, Cassandra and Oseki, Yohei and Pr{\'e}vot, Laurent and Santus, Enrico
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.cmcl-1.2/
Lappin, Shalom and Bernardy, Jean-Philippe
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
12--22
We propose a new neural model for word embeddings, which uses Unitary Matrices as the primary device for encoding lexical information. It uses simple matrix multiplication to derive matrices for larger units, yielding a sentence processing model that is strictly compositional, does not lose information over time steps, and is transparent, in the sense that word embeddings can be analysed regardless of context. This model does not employ activation functions, and so the network is fully accessible to analysis by the methods of linear algebra at each point in its operation on an input sequence. We test it in two NLP agreement tasks and obtain rule-like perfect accuracy, with greater stability than current state-of-the-art systems. Our proposed model goes some way towards offering a class of computationally powerful deep learning systems that can be fully understood and compared to human cognitive processes for natural language learning and representation.
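The strict compositionality claimed above can be illustrated in a few lines: if every word is a unitary (here, real orthogonal) matrix, a sentence is just the ordered matrix product, and no norm information is lost across time steps. The sketch below uses random orthogonal matrices as stand-ins for trained embeddings; it is an illustration of the idea, not the paper's trained model.

```python
# Words as orthogonal matrices; sentences as ordered matrix products.
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d):
    # QR decomposition of a random Gaussian matrix yields an orthogonal Q.
    q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    return q

vocab = {w: random_unitary(4) for w in ["the", "dog", "barks"]}

def sentence_matrix(words):
    m = np.eye(4)
    for w in words:
        m = m @ vocab[w]       # strictly compositional: plain multiplication
    return m

s = sentence_matrix(["the", "dog", "barks"])
# Unitarity is preserved under products: s @ s.T is (numerically) identity,
# so no information is "squashed away" over time steps.
print(np.allclose(s @ s.T, np.eye(4)))     # True
```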
null
null
10.18653/v1/2022.cmcl-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,115
inproceedings
lang-etal-2022-visually
Visually Grounded Interpretation of Noun-Noun Compounds in {E}nglish
Chersoni, Emmanuele and Hollenstein, Nora and Jacobs, Cassandra and Oseki, Yohei and Pr{\'e}vot, Laurent and Santus, Enrico
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.cmcl-1.3/
Lang, Inga and Plas, Lonneke and Nissim, Malvina and Gatt, Albert
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
23--35
Noun-noun compounds (NNCs) occur frequently in the English language. Accurate NNC interpretation, i.e. determining the implicit relationship between the constituents of a NNC, is crucial for the advancement of many natural language processing tasks. Until now, computational NNC interpretation has been limited to approaches involving linguistic representations only. However, much research suggests that grounding linguistic representations in vision or other modalities can increase performance on this and other tasks. Our work is a novel comparison of linguistic and visuo-linguistic representations for the task of NNC interpretation. We frame NNC interpretation as a relation classification task, evaluating on a large, relationally-annotated NNC dataset. We combine distributional word vectors with image vectors to investigate how visual information can help improve NNC interpretation systems. We find that adding visual vectors increases classification performance on our dataset in many cases.
null
null
10.18653/v1/2022.cmcl-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,116
inproceedings
takmaz-etal-2022-less
Less Descriptive yet Discriminative: Quantifying the Properties of Multimodal Referring Utterances via {CLIP}
Chersoni, Emmanuele and Hollenstein, Nora and Jacobs, Cassandra and Oseki, Yohei and Pr{\'e}vot, Laurent and Santus, Enrico
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.cmcl-1.4/
Takmaz, Ece and Pezzelle, Sandro and Fern{\'a}ndez, Raquel
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
36--42
In this work, we use a transformer-based pre-trained multimodal model, CLIP, to shed light on the mechanisms employed by human speakers when referring to visual entities. In particular, we use CLIP to quantify the degree of descriptiveness (how well an utterance describes an image in isolation) and discriminativeness (to what extent an utterance is effective in picking out a single image among similar images) of human referring utterances within multimodal dialogues. Overall, our results show that utterances become less descriptive over time while their discriminativeness remains unchanged. Through analysis, we propose that this trend could be due to participants relying on the previous mentions in the dialogue history, as well as being able to distill the most discriminative information from the visual context. In general, our study opens up the possibility of using this and similar models to quantify patterns in human data and shed light on the underlying cognitive mechanisms.
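The two measures above can be phrased as simple functions of a text-image similarity score. The sketch below assumes a hypothetical `clip_sim` stub in place of a real CLIP forward pass, and a margin-based definition of discriminativeness that is our illustrative reading, not necessarily the paper's exact formula.

```python
# Descriptiveness and discriminativeness as functions of a similarity score.
import numpy as np

def clip_sim(utterance, image):
    # Stand-in for a real CLIP text-image cosine similarity; replace with an
    # actual model. Deterministic per (utterance, image) within one run.
    rng = np.random.default_rng(abs(hash((utterance, image))) % 2**32)
    return float(rng.uniform(0, 1))

def descriptiveness(utterance, target_image):
    # How well the utterance describes the target image in isolation.
    return clip_sim(utterance, target_image)

def discriminativeness(utterance, target_image, distractor_images):
    # Does the utterance pick out the target among similar candidates?
    scores = [clip_sim(utterance, img)
              for img in [target_image] + distractor_images]
    return scores[0] - max(scores[1:])   # margin over the best distractor

print(descriptiveness("the dog on the left", "img_07"))
print(discriminativeness("the dog on the left", "img_07", ["img_03", "img_11"]))
```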
null
null
10.18653/v1/2022.cmcl-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,117
inproceedings
cserhati-etal-2022-codenames
Codenames as a Game of Co-occurrence Counting
Chersoni, Emmanuele and Hollenstein, Nora and Jacobs, Cassandra and Oseki, Yohei and Pr{\'e}vot, Laurent and Santus, Enrico
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.cmcl-1.5/
Cserh{\'a}ti, R{\'e}ka and Kollath, Istvan and Kicsi, Andr{\'a}s and Berend, G{\'a}bor
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
43--53
Codenames is a popular board game, in which knowledge and cooperation between players play an important role. The task of a player playing as a spymaster is to find words (clues) that a teammate finds related to as many of some given words as possible, but not to other specified words. This is a hard challenge even with today's advanced language technology methods. In our study, we create spymaster agents using four types of relatedness measures that require only a raw text corpus to produce. These include newly introduced ones based on co-occurrences, which outperform FastText cosine similarity on gold standard relatedness data. To generate clues in Codenames, we combine relatedness measures with four different scoring functions, for two languages, English and Hungarian. For testing, we collect decisions of human guesser players in an online game, and our configurations outperform previous agents among methods using raw corpora only.
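A spymaster agent of the kind described above boils down to scoring candidate clues against target and avoid words with a relatedness measure. The toy Python sketch below uses a hand-filled relatedness table and one possible scoring function (minimum over targets minus maximum over avoid words); both are illustrative assumptions, as the paper compares several corpus-derived measures and four scoring functions.

```python
# Pick the clue that covers all targets while steering clear of avoid words.
relatedness = {
    ("ocean", "wave"): 0.9, ("ocean", "beach"): 0.8, ("ocean", "bank"): 0.2,
    ("money", "wave"): 0.1, ("money", "beach"): 0.2, ("money", "bank"): 0.9,
}

def score(clue, targets, avoids):
    t = min(relatedness.get((clue, w), 0.0) for w in targets)
    a = max(relatedness.get((clue, w), 0.0) for w in avoids)
    return t - a        # reward covering every target, penalise avoid words

candidates = ["ocean", "money"]
best = max(candidates, key=lambda c: score(c, ["wave", "beach"], ["bank"]))
print(best)             # -> ocean
```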
null
null
10.18653/v1/2022.cmcl-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,118
inproceedings
futrell-2022-estimating
Estimating word co-occurrence probabilities from pretrained static embeddings using a log-bilinear model
Chersoni, Emmanuele and Hollenstein, Nora and Jacobs, Cassandra and Oseki, Yohei and Pr{\'e}vot, Laurent and Santus, Enrico
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.cmcl-1.6/
Futrell, Richard
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
54--60
We investigate how to use pretrained static word embeddings to deliver improved estimates of bilexical co-occurrence probabilities: conditional probabilities of one word given a single other word in a specific relationship. Such probabilities play important roles in psycholinguistics, corpus linguistics, and usage-based cognitive modeling of language more generally. We propose a log-bilinear model taking pretrained vector representations of the two words as input, enabling generalization based on the distributional information contained in both vectors. We show that this model outperforms baselines in estimating probabilities of adjectives given nouns that they attributively modify, and probabilities of nominal direct objects given their head verbs, given limited training data in Arabic, English, Korean, and Spanish.
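A log-bilinear model of the kind described above scores a candidate word w2 given a head word w1 as exp(v1^T A v2 + b^T v2), with the pretrained static vectors held fixed and only A and b learned. The sketch below shows the forward pass with random placeholders for the pretrained embeddings; the exact parameterisation is an assumption drawn from the general model family, not the paper's precise setup.

```python
# Forward pass of a log-bilinear bilexical co-occurrence model.
import numpy as np

d, vocab_size = 8, 5
rng = np.random.default_rng(0)
E = rng.normal(size=(vocab_size, d))       # pretrained embeddings (frozen)
A = rng.normal(scale=0.1, size=(d, d))     # trainable bilinear map
b = np.zeros(d)                            # trainable bias direction

def log_probs(head_id):
    # One score per candidate dependent word, normalised over the vocabulary.
    scores = E[head_id] @ A @ E.T + E @ b
    scores -= scores.max()                 # numerical stability
    return scores - np.log(np.exp(scores).sum())

print(np.exp(log_probs(0)))                # p(w | w1 = word 0), sums to 1
```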
null
null
10.18653/v1/2022.cmcl-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,119
inproceedings
kodner-2022-modeling
Modeling the Relationship between Input Distributions and Learning Trajectories with the Tolerance Principle
Chersoni, Emmanuele and Hollenstein, Nora and Jacobs, Cassandra and Oseki, Yohei and Pr{\'e}vot, Laurent and Santus, Enrico
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.cmcl-1.7/
Kodner, Jordan
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
61--67
Child language learners develop with remarkable uniformity, both in their learning trajectories and ultimate outcomes, despite major differences in their learning environments. In this paper, we explore the role that the frequencies and distributions of irregular lexical items in the input plays in driving learning trajectories. We conclude that while the Tolerance Principle, a type-based model of productivity learning, accounts for inter-learner uniformity, it also interacts with input distributions to drive cross-linguistic variation in learning trajectories.
null
null
10.18653/v1/2022.cmcl-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,120
inproceedings
hu-etal-2022-predicting
Predicting scalar diversity with context-driven uncertainty over alternatives
Chersoni, Emmanuele and Hollenstein, Nora and Jacobs, Cassandra and Oseki, Yohei and Pr{\'e}vot, Laurent and Santus, Enrico
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.cmcl-1.8/
Hu, Jennifer and Levy, Roger and Schuster, Sebastian
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
68--74
Scalar implicature (SI) arises when a speaker uses an expression (e.g., {\textquotedblleft}some{\textquotedblright}) that is semantically compatible with a logically stronger alternative on the same scale (e.g., {\textquotedblleft}all{\textquotedblright}), leading the listener to infer that they did not intend to convey the stronger meaning. Prior work has demonstrated that SI rates are highly variable across scales, raising the question of what factors determine the SI strength for a particular scale. Here, we test the hypothesis that SI rates depend on the listener's confidence in the underlying scale, which we operationalize as uncertainty over the distribution of possible alternatives conditioned on the context. We use a T5 model fine-tuned on a text infilling task to estimate this distribution. We find that scale uncertainty predicts human SI rates, measured as entropy over the sampled alternatives and over latent classes among alternatives in sentence embedding space. Furthermore, we do not find a significant effect of the surprisal of the strong scalemate. Our results suggest that pragmatic inferences depend on listeners' context-driven uncertainty over alternatives.
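Operationalising scale uncertainty as entropy over sampled alternatives, as described above, is a one-liner once the samples exist. The sketch below assumes a hand-written sample list in place of the paper's T5 infills.

```python
# Entropy over the empirical distribution of sampled scale alternatives.
import math
from collections import Counter

samples = ["all", "all", "many", "most", "all", "several", "many", "all"]

counts = Counter(samples)
n = len(samples)
entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
print(round(entropy, 3))   # higher entropy = more uncertainty over the scale
```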
null
null
10.18653/v1/2022.cmcl-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,121
inproceedings
bensemann-etal-2022-eye
Eye Gaze and Self-attention: How Humans and Transformers Attend Words in Sentences
Chersoni, Emmanuele and Hollenstein, Nora and Jacobs, Cassandra and Oseki, Yohei and Pr{\'e}vot, Laurent and Santus, Enrico
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.cmcl-1.9/
Bensemann, Joshua and Peng, Alex and Benavides-Prado, Diana and Chen, Yang and Tan, Neset and Corballis, Paul Michael and Riddle, Patricia and Witbrock, Michael
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
75--87
Attention describes cognitive processes that are important to many human phenomena including reading. The term is also used to describe the way in which transformer neural networks perform natural language processing. While attention appears to be very different under these two contexts, this paper presents an analysis of the correlations between transformer attention and overt human attention during reading tasks. An extensive analysis of human eye tracking datasets showed that the dwell times of human eye movements were strongly correlated with the attention patterns occurring in the early layers of pre-trained transformers such as BERT. Additionally, the strength of a correlation was not related to the number of parameters within a transformer. This suggests that something about the transformers' architecture determined how closely the two measures were correlated.
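The core analysis described above reduces to a rank correlation between two per-word vectors. A minimal sketch with made-up numbers follows; in practice the attention vector would be aggregated from a pretrained transformer's early-layer attention weights rather than typed in by hand.

```python
# Correlate per-word human dwell times with per-word transformer attention.
from scipy.stats import spearmanr

dwell_ms  = [210, 180, 450, 190, 520, 230]            # fixation durations
attn_mass = [0.10, 0.08, 0.25, 0.09, 0.31, 0.17]      # early-layer attention

rho, p = spearmanr(dwell_ms, attn_mass)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```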
null
null
10.18653/v1/2022.cmcl-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,122
inproceedings
metheniti-etal-2022-time
About Time: Do Transformers Learn Temporal Verbal Aspect?
Chersoni, Emmanuele and Hollenstein, Nora and Jacobs, Cassandra and Oseki, Yohei and Pr{\'e}vot, Laurent and Santus, Enrico
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.cmcl-1.10/
Metheniti, Eleni and Van De Cruys, Tim and Hathout, Nabil
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
88--101
Aspect is a linguistic concept that describes how an action, event, or state of a verb phrase is situated in time. In this paper, we explore whether different transformer models are capable of identifying aspectual features. We focus on two specific aspectual features: telicity and duration. Telicity marks whether the verb's action or state has an endpoint or not (telic/atelic), and duration denotes whether a verb expresses an action (dynamic) or a state (stative). These features are integral to the interpretation of natural language, but also hard to annotate and identify with NLP methods. We perform experiments in English and French, and our results show that transformer models adequately capture information on telicity and duration in their vectors, even in their non-finetuned forms, but are somewhat biased with regard to verb tense and word order.
null
null
10.18653/v1/2022.cmcl-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,123
inproceedings
srivastava-2022-poirot
Poirot at {CMCL} 2022 Shared Task: Zero Shot Crosslingual Eye-Tracking Data Prediction using Multilingual Transformer Models
Chersoni, Emmanuele and Hollenstein, Nora and Jacobs, Cassandra and Oseki, Yohei and Pr{\'e}vot, Laurent and Santus, Enrico
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.cmcl-1.11/
Srivastava, Harshvardhan
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
102--107
Eye tracking data during reading is a useful source of information for understanding the cognitive processes that take place during language comprehension. Different languages account for different cognitive triggers; however, there seem to be some uniform indicators across languages. In this paper, we describe our submission to the CMCL 2022 shared task on predicting human reading patterns for a multilingual dataset. Our model uses text representations from transformers and some hand-engineered features with a regression layer on top to predict statistical measures of mean and standard deviation for 2 main eye-tracking features. We train an end-to-end model to extract meaningful information from different languages and test our model on two separate datasets. We compare different transformer models and show ablation studies affecting model performance. Our final submission ranked 4th for SubTask-1 and 1st for SubTask-2 in the shared task.
null
null
10.18653/v1/2022.cmcl-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,124
inproceedings
imperial-2022-nu
{NU} {HLT} at {CMCL} 2022 Shared Task: Multilingual and Crosslingual Prediction of Human Reading Behavior in Universal Language Space
Chersoni, Emmanuele and Hollenstein, Nora and Jacobs, Cassandra and Oseki, Yohei and Pr{\'e}vot, Laurent and Santus, Enrico
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.cmcl-1.12/
Imperial, Joseph Marvin
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
108--113
In this paper, we present a unified model that works for both multilingual and crosslingual prediction of reading times of words in various languages. The secret behind the success of this model is in the preprocessing step where all words are transformed to their universal language representation via the International Phonetic Alphabet (IPA). To the best of our knowledge, this is the first study to favorably exploit this phonological property of language for the two tasks. Various feature types were extracted covering basic frequencies, n-grams, information theoretic, and psycholinguistically-motivated predictors for model training. A finetuned Random Forest model obtained best performance for both tasks with 3.8031 and 3.9065 MAE scores for mean first fixation duration (FFDAvg) and mean total reading time (TRTAvg) respectively.
null
null
10.18653/v1/2022.cmcl-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,125
inproceedings
salicchi-etal-2022-hkamsters
{H}k{A}msters at {CMCL} 2022 Shared Task: Predicting Eye-Tracking Data from a Gradient Boosting Framework with Linguistic Features
Chersoni, Emmanuele and Hollenstein, Nora and Jacobs, Cassandra and Oseki, Yohei and Pr{\'e}vot, Laurent and Santus, Enrico
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.cmcl-1.13/
Salicchi, Lavinia and Xiang, Rong and Hsu, Yu-Yin
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
114--120
Eye movement data are used in psycholinguistic studies to infer information regarding cognitive processes during reading. In this paper, we describe our proposed method for the Shared Task of Cognitive Modeling and Computational Linguistics (CMCL) 2022 - Subtask 1, which involves data from multiple datasets on 6 languages. We compared different regression models using features of the target word and its previous word, and target word surprisal as regression features. Our final system, using a gradient boosting regressor, achieved the lowest mean absolute error (MAE), resulting in the best system of the competition.
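A minimal version of the described setup is a stock gradient boosting regressor over word-level features. The sketch below uses scikit-learn with tiny placeholder data; the feature set (length, frequency, surprisal of the target and previous word) follows the abstract, but the values and column layout are invented for illustration.

```python
# Gradient boosting over simple lexical features for eye-tracking prediction.
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

# columns: length, log-frequency, surprisal, prev-length, prev-surprisal
X_train = [[5, 3.2, 7.1, 3, 4.0], [8, 1.1, 11.4, 5, 7.1],
           [3, 4.5, 2.3, 8, 11.4], [6, 2.0, 9.8, 3, 2.3]]
y_train = [180.0, 260.0, 120.0, 220.0]   # e.g. mean first fixation duration

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
preds = model.predict([[7, 1.5, 10.2, 6, 9.8]])
print(mean_absolute_error([240.0], preds))   # shared-task metric: MAE
```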
null
null
10.18653/v1/2022.cmcl-1.13
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,126
inproceedings
hollenstein-etal-2022-cmcl
{CMCL} 2022 Shared Task on Multilingual and Crosslingual Prediction of Human Reading Behavior
Chersoni, Emmanuele and Hollenstein, Nora and Jacobs, Cassandra and Oseki, Yohei and Pr{\'e}vot, Laurent and Santus, Enrico
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.cmcl-1.14/
Hollenstein, Nora and Chersoni, Emmanuele and Jacobs, Cassandra and Oseki, Yohei and Pr{\'e}vot, Laurent and Santus, Enrico
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
121--129
We present the second shared task on eye-tracking data prediction of the Cognitive Modeling and Computational Linguistics Workshop (CMCL). Differently from the previous edition, participating teams are asked to predict eye-tracking features from multiple languages, including a surprise language for which there were no available training data. Moreover, the task also included the prediction of standard deviations of feature values in order to account for individual differences between readers. A total of six teams registered for the task. For the first subtask on multilingual prediction, the winning team proposed a regression model based on lexical features, while for the second subtask on cross-lingual prediction, the winning team used a hybrid model based on multilingual transformer embeddings as well as statistical features.
null
null
10.18653/v1/2022.cmcl-1.14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,127
inproceedings
bhattacharya-etal-2022-team
Team {{\'U}FAL} at {CMCL} 2022 Shared Task: Figuring out the correct recipe for predicting Eye-Tracking features using Pretrained Language Models
Chersoni, Emmanuele and Hollenstein, Nora and Jacobs, Cassandra and Oseki, Yohei and Pr{\'e}vot, Laurent and Santus, Enrico
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.cmcl-1.15/
Bhattacharya, Sunit and Kumar, Rishu and Bojar, Ondrej
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
130--135
Eye-Tracking data is a very useful source of information to study cognition and especially language comprehension in humans. In this paper, we describe our systems for the CMCL 2022 shared task on predicting eye-tracking information. We describe our experiments with pretrained models like BERT and XLM and the different ways in which we used those representations to predict four eye-tracking features. Along with analysing the effect of using two different kinds of pretrained multilingual language models and different ways of pooling the token-level representations, we also explore how contextual information affects the performance of the systems. Finally, we also explore whether factors like augmenting linguistic information affect the predictions. Our submissions achieved an average MAE of 5.72 and ranked 5th in the shared task. The average MAE showed further reduction to 5.25 in post-task evaluation.
null
null
10.18653/v1/2022.cmcl-1.15
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,128
inproceedings
takmaz-2022-team
Team {DMG} at {CMCL} 2022 Shared Task: Transformer Adapters for the Multi- and Cross-Lingual Prediction of Human Reading Behavior
Chersoni, Emmanuele and Hollenstein, Nora and Jacobs, Cassandra and Oseki, Yohei and Pr{\'e}vot, Laurent and Santus, Enrico
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.cmcl-1.16/
Takmaz, Ece
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
136--144
In this paper, we present the details of our approaches that attained the second place in the shared task of the ACL 2022 Cognitive Modeling and Computational Linguistics Workshop. The shared task is focused on multi- and cross-lingual prediction of eye movement features in human reading behavior, which could provide valuable information regarding language processing. To this end, we train {\textquoteleft}adapters' inserted into the layers of frozen transformer-based pretrained language models. We find that multilingual models equipped with adapters perform well in predicting eye-tracking features. Our results suggest that utilizing language- and task-specific adapters is beneficial and translating test sets into similar languages that exist in the training set could help with zero-shot transferability in the prediction of human reading behavior.
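The adapters mentioned above are small bottleneck modules inserted into otherwise frozen transformer layers. Below is a minimal PyTorch sketch of one such module; the dimensions are illustrative, and real systems typically use an adapter library rather than a hand-rolled module like this one.

```python
# A bottleneck adapter: down-project, nonlinearity, up-project, residual add.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, x):
        # The residual connection keeps the frozen model's representation
        # intact; only the small down/up projections are trained.
        return x + self.up(torch.relu(self.down(x)))

adapter = Adapter()
hidden_states = torch.randn(1, 10, 768)      # (batch, seq_len, hidden)
print(adapter(hidden_states).shape)          # torch.Size([1, 10, 768])
```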
null
null
10.18653/v1/2022.cmcl-1.16
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,129
inproceedings
heinecke-shimorina-2022-multilingual
Multilingual {A}bstract {M}eaning {R}epresentation for {C}eltic Languages
Fransen, Theodorus and Lamb, William and Prys, Delyth
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.cltw-1.1/
Heinecke, Johannes and Shimorina, Anastasia
Proceedings of the 4th Celtic Language Technology Workshop within LREC2022
1--6
Deep semantic parsing into Abstract Meaning Representation (AMR) graphs has reached high quality with neural seq2seq approaches. However, a training corpus for AMR is only available for English. Several approaches to processing other languages exist, but only for high-resource languages. We present an approach to creating a multilingual text-to-AMR model for three Celtic languages: Welsh (P-Celtic) and the closely related Irish and Scottish Gaelic (Q-Celtic). The main driver of this approach's success is the underlying multilingual transformers such as mT5. We finally show that machine-translated test corpora unfairly improve the AMR evaluation by about 1 or 2 points (depending on the language).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,131
inproceedings
scannell-2022-diachronic
Diachronic Parsing of Pre-Standard {I}rish
Fransen, Theodorus and Lamb, William and Prys, Delyth
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.cltw-1.2/
Scannell, Kevin
Proceedings of the 4th Celtic Language Technology Workshop within LREC2022
7--13
Irish underwent a major spelling standardization in the 1940s and 1950s, and as a result it can be challenging to apply language technologies designed for the modern language to older, {\textquotedblleft}pre-standard{\textquotedblright} texts. Lemmatization, tagging, and parsing of these pre-standard texts play an important role in a number of applications, including the lexicographical work on Focl{\'o}ir Stairi{\'u}il na Gaeilge, a historical dictionary of Irish covering the period from 1600 to the present. We have two main goals in this paper. First, we introduce a small benchmark corpus containing just over 3800 words, annotated according to the Universal Dependencies guidelines and covering a range of dialects and time periods since 1600. Second, we establish baselines for lemmatization, tagging, and dependency parsing on this corpus by experimenting with a variety of machine learning approaches.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,132
inproceedings
el-haj-etal-2022-creation
Creation of an Evaluation Corpus and Baseline Evaluation Scores for {W}elsh Text Summarisation
Fransen, Theodorus and Lamb, William and Prys, Delyth
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.cltw-1.3/
El-Haj, Mahmoud and Ezeani, Ignatius and Morris, Jonathan and Knight, Dawn
Proceedings of the 4th Celtic Language Technology Workshop within LREC2022
14--21
As part of the effort to increase the availability of Welsh digital technology, this paper introduces the first human vs metrics Welsh summarisation evaluation results and dataset, which we provide freely for research purposes to help advance the work on Welsh summarisation. The system summaries were created using an extractive graph-based Welsh summariser. The system summaries were evaluated by both human and a range of ROUGE metric variants (e.g. ROUGE 1, 2, L and SU4). The summaries and evaluation results will serve as benchmarks for the development of summarisers and evaluation metrics in other minority language contexts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,133
inproceedings
o-donaill-2022-clilstore
{CLILSTORE}.{EU} - A Multilingual online {CLIL} platform
Fransen, Theodorus and Lamb, William and Prys, Delyth
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.cltw-1.4/
{\'O} D{\'o}naill, Caoimh{\'i}n
Proceedings of the 4th Celtic Language Technology Workshop within LREC2022
22--29
CLILSTORE.EU is an open educational resource (OER) that was created by the Erasmus + funded CLIL Open Online Learning (COOL) project which ran from 2018-2021. The project consortium included teaching practitioners from the primary, secondary, tertiary and vocational sectors who each brought their influence to bear on the design and functionality of the OER and subsequently evaluated its development within the learning contexts of their respective sectors. CLILSTORE.EU serves as both an authoring and sharing platform where multimedia learning materials can be created and accessed. Its name comprises the acronym CLIL, owing to its particular suitability as a tool to support the Content and Language Integrated Learning methodology (Marsh, D. (ed.), 2002). The main educational aims of the OER are to provide teachers with a relatively straightforward means of creating reusable, multimodal learning units that can be used within the classroom or via remote learning to underpin and scaffold the delivery of curricular content in any subject area, especially in contexts where learners are acquiring new knowledge through the medium of a second or additional language. The following account details recent development work on the OER's functionality and usability and presents case studies showing how it can benefit Celtic languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,134
inproceedings
prys-watkins-2022-evaluation
Evaluation of Three {W}elsh Language {POS} Taggers
Fransen, Theodorus and Lamb, William and Prys, Delyth
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.cltw-1.5/
Prys, Gruffudd and Watkins, Gareth
Proceedings of the 4th Celtic Language Technology Workshop within LREC2022
30--39
In this paper we describe our quantitative and qualitative evaluation of three Welsh language Part of Speech (POS) taggers. Following an introductory section, we explore some of the issues which face POS taggers, discuss the state of the art in English language tagging, and describe the three Welsh language POS taggers that will be evaluated in this paper, namely WNLT2, CyTag and TagTeg. In section 3 we describe the challenges involved in evaluating POS taggers which make use of different tagsets, and introduce our mapping of the taggers' individual tagsets to an Intermediate Tagset used to facilitate their comparative evaluation. Section 4 introduces our benchmarking corpus as an important component of our methodology. In section 5 we describe how the inconsistencies in text tokenization between the different taggers present an issue when undertaking such evaluations, and discuss the method used to overcome this complication. Section 6 illustrates how we annotated the benchmark corpus, while section 7 describes the scoring method used. Section 8 provides an in-depth analysis of the results, and a summary of the work is presented in the conclusion found in section 9. Keywords: POS Tagger, Welsh, Evaluation, Machine Learning
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,135
inproceedings
foret-etal-2022-iterated
Iterated Dependencies in a {B}reton treebank and implications for a Categorial Dependency Grammar
Fransen, Theodorus and Lamb, William and Prys, Delyth
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.cltw-1.6/
Foret, Annie and B{\'e}chet, Denis and Bellynck, Val{\'e}rie
Proceedings of the 4th Celtic Language Technology Workshop within LREC2022
40--46
Categorial Dependency Grammars (CDG) are computational grammars for natural language processing, defining dependency structures. They can be viewed as a formal system, where types are attached to words, combining the classical categorial grammars' elimination rules with valency pairing rules able to define discontinuous (non-projective) dependencies. Algorithms have been proposed to infer grammars in this class from treebanks, with respect to Mel'{\v{c}}uk principles. We consider this approach with experiments on Breton. We focus in particular on {\textquotedblleft}repeatable dependencies{\textquotedblright} (iterated) and their patterns. A dependency $d$ is iterated in a dependency structure if some word in this structure governs several other words through dependency $d$. We illustrate this approach with data in the Universal Dependencies format and dependency patterns written in Grew (a graph rewriting tool dedicated to applications in Natural Language Processing).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,136
inproceedings
lonergan-etal-2022-automatic
Automatic Speech Recognition for {I}rish: the {ABAIR}-{{\'E}IST} System
Fransen, Theodorus and Lamb, William and Prys, Delyth
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.cltw-1.7/
Lonergan, Liam and Qian, Mengjie and Berthelsen, Harald and Murphy, Andy and Wendler, Christoph and N{\'i} Chiar{\'a}in, Neasa and Gobl, Christer and N{\'i} Chasaide, Ailbhe
Proceedings of the 4th Celtic Language Technology Workshop within LREC2022
47--51
This paper describes {\'E}IST, an automatic speech recogniser for Irish, developed as part of the ongoing ABAIR initiative, combining (1) acoustic models, (2) pronunciation lexicons and (3) language models into a hybrid system. A priority for now is a system that can deal with the multiple diverse native-speaker dialects. Consequently, (1) was built using predominantly native-speaker speech, which included earlier recordings used for synthesis development as well as more diverse recordings obtained using the M{\'i}leGl{\'o}r platform. The pronunciation variation across the dialects is a particular challenge in the development of (2) and is explored by testing both Trans-dialect and Multi-dialect letter-to-sound rules. Two approaches to language modelling (3) are used in the hybrid system, a simple n-gram model and recurrent neural network lattice rescoring, the latter garnering impressive performance improvements. The system is evaluated using a test set that comprises both native and non-native speakers, which allows for some inferences to be made on the performance of the system on both cohorts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,137
inproceedings
jones-2022-development
Development and Evaluation of Speech Recognition for the {W}elsh Language
Fransen, Theodorus and Lamb, William and Prys, Delyth
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.cltw-1.8/
Jones, Dewi
Proceedings of the 4th Celtic Language Technology Workshop within LREC2022
52--59
This paper reports on ongoing work on developing and evaluating speech recognition models for the Welsh language using data from the Common Voice project and two popular open development kits {--} HuggingFace wav2vec2 and coqui STT. Activities for ensuring the growth and improvement of the Welsh Common Voice dataset are described. Two applications have been developed {--} a voice assistant and an online transcription service that allow users and organisations to use the new models in a practical and useful context, but which have also helped source additional test data for better evaluation of recognition accuracy and establishing the optimal selection and configurations of models. Test results suggest that in transcription good accuracy can be achieved for read speech, but further data and research are required for improving recognition results of freely spoken formal and informal speech. Meanwhile, a limited-domain language model provides excellent accuracy for a voice assistant. All code, data and models produced from this work are freely available.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,138
inproceedings
lamb-etal-2022-handwriting
Handwriting recognition for {S}cottish {G}aelic
Fransen, Theodorus and Lamb, William and Prys, Delyth
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.cltw-1.9/
Lamb, William and Alex, Beatrice and Sinclair, Mark
Proceedings of the 4th Celtic Language Technology Workshop within LREC2022
60--70
Like most other minority languages, Scottish Gaelic has limited tools and resources available for Natural Language Processing research and applications. These limitations restrict the potential of the language to participate in modern speech technology, while also restricting research in fields such as corpus linguistics and the Digital Humanities. At the same time, Gaelic has a long written history, is well-described linguistically, and is unusually well-supported in terms of potential NLP training data. For instance, archives such as the School of Scottish Studies hold thousands of digitised recordings of vernacular speech, many of which have been transcribed as paper-based, handwritten manuscripts. In this paper, we describe a project to digitise and recognise a corpus of handwritten narrative transcriptions, with the intention of re-purposing it to develop a Gaelic speech recognition system.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,139
inproceedings
ni-chiarain-etal-2022-celtic
{C}eltic {CALL}: strengthening the vital role of education for language transmission
Fransen, Theodorus and Lamb, William and Prys, Delyth
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.cltw-1.10/
N{\'i} Chiar{\'a}in, Neasa and Comtois, Madeleine and Nolan, Ois{\'i}n and Robinson-Gunning, Neimhin and Sloan, John and Berthelsen, Harald and N{\'i} Chasaide, Ailbhe
Proceedings of the 4th Celtic Language Technology Workshop within LREC2022
71--76
In this paper, we present the Irish language learning platform, An Sc{\'e}ala{\'i}, an intelligent Computer-Assisted Language Learning (iCALL) system which incorporates speech and language technologies in ways that promote the holistic development of the language skills - writing, listening, reading, and speaking. The technologies offer the advantage of extensive feedback in spoken and written form, enabling learners to improve their production. The system works equally as a classroom-based tool and as a standalone platform for the autonomous learner. Given the key role of education for the transmission of all the Celtic languages, it is vital that digital technologies be harnessed to maximise the effectiveness of language teaching/learning. An Sc{\'e}ala{\'i} has been used by large numbers of learners and teachers and has received very positive feedback. It is built as a modular system which allows existing and newly emerging technologies to be readily integrated, even if those technologies are still in development phase. The architecture is largely language-independent, and as an open-source system, it is hoped that it can be usefully deployed in other Celtic languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,140
inproceedings
ui-dhonnchadha-etal-2022-cipher
Cipher {--} Faoi Gheasa: A Game-with-a-Purpose for {I}rish
Fransen, Theodorus and Lamb, William and Prys, Delyth
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.cltw-1.11/
U{\'i} Dhonnchadha, Elaine and Ward, Monica and Xu, Liang
Proceedings of the 4th Celtic Language Technology Workshop within LREC2022
77--84
This paper describes Cipher {--} Faoi Gheasa, a {\textquoteleft}game with a purpose' designed to support the learning of Irish in a fun and enjoyable way. The aim of the game is to promote language {\textquoteleft}noticing' and to combine the benefits of reading with the enjoyment of computer game playing, in a pedagogically beneficial way. In this paper we discuss pedagogical challenges for Irish, the development of measures for the selection and ranking of reading materials, as well as initial results of game evaluation. Overall user feedback is positive and further testing and development is envisaged.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,141
inproceedings
darling-etal-2022-towards
Towards Coreference Resolution for Early {I}rish
Fransen, Theodorus and Lamb, William and Prys, Delyth
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.cltw-1.12/
Darling, Mark and Meelen, Marieke and Willis, David
Proceedings of the 4th Celtic Language Technology Workshop within LREC2022
85--93
In this article, we present an outline of some of the issues involved in developing a semi-supervised procedure for coreference resolution for early Irish as part of a wider enterprise to create a parsed corpus of historical Irish with enriched annotation for information structure and anaphoric coreference. We outline the ways in which existing resources, notably the POMIC historical Irish corpus and the Cesax annotation algorithm, have had to be adapted, the first to provide suitable input for coreference resolution, the second to cope with specific aspects of early Irish grammar. We also outline features of a part-of-speech tagger that we have developed for early Irish as part of the first task and with a view to expanding the size of the future corpus.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,142
inproceedings
gow-smith-etal-2022-use
Use of Transformer-Based Models for Word-Level Transliteration of the Book of the Dean of Lismore
Fransen, Theodorus and Lamb, William and Prys, Delyth
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.cltw-1.13/
Gow-Smith, Edward and McConville, Mark and Gillies, William and Scott, Jade and {\'O} Maolalaigh, Roibeard
Proceedings of the 4th Celtic Language Technology Workshop within LREC2022
94--98
The Book of the Dean of Lismore (BDL) is a 16th-century Scottish Gaelic manuscript written in a non-standard orthography. In this work, we outline the problem of transliterating the text of the BDL into a standardised orthography, and perform exploratory experiments using Transformer-based models for this task. In particular, we focus on the task of word-level transliteration, and achieve a character-level BLEU score of 54.15 with our best model, a BART architecture pre-trained on the text of Scottish Gaelic Wikipedia and then fine-tuned on around 2,000 word-level parallel examples. Our initial experiments give promising results, but we highlight the shortcomings of our model, and discuss directions for future work.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,143
inproceedings
o-meachair-etal-2022-introducing
Introducing the National Corpus of {I}rish Project
Fransen, Theodorus and Lamb, William and Prys, Delyth
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.cltw-1.14/
{\'O} Meachair, M{\'i}che{\'a}l and Bhreathnach, {\'U}na and {\'O} Cleirc{\'i}n, Gear{\'o}id
Proceedings of the 4th Celtic Language Technology Workshop within LREC2022
99--103
This paper introduces the National Corpus of Irish, an initiative to develop a large national corpus of written and spoken contemporary Irish as well as related specialised corpora. The newly-compiled corpora will be hosted at corpas.ie, in what will become a hub for corpus-based research on the Irish language. Users will be able to search the corpora and download data generated during the project from the corpas.ie website and appropriate third-party repositories. Corpus 1 will be a balanced general-purpose corpus containing c.155m words. Corpus 2 will be a written corpus consisting of c.100m words. Corpus 3 will be a spoken corpus containing 6.5m words. Corpus 4 will be a monitor corpus with a target size of 1m words per year from 2000 onwards. Token, lemma, and n-gram frequency lists will be published at regular intervals on the project website, and language models will be published there and on other appropriate platforms during the course of the project. This paper focuses on the background and crucial scoping stage of the project, and examines user needs as identified in a survey of potential users.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,144