entry_type: stringclasses (4 values)
citation_key: stringlengths (10 to 110)
title: stringlengths (6 to 276)
editor: stringclasses (723 values)
month: stringclasses (69 values)
year: stringdate (1963-01-01 00:00:00 to 2022-01-01 00:00:00)
address: stringclasses (202 values)
publisher: stringclasses (41 values)
url: stringlengths (34 to 62)
author: stringlengths (6 to 2.07k)
booktitle: stringclasses (861 values)
pages: stringlengths (1 to 12)
abstract: stringlengths (302 to 2.4k)
journal: stringclasses (5 values)
volume: stringclasses (24 values)
doi: stringlengths (20 to 39)
n: stringclasses (3 values)
wer: stringclasses (1 value)
uas: null
language: stringclasses (3 values)
isbn: stringclasses (34 values)
recall: null
number: stringclasses (8 values)
a: null
b: null
c: null
k: null
f1: stringclasses (4 values)
r: stringclasses (2 values)
mci: stringclasses (1 value)
p: stringclasses (2 values)
sd: stringclasses (1 value)
female: stringclasses (0 values)
m: stringclasses (0 values)
food: stringclasses (1 value)
f: stringclasses (1 value)
note: stringclasses (20 values)
__index_level_0__: int64 (22k to 106k)
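The fields above describe a tabular bibliography dataset: BibTeX-style records plus per-paper metric columns (wer, f1, recall, etc.) that are mostly null. A minimal loading-and-inspection sketch with the Hugging Face `datasets` library follows; the dataset path `user/acl-anthology-bib` is a hypothetical placeholder, not the real ID.

```python
# Minimal sketch (assumption: the table above is a Hugging Face dataset;
# "user/acl-anthology-bib" is a hypothetical placeholder path).
from datasets import load_dataset

ds = load_dataset("user/acl-anthology-bib", split="train")

# Each record is a dict keyed by the fields listed above; unused metric
# columns (uas, recall, a, b, c, k, ...) come back as None.
row = ds[0]
print(row["entry_type"], row["citation_key"], row["year"])

# Example filter: keep only papers from the LT-EDI workshop proceedings.
ltedi = ds.filter(
    lambda r: r["booktitle"] is not None
    and "Equality, Diversity and Inclusion" in r["booktitle"]
)
print(len(ltedi))
```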
inproceedings
gupta-etal-2022-iit
{IIT} Dhanbad @{LT}-{EDI}-{ACL}2022- Hope Speech Detection for Equality, Diversity, and Inclusion
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.32/
Gupta, Vishesh and Kumar, Ritesh and Pamula, Rajendra
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
229--233
Hope is considered significant for the wellbeing, recuperation and restoration of human life by health professionals. Hope speech reflects the belief that one can discover pathways to their desired objectives and become roused to utilise those pathways. Hope speech offers support, reassurance, suggestions, inspiration and insight. Hate speech is a prevalent practice that society has to struggle with every day. The freedom of speech and ease of anonymity granted by social media have also resulted in incitement to hatred. In this paper, we work to identify and promote positive and supportive content on these platforms. We work with several machine learning models to classify social media comments as hope speech or non-hope speech in English. This paper portrays our work for the Shared Task on Hope Speech Detection for Equality, Diversity, and Inclusion at LT-EDI-ACL 2022.
null
null
10.18653/v1/2022.ltedi-1.32
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,348
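Each record, like the one above, maps one-to-one onto a BibTeX entry. A minimal rendering sketch under that assumption follows; the `to_bibtex` helper and its field list are illustrative, not part of the dataset.

```python
# Minimal sketch: render one record back into a BibTeX entry.
# Assumes `row` is a dict like the record above, with None for unused fields.
BIB_FIELDS = [
    "title", "author", "editor", "booktitle", "month", "year",
    "address", "publisher", "url", "pages", "doi", "abstract",
]

def to_bibtex(row: dict) -> str:
    # Header line, e.g. "@inproceedings{gupta-etal-2022-iit,"
    lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
    for field in BIB_FIELDS:
        value = row.get(field)
        if value is not None:  # skip the null metric/placeholder columns
            lines.append(f"  {field} = {{{value}}},")
    lines.append("}")
    return "\n".join(lines)
```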
inproceedings
basu-2022-iiserb
{IISERB}@{LT}-{EDI}-{ACL}2022: A Bag of Words and Document Embeddings Based Framework to Identify Severity of Depression Over Social Media
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.33/
Basu, Tanmay
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
234--238
The DepSign-LT-EDI-ACL2022 shared task focuses on early prediction of the severity of depression from social media posts. The BioNLP group at the Department of Data Science and Engineering of the Indian Institute of Science Education and Research Bhopal (IISERB) participated in this challenge and submitted three runs based on three different text mining models. The severity of depression was categorized into three classes, viz., no depression, moderate, and severe, and the data to build models were released as part of this shared task. The objective of this work is to identify relevant features from the given social media texts for effective text classification. As part of our investigation, we explored features derived from the text data using a document embedding technique and a simple bag of words model following different weighting schemes. Subsequently, adaptive boosting, logistic regression, random forest and support vector machine (SVM) classifiers were used to identify the scale of depression from the given texts. The experimental analysis on the given validation data shows that the SVM classifier using the bag of words model with term frequency and inverse document frequency weighting outperforms the other models for identifying depression. However, this framework could not achieve a place among the top ten runs of the shared task. This paper describes the potential of the proposed framework as well as the possible reasons behind its mediocre performance on the given data.
null
null
10.18653/v1/2022.ltedi-1.33
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,349
inproceedings
swaminathan-etal-2022-ssncse
{SSNCSE}{\_}{NLP}@{LT}-{EDI}-{ACL}2022: Homophobia/Transphobia Detection in Multiple Languages using {SVM} Classifiers and {BERT}-based Transformers
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.34/
Swaminathan, Krithika and B, Bharathi and G L, Gayathri and Sampath, Hrishik
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
239--244
Over the years, there has been a slow but steady change in the attitude of society towards different kinds of sexuality. However, on social media platforms, where people have the license to be anonymous, toxic comments targeted at homosexuals, transgenders and the LGBTQ+ community are not uncommon. Detection of homophobic comments on social media can be useful in making the internet a safer place for everyone. For this task, we used a combination of word embeddings and SVM Classifiers as well as some BERT-based transformers. We achieved a weighted F1-score of 0.93 on the English dataset, 0.75 on the Tamil dataset and 0.87 on the Tamil-English Code-Mixed dataset.
null
null
10.18653/v1/2022.ltedi-1.34
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,350
inproceedings
agirrezabal-amann-2022-kucst
{KUCST}@{LT}-{EDI}-{ACL}2022: Detecting Signs of Depression from Social Media Text
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.35/
Agirrezabal, Manex and Amann, Janek
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
245--250
In this paper we present our approach for detecting signs of depression from social media text. Our model relies on word unigrams, part-of-speech tags, readability measures, the use of first, second or third person, and the number of words. Our best model obtained a macro F1-score of 0.439 and ranked 25th out of 31 teams. We further take advantage of the interpretability of the Logistic Regression model and attempt to interpret the model coefficients in the hope that these will be useful for further research on the topic.
null
null
10.18653/v1/2022.ltedi-1.35
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,351
inproceedings
tavchioski-etal-2022-e8
E8-{IJS}@{LT}-{EDI}-{ACL}2022 - {BERT}, {A}uto{ML} and Knowledge-graph backed Detection of Depression
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.36/
Tavchioski, Ilija and Koloski, Boshko and {\v{S}}krlj, Bla{\v{z}} and Pollak, Senja
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
251--257
Depression is a mental illness that negatively affects a person's well-being and can, if left untreated, lead to serious consequences such as suicide. Therefore, it is important to recognize the signs of depression early. In the last decade, social media has become one of the most common places to express one's feelings. Hence, there is a possibility of processing the text and applying machine learning techniques to detect possible signs of depression. In this paper, we present our approaches to solving the shared task titled Detecting Signs of Depression from Social Media Text. We explore three different approaches to solve the challenge: fine-tuning a BERT model, leveraging AutoML for the construction of features and classifier selection, and finally, exploring latent spaces derived from the combination of textual and knowledge-based representations. We ranked 9th out of 31 teams in the competition. Our best solution, based on knowledge graph and textual representations, was 4.9{\%} behind the best model in terms of Macro F1, and only 1.9{\%} behind in terms of Recall.
null
null
10.18653/v1/2022.ltedi-1.36
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,352
inproceedings
nozza-2022-nozza
Nozza@{LT}-{EDI}-{ACL}2022: Ensemble Modeling for Homophobia and Transphobia Detection
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.37/
Nozza, Debora
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
258--264
In this paper, we describe our approach for the task of homophobia and transphobia detection in English social media comments. The dataset consists of YouTube comments, and it has been released for the shared task on Homophobia/Transphobia Detection in social media comments. Given the high class imbalance, we propose a solution based on data augmentation and ensemble modeling. We fine-tuned different large language models (BERT, RoBERTa, and HateBERT) and used the weighted majority vote on their predictions. Our proposed model obtained 0.48 and 0.94 for macro and weighted F1-score, respectively, ranking at the third position.
null
null
10.18653/v1/2022.ltedi-1.37
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,353
inproceedings
janatdoust-etal-2022-kado
{KADO}@{LT}-{EDI}-{ACL}2022: {BERT}-based Ensembles for Detecting Signs of Depression from Social Media Text
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.38/
Janatdoust, Morteza and Ehsani-Besheli, Fatemeh and Zeinali, Hossein
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
265--269
Depression is a common and serious mental illness whose early detection can improve the patient's symptoms and make depression easier to treat. This paper mainly introduces the relevant content of the task {\textquotedblleft}Detecting Signs of Depression from Social Media Text at DepSign-LT-EDI@ACL-2022{\textquotedblright}. The goal of DepSign is to classify the signs of depression into three labels, namely {\textquotedblleft}not depressed{\textquotedblright}, {\textquotedblleft}moderately depressed{\textquotedblright}, and {\textquotedblleft}severely depressed{\textquotedblright}, based on social media posts. In this paper, we propose a predictive ensemble model that utilizes fine-tuned contextualized word embeddings from the ALBERT, DistilBERT, RoBERTa, and BERT base models. We show that our model outperforms the baseline models in all considered metrics and achieves an F1 score of 54{\%} and an accuracy of 61{\%}, ranking 5th on the leaderboard for the DepSign task.
null
null
10.18653/v1/2022.ltedi-1.38
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,354
inproceedings
upadhyay-etal-2022-sammaan
Sammaan@{LT}-{EDI}-{ACL}2022: Ensembled Transformers Against Homophobia and Transphobia
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.39/
Upadhyay, Ishan Sanjeev and Srivatsa, Kv Aditya and Mamidi, Radhika
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
270--275
Hateful and offensive content on social media platforms can have negative effects on users and can make online communities more hostile towards certain people and hamper equality, diversity and inclusion. In this paper, we describe our approach to classify homophobia and transphobia in social media comments. We used an ensemble of transformer-based models to build our classifier. Our model ranked 2nd for English, 8th for Tamil and 10th for Tamil-English.
null
null
10.18653/v1/2022.ltedi-1.39
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,355
inproceedings
poswiata-perelkiewicz-2022-opi
{OPI}@{LT}-{EDI}-{ACL}2022: Detecting Signs of Depression from Social Media Text using {R}o{BERT}a Pre-trained Language Models
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.40/
Po{\'s}wiata, Rafa{\l} and Pere{\l}kiewicz, Micha{\l}
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
276--282
This paper presents our winning solution for the Shared Task on Detecting Signs of Depression from Social Media Text at LT-EDI-ACL2022. The task was to create a system that, given social media posts in English, should detect the level of depression as {\textquoteleft}not depressed', {\textquoteleft}moderately depressed' or {\textquoteleft}severely depressed'. We based our solution on transformer-based language models. We fine-tuned selected models: BERT, RoBERTa and XLNet, of which the best results were obtained for RoBERTa. Then, using the prepared corpus, we trained our own language model called DepRoBERTa (RoBERTa for Depression Detection). Fine-tuning of this model improved the results. The third solution was to use ensemble averaging, which turned out to be the best. It achieved a macro-averaged F1-score of 0.583. The source code of the prepared solution is available at \url{https://github.com/rafalposwiata/depression-detection-lt-edi-2022}.
null
null
10.18653/v1/2022.ltedi-1.40
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,356
inproceedings
nilsson-kovacs-2022-filipn
{F}ilip{N}@{LT}-{EDI}-{ACL}2022-Detecting signs of Depression from Social Media: Examining the use of summarization methods as data augmentation for text classification
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.41/
Nilsson, Filip and Kov{\'a}cs, Gy{\"o}rgy
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
283--286
Depression is a common mental disorder that severely affects the quality of life, and can lead to suicide. When diagnosed in time, mild, moderate, and even severe depression can be treated. This is why it is vital to detect signs of depression in time. One possibility for this is the use of text classification models on social media posts. Transformers have achieved state-of-the-art performance on a variety of similar text classification tasks. One drawback, however, is that when the dataset is imbalanced, the performance of these models may be negatively affected. Because of this, in this paper, we examine the effect of balancing a depression detection dataset using data augmentation. In particular, we use abstractive summarization techniques for data augmentation. We examine the effect of this method on the LT-EDI-ACL2022 task. Our results show that when increasing the multiplicity of the minority classes to the right degree, this data augmentation method can in fact improve classification scores on the task.
null
null
10.18653/v1/2022.ltedi-1.41
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,357
inproceedings
ashraf-etal-2022-nayel
{NAYEL} @{LT}-{EDI}-{ACL}2022: Homophobia/Transphobia Detection for Equality, Diversity, and Inclusion using {SVM}
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.42/
Ashraf, Nsrin and Taha, Mohamed and Abd Elfattah, Ahmed and Nayel, Hamada
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
287--290
Analysing the contents of social media platforms such as YouTube, Facebook and Twitter has gained interest due to their vast number of users. One of the important tasks is homophobia/transphobia detection. This paper illustrates the system submitted by our team for the homophobia/transphobia detection in social media comments shared task. A machine learning-based model has been designed, and various classification algorithms have been implemented for automatic detection of homophobia in YouTube comments. TF/IDF with a bigram range has been used for vectorization of the comments. A Support Vector Machine has been used to develop the proposed model, and our submission reported weighted F1-scores of 0.91, 0.92 and 0.88 for the English, Tamil and Tamil-English datasets respectively.
null
null
10.18653/v1/2022.ltedi-1.42
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,358
inproceedings
surana-chinagundi-2022-ginius
gini{U}s @{LT}-{EDI}-{ACL}2022: Aasha: Transformers based Hope-{EDI}
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.43/
Chinagundi, Basavraj and Surana, Harshul
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
291--295
This paper describes team giniUs' submission to the Hope Speech Detection for Equality, Diversity and Inclusion Shared Task organised by LT-EDI ACL 2022. We have fine-tuned the RoBERTa-large pre-trained model and extracted the last four decoder layers to build a classifier. Our best result on the leaderboard achieved a weighted F1 score of 0.86 and a Macro F1 score of 0.51 for English. We secured a rank of 4 for the English task. We have open-sourced our code implementation on GitHub to facilitate easy reproducibility by the scientific community.
null
null
10.18653/v1/2022.ltedi-1.43
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,359
inproceedings
anantharaman-etal-2022-ssn
{SSN}{\_}{MLRG}1@{LT}-{EDI}-{ACL}2022: Multi-Class Classification using {BERT} models for Detecting Depression Signs from Social Media Text
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.44/
Anantharaman, Karun and S, Angel and Sivanaiah, Rajalakshmi and Madhavan, Saritha and Rajendram, Sakaya Milton
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
296--300
DepSign-LT-EDI@ACL-2022 aims to ascertain the signs of depression of a person from their messages and posts on social media, wherein people share their feelings and emotions. Given social media postings in English, the system should classify the signs of depression into three labels, namely {\textquotedblleft}not depressed{\textquotedblright}, {\textquotedblleft}moderately depressed{\textquotedblright}, and {\textquotedblleft}severely depressed{\textquotedblright}. To achieve this objective, we have adopted a fine-tuned BERT model. This solution from team SSN{\_}MLRG1 achieves 58.5{\%} accuracy on the DepSign-LT-EDI@ACL-2022 test set.
null
null
10.18653/v1/2022.ltedi-1.44
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,360
inproceedings
dowlagar-mamidi-2022-depressionone
{D}epression{O}ne@{LT}-{EDI}-{ACL}2022: Using Machine Learning with {SMOTE} and Random {U}nder{S}ampling to Detect Signs of Depression on Social Media Text.
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.45/
Dowlagar, Suman and Mamidi, Radhika
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
301--305
Depression is a common and serious medical illness that negatively affects how you feel, the way you think, and how you act. Detecting depression is essential as it must be treated early to avoid painful consequences. Nowadays, people are broadcasting how they feel via posts and comments. Using social media, we can extract many comments related to depression and use NLP techniques to train and detect depression. This work presents the submission of the DepressionOne team at LT-EDI-2022 for the shared task, detecting signs of depression from social media text. The depression data is small and unbalanced. Thus, we have used oversampling and undersampling methods such as SMOTE and RandomUnderSampler to represent the data. Later, we used machine learning methods to train and detect the signs of depression.
null
null
10.18653/v1/2022.ltedi-1.45
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,361
inproceedings
muti-etal-2022-leaningtower
{L}eaning{T}ower@{LT}-{EDI}-{ACL}2022: When Hope and Hate Collide
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.46/
Muti, Arianna and Marchiori Manerba, Marta and Korre, Katerina and Barr{\'o}n-Cede{\~n}o, Alberto
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
306--311
The 2022 edition of LT-EDI proposed two tasks in various languages. Task Hope Speech Detection required models for the automatic identification of hopeful comments for equality, diversity, and inclusion. Task Homophobia/Transphobia Detection focused on the identification of homophobic and transphobic comments. We targeted both tasks in English by using reinforced BERT-based approaches. Our core strategy aimed at exploiting the data available for each given task to augment the amount of supervised instances in the other. On the basis of an active learning process, we trained a model on the dataset for Task $i$ and applied it to the dataset for Task $j$ to iteratively integrate new silver data for Task $i$. Our official submissions to the shared task obtained a macro-averaged F$_1$ score of 0.53 for Hope Speech and 0.46 for Homo/Transphobia, placing our team in the third and fourth positions out of 11 and 12 participating teams respectively.
null
null
10.18653/v1/2022.ltedi-1.46
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,362
inproceedings
hegde-etal-2022-mucs-text
{MUCS}@Text-{LT}-{EDI}@{ACL} 2022: Detecting Sign of Depression from Social Media Text using Supervised Learning Approach
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.47/
Hegde, Asha and Coelho, Sharal and Dashti, Ahmad Elyas and Shashirekha, Hosahalli
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
312--316
Social media has seen enormous growth in its users recently, and knowingly or unknowingly the behavior of a person will be reflected in the comments she/he posts on social media. Users showing signs of depression may post negative or disturbing content seeking the attention of other users. Hence, social media data can be analysed to check whether users show signs of depression and to help them get through the situation if required. However, as analyzing the increasing amount of social media data manually is laborious and error-prone, automated tools have to be developed for the same. To address the issue of detecting signs of depression in social media content, in this paper, we - team MUCS - describe an Ensemble of Machine Learning (ML) models and a Transfer Learning (TL) model submitted to the {\textquotedblleft}Detecting Signs of Depression from Social Media Text-LT-EDI@ACL 2022{\textquotedblright} (DepSign-LT-EDI@ACL-2022) shared task at the Association for Computational Linguistics (ACL) 2022. Both frequency and text based features are used to train the Ensemble model, and Bidirectional Encoder Representations from Transformers (BERT) fine-tuned with raw text is used to train the TL model. Among the two models, the TL model performed better, with a macro averaged F-score of 0.479, and placed 18th in the shared task. The code to reproduce the proposed models is available on the GitHub page.
null
null
10.18653/v1/2022.ltedi-1.47
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,363
inproceedings
srinivasan-etal-2022-ssncse
{SSNCSE}{\_}{NLP}@{LT}-{EDI}-{ACL}2022: Speech Recognition for Vulnerable Individuals in {T}amil using pre-trained {XLSR} models
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.48/
Srinivasan, Dhanya and B, Bharathi and Durairaj, Thenmozhi and B, Senthil Kumar
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
317--320
Automatic speech recognition is a tool used to transform human speech into a written form. It is used in a variety of avenues, such as voice commands, customer service and more, and has emerged as an essential tool in the digitisation of daily life. It has been known to be of vital importance in making the lives of elderly and disabled people much easier. In this paper we describe an automatic speech recognition model, determined by using three pre-trained models, fine-tuned from the Facebook XLSR Wav2Vec2 model, which was trained using the Common Voice Dataset. The best model for speech recognition in Tamil is determined by finding the word error rate of the data. This work explains the submission made by SSNCSE{\_}NLP in the shared task organized by LT-EDI at ACL 2022. A word error rate of 39.4512 is achieved.
null
null
10.18653/v1/2022.ltedi-1.48
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,364
inproceedings
khanna-etal-2022-idiap
{IDIAP}{\_}{TIET}@{LT}-{EDI}-{ACL}2022 : Hope Speech Detection in Social Media using Contextualized {BERT} with Attention Mechanism
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.49/
Khanna, Deepanshu and Singh, Muskaan and Motlicek, Petr
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
321--325
With the increase of users on social media platforms, manipulating or provoking masses of people has become a piece of cake. This spread of hatred among people, which has become a loophole for freedom of speech, must be minimized. Hence, it is essential to have a system that automatically classifies hateful content, especially on social media, to take it down. This paper presents a simple modular pipeline classifier with BERT embeddings and an attention mechanism to classify hope speech content in the Hope Speech Detection shared task for Equality, Diversity, and Inclusion at ACL 2022. Our system submission ranks fourth with an F1-score of 0.84. We release our code-base here: \url{https://github.com/Deepanshu-beep/hope-speech-attention}.
null
null
10.18653/v1/2022.ltedi-1.49
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,365
inproceedings
s-antony-2022-ssn
{SSN}@{LT}-{EDI}-{ACL}2022: Transfer Learning using {BERT} for Detecting Signs of Depression from Social Media Texts
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.50/
S, Adarsh and Antony, Betina
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
326--330
Depression is one of the most common mental issues faced by people. Detecting signs of depression early on can help in the treatment and prevention of extreme outcomes like suicide. Since the advent of the internet, people have felt more comfortable discussing topics like depression online due to the anonymity it provides. This shared task has used data scraped from various social media sites and aims to develop models that detect signs and the severity of depression effectively. In this paper, we employ transfer learning by applying an enhanced BERT model trained on a Wikipedia dataset to the social media text and perform text classification. The model gives an F1-score of 63.8{\%}, which was reasonably better than the other competing models.
null
null
10.18653/v1/2022.ltedi-1.50
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,366
inproceedings
s-etal-2022-findings
Findings of the Shared Task on Detecting Signs of Depression from Social Media
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.51/
S, Kayalvizhi and Durairaj, Thenmozhi and Chakravarthi, Bharathi Raja and C, Jerin Mahibha
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
331--338
Social media is considered as a platform where users express themselves. The rise of social media as one of humanity's most important public communication platforms presents a potential prospect for early identification and management of mental illness. Depression is one such illness that can lead to a variety of emotional and physical problems. It is necessary to measure the level of depression from the social media text to treat it and to avoid the negative consequences. Detecting levels of depression is a challenging task since it involves the mindset of the people, which can change periodically. The aim of the DepSign-LT-EDI@ACL-2022 shared task is to classify the social media text into three levels of depression, namely {\textquotedblleft}Not Depressed{\textquotedblright}, {\textquotedblleft}Moderately Depressed{\textquotedblright}, and {\textquotedblleft}Severely Depressed{\textquotedblright}. This overview presents a description of the task, the data set, the methodologies used and an analysis of the results of the submissions. The models that were submitted as a part of the shared task used a variety of technologies, from traditional machine learning algorithms to deep learning models. It could be observed from the results that the transformer based models outperformed the other models. Among the 31 teams who submitted their results for the shared task, the best macro F1-score of 0.583 was obtained using a transformer based model.
null
null
10.18653/v1/2022.ltedi-1.51
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,367
inproceedings
b-etal-2022-findings-shared
Findings of the Shared Task on Speech Recognition for Vulnerable Individuals in {T}amil
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.52/
B, Bharathi and Chakravarthi, Bharathi Raja and Cn, Subalalitha and N, Sripriya and Pandian, Arunaggiri and Valli, Swetha
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
339--345
This paper illustrates the overview of the shared task on automatic speech recognition in the Tamil language. In the shared task, spontaneous Tamil speech data gathered from elderly and transgender people was given for recognition and evaluation. These utterances were collected from people when they communicated in public locations such as hospitals, markets, vegetable shops, etc. The speech corpus includes utterances of male, female, and transgender speakers and was split into training and testing data. The given task was evaluated using WER (Word Error Rate). The participants used transformer-based models for automatic speech recognition. Different results using different pre-trained transformer models are discussed in this overview paper.
null
null
10.18653/v1/2022.ltedi-1.52
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,368
inproceedings
sharen-rajalakshmi-2022-dlrg
{DLRG}@{LT}-{EDI}-{ACL}2022: Detecting signs of Depression from Social Media using {XGB}oost Method
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.53/
Sharen, Herbert and Rajalakshmi, Ratnavel
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
346--349
Depression is linked to the development of dementia. Cognitive functions such as thinking and remembering generally deteriorate in dementia patients. Social media usage has increased among people in recent days. The technology advancements help the community to express their views publicly. Analysing the signs of depression from texts has become an important area of research now, as it helps to identify this kind of mental disorder among people from their social media posts. As part of the shared task on detecting signs of depression from social media text, a dataset has been provided by the organizers (Sampath et al.). We applied different machine learning techniques such as Support Vector Machine, Random Forest and XGBoost classifier to classify the signs of depression. Experimental results revealed that the XGBoost model outperformed the other models with the highest classification accuracy of 0.61{\%} and a Macro F1 score of 0.54.
null
null
10.18653/v1/2022.ltedi-1.53
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,369
inproceedings
singh-motlicek-2022-idiap
{IDIAP} Submission@{LT}-{EDI}-{ACL}2022 : Hope Speech Detection for Equality, Diversity and Inclusion
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.54/
Singh, Muskaan and Motlicek, Petr
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
350--355
Social media platforms have been provoking masses of people. The individual comments affect a prevalent way of thinking by moving away from preoccupation with discrimination, loneliness, or influence in building confidence, support, and good qualities. This paper aims to identify hope in these social media posts. Hope significantly impacts the well-being of people, as suggested by health professionals. It reflects the belief to achieve an objective, discover a new path, or become motivated to formulate pathways. In this paper, we classify a given social media post as hope speech or not hope speech, using ensembled voting of BERT, ERNIE 2.0 and RoBERTa for the English language with a 0.54 macro F1-score ($2^{nd}$ rank). For the non-English languages Malayalam, Spanish and Tamil we utilized XLM-RoBERTa with macro F1 scores of 0.50, 0.81 and 0.30 ($1^{st}$, $1^{st}$ and $3^{rd}$ rank) respectively. For Kannada, we use Multilingual BERT with a 0.32 F1 score ($5^{th}$ position). We release our code-base here: \url{https://github.com/Muskaan-Singh/Hate-Speech-detection.git}.
null
null
10.18653/v1/2022.ltedi-1.54
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,370
inproceedings
singh-motlicek-2022-idiap-submission
{IDIAP} Submission@{LT}-{EDI}-{ACL}2022: Homophobia/Transphobia Detection in social media comments
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.55/
Singh, Muskaan and Motlicek, Petr
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
356--361
The increased expansion of abusive content on social media platforms negatively affects online users. Transphobic/homophobic content indicates hateful comments toward lesbian, gay, transgender, or bisexual people. It leads to offensive speech and causes severe social problems that can make online platforms toxic and unpleasant for LGBT+ people, undermining equality, diversity, and inclusion. In this paper, we present our classification system: given a comment, it predicts whether or not it contains any form of homophobia/transphobia, using a Zero-Shot learning framework. Our system submission achieved F1-scores of 0.40, 0.85 and 0.89 for Tamil, Tamil-English and English, with $1^{st}$, $1^{st}$ and $8^{th}$ ranks respectively. We release our codebase here: \url{https://github.com/Muskaan-Singh/Homophobia-and-Transphobia-ACL-Submission.git}.
null
null
10.18653/v1/2022.ltedi-1.55
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,371
inproceedings
singh-motlicek-2022-idiap-submission-lt
{IDIAP} Submission@{LT}-{EDI}-{ACL}2022: Detecting Signs of Depression from Social Media Text
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.56/
Singh, Muskaan and Motlicek, Petr
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
362--368
Depression is a common illness involving sadness and a lack of interest in day-to-day activities. It is important to detect depression at an early stage, as early treatment can avoid serious consequences. In this paper, we present our system submission, ARGUABLY, for DepSign-LT-EDI@ACL-2022. We aim to detect the signs of depression of a person from their social media postings, wherein people share their feelings and emotions. The proposed system is an ensembled voting model with fine-tuned BERT, RoBERTa, and XLNet. Given social media postings in English, the submitted system classifies the signs of depression into three labels, namely {\textquotedblleft}not depressed,{\textquotedblright} {\textquotedblleft}moderately depressed,{\textquotedblright} and {\textquotedblleft}severely depressed.{\textquotedblright} Our best model ranked $3^{rd}$ with 0.54{\%} accuracy. We make our codebase accessible here.
null
null
10.18653/v1/2022.ltedi-1.56
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,372
inproceedings
chakravarthi-etal-2022-overview
Overview of The Shared Task on Homophobia and Transphobia Detection in Social Media Comments
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.57/
Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Durairaj, Thenmozhi and McCrae, John and Buitelaar, Paul and Kumaresan, Prasanna and Ponnusamy, Rahul
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
369--377
Homophobia and Transphobia Detection is the task of identifying homophobic, transphobic, and non-anti-LGBT+ content in a given corpus. Homophobia and transphobia are both toxic language directed at LGBTQ+ individuals and are described as hate speech. This paper summarizes our findings on the {\textquotedblleft}Homophobia and Transphobia Detection in social media comments{\textquotedblright} shared task held at LT-EDI 2022 - ACL 2022. This shared task focused on three sub-tasks for Tamil, English, and Tamil-English (code-mixed) languages. It received 10 systems for Tamil, 13 systems for English, and 11 systems for Tamil-English. The best systems for Tamil, English, and Tamil-English scored 0.570, 0.870, and 0.610, respectively, on average macro F1-score.
null
null
10.18653/v1/2022.ltedi-1.57
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,373
inproceedings
chakravarthi-etal-2022-overview-shared
Overview of the Shared Task on Hope Speech Detection for Equality, Diversity, and Inclusion
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.58/
Chakravarthi, Bharathi Raja and Muralidaran, Vigneshwaran and Priyadharshini, Ruba and Cn, Subalalitha and McCrae, John and Garc{\'i}a, Miguel {\'A}ngel and Jim{\'e}nez-Zafra, Salud Mar{\'i}a and Valencia-Garc{\'i}a, Rafael and Kumaresan, Prasanna and Ponnusamy, Rahul and Garc{\'i}a-Baena, Daniel and Garc{\'i}a-D{\'i}az, Jos{\'e}
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
378--388
Hope Speech detection is the task of classifying a sentence as hope speech or non-hope speech given a corpus of sentences. Hope speech is any message or content that is positive, encouraging, reassuring, inclusive and supportive and that inspires and engenders optimism in the minds of people. In contrast to identifying and censoring negative speech patterns, hope speech detection is focussed on recognising and promoting positive speech patterns online. In this paper, we report an overview of the findings and results from the shared task on hope speech detection for the Tamil, Malayalam, Kannada, English and Spanish languages conducted in the second workshop on Language Technology for Equality, Diversity and Inclusion (LT-EDI-2022) organised as a part of ACL 2022. The participants were provided with annotated training {\&} development datasets and unlabelled test datasets in all five languages. The goal of the shared task is to classify the given sentences into one of the two hope speech classes. The performances of the systems submitted by the participants were evaluated in terms of micro-F1 score and weighted-F1 score. The datasets for this challenge are openly available.
null
null
10.18653/v1/2022.ltedi-1.58
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,374
inproceedings
gambardella-etal-2022-identifying
Identifying Cleartext in Historical Ciphers
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.1/
Gambardella, Maria-Elena and Megyesi, Beata and Pettersson, Eva
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
1--9
In historical encrypted sources we can find encrypted text sequences, also called ciphertext, as well as non-encrypted cleartexts written in a known language. While most cryptanalysis focuses on the decryption of ciphertext, cleartext is often overlooked, although it can give us important clues about the historical interpretation and contextualisation of the manuscript. In this paper, we investigate to what extent we can automatically distinguish cleartext from ciphertext in historical ciphers and to what extent we are able to identify its language. The problem is challenging as cleartext sequences in ciphers are often short, up to a few words, and in different languages due to historical code-switching. To identify the sequences and the language(s), we chose a rule-based approach and ran 7 different models using historical language models on various ciphertexts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,376
inproceedings
hellwig-sellmer-2022-detecting
Detecting Diachronic Syntactic Developments in Presence of Bias Terms
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.2/
Hellwig, Oliver and Sellmer, Sven
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
10--19
Corpus-based studies of diachronic syntactic changes are typically guided by the results of previous qualitative research. When such results are missing or, as is the case for Vedic Sanskrit, are restricted to small parts of a transmitted corpus, an exploratory framework that detects such changes in a data-driven fashion can substantially support the research process. In this paper, we introduce a customized version of the infinite relational model that groups syntactic constituents based on their structural similarities and their diachronic distributions. We propose a simple way to control for register and intellectual affiliation, and discuss our findings for four syntactic structures in Vedic texts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,377
inproceedings
nehrdich-hellwig-2022-accurate
Accurate Dependency Parsing and Tagging of {L}atin
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.3/
Nehrdich, Sebastian and Hellwig, Oliver
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
20--25
Having access to high-quality grammatical annotations is important for downstream tasks in NLP as well as for corpus-based research. In this paper, we describe experiments with the Latin BERT word embeddings that were recently made available by Bamman and Burns (2020). We show that these embeddings produce competitive results in the low-level task of morpho-syntactic tagging. In addition, we describe a graph-based dependency parser that is trained with these embeddings and that clearly outperforms various baselines.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,378
inproceedings
brigada-villa-etal-2022-annotating
Annotating {\textquotedblleft}Absolute{\textquotedblright} Preverbs in the {H}omeric and {V}edic Treebanks
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.4/
Brigada Villa, Luca and Biagetti, Erica and Zanchi, Chiara
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
26--30
Indo-European preverbs are uninflected morphemes attaching to verbs and modifying their meaning. In Early Vedic and Homeric Greek, these morphemes held ambiguous morphosyntactic status raising issues for syntactic annotation. This paper focuses on the annotation of preverbs in so-called {\textquotedblleft}absolute{\textquotedblright} position in two Universal Dependencies treebanks. This issue is related to the broader topic of how to annotate ellipsis in Universal Dependencies. After discussing some of the current annotations, we propose a new scheme that better accounts for the variety of absolute constructions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,379
inproceedings
asahara-etal-2022-chj
{CHJ}-{WLSP}: Annotation of {\textquoteleft}Word List by Semantic Principles' Labels for the Corpus of Historical {J}apanese
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.5/
Asahara, Masayuki and Ikegami, Nao and Suzuki, Tai and Ichimura, Taro and Kondo, Asuko and Kato, Sachi and Yamazaki, Makoto
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
31--37
This article presents a word-sense annotation for the Corpus of Historical Japanese: a mashed-up Japanese lexicon based on the {\textquoteleft}Word List by Semantic Principles' (WLSP). The WLSP is a large-scale Japanese thesaurus that includes 98,241 entries with syntactic and hierarchical semantic categories. The historical WLSP is also compiled for the words in ancient Japanese. We utilized a morpheme-word sense alignment table to extract all possible word sense candidates for each word appearing in the target corpus. Then, we manually disambiguated the word senses for 647,751 words in the texts from the 10th century to 1910.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,380
inproceedings
dehouck-2022-ikuvina
The {IKUVINA} Treebank
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.6/
Dehouck, Mathieu
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
38--42
In this paper, we introduce the first dependency treebank for the Umbrian language (an extinct Indo-European language from the Italic family, once spoken in modern day Italy). We present the source of the corpus: a set of seven bronze tablets describing religious ceremonies, written using two different scripts, unearthed in Umbria in the XVth century. The corpus itself has already been studied extensively by specialists of old Italic and classical Indo-European languages. So we discuss a number of challenges that we encountered as we annotated the corpus following Universal Dependencies' guidelines from existing textual analyses.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,381
inproceedings
fischer-etal-2022-machine
Machine Translation of 16{T}h Century Letters from {L}atin to {G}erman
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.7/
Fischer, Lukas and Scheurer, Patricia and Schwitter, Raphael and Volk, Martin
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
43--50
This paper outlines our work in collecting training data for and developing a Latin{--}German Neural Machine Translation (NMT) system for translating 16th century letters. While Latin{--}German is a low-resource language pair in terms of NMT, the domain of 16th century epistolary Latin is even more limited in this regard. Through our efforts in data collection and data generation, we are able to train an NMT model that provides good translations for short to medium sentences, and outperforms Google Translate overall. We focus on the correspondence of the Swiss reformer Heinrich Bullinger, but our parallel corpus and our NMT system will be of use for many other texts of the time.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,382
inproceedings
cecchini-pedonese-2022-treebank
A Treebank-based Approach to the Supprema Constructio in Dante's {L}atin Works
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.8/
Cecchini, Flavio Massimiliano and Pedonese, Giulia
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
51--58
This paper aims to apply a corpus-driven approach to Dante Alighieri's Latin works using UDante, a treebank based on Dante Search and part of the Universal Dependencies project. We present a method based on the notion of barycentre applied to a dependency tree as a way to calculate the {\textquotedblleft}syntactic balance{\textquotedblright} of a sentence. Its application to Dante's Latin works shows its potential in analysing the style of an author, and contributes to the interpretation of the supprema constructio mentioned in DVE II vi 7 as a well-balanced syntactic pattern modeled on Latin literary writing.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,383
inproceedings
quochi-etal-2022-inscriptions
From Inscriptions to Lexica and Back: A Platform for Editing and Linking the Languages of {A}ncient {I}taly
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.9/
Quochi, Valeria and Bellandi, Andrea and Khan, Fahad and Mallia, Michele and Murano, Francesca and Piccini, Silvia and Rigobianco, Luca and Tommasi, Alessandro and Zavattari, Cesare
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
59--67
Available language technology is hardly applicable to scarcely attested ancient languages, yet their digital semantic representation, though challenging, is an asset for the purpose of sharing and preserving existing cultural knowledge. In the context of a project on the languages and cultures of ancient Italy, we took up this challenge. The paper thus describes the development of a user friendly web platform, EpiLexO, for the creation and editing of an integrated system of language resources for ancient fragmentary languages centered on the lexicon, in compliance with current digital humanities and Linked Open Data principles. EpiLexo allows for the editing of lexica with all relevant cross-references: for their linking to their testimonies, as well as to bibliographic information and other (external) resources and common vocabularies. The focus of the current implementation is on the languages of ancient Italy, in particular Oscan, Faliscan, Celtic and Venetic; however, the technological solutions are designed to be general enough to be potentially applicable to different scenarios.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,384
inproceedings
palmero-aprosio-etal-2022-bertoldo
{BERT}oldo, the Historical {BERT} for {I}talian
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.10/
Palmero Aprosio, Alessio and Menini, Stefano and Tonelli, Sara
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
68--72
Recent works in historical language processing have shown that transformer-based models can be successfully created using historical corpora, and that using them for analysing and classifying data from the past can be beneficial compared to standard transformer models. This has led to the creation of BERT-like models for different languages trained with digital repositories from the past. In this work we introduce the Italian version of historical BERT, which we call BERToldo. We evaluate the model on the task of PoS-tagging Dante Alighieri's works, considering not only the tagger performance but also the model size and the time needed to train it. We also address the problem of duplicated data, which is rather common for languages with a limited availability of historical corpora. We show that deduplication reduces training time without affecting performance. The model and its smaller versions are all made available to the research community.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,385
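A side note on the deduplication step in the BERToldo entry above: in its simplest exact-match form it reduces to hashing whitespace-normalized documents. The sketch below shows only that simplification, not the authors' actual pipeline:

```python
# Sketch: exact-match deduplication of a pretraining corpus via content
# hashing; trivially reformatted copies hash identically after whitespace
# normalization. Near-duplicate detection would need more machinery.
import hashlib

def deduplicate(documents):
    """Keep the first occurrence of each distinct document."""
    seen, kept = set(), []
    for doc in documents:
        key = hashlib.md5(" ".join(doc.split()).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(doc)
    return kept

corpus = [
    "Nel mezzo del cammin di nostra vita",
    "Nel  mezzo del cammin  di nostra vita",  # duplicate up to spacing
    "mi ritrovai per una selva oscura",
]
print(len(deduplicate(corpus)))  # -> 2
```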
inproceedings
keersmaekers-van-hal-2022-search
In Search of the Flocks: How to Perform Onomasiological Queries in an {A}ncient {G}reek Corpus?
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.11/
Keersmaekers, Alek and Van Hal, Toon
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
73--83
This paper explores the possibilities of onomasiologically querying corpus data of Ancient Greek. The significance of the onomasiological approach has been highlighted in recent studies, yet the possibilities of performing 'word-finding' investigations into corpus data have not been dealt with in depth. The case study chosen focuses on collective nouns denoting animate groups (such as flocks of people, herds of cattle). By relying on a large automatically annotated corpus of Ancient Greek and on token-based vector information, a longlist of collective nouns was compiled through morpho-syntactic extraction and successive clustering procedures. After reducing this longlist to a shortlist, the results obtained are evaluated. In general, we find that πλῆθος can be considered the default collective noun for both humans and animals, becoming especially prominent during the Hellenistic period. In addition, specific tendencies in the use of collective nouns are discerned for specific semantic classes (e.g. gods and insects) and over time. Throughout the paper, special attention is paid to methodological issues related to onomasiological searching.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,386
inproceedings
corazza-etal-2022-contextual
Contextual Unsupervised Clustering of Signs for Ancient Writing Systems
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.12/
Corazza, Michele and Tamburini, Fabio and Val{\'e}rio, Miguel and Ferrara, Silvia
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
84--93
The application of machine learning techniques to ancient writing systems is a relatively new idea, and it poses interesting challenges for researchers. One particularly challenging aspect is the scarcity of data for these scripts, which contrasts with the large amounts of data usually available when applying neural models to computational linguistics and other fields. For this reason, any method that attempts to work on ancient scripts needs to be ad-hoc and consider paleographic aspects, in addition to computational ones. Considering the peculiar characteristics of the script that we used is therefore a crucial part of our work, as any solution needs to consider the particular nature of the writing system that it is applied to. In this work we propose a preliminary evaluation of a novel unsupervised clustering method on the Cypro-Greek syllabary, a writing system from Cyprus. This evaluation shows that our method improves clustering performance using information about the attested sequences of signs in combination with an unsupervised model for images, with the future goal of applying the methodology to undeciphered writing systems from related and typologically similar scripts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,387
inproceedings
favaro-etal-2022-towards
Towards the Creation of a Diachronic Corpus for {I}talian: A Case Study on the {GDLI} Quotations
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.13/
Favaro, Manuel and Guadagnini, Elisa and Sassolini, Eva and Biffi, Marco and Montemagni, Simonetta
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
94--100
In this paper we describe some experiments related to a corpus derived from an authoritative historical Italian dictionary, namely the Grande dizionario della lingua italiana ('Great Dictionary of the Italian Language', in short GDLI). Thanks to the digitization and structuring of this dictionary, we have been able to set up the first nucleus of a diachronic annotated corpus that selects, according to specific criteria and distinguishing between prose and poetry, some of the quotations that illustrate the different definitions and sub-definitions within the entries. In fact, the GDLI presents a huge collection of quotations covering the entire history of the Italian language and thus ranging from the Middle Ages to the present day. The corpus was enriched with linguistic annotation and used to train and evaluate NLP models for POS tagging and lemmatization, with promising results.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,388
inproceedings
yousef-etal-2022-automatic-translation
Automatic Translation Alignment for {A}ncient {G}reek and {L}atin
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.14/
Yousef, Tariq and Palladino, Chiara and Wright, David J. and Berti, Monica
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
101--107
This paper presents the results of automatic translation alignment experiments on a corpus of texts in Ancient Greek translated into Latin. We used a state-of-the-art alignment workflow based on a contextualized multilingual language model that is fine-tuned on the alignment task for Ancient Greek and Latin. The performance of the alignment model is evaluated on an alignment gold standard consisting of 100 parallel fragments aligned manually by two domain experts, with a 90.5{\%} Inter-Annotator Agreement (IAA). An interactive online interface is provided to enable users to explore the aligned fragments collection and examine the alignment model's output.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,389
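The alignment recipe in the entry above relies on a fine-tuned contextual multilingual model; the generic version of that recipe can be sketched with an off-the-shelf encoder and a bidirectional argmax over the cosine similarity of subword embeddings. The model id (xlm-roberta-base, not the authors' fine-tuned checkpoint) and the subword-level simplification are assumptions:

```python
# Sketch: embedding-based Greek-Latin alignment via mutual best matches.
# Off-the-shelf xlm-roberta-base stands in for the fine-tuned model used
# in the paper; alignment is done at the subword level for brevity.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")

def embed(sentence):
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    # Drop the <s> and </s> special tokens.
    return tok.convert_ids_to_tokens(inputs["input_ids"][0])[1:-1], hidden[1:-1]

src_toks, src_vecs = embed("ἐν ἀρχῇ ἦν ὁ λόγος")       # Ancient Greek
tgt_toks, tgt_vecs = embed("in principio erat verbum")  # Latin

# Cosine similarity matrix between all source and target subwords.
sim = torch.nn.functional.normalize(src_vecs, dim=-1) @ \
      torch.nn.functional.normalize(tgt_vecs, dim=-1).T

# Keep pairs that are mutual best matches in both argmax directions.
fwd, bwd = sim.argmax(dim=1), sim.argmax(dim=0)
for i in range(len(src_toks)):
    j = int(fwd[i])
    if int(bwd[j]) == i:
        print(src_toks[i], "<->", tgt_toks[j])
```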
inproceedings
swanson-tyers-2022-handling
Handling Stress in Finite-State Morphological Analyzers for {A}ncient {G}reek and {A}ncient {H}ebrew
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.15/
Swanson, Daniel and Tyers, Francis
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
108--113
Modeling stress placement has historically been a challenge for computational morphological analysis, especially in finite-state systems, because lexically conditioned stress cannot be modeled using only rewrite rules on the phonological form of a word. However, these phenomena can be modeled fairly easily if the lexicon's internal representation is allowed to contain more information than the pure phonological form. In this paper we describe the stress systems of Ancient Greek and Ancient Hebrew and present two prototype finite-state morphological analyzers, one for each language, which successfully implement these stress systems by inserting a small number of control characters into the phonological form. This conclusively refutes the claim that finite-state systems are not powerful enough to model such stress systems, and argues in favor of the continued relevance of finite-state systems as an appropriate tool for modeling the morphology of historical languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,390
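The key move in the entry above, storing a control character in the lexicon's internal representation and letting rewrite rules realize it as surface stress, does not need an FST toolkit to illustrate. A plain-Python analogue with a hypothetical two-entry lexicon (not real Greek or Hebrew data):

```python
# Sketch: lexically conditioned stress via a control character stored in the
# lexicon's internal representation. The paper's analyzers are finite-state;
# this is only a plain-Python illustration of the same idea.
STRESS = "\u0001"   # control character placed after the stressed vowel
ACUTE = "\u0301"    # combining acute accent used on the surface form

# Hypothetical lexicon entries (romanized, for illustration only).
lexicon = {
    "logos": f"lo{STRESS}gos",
    "anthropos": f"a{STRESS}nthropos",
}

def surface(internal):
    """Rewrite rule: control character -> combining accent on the vowel."""
    return internal.replace(STRESS, ACUTE)

for lemma, internal in lexicon.items():
    print(lemma, "->", surface(internal))
```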
inproceedings
vertan-prager-2022-inscription
From Inscription to Semi-automatic Annotation of {M}aya Hieroglyphic Texts
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.16/
Vertan, Cristina and Prager, Christian
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
114--118
The Maya script is the only readable autochthonous writing system of the Americas and consists of more than 1000 word signs and syllables. It is only partially deciphered and is the subject of the project "Text Database and Dictionary of the Classic Maya". Texts are recorded in TEI XML and on the basis of a digital sign and graph catalog, which are stored in the TextGrid virtual repository. Due to the state of decipherment, it is not possible to record hieroglyphic texts directly in phonemically transliterated values. The texts are therefore documented numerically using numeric sign codes based on Eric Thompson's catalog of the Maya script. The workflow for converting numerical transliteration into textual form involves several steps, with variable solutions possible at each step. For this purpose, the authors have developed ALMAH ("Annotator for the Linguistic Analysis of Maya Hieroglyphs"). The tool is a client application and allows semi-automatic generation of phonemic transliteration from numerical transliteration and enables multi-step linguistic annotation. Alternative readings can be entered, and two or more decipherment proposals can be processed in parallel. ALMAH is implemented in Java, is based on a graph-data model, and has a user-friendly interface.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,391
inproceedings
torres-aguilar-2022-multilingual
Multilingual Named Entity Recognition for Medieval Charters Using Stacked Embeddings and {BERT}-based Models
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.17/
Torres Aguilar, Sergio
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
119--128
In recent years the availability of medieval charter texts has increased thanks to advances in OCR and HTR techniques. But the lack of models that automatically structure the textual output continues to hinder large-scale information extraction from these historical sources, which are among the most important for medieval studies. This paper presents the process of annotating and modelling a corpus to automatically detect named entities in medieval charters in Latin, French and Spanish and address the problem of multilingual writing practices in the Late Middle Ages. It introduces a new annotated multilingual corpus and presents a training pipeline using two approaches: (1) a method using contextual and static embeddings coupled with a Bi-LSTM-CRF classifier; (2) a fine-tuning method using the pre-trained multilingual BERT and RoBERTa models. The experiments described here are based on a corpus encompassing about 2.3M words (7576 charters) coming from five charter collections ranging from the 10th to the 15th centuries. The evaluation proves that both multilingual classifiers based on general-purpose models and those specifically designed achieve high-performance results and do not show a performance drop compared to their monolingual counterparts. This paper describes the corpus and the annotation guidelines, and discusses the issues related to the language of the charters and their multilingual writing practices, so as to interpret the results within a larger historical perspective.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,392
inproceedings
fantoli-de-lhoneux-2022-linguistic
Linguistic Annotation of Neo-{L}atin Mathematical Texts: A Pilot-Study to Improve the Automatic Parsing of the Archimedes Latinus
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.18/
Fantoli, Margherita and de Lhoneux, Miryam
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
129--134
This paper describes the process of syntactically parsing the Latin translation by Jacopo da San Cassiano of the Greek mathematical work The Spirals of Archimedes. The Universal Dependencies formalism is adopted. First, we introduce the historical and linguistic importance of Jacopo da San Cassiano's translation. Subsequently, we describe the deep biaffine parser used for this pilot study. In particular, we motivate the choice of using the technique of treebank embeddings in light of the characteristics of mathematical texts. The paper then details the process of creating the training and test data, highlighting the most compelling linguistic features of the text and the choices implemented in the current version of the treebank. Finally, the results of the parsing are compared to a baseline and the most prominent errors are discussed. Overall, the paper shows the added value of creating specific training data, and of using targeted strategies (such as treebank embeddings) to exploit existing annotated corpora while preserving the features of one specific text when performing syntactic parsing.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,393
inproceedings
li-etal-2022-first
The First International {A}ncient {C}hinese Word Segmentation and {POS} Tagging Bakeoff: Overview of the {E}va{H}an 2022 Evaluation Campaign
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.19/
Li, Bin and Yuan, Yiguo and Lu, Jingya and Feng, Minxuan and Xu, Chao and Qu, Weiguang and Wang, Dongbo
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
135--140
This paper presents the results of the First Ancient Chinese Word Segmentation and POS Tagging Bakeoff (EvaHan), which was held at the Second Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) 2022, in the context of the 13th Edition of the Language Resources and Evaluation Conference (LREC 2022). We give the motivation for having an international shared contest, as well as the data and tracks. The contest consisted of two modalities, closed and open. In the closed modality, the participants were only allowed to use the training data; the highest F1 scores achieved were 96.03{\%} for word segmentation and 92.05{\%} for POS tagging. In the open modality, the participants could use whatever resources they had, and the highest F1 scores were 96.34{\%} for word segmentation and 92.56{\%} for POS tagging. The scores on the blind test dataset decrease by around 3 points, which shows that out-of-vocabulary words are still the bottleneck for lexical analyzers.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,394
inproceedings
chang-etal-2022-automatic
Automatic Word Segmentation and Part-of-Speech Tagging of {A}ncient {C}hinese Based on {BERT} Model
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.20/
Chang, Yu and Zhu, Peng and Wang, Chaoping and Wang, Chaofan
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
141--145
In recent years, new deep learning methods and pre-trained language models have been emerging in the field of natural language processing (NLP). These methods and models can greatly improve the accuracy of automatic word segmentation and part-of-speech tagging in the field of ancient Chinese research. Among these models, BERT has achieved impressive results on the SQuAD-1.1 machine reading comprehension benchmark and has outperformed other models on 11 different NLP tests. In this paper, the SIKU-RoBERTa pre-trained language model, based on the high-quality full-text corpus of the SiKuQuanShu, is adopted, and a word-segmented and part-of-speech-tagged portion of the ZuoZhuan corpus is used as the training set to build a BERT-based deep network model for word segmentation and POS tagging experiments. In addition, we also use other classical NLP network models for comparative experiments. The results show that, using the SIKU-RoBERTa pre-trained language model, the overall prediction accuracy of word segmentation and part-of-speech tagging can reach 93.87{\%} and 88.97{\%}, with excellent overall performance.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,395
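The tagging setup in the entry above (a BERT-family encoder with a per-character classification head) follows a standard recipe. A sketch with Hugging Face transformers; the model id SIKU-BERT/sikuroberta and the toy tag set are assumptions, and the randomly initialized head would of course need fine-tuning on the segmented and tagged ZuoZhuan data before its predictions mean anything:

```python
# Sketch: character-level joint WS+POS tagging with a pre-trained encoder.
# The model id and the 4-label tag set are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["B-n", "I-n", "B-v", "I-v"]  # toy joint segmentation+POS labels
name = "SIKU-BERT/sikuroberta"         # assumed checkpoint; any BERT works
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(
    name, num_labels=len(labels))

chars = list("天王狩于河陽")  # one character per input token
enc = tok(chars, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    pred = model(**enc).logits.argmax(-1)[0]

# Map predictions back to characters, skipping special tokens.
for idx, wid in enumerate(enc.word_ids()):
    if wid is not None:
        print(chars[wid], labels[int(pred[idx])])
```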
inproceedings
tian-guo-2022-ancient
{A}ncient {C}hinese Word Segmentation and Part-of-Speech Tagging Using Data Augmentation
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.21/
Tian, Yanzhi and Guo, Yuhang
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
146--149
We participated in the EvaHan2022 ancient Chinese word segmentation and Part-of-Speech (POS) tagging evaluation. We regard Chinese word segmentation and POS tagging as sequence tagging tasks. Our system is based on a BERT-BiLSTM-CRF model trained on the data provided by the EvaHan2022 evaluation. Besides, we also employ data augmentation techniques to enhance the performance of our model. On Test A and Test B of the evaluation, the F1 scores of our system reach 94.73{\%} and 90.93{\%} for word segmentation, and 89.19{\%} and 83.48{\%} for POS tagging.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,396
inproceedings
zhang-etal-2022-bert
{BERT} 4{EVER}@{E}va{H}an 2022: {A}ncient {C}hinese Word Segmentation and Part-of-Speech Tagging Based on Adversarial Learning and Continual Pre-training
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.22/
Zhang, Hailin and Yang, Ziyu and Fu, Yingwen and Ding, Ruoyao
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
150--154
With the development of artificial intelligence (AI) and digital humanities, ancient Chinese resources and language technology have also developed and grown, becoming an increasingly important part of the study of historiography and traditional Chinese culture. In order to promote research on automatic analysis technology for ancient Chinese, we conduct various experiments on the ancient Chinese word segmentation and part-of-speech (POS) tagging tasks of the EvaHan 2022 shared task. We model the word segmentation and POS tagging tasks jointly as a sequence tagging problem. In addition, we apply a series of training strategies on top of the provided ancient Chinese pre-trained model to enhance its performance. Concretely, we employ several augmentation strategies, including continual pre-training, adversarial training, and ensemble learning, to alleviate the limited amount of training data and the imbalance between POS labels. Extensive experiments demonstrate that our proposed models achieve considerable performance on the ancient Chinese word segmentation and POS tagging tasks.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,397
inproceedings
jiang-etal-2022-construction
Construction of Segmentation and Part of Speech Annotation Model in {A}ncient {C}hinese
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.23/
Jiang, Longjie and Chang, Qinyu C. and Xie, Huyin H. and Xia, Zhuying Z.
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
155--158
Among the four civilizations in the world with the longest histories, only Chinese civilization has been passed down continuously for 5000 years without interruption. An important factor is the Chinese nation's fine tradition of collating classics: recording history in writing and inheriting culture through the continuous collation of indigenous accounts has maintained the transmission of Chinese civilization. In this competition, the siku-roberta model was introduced into the part-of-speech tagging task for ancient Chinese using the Zuozhuan dataset, and good prediction results were obtained.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,398
inproceedings
tang-etal-2022-simple
Simple Tagging System with {R}o{BERT}a for {A}ncient {C}hinese
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.24/
Tang, Binghao and Lin, Boda and Li, Si
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
159--163
This paper describes the system submitted for the EvaHan 2022 Shared Task on word segmentation and part-of-speech tagging for Ancient Chinese. Our system is based on the pre-trained language model SIKU-RoBERTa and simple tagging layers. It significantly outperforms the official baselines on the released test sets, demonstrating its effectiveness.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,399
inproceedings
wang-ren-2022-uncertainty
The Uncertainty-based Retrieval Framework for {A}ncient {C}hinese {CWS} and {POS}
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.25/
Wang, Pengyu and Ren, Zhichen
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
164--168
Automatic analysis for modern Chinese has greatly improved the accuracy of text mining in related fields, but the study of ancient Chinese is still relatively rare. Ancient text division and lexical annotation are important parts of classical literature comprehension, and previous studies have tried to construct auxiliary dictionaries and other fused knowledge to improve performance. In this paper, we propose a framework for ancient Chinese word segmentation and part-of-speech tagging that makes a twofold effort: on the one hand, we try to capture the wordhood semantics; on the other hand, we re-predict the uncertain samples of the baseline model by introducing external knowledge. Our architecture outperforms pre-trained BERT with CRF and existing tools such as Jiayan.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,400
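The "re-predict the uncertain samples" step in the entry above can be made concrete with a confidence threshold on the baseline tagger's output distributions. In this sketch the entropy cutoff, the toy predictors, and the knowledge-augmented fallback are all hypothetical:

```python
# Sketch: uncertainty-based routing. Sentences whose mean per-character
# entropy exceeds a threshold are re-predicted by a second-stage model
# that may consult external knowledge (stubbed out here).
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def route(sentences, baseline, fallback, threshold=0.8):
    out = []
    for sent in sentences:
        tags, rows = baseline(sent)            # per-character distributions
        if np.mean([entropy(r) for r in rows]) > threshold:
            tags = fallback(sent)              # uncertain -> re-predict
        out.append(tags)
    return out

# Toy stand-ins for the two predictors.
def baseline(sent):
    rows = np.random.default_rng(len(sent)).dirichlet(np.ones(4), len(sent))
    return rows.argmax(1).tolist(), rows

def fallback(sent):
    return [0] * len(sent)  # placeholder for the knowledge-augmented model

print(route(["天王狩于河陽", "初稅畝"], baseline, fallback))
```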
inproceedings
shen-etal-2022-data
Data Augmentation for Low-resource Word Segmentation and {POS} Tagging of {A}ncient {C}hinese Texts
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.26/
Shen, Yutong and Li, Jiahuan and Huang, Shujian and Zhou, Yi and Xie, Xiaopeng and Zhao, Qinxin
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
169--173
Automatic word segmentation and part-of-speech tagging of ancient books can help relevant researchers to study ancient texts. In recent years, pre-trained language models have achieved significant improvements on text processing tasks. SikuRoberta is a pre-trained language model specially designed for the automatic analysis of ancient Chinese texts. Although SikuRoberta significantly boosts performance on WSG and POS tasks on ancient Chinese texts, the lack of labeled data still limits the performance of the model. In this paper, to alleviate the problem of insufficient training data, we define hybrid tags to integrate the WSG and POS tasks and design a Roberta-CRF model to predict a tag for each Chinese character. Moreover, we generate synthetic labeled data based on an LSTM language model. To further mine the knowledge in SikuRoberta, we also generate synthetic unlabeled data based on the masked LM. Experiments show that the performance of the model is improved with the synthetic data, indicating the effectiveness of the data augmentation methods.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,401
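The hybrid tags in the entry above, folding word segmentation (B/I position) and POS into one label per character, are easy to make concrete together with the decoding step that recovers words and tags. The data below is a toy example:

```python
# Sketch: hybrid tags integrating word segmentation and POS, one label per
# character, plus the inverse decoding back to (word, POS) pairs.
def encode(words_with_pos):
    """[('天王','n'), ('狩','v')] -> [('天','B-n'), ('王','I-n'), ('狩','B-v')]"""
    return [(ch, f"{'B' if i == 0 else 'I'}-{pos}")
            for word, pos in words_with_pos
            for i, ch in enumerate(word)]

def decode(tagged):
    """Rebuild (word, POS) pairs from per-character hybrid labels."""
    words, current, pos = [], "", None
    for ch, label in tagged:
        boundary, p = label.split("-")
        if boundary == "B":
            if current:
                words.append((current, pos))
            current, pos = ch, p
        else:
            current += ch
    if current:
        words.append((current, pos))
    return words

pairs = [("天王", "n"), ("狩", "v"), ("于", "p"), ("河陽", "ns")]
assert decode(encode(pairs)) == pairs
print(encode(pairs))
```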
inproceedings
yang-2022-joint
A Joint Framework for {A}ncient {C}hinese {WS} and {POS} Tagging Based on Adversarial Ensemble Learning
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.27/
Yang, Shuxun
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
174--177
Ancient Chinese word segmentation and part-of-speech tagging tasks are crucial to facilitate the study of ancient Chinese and the dissemination of traditional Chinese culture. Current methods face problems such as the lack of large-scale labeled data, error propagation between the individual tasks, and a lack of robustness and generalization in the models. Therefore, we propose a joint framework for ancient Chinese WS and POS tagging based on adversarial ensemble learning, called AENet. On the basis of pre-training and fine-tuning, AENet uses a joint tagging approach for WS and POS and treats them as a joint sequence tagging task. Meanwhile, AENet incorporates adversarial training and ensemble learning, which effectively improves recognition efficiency while enhancing the robustness and generalization of the model. Our experiments demonstrate that AENet improves the F1 score of word segmentation by 4.48{\%} and that of part-of-speech tagging by 2.29{\%} on the test dataset compared with the baseline, which shows high performance and strong generalization.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,402
inproceedings
xinyuan-etal-2022-glyph
Glyph Features Matter: A Multimodal Solution for {E}va{H}an in {LT}4{HALA}2022
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.28/
Xinyuan, Wei and Weihao, Liu and Zong, Qing and Shaoqing, Zhang and Hu, Baotian
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
178--182
We participate in the LT4HALA2022 shared task EvaHan. This task has two subtasks: subtask 1 is word segmentation, and subtask 2 is part-of-speech tagging. Each subtask consists of two tracks, a closed track restricted to the data and models provided by the organizer, and an open track without restrictions. We employ three pre-trained models, two of which are open-source pre-trained models for ancient Chinese (Siku-Roberta and roberta-classical-chinese), and one is our pre-trained GlyphBERT, which incorporates glyph features. Our methods include data augmentation, data pre-processing, model pretraining, downstream fine-tuning, k-fold cross-validation and model ensembling. We achieve competitive P, R, and F1 scores on both our own validation set and the final public test set.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,403
inproceedings
sprugnoli-etal-2022-overview
Overview of the {E}va{L}atin 2022 Evaluation Campaign
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.29/
Sprugnoli, Rachele and Passarotti, Marco and Cecchini, Flavio Massimiliano and Fantoli, Margherita and Moretti, Giovanni
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
183--188
This paper describes the organization and the results of the second edition of EvaLatin, the campaign for the evaluation of Natural Language Processing tools for Latin. The three shared tasks proposed in EvaLatin 2022, i.e., Lemmatization, Part-of-Speech Tagging and Features Identification, aim to foster research in the field of language technologies for Classical languages. The shared dataset consists of texts mainly taken from the LASLA corpus. More specifically, the training set includes only prose texts of the Classical period, whereas the test set is organized in three sub-tasks: a Classical sub-task on a prose text by an author not included in the training data, a Cross-genre sub-task on poetic and scientific texts, and a Cross-time sub-task on a text of the 15th century. The results obtained by the participants for each task and sub-task are presented and discussed.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,404
inproceedings
mercelis-keersmaekers-2022-electra
An {ELECTRA} Model for {L}atin Token Tagging Tasks
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.30/
Mercelis, Wouter and Keersmaekers, Alek
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
189--192
This report describes the KU Leuven / Brepols-CTLO submission to EvaLatin 2022. We present the results of our current small Latin ELECTRA model, which will be expanded to a larger model in the future. For the lemmatization task, we combine a neural token-tagging approach with the in-house rule-based lemma lists from Brepols' ReFlex software. The results are decent, but suffer from inconsistencies between Brepols' and EvaLatin's definitions of a lemma. For POS tagging, the results come up just short of first place in this competition, mainly struggling with proper nouns. For morphological tagging, there is much more room for improvement. Here, the constraints added to our Multiclass Multilabel model were often not tight enough, causing missing morphological features. We will further investigate why the combination of the different morphological features, which perform fine on their own, leads to issues.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,405
inproceedings
wrobel-nowak-2022-transformer
Transformer-based Part-of-Speech Tagging and Lemmatization for {L}atin
Sprugnoli, Rachele and Passarotti, Marco
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lt4hala-1.31/
Wr{\'o}bel, Krzysztof and Nowak, Krzysztof
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
193--197
The paper presents a submission to the EvaLatin 2022 shared task. Our system places first for lemmatization, part-of-speech and morphological tagging in both closed and open modalities. The results for cross-genre and cross-time sub-tasks show that the system handles the diachronic and diastratic variation of Latin. The architecture employs state-of-the-art transformer models. For part-of-speech and morphological tagging, we use XLM-RoBERTa large, while for lemmatization a ByT5 small model was employed. The paper features a thorough discussion of part-of-speech and lemmatization errors which shows how the system performance may be improved for Classical, Medieval and Neo-Latin texts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,406
inproceedings
costa-etal-2022-domain
Domain Adaptation in Neural Machine Translation using a Qualia-Enriched {F}rame{N}et
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.1/
Costa, Alexandre Diniz da and Coutinho Marim, Mateus and Matos, Ely and Timponi Torrent, Tiago
Proceedings of the Thirteenth Language Resources and Evaluation Conference
1--12
In this paper we present Scylla, a methodology for domain adaptation of Neural Machine Translation (NMT) systems that make use of a multilingual FrameNet enriched with qualia relations as an external knowledge base. Domain adaptation techniques used in NMT usually require fine-tuning and in-domain training data, which may pose difficulties for those working with lesser-resourced languages and may also lead to performance decay of the NMT system for out-of-domain sentences. Scylla does not require fine-tuning of the NMT model, avoiding the risk of model over-fitting and consequent decrease in performance for out-of-domain translations. Two versions of Scylla are presented: one using the source sentence as input, and another one using the target sentence. We evaluate Scylla in comparison to a state-of-the-art commercial NMT system in an experiment in which 50 sentences from the Sports domain are translated from Brazilian Portuguese to English. The two versions of Scylla significantly outperform the baseline commercial system in HTER.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,408
inproceedings
gladkoff-han-2022-hope
{HOPE}: A Task-Oriented and Human-Centric Evaluation Framework Using Professional Post-Editing Towards More Effective {MT} Evaluation
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.2/
Gladkoff, Serge and Han, Lifeng
Proceedings of the Thirteenth Language Resources and Evaluation Conference
13--21
Traditional automatic evaluation metrics for machine translation have been widely criticized by linguists due to their low accuracy, lack of transparency, focus on language mechanics rather than semantics, and low agreement with human quality evaluation. Human evaluations in the form of MQM-like scorecards have always been carried out in real industry settings by both clients and translation service providers (TSPs). However, traditional human translation quality evaluations are costly to perform, go into great linguistic detail, raise issues of inter-rater reliability (IRR), and are not designed to measure the quality of translations below premium quality. In this work, we introduce HOPE, a task-oriented and human-centric evaluation framework for machine translation output based on professional post-editing annotations. It contains only a limited number of commonly occurring error types, and uses a scoring model with a geometric progression of error penalty points (EPPs) reflecting the error severity level within each translation unit. Initial experimental work carried out on English-Russian MT outputs of marketing content from a highly technical domain reveals that our evaluation framework is quite effective in reflecting MT output quality with regard to both overall system-level performance and segment-level transparency, and that it increases the IRR for error type interpretation. The approach has several key advantages, such as the ability to measure and compare less-than-perfect MT output from different systems, the ability to indicate human perception of quality, immediate estimation of the labor effort required to bring MT output to premium quality, low-cost and faster application, as well as higher IRR. Our experimental data is available at \url{https://github.com/lHan87/HOPE}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,409
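The HOPE scoring model above hinges on error penalty points (EPPs) that grow geometrically with error severity. A sketch of that arithmetic; the base penalty, ratio, severity scale, and per-unit maximum are illustrative assumptions, not the published calibration:

```python
# Sketch: scoring translation units with geometrically progressing error
# penalty points (EPPs). All numeric constants are illustrative.
def epp(severity, base=1.0, ratio=2.0):
    """Penalty for one error: base * ratio**severity (0 = minor error)."""
    return base * ratio ** severity

def unit_score(error_severities, max_points=10.0):
    """Score one translation unit by subtracting its accumulated EPPs."""
    return max(0.0, max_points - sum(epp(s) for s in error_severities))

# Two units: one with two minor errors, one with a single critical error.
for i, errs in enumerate([[0, 0], [3]], start=1):
    print(f"unit {i}: score {unit_score(errs):.1f}")  # 8.0 and 2.0
```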
inproceedings
park-etal-2022-priming
Priming {A}ncient {K}orean Neural Machine Translation
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.3/
Park, Chanjun and Lee, Seolhwa and Seo, Jaehyung and Moon, Hyeonseok and Eo, Sugyeong and Lim, Heuiseok
Proceedings of the Thirteenth Language Resources and Evaluation Conference
22--28
In recent years, there has been an increasing need for the restoration and translation of historical languages. In this study, we attempt to translate historical records in the ancient Korean language based on neural machine translation (NMT). Inspired by priming, a cognitive science theory holding that exposure to one stimulus influences the response to another, we propose novel priming ancient-Korean NMT (AKNMT) using bilingual subword embedding initialization with awareness of the structural properties of the ancient documents. Finally, we obtain state-of-the-art results on the AKNMT task. To the best of our knowledge, this is the first work to confirm the possibility of developing a human-centric model that incorporates concepts from cognitive science, and to analyze the results from the perspective of interference and cognitive dissonance theory.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,410
inproceedings
colman-etal-2022-geco
{GECO}-{MT}: The Ghent Eye-tracking Corpus of Machine Translation
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.4/
Colman, Toon and Fonteyne, Margot and Daems, Joke and Dirix, Nicolas and Macken, Lieve
Proceedings of the Thirteenth Language Resources and Evaluation Conference
29--38
In the present paper, we describe a large corpus of eye movement data, collected during natural reading of a human translation and a machine translation of a full novel. This data set, called GECO-MT (Ghent Eye-tracking Corpus of Machine Translation), expands upon an earlier corpus called GECO (Ghent Eye-tracking Corpus) by Cop et al. (2017). The eye movement data in GECO-MT will be used in future research to investigate the effect of machine translation on the reading process and the effects of various error types on reading. In this article, we describe in detail the materials and data collection procedure of GECO-MT. Extensive information on the language proficiency of our participants is given, as well as a comparison with the participants of the original GECO. We investigate the distribution of a selection of important eye movement variables and explore the possibilities for future analyses of the data. GECO-MT is freely available at \url{https://www.lt3.ugent.be/resources/geco-mt}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,411
inproceedings
remijnse-etal-2022-introducing
Introducing Frege to {F}illmore: A {F}rame{N}et Dataset that Captures both Sense and Reference
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.5/
Remijnse, Levi and Vossen, Piek and Fokkens, Antske and Titarsolej, Sam
Proceedings of the Thirteenth Language Resources and Evaluation Conference
39--50
This article presents the first output of the Dutch FrameNet annotation tool, which facilitates both referential and frame annotations of language-independent corpora. On the referential level, the tool links in-text mentions to structured data, grounding the text in the real world. On the frame level, those same mentions are annotated with respect to their semantic sense. This way of annotating not only generates a rich linguistic dataset that is grounded in real-world event instances, but also guides the annotators in frame identification, resulting in high inter-annotator agreement and consistent annotations across documents and at the discourse level, exceeding traditional sentence-level annotations of frame elements. Moreover, the annotation tool features a dynamic lexical lookup that supports the development of a cross-domain FrameNet lexicon.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,412
inproceedings
pedersen-etal-2022-compiling
Compiling a Suitable Level of Sense Granularity in a Lexicon for {AI} Purposes: The Open Source {COR} Lexicon
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.6/
Pedersen, Bolette and S{\o}rensen, Nathalie Carmen Hau and Nimb, Sanni and Fl{\o}rke, Ida and Olsen, Sussi and Troelsg{\r{a}}rd, Thomas
Proceedings of the Thirteenth Language Resources and Evaluation Conference
51--60
We present the Central Word Register for Danish (COR), an open-source lexicon project for general AI purposes funded and initiated by the Danish Agency for Digitisation as part of an AI initiative embarked upon by the Danish Government in 2020. We focus here on the lexical semantic part of the project (COR-S) and describe how, based on the existing fine-grained sense inventory from Den Danske Ordbog (DDO), we compile a sense granularity level of the vocabulary more suitable for AI. A three-step methodology is applied: we establish a set of linguistic principles for defining core senses in COR-S and, from there, we generate a hand-crafted gold standard of 6,000 lemmas depicting how to get from the fine-grained DDO senses to the COR inventory. Finally, we experiment with a number of language models in order to automatize the sense reduction of the rest of the lexicon. The models comprise a rule-based model that applies our linguistic principles in terms of features, a word2vec model using cosine similarity to measure sense proximity, and finally a deep neural BERT model fine-tuned on our annotations. The rule-based approach shows the best results, in particular on adjectives; however, when focusing on the average polysemous vocabulary, the BERT model shows promising results too.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,413
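The word2vec component mentioned above, cosine similarity as a measure of sense proximity before merging fine-grained DDO senses, reduces to a few lines. The vectors here are made-up stand-ins for embeddings of sense definitions, and the merge threshold is an assumption:

```python
# Sketch: flagging fine-grained senses for merging when their definition
# embeddings are close in cosine similarity. Vectors and threshold are toy.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sense_vectors = {
    "sense_1 (river edge)":  np.array([0.9, 0.1, 0.0]),
    "sense_2 (slope)":       np.array([0.8, 0.2, 0.1]),
    "sense_3 (institution)": np.array([0.0, 0.1, 0.9]),
}

THRESHOLD = 0.95
names = list(sense_vectors)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        sim = cosine(sense_vectors[names[i]], sense_vectors[names[j]])
        verdict = "merge" if sim >= THRESHOLD else "keep apart"
        print(f"{names[i]} vs {names[j]}: {sim:.2f} -> {verdict}")
```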
inproceedings
bond-choo-2022-sense
Sense and Sentiment
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.7/
Bond, Francis and Choo, Merrick
Proceedings of the Thirteenth Language Resources and Evaluation Conference
61--69
In this paper we examine existing sentiment lexicons and sense-based sentiment-tagged corpora to find out how sense- and concept-based semantic relations affect sentiment scores (for polarity and valence). We show that some relations are good predictors of the sentiment of related words: antonyms have similar valence and opposite polarity, synonyms have similar valence and polarity, as do many derivational relations. We use this knowledge and existing resources to build a sentiment-annotated wordnet of English, and show how it can be used to produce sentiment lexicons for other languages using the Open Multilingual Wordnet.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,414
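The regularities reported above (synonyms share valence and polarity, antonyms flip polarity) suggest a one-step propagation rule over a wordnet. A sketch with NLTK's WordNet interface; the seed polarities are assumptions, and `nltk.download('wordnet')` must have been run beforehand:

```python
# Sketch: propagating polarity over wordnet relations, following the
# observation that synonyms keep polarity while antonyms invert it.
from nltk.corpus import wordnet as wn

seeds = {"good": 1.0, "bad": -1.0}  # hypothetical seed polarities

def propagate(seeds):
    scores = dict(seeds)
    for word, polarity in seeds.items():
        for synset in wn.synsets(word):
            for lemma in synset.lemmas():
                # Co-members of a synset (synonyms) inherit the polarity.
                scores.setdefault(lemma.name(), polarity)
                # Antonyms receive the opposite polarity.
                for ant in lemma.antonyms():
                    scores.setdefault(ant.name(), -polarity)
    return scores

scores = propagate(seeds)
for w in ("good", "goodness", "bad", "badness"):
    if w in scores:
        print(w, scores[w])
```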
inproceedings
sio-morgado-da-costa-2022-enriching
Enriching Linguistic Representation in the {C}antonese {W}ordnet and Building the New {C}antonese {W}ordnet Corpus
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.8/
Sio, Ut Seong and Morgado da Costa, Lu{\'i}s
Proceedings of the Thirteenth Language Resources and Evaluation Conference
70--78
This paper reports on the most recent improvements on the Cantonese Wordnet, a wordnet project started in 2019 (Sio and Morgado da Costa, 2019) with the aim of capturing and organizing lexico-semantic information of Hong Kong Cantonese. The improvements we present here extend both the breadth and depth of the Cantonese Wordnet: increasing the general coverage, adding functional categories, enriching verbal representations, as well as creating the Cantonese Wordnet Corpus {--} a corpus of handcrafted examples where individual senses are shown in context.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,415
inproceedings
habash-palfreyman-2022-zaebuc
{ZAEBUC}: An Annotated {A}rabic-{E}nglish Bilingual Writer Corpus
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.9/
Habash, Nizar and Palfreyman, David
Proceedings of the Thirteenth Language Resources and Evaluation Conference
79--88
We present ZAEBUC, an annotated Arabic-English bilingual writer corpus comprising short essays by first-year university students at Zayed University in the United Arab Emirates. We describe and discuss the various guidelines and pipeline processes we followed to create the annotations and quality-check them. The annotations include spelling and grammar correction, morphological tokenization, Part-of-Speech tagging, lemmatization, and Common European Framework of Reference (CEFR) ratings. All of the annotations are done on Arabic and English texts using guidelines that are as consistent as possible, with tracked alignments among the different annotations and to the original raw texts. For morphological tokenization, POS tagging, and lemmatization, we use existing automatic annotation tools followed by manual correction. We also present various measurements and correlations with preliminary insights drawn from the data and annotations. The publicly available ZAEBUC corpus and its annotations are intended to be stepping stones for additional annotations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,416
inproceedings
bolucu-can-2022-turkish
{T}urkish {U}niversal {C}onceptual {C}ognitive {A}nnotation
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.10/
B{\"ol{\"uc{\"u, Necva and Can, Burcu
Proceedings of the Thirteenth Language Resources and Evaluation Conference
89--99
Universal Conceptual Cognitive Annotation (UCCA) (Abend and Rappoport, 2013a) is a cross-lingual semantic annotation framework that provides easy annotation without any requirement for a linguistic background. UCCA-annotated datasets have already been released in English, French, and German. In this paper, we introduce the first UCCA-annotated Turkish dataset, which currently involves 50 sentences obtained from the METU-Sabanci Turkish Treebank (Atalay et al., 2003; Oflazer et al., 2003). We followed a semi-automatic annotation approach, in which an external semantic parser is utilised for an initial annotation of the dataset, which is partially accurate and requires refinement. We manually revised the annotations obtained from the semantic parser that were not in line with the UCCA rules we defined for Turkish. We used the same external semantic parser for evaluation purposes and conducted experiments in both zero-shot and few-shot learning settings. While the parser cannot predict remote edges in the zero-shot setting, using even a small subset of training data in the few-shot setting increased the overall F1 score, including for the remote edges. This is the initial version of the annotated dataset, and we are currently extending it. We will release the Turkish UCCA annotation guidelines along with the annotated dataset.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,417
inproceedings
varadi-etal-2022-introducing
Introducing the {CURLICAT} Corpora: Seven-language Domain Specific Annotated Corpora from Curated Sources
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.11/
V{\'a}radi, Tam{\'a}s and Ny{\'e}ki, Bence and Koeva, Svetla and Tadi{\'c}, Marko and {\v{S}}tefanec, Vanja and Ogrodniczuk, Maciej and Nito{\'n}, Bart{\l}omiej and P{\k{e}}zik, Piotr and Barbu Mititelu, Verginica and Irimia, Elena and Mitrofan, Maria and Tufiș, Dan and Garab{\'i}k, Radovan and Krek, Simon and Repar, Andra{\v{z}}
Proceedings of the Thirteenth Language Resources and Evaluation Conference
100--108
This article presents the current outcomes of the CURLICAT CEF Telecom project, which aims to collect and deeply annotate a set of large corpora from selected domains. The CURLICAT corpus includes 7 monolingual corpora (Bulgarian, Croatian, Hungarian, Polish, Romanian, Slovak and Slovenian) containing selected samples from the respective national corpora. These corpora are automatically tokenized, lemmatized and morphologically analysed, and the named entities are annotated. The annotations are uniformly provided for each language-specific corpus, while the common metadata schema is harmonised across the languages. Additionally, the corpora are annotated for IATE terms in all languages. The file format is CoNLL-U Plus, containing the ten columns specific to the CoNLL-U format and three extra columns specific to our corpora, as defined by Var{\'a}di et al. (2020). The CURLICAT corpora represent a rich and valuable source not just for training NMT models, but also for further studies and developments in machine learning, cross-lingual terminological data extraction and classification.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,418
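The CoNLL-U Plus layout described above (the ten standard CoNLL-U columns plus three project-specific extras) can be read with plain Python. The names of the extra columns below are placeholders, since the actual ones are those defined by Váradi et al. (2020):

```python
# Sketch: reading a CoNLL-U Plus file with 10 standard columns and 3 extras.
# Extra column names are placeholders for the CURLICAT-specific ones.
STANDARD = ["ID", "FORM", "LEMMA", "UPOS", "XPOS",
            "FEATS", "HEAD", "DEPREL", "DEPS", "MISC"]
EXTRAS = ["EXTRA1", "EXTRA2", "EXTRA3"]  # placeholders

def read_conllup(path):
    columns = STANDARD + EXTRAS
    sentence, sentences = [], []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.rstrip("\n")
            if line.startswith("#"):      # comment/metadata lines
                continue
            if not line:                  # blank line closes a sentence
                if sentence:
                    sentences.append(sentence)
                    sentence = []
                continue
            sentence.append(dict(zip(columns, line.split("\t"))))
    if sentence:
        sentences.append(sentence)
    return sentences
```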
inproceedings
rytting-etal-2022-ru
{RU}-{ADEPT}: {R}ussian Anonymized Dataset with Eight Personality Traits
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.12/
Rytting, C. Anton and Novak, Valerie and Hull, James R. and Frank, Victor M. and Rodrigues, Paul and Lee, Jarrett G. W. and Miller-Sims, Laurel
Proceedings of the Thirteenth Language Resources and Evaluation Conference
109--118
Social media has provided a platform for many individuals to easily express themselves naturally and publicly, and researchers have had the opportunity to utilize large quantities of this data to improve author trait analysis techniques and to improve author trait profiling systems. The majority of the work in this area, however, has focused narrowly on English and other Western European languages, and generally on a single social network at a time, despite the large quantity of data now available across languages and the differences that have been found across platforms. This paper introduces RU-ADEPT, a dataset of Russian authors' personality trait scores (Big Five and Dark Triad) and demographic information (e.g. age, gender), with an associated corpus of the authors' cross-contributions to up to four different social media platforms: VKontakte (VK), LiveJournal, Blogger, and Moi Mir. We believe this to be the first publicly available dataset associating demographic and personality trait data with Russian-language social media content, the first paper to describe the collection of Dark Triad scores paired with texts across multiple Russian-language social media platforms, and, to a limited extent, the first publicly available dataset linking personality traits to author content across several different social media sites.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,419
inproceedings
brabant-etal-2022-coqar
{C}o{QAR}: Question Rewriting on {C}o{QA}
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.13/
Brabant, Quentin and Lecorv{\'e}, Gw{\'e}nol{\'e} and Rojas Barahona, Lina M.
Proceedings of the Thirteenth Language Resources and Evaluation Conference
119--126
Questions asked by humans during a conversation often contain contextual dependencies, i.e., explicit or implicit references to previous dialogue turns. These dependencies take the form of coreferences (e.g., via pronoun use) or ellipses, and can make understanding difficult for automated systems. One way to facilitate the understanding and subsequent treatment of a question is to rewrite it into an out-of-context form, i.e., a form that can be understood without the conversational context. We propose CoQAR, a corpus containing 4.5K conversations from the Conversational Question-Answering dataset CoQA, for a total of 53K follow-up question-answer pairs. Each original question was manually annotated with at least 2 and at most 3 out-of-context rewritings. CoQA originally contains 8k conversations, which sum up to 127k question-answer pairs. CoQAR can be used for the supervised learning of three tasks: question paraphrasing, question rewriting and conversational question answering. In order to assess the quality of CoQAR's rewritings, we conducted several experiments consisting of training and evaluating models for these three tasks. Our results support the idea that question rewriting can be used as a preprocessing step for (conversational and non-conversational) question answering models, thereby increasing their performance.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,420
inproceedings
aicher-etal-2022-user
User Interest Modelling in Argumentative Dialogue Systems
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.14/
Aicher, Annalena and Gerstenlauer, Nadine and Minker, Wolfgang and Ultes, Stefan
Proceedings of the Thirteenth Language Resources and Evaluation Conference
127--136
Most systems that help to provide structured information and support opinion building discuss with users without considering their individual interests. The scarce existing research on user interest in dialogue systems depends on explicit user feedback. Such systems require user responses that are not content-related and thus tend to disturb the dialogue flow. In this paper, we present a novel model for implicitly estimating user interest during argumentative dialogues based on semantically clustered data. To this end, an online user study was conducted to acquire training data, which was used to train a binary neural network classifier to predict whether or not users are still interested in the content of the ongoing dialogue. We achieved a classification accuracy of 74.9{\%} and furthermore investigated, with different Artificial Neural Networks (ANNs), which new argument would best fit the user interest.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,421
inproceedings
xompero-etal-2022-every
Every time {I} fire a conversational designer, the performance of the dialogue system goes down
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.15/
Xompero, Giancarlo and Mastromattei, Michele and Salman, Samir and Giannone, Cristina and Favalli, Andrea and Romagnoli, Raniero and Zanzotto, Fabio Massimo
Proceedings of the Thirteenth Language Resources and Evaluation Conference
137--145
Incorporating handwritten domain scripts into neural-based task-oriented dialogue systems may be an effective way to reduce the need for large sets of annotated dialogues. In this paper, we investigate how the use of domain scripts written by conversational designers affects the performance of neural-based dialogue systems. To support this investigation, we propose the Conversational-Logic-Injection-in-Neural-Network system (CLINN) where domain scripts are coded in semi-logical rules. By using CLINN, we evaluated semi-logical rules produced by a team of differently-skilled conversational designers. We experimented with the Restaurant domain of the MultiWOZ dataset. Results show that external knowledge is extremely important for reducing the need for annotated examples for conversational systems. In fact, rules from conversational designers used in CLINN significantly outperform a state-of-the-art neural-based dialogue system when trained with smaller sets of annotated dialogues.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,422
inproceedings
wen-etal-2022-empirical
An Empirical Study on the Overlapping Problem of Open-Domain Dialogue Datasets
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.16/
Wen, Yuqiao and Luo, Guoqing and Mou, Lili
Proceedings of the Thirteenth Language Resources and Evaluation Conference
146--153
Open-domain dialogue systems aim to converse with humans through text, and dialogue research has heavily relied on benchmark datasets. In this work, we observe the overlapping problem in DailyDialog and OpenSubtitles, two popular open-domain dialogue benchmark datasets. Our systematic analysis then shows that such overlapping can be exploited to obtain fake state-of-the-art performance. Finally, we address this issue by cleaning these datasets and setting up a proper data processing procedure for future research.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,423
inproceedings
gamba-etal-2022-language
Language Technologies for the Creation of Multilingual Terminologies. Lessons Learned from the {SSHOC} Project
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.17/
Gamba, Federica and Frontini, Francesca and Broeder, Daan and Monachini, Monica
Proceedings of the Thirteenth Language Resources and Evaluation Conference
154--163
This paper is framed in the context of the SSHOC project and aims at exploring how Language Technologies can help in promoting and facilitating multilingualism in the Social Sciences and Humanities (SSH). Although most SSH researchers produce culturally and societally relevant work in their local languages, metadata and vocabularies used in the SSH domain to describe and index research data are currently mostly in English. We thus investigate Natural Language Processing and Machine Translation approaches in view of providing resources and tools to foster multilingual access and discovery to SSH content across different languages. As case studies, we create and deliver, as freely and openly available data, a set of multilingual metadata concepts and an automatically extracted multilingual Data Stewardship terminology. The two case studies also allow us to evaluate the performance of state-of-the-art tools and to derive a set of recommendations as to how best to apply them. Although not adapted to the specific domain, the employed tools prove to be a valid asset for translation tasks. Nonetheless, validation of results by domain experts proficient in the language is an unavoidable phase of the whole workflow.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,424
inproceedings
schulder-hanke-2022-fair
How to be {FAIR} when you {CARE}: The {DGS} {C}orpus as a Case Study of Open Science Resources for Minority Languages
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.18/
Schulder, Marc and Hanke, Thomas
Proceedings of the Thirteenth Language Resources and Evaluation Conference
164--173
The publication of resources for minority languages requires a balance between making data open and accessible and respecting the rights and needs of its language community. The FAIR principles were introduced as a guide to good open data practices and they have since been complemented by the CARE principles for indigenous data governance. This article describes how the DGS Corpus implemented these principles and how the two sets of principles affected each other. The DGS Corpus is a large collection of recordings of members of the deaf community in Germany communicating in their primary language, German Sign Language (DGS); it was created both as a resource for linguistic research and as a record of the life experiences of deaf people in Germany. The corpus was designed with CARE in mind to respect and empower the language community, and FAIR data publishing was used to enhance its usefulness as a scientific resource.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,425
inproceedings
basile-etal-2022-italian
{I}talian {NLP} for Everyone: Resources and Models from {EVALITA} to the {E}uropean Language Grid
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.19/
Basile, Valerio and Bosco, Cristina and Fell, Michael and Patti, Viviana and Varvara, Rossella
Proceedings of the Thirteenth Language Resources and Evaluation Conference
174--180
The European Language Grid enables researchers and practitioners to easily distribute and use NLP resources and models, such as corpora and classifiers. We describe in this paper how, during the course of our EVALITA4ELG project, we have integrated datasets and systems for the Italian language. We show how easy it is to use the integrated systems, and demonstrate in case studies how seamless the application of the platform is, providing Italian NLP for everyone.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,426
inproceedings
rosner-etal-2022-cross
Cross-Lingual Link Discovery for Under-Resourced Languages
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.20/
Rosner, Michael and Ahmadi, Sina and Apostol, Elena-Simona and Bosque-Gil, Julia and Chiarcos, Christian and Dojchinovski, Milan and Gkirtzou, Katerina and Gracia, Jorge and Gromann, Dagmar and Liebeskind, Chaya and Val{\={u}}nait{\.{e}} Ole{\v{s}}kevi{\v{c}}ien{\.{e}}, Giedr{\.{e}} and S{\'e}rasset, Gilles and Truic{\u{a}}, Ciprian-Octavian
Proceedings of the Thirteenth Language Resources and Evaluation Conference
181--192
In this paper, we provide an overview of current technologies for cross-lingual link discovery, and we discuss challenges, experiences and prospects of their application to under-resourced languages. We first introduce the goals of cross-lingual linking and associated technologies, and in particular, the role that the Linked Data paradigm (Bizer et al., 2011) applied to language data can play in this context. We define under-resourced languages with a specific focus on languages actively used on the internet, i.e., languages with a digitally versatile speaker community, but limited support in terms of language technology. We argue that for languages for which considerable amounts of textual data and (at least) a bilingual word list are available, techniques for cross-lingual linking can be readily applied, and that these enable the implementation of downstream applications for under-resourced languages via the localisation and adaptation of existing technologies and resources.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,427
inproceedings
dragos-etal-2022-angry
Angry or Sad? Emotion Annotation for Extremist Content Characterisation
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.21/
Dragos, Valentina and Battistelli, Delphine and Etienne, Aline and Constable, Yol{\`e}ne
Proceedings of the Thirteenth Language Resources and Evaluation Conference
193--201
This paper examines the role of emotion annotations to characterize extremist content released on social platforms. The analysis of extremist content is important to identify user emotions towards certain extremist ideas and to highlight the root cause of where emotions and extremist attitudes merge. To address these issues, our methodology combines knowledge from sociological and linguistic annotations to explore French extremist content collected online. For the emotion linguistic analysis, the solution presented in this paper relies on a complex linguistic annotation scheme. The scheme was used to annotate extremist text corpora in French. Data sets were collected online by following semi-automatic procedures for content selection and validation. The paper describes the integrated annotation scheme, the annotation protocol that was set up for the annotation of the French corpora, and the results, e.g. agreement measures and remarks on annotation disagreements. The aim of this work is twofold: first, to provide a characterization of extremist content; second, to validate the annotation scheme and to test its capacity to capture and describe various aspects of emotions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,428
inproceedings
zampieri-etal-2022-identification-multiword
Identification of Multiword Expressions in Tweets for Hate Speech Detection
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.22/
Zampieri, Nicolas and Ramisch, Carlos and Illina, Irina and Fohr, Dominique
Proceedings of the Thirteenth Language Resources and Evaluation Conference
202--210
Multiword expression (MWE) identification in tweets is a complex task due to the complex linguistic nature of MWEs combined with the non-standard language use in social networks. MWE features were shown to be helpful for hate speech detection (HSD). In this article, we present joint experiments on these two related tasks on English Twitter data: first we focus on the MWE identification task, and then we observe the influence of MWE-based features on the HSD task. For MWE identification, we compare the performance of two systems: lexicon-based and deep neural networks-based (DNN). We experimentally evaluate seven configurations of a state-of-the-art DNN system based on recurrent networks using pre-trained contextual embeddings from BERT. The DNN-based system outperforms the lexicon-based one thanks to its superior generalisation power, yielding much better recall. For the HSD task, we propose a new DNN architecture for incorporating MWE features. We confirm that MWE features are helpful for the HSD task. Moreover, the proposed DNN architecture beats previous MWE-based HSD systems by 0.4 to 1.1 F-measure points on average on four Twitter HSD corpora.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,429
inproceedings
jantscher-kern-2022-causal
Causal Investigation of Public Opinion during the {COVID}-19 Pandemic via Social Media Text
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.23/
Jantscher, Michael and Kern, Roman
Proceedings of the Thirteenth Language Resources and Evaluation Conference
211--226
Understanding the needs and fears of citizens, especially during a pandemic such as COVID-19, is essential for any government or legislative entity. An effective COVID-19 strategy further requires that the public understand and accept the restriction plans imposed by these entities. In this paper, we explore a causal mediation scenario in which we want to emphasize the use of NLP methods in combination with methods from economics and social sciences. Based on sentiment analysis of Tweets towards the current COVID-19 situation in the UK and Sweden, we conduct several causal inference experiments and attempt to decouple the effect of government restrictions on mobility behavior from the effect that occurs due to public perception of the COVID-19 strategy in a country. To avoid biased results, we control for valid country-specific epidemiological and time-varying confounders. Comprehensive experiments show that not all changes in mobility are caused by countries' implemented policies; some are also due to the support of individuals in the fight against this pandemic. We find that social media texts are an important source for capturing citizens' concerns and trust in policy makers, and are suitable for evaluating the success of government policies.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,430
inproceedings
nakwijit-purver-2022-misspelling
Misspelling Semantics in {T}hai
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.24/
Nakwijit, Pakawat and Purver, Matthew
Proceedings of the Thirteenth Language Resources and Evaluation Conference
227--236
User-generated content is full of misspellings. Rather than being just random noise, we hypothesise that many misspellings contain hidden semantics that can be leveraged for language understanding tasks. This paper presents a fine-grained annotated corpus of misspelling in Thai, together with an analysis of misspelling intention and its possible semantics, to get a better understanding of the misspelling patterns observed in the corpus. In addition, we introduce two approaches to incorporate the semantics of misspelling: Misspelling Average Embedding (MAE) and Misspelling Semantic Tokens (MST). Experiments on a sentiment analysis task confirm our overall hypothesis: additional semantics from misspelling can boost the micro F1 score by 0.4-2{\%}, while blindly normalising misspellings is harmful and suboptimal.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,431
inproceedings
moriceau-etal-2022-automatic
Automatic Detection of Stigmatizing Uses of Psychiatric Terms on {T}witter
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.25/
Moriceau, V{\'e}ronique and Benamara, Farah and Boumadane, Abdelmoumene
Proceedings of the Thirteenth Language Resources and Evaluation Conference
237--243
Psychiatry and people suffering from mental disorders have often been given a pejorative label that induces social rejection. Many studies have addressed discourse content about psychiatry on social media, suggesting that it conveys stigmatizing representations of mental health disorders. In this paper, we focus for the first time on the use of psychiatric terms in tweets in French. We first describe the annotated dataset that we use. Then we propose several deep learning models to automatically detect (1) the different types of use of psychiatric terms (medical use, misuse or irrelevant use), and (2) the polarity of the tweet. We show that polarity detection can be improved when done in a multitask framework in combination with type-of-use detection. This confirms the observations made manually on several datasets, namely that the polarity of a tweet is correlated to the type of term use (misuses are mostly negative whereas medical uses are neutral). The results are interesting for both tasks, and they open up the possibility of performant automatic approaches for conducting real-time surveys on social media that are larger and less expensive than existing manual ones.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,432
inproceedings
mohr-etal-2022-covert
{C}o{VERT}: A Corpus of Fact-checked Biomedical {COVID}-19 Tweets
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.26/
Mohr, Isabelle and W{\"uhrl, Amelie and Klinger, Roman
Proceedings of the Thirteenth Language Resources and Evaluation Conference
244--257
During the first two years of the COVID-19 pandemic, large volumes of biomedical information concerning this new disease have been published on social media. Some of this information can pose a real danger, particularly when false information is shared, for instance recommendations on how to treat diseases without professional medical advice. Therefore, automatic fact-checking resources and systems developed specifically for the medical domain are crucial. While existing fact-checking resources cover COVID-19-related information in news or quantify the amount of misinformation in tweets, there is no dataset providing fact-checked COVID-19-related Twitter posts with detailed annotations for biomedical entities, relations and relevant evidence. We contribute CoVERT, a fact-checked corpus of tweets with a focus on the domain of biomedicine and COVID-19-related (mis)information. The corpus consists of 300 tweets, each annotated with named entities and relations. We employ a novel crowdsourcing methodology to annotate all tweets with fact-checking labels and supporting evidence, which crowdworkers search for online. This methodology results in substantial inter-annotator agreement. Furthermore, we use the retrieved evidence extracts as part of a fact-checking pipeline, finding that the real-world evidence is more useful than the knowledge directly available in pretrained language models.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,433
inproceedings
barbieri-etal-2022-xlm
{XLM}-{T}: Multilingual Language Models in {T}witter for Sentiment Analysis and Beyond
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.27/
Barbieri, Francesco and Espinosa Anke, Luis and Camacho-Collados, Jose
Proceedings of the Thirteenth Language Resources and Evaluation Conference
258--266
Language models are ubiquitous in current NLP, and their multilingual capacity has recently attracted considerable attention. However, current analyses have almost exclusively focused on (multilingual variants of) standard benchmarks, and have relied on clean pre-training and task-specific corpora as multilingual signals. In this paper, we introduce XLM-T, a model to train and evaluate multilingual language models in Twitter. We provide: (1) a new strong multilingual baseline consisting of an XLM-R (Conneau et al. 2020) model pre-trained on millions of tweets in over thirty languages, alongside starter code to subsequently fine-tune on a target task; and (2) a set of unified sentiment analysis Twitter datasets in eight different languages, together with an XLM-T model trained on them.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,434
inproceedings
alhassan-etal-2022-bad
{\textquoteleft}Am {I} the Bad One{\textquoteright}? Predicting the Moral Judgement of the Crowd Using Pre-trained Language Models
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.28/
Alhassan, Areej and Zhang, Jinkai and Schlegel, Viktor
Proceedings of the Thirteenth Language Resources and Evaluation Conference
267--276
Natural language processing (NLP) has been shown to perform well in various tasks, such as answering questions, ascertaining natural language inference and anomaly detection. However, there are few NLP-related studies that touch upon the moral context conveyed in text. This paper studies whether state-of-the-art, pre-trained language models are capable of passing moral judgments on posts retrieved from a popular Reddit user board. Reddit is a social discussion website and forum where posts are promoted by users through a voting system. In this work, we construct a dataset that can be used for moral judgement tasks by collecting data from the AITA? (Am I the A*******?) subreddit. To model our task, we harnessed the power of pre-trained language models, including BERT, RoBERTa, RoBERTa-large, ALBERT and Longformer. We then fine-tuned these models and evaluated their ability to predict the correct verdict as judged by users for each post in the datasets. RoBERTa showed relative improvements across the three datasets, achieving 87{\%} accuracy and a Matthews correlation coefficient (MCC) of 0.76, while the Longformer model slightly improved performance when used with longer sequences, achieving 87{\%} accuracy and 0.77 MCC.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,435
inproceedings
han-etal-2022-generating
Generating Questions from {W}ikidata Triples
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.29/
Han, Kelvin and Castro Ferreira, Thiago and Gardent, Claire
Proceedings of the Thirteenth Language Resources and Evaluation Conference
277--290
Question generation from knowledge bases (or knowledge base question generation, KBQG) is the task of generating questions from structured database information, typically in the form of triples representing facts. To handle rare entities and generalize to unseen properties, previous work on KBQG resorted to extensive, often ad-hoc pre- and post-processing of the input triple. We revisit KBQG {--} using pre-training, a new (triple, question) dataset and question type information {--} and show that our approach outperforms previous work both in a standard and in a zero-shot setting. We also show that the extended KBQG dataset we provide (also helpful for knowledge base question answering) allows not only for better coverage in terms of knowledge base (KB) properties but also for increased output variability, in that it permits the generation of multiple questions from the same KB triple.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,436
inproceedings
muffo-etal-2022-evaluating
Evaluating Transformer Language Models on Arithmetic Operations Using Number Decomposition
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.30/
Muffo, Matteo and Cocco, Aldo and Bertino, Enrico
Proceedings of the Thirteenth Language Resources and Evaluation Conference
291--297
In recent years, Large Language Models such as GPT-3 have shown remarkable capabilities in performing NLP tasks in the zero- and few-shot settings. On the other hand, these experiments highlighted the difficulty of GPT-3 in carrying out tasks that require a certain degree of reasoning, such as arithmetic operations. In this paper we evaluate the ability of Transformer Language Models to perform arithmetic operations following a pipeline that, before performing computations, decomposes numbers into units, tens, and so on. We denote the models fine-tuned with this pipeline with the name Calculon and we test them on the tasks of performing additions, subtractions and multiplications on the same test sets as GPT-3. Results show an accuracy increase of 63{\%} in the five-digit addition task. Moreover, we demonstrate the importance of the introduced decomposition pipeline, since fine-tuning the same Language Model without decomposing numbers results in 0{\%} accuracy in the five-digit addition task.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,437
inproceedings
naraki-etal-2022-evaluating
Evaluating the Effects of Embedding with Speaker Identity Information in Dialogue Summarization
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.31/
Naraki, Yuji and Sakai, Tetsuya and Hayashi, Yoshihiko
Proceedings of the Thirteenth Language Resources and Evaluation Conference
298--304
Automatic dialogue summarization is the task of succinctly summarizing a dialogue transcript while correctly linking the speakers and their speech, which distinguishes this task from conventional document summarization. To address this issue and reduce {\textquotedblleft}who said what{\textquotedblright}-related errors in a summary, we propose embedding speaker identity information in the input embedding fed to the dialogue transcript encoder. Unlike the speaker embedding proposed by Gu et al. (2020), our proposal takes into account the informativeness of the position embedding. By experimentally comparing several embedding methods, we confirmed that the ROUGE scores and a human evaluation of the generated summaries were substantially improved by embedding speaker information at the less informative part of the fixed position embedding with sinusoidal functions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,438
inproceedings
monsen-rennes-2022-perceived
Perceived Text Quality and Readability in Extractive and Abstractive Summaries
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.32/
Monsen, Julius and Rennes, Evelina
Proceedings of the Thirteenth Language Resources and Evaluation Conference
305--312
We present results from a study investigating how users perceive text quality and readability in extractive and abstractive summaries. We trained two summarisation models on Swedish news data and used these to produce summaries of articles. With the produced summaries, we conducted an online survey in which the extractive summaries were compared to the abstractive summaries in terms of fluency, adequacy and simplicity. We found statistically significant differences in perceived fluency and adequacy between abstractive and extractive summaries but no statistically significant difference in simplicity. Extractive summaries were preferred in most cases, possibly due to the types of errors the summaries tend to have.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,439
inproceedings
mei-etal-2022-learning
Learning to Prioritize: Precision-Driven Sentence Filtering for Long Text Summarization
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.33/
Mei, Alex and Kabir, Anisha and Bapat, Rukmini and Judge, John and Sun, Tony and Wang, William Yang
Proceedings of the Thirteenth Language Resources and Evaluation Conference
313--318
Neural text summarization has shown great potential in recent years. However, current state-of-the-art summarization models are limited by their maximum input length, posing a challenge to summarizing longer texts comprehensively. As part of a layered summarization architecture, we introduce PureText, a simple yet effective pre-processing layer that removes low-quality sentences in articles to improve existing summarization models. When evaluated on popular datasets like WikiHow and Reddit TIFU, we show up to 3.84 and 8.57 point ROUGE-1 absolute improvement on the full test set and the long article subset, respectively, for state-of-the-art summarization models such as BertSum and BART. Our approach provides downstream models with higher-quality sentences for summarization, improving overall model performance, especially on long text articles.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,440
inproceedings
ishigaki-etal-2022-automating
Automating Horizon Scanning in Future Studies
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.34/
Ishigaki, Tatsuya and Nishino, Suzuko and Washino, Sohei and Igarashi, Hiroki and Nagai, Yukari and Washida, Yuichi and Murai, Akihiko
Proceedings of the Thirteenth Language Resources and Evaluation Conference
319--327
We introduce document retrieval and comment generation tasks for automating horizon scanning. This is an important task in the field of futurology that collects sufficient information for predicting drastic societal changes in the mid- or long-term future. The steps used are: 1) retrieving news articles that imply drastic changes, and 2) writing subjective comments on each article for others' ease of understanding. As a first step in automating these tasks, we create a dataset that contains 2,266 manually collected news articles with comments written by experts. We analyze the collected documents and comments regarding characteristic words, the distance to general articles, and the contents of the comments. Furthermore, we compare several methods for automating horizon scanning. Our experiments show that 1) manually collected articles differ from general articles regarding the words used and semantic distances, 2) the contents of the comments can be classified into several categories, and 3) a supervised model trained on our dataset achieves better performance. The contributions are: 1) we propose document retrieval and comment generation tasks for horizon scanning, 2) we create and analyze a new dataset, and 3) we report the performance of several models and show that comment generation tasks are challenging.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,441
inproceedings
minh-etal-2022-vihealthbert
{V}i{H}ealth{BERT}: Pre-trained Language Models for {V}ietnamese in Health Text Mining
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.35/
Minh, Nguyen and Tran, Vu Hoang and Hoang, Vu and Ta, Huy Duc and Bui, Trung Huu and Truong, Steven Quoc Hung
Proceedings of the Thirteenth Language Resources and Evaluation Conference
328--337
Pre-trained language models have become crucial to achieving competitive results across many Natural Language Processing (NLP) problems. For low-resource languages, the number of monolingual pre-trained models has increased significantly. However, most of them relate to the general domain, and strong domain-specific baseline language models remain limited. We introduce ViHealthBERT, the first domain-specific pre-trained language model for Vietnamese healthcare. Our model shows strong results, outperforming general-domain language models on all health-related datasets. Moreover, we also present Vietnamese datasets for the healthcare domain for two tasks: Acronym Disambiguation (AD) and Frequently Asked Questions (FAQ) Summarization. We release ViHealthBERT to facilitate future research and downstream applications of domain-specific Vietnamese NLP. Our dataset and code are available at \url{https://github.com/demdecuong/vihealthbert}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,442
inproceedings
igamberdiev-habernal-2022-privacy
Privacy-Preserving Graph Convolutional Networks for Text Classification
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.36/
Igamberdiev, Timour and Habernal, Ivan
Proceedings of the Thirteenth Language Resources and Evaluation Conference
338--350
Graph convolutional networks (GCNs) are a powerful architecture for representation learning on documents that naturally occur as graphs, e.g., citation or social networks. However, sensitive personal information, such as documents with people`s profiles or relationships as edges, is prone to privacy leaks, as the trained model might reveal the original input. Although differential privacy (DP) offers a well-founded privacy-preserving framework, GCNs pose theoretical and practical challenges due to their training specifics. We address these challenges by adapting differentially-private gradient-based training to GCNs and conduct experiments using two optimizers on five NLP datasets in two languages. We propose a simple yet efficient method based on random graph splits that not only improves the baseline privacy bounds by a factor of 2.7 while retaining competitive F1 scores, but also provides strong privacy guarantees of epsilon = 1.0. We show that, under certain modeling choices, privacy-preserving GCNs achieve up to 90{\%} of the performance of their non-private variants, while formally guaranteeing strong privacy measures.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,443
inproceedings
alghamdi-etal-2022-armath
{A}r{MATH}: a Dataset for Solving {A}rabic Math Word Problems
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.37/
Alghamdi, Reem and Liang, Zhenwen and Zhang, Xiangliang
Proceedings of the Thirteenth Language Resources and Evaluation Conference
351--362
This paper studies solving Arabic Math Word Problems by deep learning. A Math Word Problem (MWP) is a text description of a mathematical problem that can be solved by deriving a math equation to reach the answer. Effective models have been developed for solving MWPs in English and Chinese. However, Arabic MWPs are rarely studied. This paper contributes the first large-scale dataset for Arabic MWPs, which contains 6,000 samples of primary-school math problems, written in Modern Standard Arabic (MSA). Arabic MWP solvers are then built with deep learning models and evaluated on this dataset. In addition, a transfer learning model is built to let the high-resource Chinese MWP solver promote the performance of the low-resource Arabic MWP solver. This work is the first to use deep learning methods to solve Arabic MWPs and the first to use transfer learning to solve MWPs across different languages. The transfer-learning-enhanced solver has an accuracy of 74.15{\%}, which is 3{\%} higher than the solver without transfer learning. We make the dataset and solvers publicly available to encourage more research on Arabic MWPs: \url{https://github.com/reem-codes/ArMATH}
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,444
inproceedings
winter-etal-2022-kimera
{KIMERA}: Injecting Domain Knowledge into Vacant Transformer Heads
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.38/
Winter, Benjamin and Rosero, Alexei Figueroa and L{\"o}ser, Alexander and Gers, Felix Alexander and Siu, Amy
Proceedings of the Thirteenth Language Resources and Evaluation Conference
363--373
Training transformer language models requires vast amounts of text and computational resources. This drastically limits the usage of these models in niche domains for which they are not optimized, or where domain-specific training data is scarce. We focus here on the clinical domain because of its limited access to training data in common tasks, while structured ontological data is often readily available. Recent observations in model compression of transformer models show optimization potential in improving the representation capacity of attention heads. We propose KIMERA (Knowledge Injection via Mask Enforced Retraining of Attention) for detecting, retraining and instilling attention heads with complementary structured domain knowledge. Our novel multi-task training scheme effectively identifies and targets individual attention heads that are least useful for a given downstream task and optimizes their representation with information from structured data. KIMERA generalizes well, thereby building the basis for an efficient fine-tuning. KIMERA achieves significant performance boosts on seven datasets in the medical domain in Information Retrieval and Clinical Outcome Prediction settings. We apply KIMERA to BERT-base to evaluate the extent of the domain transfer and also improve on the already strong results of BioBERT in the clinical domain.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,445
inproceedings
avram-etal-2022-distilling
Distilling the Knowledge of {R}omanian {BERT}s Using Multiple Teachers
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.39/
Avram, Andrei-Marius and Catrina, Darius and Cercel, Dumitru-Clementin and Dascalu, Mihai and Rebedea, Traian and Pais, Vasile and Tufis, Dan
Proceedings of the Thirteenth Language Resources and Evaluation Conference
374--384
Running large-scale pre-trained language models in computationally constrained environments remains a challenging problem yet to be addressed, while transfer learning from these models has become prevalent in Natural Language Processing tasks. Several solutions, including knowledge distillation, network quantization, or network pruning have been previously proposed; however, these approaches focus mostly on the English language, thus widening the gap when considering low-resource languages. In this work, we introduce three light and fast versions of distilled BERT models for the Romanian language: Distil-BERT-base-ro, Distil-RoBERT-base, and DistilMulti-BERT-base-ro. The first two models resulted from the individual distillation of knowledge from two base versions of Romanian BERTs available in literature, while the last one was obtained by distilling their ensemble. To our knowledge, this is the first attempt to create publicly available Romanian distilled BERT models, which were thoroughly evaluated on five tasks: part-of-speech tagging, named entity recognition, sentiment analysis, semantic textual similarity, and dialect identification. Our experimental results argue that the three distilled models offer performance comparable to their teachers, while being twice as fast on a GPU and {\textasciitilde}35{\%} smaller. In addition, we further test the similarity between the predictions of our students versus their teachers by measuring their label and probability loyalty, together with regression loyalty - a new metric introduced in this work.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,446
inproceedings
matsunaga-etal-2022-personalized
Personalized Filled-pause Generation with Group-wise Prediction Models
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.40/
Matsunaga, Yuta and Saeki, Takaaki and Takamichi, Shinnosuke and Saruwatari, Hiroshi
Proceedings of the Thirteenth Language Resources and Evaluation Conference
385--392
In this paper, we propose a method to generate personalized filled pauses (FPs) with group-wise prediction models. Compared with fluent text generation, disfluent text generation has not been widely explored. To generate more human-like texts, we address disfluent text generation. The usage of disfluencies, such as FPs, rephrases, and word fragments, differs from speaker to speaker, and thus the generation of personalized FPs is required. However, FPs are difficult to predict because of the sparsity of their positions and the frequency difference between more and less frequently used FPs. Moreover, it is sometimes difficult to adapt FP prediction models to each speaker because of the large variation in FP usage within each speaker. To address these issues, we propose a method to build group-dependent prediction models by grouping speakers on the basis of their tendency to use FPs. This method does not require a large amount of data and time to train each speaker model. We further introduce a loss function and a word embedding model suitable for FP prediction. Our experimental results demonstrate that group-dependent models can predict FPs with higher scores than a non-personalized one, and that the introduced loss function and word embedding model improve prediction performance.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,447
inproceedings
sheikh-etal-2022-transformer
Transformer versus {LSTM} Language Models trained on Uncertain {ASR} Hypotheses in Limited Data Scenarios
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.41/
Sheikh, Imran and Vincent, Emmanuel and Illina, Irina
Proceedings of the Thirteenth Language Resources and Evaluation Conference
393--399
In several ASR use cases, training and adaptation of domain-specific LMs can only rely on a small amount of manually verified text transcriptions and sometimes a limited amount of in-domain speech. Training of LSTM LMs in such limited data scenarios can benefit from alternate uncertain ASR hypotheses, as observed in our recent work. In this paper, we propose a method to train Transformer LMs on ASR confusion networks. We evaluate whether these self-attention based LMs are better at exploiting alternate ASR hypotheses as compared to LSTM LMs. Evaluation results show that Transformer LMs achieve 3-6{\%} relative reduction in perplexity on the AMI scenario meetings but perform similar to LSTM LMs on the smaller Verbmobil conversational corpus. Evaluation on ASR N-best rescoring shows that LSTM and Transformer LMs trained on ASR confusion networks do not bring significant WER reductions. However, a qualitative analysis reveals that they are better at predicting less frequent words.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,448
inproceedings
koloski-etal-2022-thin
Out of Thin Air: Is Zero-Shot Cross-Lingual Keyword Detection Better Than Unsupervised?
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lrec-1.42/
Koloski, Boshko and Pollak, Senja and {\v{S}}krlj, Bla{\v{z}} and Martinc, Matej
Proceedings of the Thirteenth Language Resources and Evaluation Conference
400--409
Keyword extraction is the task of retrieving words that are essential to the content of a given document. Researchers have proposed various approaches to tackle this problem. At the top-most level, approaches are divided into those that require training - supervised - and those that do not - unsupervised. In this study, we are interested in settings where, for a language under investigation, no training data is available. More specifically, we explore whether pretrained multilingual language models can be employed for zero-shot cross-lingual keyword extraction on low-resource languages with limited or no available labeled training data, and whether they outperform state-of-the-art unsupervised keyword extractors. The comparison is conducted on six news article datasets covering two high-resource languages, English and Russian, and four low-resource languages, Croatian, Estonian, Latvian, and Slovenian. We find that the pretrained models, fine-tuned on a multilingual corpus covering languages that do not appear in the test set (i.e. in a zero-shot setting), consistently outscore unsupervised models in all six languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,449