Dataset schema (column, dtype, value statistics):

entry_type          stringclasses   4 values
citation_key        stringlengths   10 – 110
title               stringlengths   6 – 276
editor              stringclasses   723 values
month               stringclasses   69 values
year                stringdate      1963-01-01 00:00:00 – 2022-01-01 00:00:00
address             stringclasses   202 values
publisher           stringclasses   41 values
url                 stringlengths   34 – 62
author              stringlengths   6 – 2.07k
booktitle           stringclasses   861 values
pages               stringlengths   1 – 12
abstract            stringlengths   302 – 2.4k
journal             stringclasses   5 values
volume              stringclasses   24 values
doi                 stringlengths   20 – 39
n                   stringclasses   3 values
wer                 stringclasses   1 value
uas                 null
language            stringclasses   3 values
isbn                stringclasses   34 values
recall              null
number              stringclasses   8 values
a                   null
b                   null
c                   null
k                   null
f1                  stringclasses   4 values
r                   stringclasses   2 values
mci                 stringclasses   1 value
p                   stringclasses   2 values
sd                  stringclasses   1 value
female              stringclasses   0 values
m                   stringclasses   0 values
food                stringclasses   1 value
f                   stringclasses   1 value
note                stringclasses   20 values
__index_level_0__   int64           22k – 106k
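The rows below are bibliographic records following this schema: only the BibTeX-style fields are populated, while the metric-style columns (wer, uas, f1, and so on) are null. The listing looks like a dataset-viewer preview, so as a minimal sketch of how such a split could be inspected with the Hugging Face `datasets` library: the repository id and split name below are placeholders, not taken from this page; only the column names come from the schema above.

```python
# Minimal sketch, assuming the records are published as a Hugging Face dataset.
# "user/semeval-bibliography" and the "train" split are hypothetical placeholders.
from datasets import load_dataset

ds = load_dataset("user/semeval-bibliography", split="train")  # hypothetical id

# Bibliographic columns that are populated in the rows shown below;
# the remaining columns (wer, uas, f1, ...) are null for these entries.
bib_columns = [
    "entry_type", "citation_key", "title", "editor", "month", "year",
    "address", "publisher", "url", "author", "booktitle", "pages",
    "abstract", "doi",
]

row = ds[0]
for column in bib_columns:
    if row.get(column) is not None:
        print(f"{column}: {row[column]}")
```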
entry_type: inproceedings
citation_key: grover-banati-2022-ducs
title: {DUCS} at {S}em{E}val-2022 Task 6: Exploring Emojis and Sentiments for Sarcasm Detection
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.141/
author: Grover, Vandita and Banati, Prof Hema
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1005--1011
abstract:
This paper describes the participation of team DUCS at SemEval 2022 Task 6: iSarcasmEval - Intended Sarcasm Detection in English and Arabic. Team DUCS participated in SubTask A of iSarcasmEval which was to determine if the given English text was sarcastic or not. In this work, emojis were utilized to capture how they contributed to the sarcastic nature of a text. It is observed that emojis can augment or reverse the polarity of a given statement. Thus sentiment polarities and intensities of emojis, as well as those of text, were computed to determine sarcasm. Use of capitalization, word repetition, and use of punctuation marks like '!' were factored in as sentiment intensifiers. An NLP augmenter was used to tackle the imbalanced nature of the sarcasm dataset. Several architectures comprising of various ML and DL classifiers, and transformer models like BERT and Multimodal BERT were experimented with. It was observed that Multimodal BERT outperformed other architectures tested and achieved an F1-score of 30.71{\%}. The key takeaway of this study was that sarcastic texts are usually positive sentences. In general emojis with positive polarity are used more than those with negative polarities in sarcastic texts.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.141
remaining fields (n through note): null
__index_level_0__: 23,054
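Because each row carries standard BibTeX fields, a record like the one above can be serialized back into a BibTeX entry. The helper below is an illustrative sketch, not part of the dataset or its tooling: the function name and the truncated example dict are mine; it simply drops null columns and the dataset index.

```python
# Sketch of turning one row into a BibTeX entry; assumes `row` is a dict shaped
# like the record above (field names from the schema, nulls for unused columns).
def row_to_bibtex(row: dict) -> str:
    skip = {"entry_type", "citation_key", "__index_level_0__"}
    fields = {k: v for k, v in row.items() if v is not None and k not in skip}
    body = ",\n".join(f"  {k} = {{{v}}}" for k, v in fields.items())
    return f"@{row['entry_type']}{{{row['citation_key']},\n{body}\n}}"

# Hypothetical, abbreviated example based on the first record shown above.
example = {
    "entry_type": "inproceedings",
    "citation_key": "grover-banati-2022-ducs",
    "title": "{DUCS} at {S}em{E}val-2022 Task 6: Exploring Emojis and Sentiments for Sarcasm Detection",
    "year": "2022",
    "pages": "1005--1011",
    "doi": "10.18653/v1/2022.semeval-1.141",
    "__index_level_0__": 23054,
}
print(row_to_bibtex(example))
```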
entry_type: inproceedings
citation_key: garcia-diaz-etal-2022-umuteam-semeval-2022
title: {UMUT}eam at {S}em{E}val-2022 Task 6: Evaluating Transformers for detecting Sarcasm in {E}nglish and {A}rabic
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.142/
author: Garc{\'i}a-D{\'i}az, Jos{\'e} and Caparros-Laiz, Camilo and Valencia-Garc{\'i}a, Rafael
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1012--1017
abstract:
In this manuscript we detail the participation of the UMUTeam in the iSarcasm shared task (SemEval-2022). This shared task is related to the identification of sarcasm in English and Arabic documents. Our team achieve in the first challenge, a binary classification task, a F1 score of the sarcastic class of 17.97 for English and 31.75 for Arabic. For the second challenge, a multi-label classification, our results are not recorded due to an unknown problem. Therefore, we report the results of each sarcastic mechanism with the validation split. For our proposal, several neural networks that combine language-independent linguistic features with pre-trained embeddings are trained. The embeddings are based on different schemes, such as word and sentence embeddings, and contextual and non-contextual embeddings. Besides, we evaluate different techniques for the integration of the feature sets, such as ensemble learning and knowledge integration. In general, our best results are achieved using the knowledge integration strategy.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.142
remaining fields (n through note): null
__index_level_0__: 23,055
entry_type: inproceedings
citation_key: sharma-etal-2022-r2d2-semeval
title: {R}2{D}2 at {S}em{E}val-2022 Task 6: Are language models sarcastic enough? Finetuning pre-trained language models to identify sarcasm
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.143/
author: Sharma, Mayukh and Kandasamy, Ilanthenral and W B, Vasantha
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1018--1024
abstract:
This paper describes our system used for SemEval 2022 Task 6: iSarcasmEval: Intended Sarcasm Detection in English and Arabic. We participated in all subtasks based on only English datasets. Pre-trained Language Models (PLMs) have become a de-facto approach for most natural language processing tasks. In our work, we evaluate the performance of these models for identifying sarcasm. For Subtask A and Subtask B, we used simple finetuning on PLMs. For Subtask C, we propose a Siamese network architecture trained using a combination of cross-entropy and distance-maximisation loss. Our model was ranked $7^{th}$ in Subtask B, $8^{th}$ in Subtask C (English), and performed well in Subtask A (English). In our work, we also present the comparative performance of different PLMs for each Subtask.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.143
remaining fields (n through note): null
__index_level_0__: 23,056
entry_type: inproceedings
citation_key: abdullah-etal-2022-sarcasmdet
title: {S}arcasm{D}et at {S}em{E}val-2022 Task 6: Detecting Sarcasm using Pre-trained Transformers in {E}nglish and {A}rabic Languages
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.144/
author: Abdullah, Malak and Alnore, Dalya and Swedat, Safa and Khrais, Jumana and Al-Ayyoub, Mahmoud
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1025--1030
abstract:
This paper presents solution systems for task 6 at SemEval2022, iSarcasmEval: Intended Sarcasm Detection In English and Arabic. The shared task 6 consists of three sub-task. We participated in subtask A for both languages, Arabic and English. The goal of subtask A is to predict if a tweet would be considered sarcastic or not. The proposed solution SarcasmDet has been developed using the state-of-the-art Arabic and English pre-trained models AraBERT, MARBERT, BERT, and RoBERTa with ensemble techniques. The paper describes the SarcasmDet architecture with the fine-tuning of the best hyperparameter that led to this superior system. Our model ranked seventh out of 32 teams in subtask A- Arabic with an f1-sarcastic of 0.4305 and Seventeen out of 42 teams with f1-sarcastic 0.3561. However, we built another model to score f-1 sarcastic with 0.43 in English after the deadline. Both Models (Arabic and English scored 0.43 as f-1 sarcastic with ranking seventh).
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.144
remaining fields (n through note): null
__index_level_0__: 23,057
entry_type: inproceedings
citation_key: hacohen-kerner-etal-2022-jct-semeval
title: {JCT} at {S}em{E}val-2022 Task 6-A: Sarcasm Detection in Tweets Written in {E}nglish and {A}rabic using Preprocessing Methods and Word N-grams
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.145/
author: HaCohen-Kerner, Yaakov and Fchima, Matan and Meyrowitsch, Ilan
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1031--1038
abstract:
In this paper, we describe our submissions to SemEval-2022 contest. We tackled subtask 6-A - {\textquotedblleft}iSarcasmEval: Intended Sarcasm Detection In English and Arabic {--} Binary Classification{\textquotedblright}. We developed different models for two languages: English and Arabic. We applied 4 supervised machine learning methods, 6 preprocessing methods for English and 3 for Arabic, and 3 oversampling methods. Our best submitted model for the English test dataset was a SVC model that balanced the dataset using SMOTE and removed stop words. For the Arabic test dataset our best submitted model was a SVC model that preprocessed removed longation.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.145
remaining fields (n through note): null
__index_level_0__: 23,058
entry_type: inproceedings
citation_key: roth-etal-2022-semeval
title: {S}em{E}val-2022 Task 7: Identifying Plausible Clarifications of Implicit and Underspecified Phrases in Instructional Texts
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.146/
author: Roth, Michael and Anthonio, Talita and Sauer, Anna
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1039--1049
abstract:
We describe SemEval-2022 Task 7, a shared task on rating the plausibility of clarifications in instructional texts. The dataset for this task consists of manually clarified how-to guides for which we generated alternative clarifications and collected human plausibility judgements. The task of participating systems was to automatically determine the plausibility of a clarification in the respective context. In total, 21 participants took part in this task, with the best system achieving an accuracy of 68.9{\%}. This report summarizes the results and findings from 8 teams and their system descriptions. Finally, we show in an additional evaluation that predictions by the top participating team make it possible to identify contexts with multiple plausible clarifications with an accuracy of 75.2{\%}.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.146
remaining fields (n through note): null
__index_level_0__: 23,059
entry_type: inproceedings
citation_key: kang-etal-2022-jbnu
title: {JBNU}-{CCL}ab at {S}em{E}val-2022 Task 7: {D}e{BERT}a for Identifying Plausible Clarifications in Instructional Texts
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.147/
author: Kang, Daewook and Lee, Sung-Min and Park, Eunhwan and Na, Seung-Hoon
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1050--1055
abstract:
In this study, we examine the ability of contextualized representations of pretrained language model to distinguish whether sequences from instructional articles are plausible or implausible. Towards this end, we compare the BERT, RoBERTa, and DeBERTa models using simple classifiers based on the sentence representations of the [CLS] tokens and perform a detailed analysis by visualizing the representations of the [CLS] tokens of the models. In the experimental results of Subtask A: Multi-Class Classification, DeBERTa exhibits the best performance and produces a more distinguishable representation across different labels. Submitting an ensemble of 10 DeBERTa-based models, our final system achieves an accuracy of 61.4{\%} and is ranked fifth out of models submitted by eight teams. Further in-depth results suggest that the abilities of pretrained language models for the plausibility detection task are more strongly affected by their model structures or attention designs than by their model sizes.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.147
remaining fields (n through note): null
__index_level_0__: 23,060
entry_type: inproceedings
citation_key: qiao-etal-2022-hw
title: {HW}-{TSC} at {S}em{E}val-2022 Task 7: Ensemble Model Based on Pretrained Models for Identifying Plausible Clarifications
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.148/
author: Qiao, Xiaosong and Li, Yinglu and Zhang, Min and Wang, Minghan and Yang, Hao and Tao, Shimin and Ying, Qin
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1056--1061
abstract:
This paper describes the system for the identifying Plausible Clarifications of Implicit and Underspecified Phrases. This task was set up as an English cloze task, in which clarifications are presented as possible fillers and systems have to score how well each filler plausibly fits in a given context. For this shared task, we propose our own solutions, including supervised proaches, unsupervised approaches with pretrained models, and then we use these models to build an ensemble model. Finally we get the 2nd best result in the subtask1 which is a classification task, and the 3rd best result in the subtask2 which is a regression task.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.148
remaining fields (n through note): null
__index_level_0__: 23,061
entry_type: inproceedings
citation_key: akrah-pedersen-2022-duluthnlp
title: {D}uluth{NLP} at {S}em{E}val-2022 Task 7: Classifying Plausible Alternatives with Pre{--}trained {ELECTRA}
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.149/
author: Akrah, Samuel and Pedersen, Ted
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1062--1066
abstract:
This paper describes the DuluthNLP system that participated in Task 7 of SemEval-2022 on Identifying Plausible Clarifications of Implicit and Underspecified Phrases in Instructional Texts. Given an instructional text with an omitted token, the task requires models to classify or rank the plausibility of potential fillers. To solve the task, we fine{--}tuned the models BERT, RoBERTa, and ELECTRA on training data where potential fillers are rated for plausibility. This is a challenging problem, as shown by BERT-based models achieving accuracy less than 45{\%}. However, our ELECTRA model with tuned class weights on CrossEntropyLoss achieves an accuracy of 53.3{\%} on the official evaluation test data, which ranks 6 out of the 8 total submissions for Subtask A.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.149
remaining fields (n through note): null
__index_level_0__: 23,062
entry_type: inproceedings
citation_key: yim-etal-2022-stanford
title: {S}tanford {ML}ab at {S}em{E}val 2022 Task 7: Tree- and Transformer-Based Methods for Clarification Plausibility
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.150/
author: Yim, Thomas and Lee, Junha and Verma, Rishi and Hickmann, Scott and Zhu, Annie and Sallade, Camron and Ng, Ian and Chi, Ryan and Liu, Patrick
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1067--1070
abstract:
In this paper, we detail the methods we used to determine the idiomaticity and plausibility of candidate words or phrases into an instructional text as part of the SemEval Task 7: Identifying Plausible Clarifications of Implicit and Underspecified Phrases in Instructional Texts. Given a set of steps in an instructional text, there are certain phrases that most plausibly fill that spot. We explored various possible architectures, including tree-based methods over GloVe embeddings, ensembled BERT and ELECTRA models, and GPT 2-based infilling methods.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.150
remaining fields (n through note): null
__index_level_0__: 23,063
entry_type: inproceedings
citation_key: nouriborji-etal-2022-nowruz
title: Nowruz at {S}em{E}val-2022 Task 7: Tackling Cloze Tests with Transformers and Ordinal Regression
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.151/
author: Nouriborji, Mohammadmahdi and Rohanian, Omid and Clifton, David
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1071--1077
abstract:
This paper outlines the system using which team Nowruz participated in SemEval 2022 Task 7 {\textquotedblleft}Identifying Plausible Clarifications of Implicit and Underspecified Phrases{\textquotedblright} for both subtasks A and B. Using a pre-trained transformer as a backbone, the model targeted the task of multi-task classification and ranking in the context of finding the best fillers for a cloze task related to instructional texts on the website Wikihow. The system employed a combination of two ordinal regression components to tackle this task in a multi-task learning scenario. According to the official leaderboard of the shared task, this system was ranked 5th in the ranking and 7th in the classification subtasks out of 21 participating teams. With additional experiments, the models have since been further optimised. The code used in the experiments is going to be publicly available.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.151
remaining fields (n through note): null
__index_level_0__: 23,064
entry_type: inproceedings
citation_key: shang-etal-2022-x
title: {X}-{P}u{D}u at {S}em{E}val-2022 Task 7: A Replaced Token Detection Task Pre-trained Model with Pattern-aware Ensembling for Identifying Plausible Clarifications
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.152/
author: Shang, Junyuan and Wang, Shuohuan and Sun, Yu and Yu, Yanjun and Zhou, Yue and Xiang, Li and Yang, Guixiu
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1078--1083
abstract:
This paper describes our winning system on SemEval 2022 Task 7: $\textit{Identifying Plausible Clarifications ofImplicit and Underspecified Phrases in Instructional Texts}$. A replaced token detection pre-trained model is utilized with minorly different task-specific heads for SubTask-A: $\textit{Multi-class Classification}$ and SubTask-B: $\textit{Ranking}$. Incorporating a pattern-aware ensemble method, our system achieves a 68.90{\%} accuracy score and 0.8070 spearman`s rank correlation score surpassing the 2nd place with a large margin by 2.7 and 2.2 percent points for SubTask-A and SubTask-B, respectively. Our approach is simple and easy to implement, and we conducted ablation studies and qualitative and quantitative analyses for the working strategies used in our system.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.152
remaining fields (n through note): null
__index_level_0__: 23,065
entry_type: inproceedings
citation_key: mengyuan-etal-2022-pali
title: {PALI} at {S}em{E}val-2022 Task 7: Identifying Plausible Clarifications of Implicit and Underspecified Phrases in Instructional Texts
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.153/
author: Mengyuan, Zhou and Hu, Dou and Yuan, Mengfei and Zhi, Jin and Du, Xiyang and Jiang, Lianxin and Mo, Yang and Shi, Xiaofeng
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1084--1089
abstract:
This paper describes our system used in the SemEval-2022 Task 7(Roth et al.): Identifying Plausible Clarifications of Implicit and Under-specified Phrases. Semeval Task7 is an more complex cloze task, different than normal cloze task, only requiring NLP system could find the best fillers for sentence. In Semeval Task7, NLP system not only need to choose the best fillers for each input instance, but also evaluate the quality of all possible fillers and give them a relative score according to context semantic information. We propose an ensemble of different state-of-the-art transformer-based language models(i.e., RoBERTa and Deberta) with some plug-and-play tricks, such as Grouped Layerwise Learning Rate Decay (GLLRD) strategy, contrastive learning loss, different pooling head and an external input data preprecess block before the information came into pretrained language models, which improve performance significantly. The main contributions of our sys-tem are 1) revealing the performance discrepancy of different transformer-based pretraining models on the downstream task; 2) presenting an efficient learning-rate and parameter attenuation strategy when fintuning pretrained language models; 3) adding different constrative learning loss to improve model performance; 4) showing the useful of the different pooling head structure. Our system achieves a test accuracy of 0.654 on subtask1(ranking 4th on the leaderboard) and a test Spearman`s rank correlation coefficient of 0.785 on subtask2(ranking 2nd on the leaderboard).
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.153
remaining fields (n through note): null
__index_level_0__: 23,066
entry_type: inproceedings
citation_key: singh-2022-niksss-semeval
title: niksss at {S}em{E}val-2022 Task7:Transformers for Grading the Clarifications on Instructional Texts
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.154/
author: Singh, Nikhil
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1090--1093
abstract:
This paper describes the 9th place system description for SemEval-2022 Task 7. The goal of this shared task was to develop computational models to predict how plausible a clarification made on an instructional text is. This shared task was divided into two Subtasks A and B. We attempted to solve these using various transformers-based architecture under different regime. We initially treated this as a text2text generation problem but comparing it with our recent approach we dropped it and treated this as a text-sequence classification and regression depending on the Subtask.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.154
remaining fields (n through note): null
__index_level_0__: 23,067
entry_type: inproceedings
citation_key: chen-etal-2022-semeval
title: {S}em{E}val-2022 Task 8: Multilingual news article similarity
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.155/
author: Chen, Xi and Zeynali, Ali and Camargo, Chico and Fl{\"o}ck, Fabian and Gaffney, Devin and Grabowicz, Przemyslaw and Hale, Scott A. and Jurgens, David and Samory, Mattia
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1094--1106
abstract:
Thousands of new news articles appear daily in outlets in different languages. Understanding which articles refer to the same story can not only improve applications like news aggregation but enable cross-linguistic analysis of media consumption and attention. However, assessing the similarity of stories in news articles is challenging due to the different dimensions in which a story might vary, e.g., two articles may have substantial textual overlap but describe similar events that happened years apart. To address this challenge, we introduce a new dataset of nearly 10,000 news article pairs spanning 18 language combinations annotated for seven dimensions of similarity as SemEval 2022 Task 8. Here, we present an overview of the task, the best performing submissions, and the frontiers and challenges for measuring multilingual news article similarity. While the participants of this SemEval task contributed very strong models, achieving up to 0.818 correlation with gold standard labels across languages, human annotators are capable of reaching higher correlations, suggesting space for further progress.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.155
remaining fields (n through note): null
__index_level_0__: 23,068
entry_type: inproceedings
citation_key: zosa-etal-2022-embeddia
title: {EMBEDDIA} at {S}em{E}val-2022 Task 8: Investigating Sentence, Image, and Knowledge Graph Representations for Multilingual News Article Similarity
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.156/
author: Zosa, Elaine and Boros, Emanuela and Koloski, Boshko and Pivovarova, Lidia
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1107--1113
abstract:
In this paper, we present the participation of the EMBEDDIA team in the SemEval-2022 Task 8 (Multilingual News Article Similarity). We cover several techniques and propose different methods for finding the multilingual news article similarity by exploring the dataset in its entirety. We take advantage of the textual content of the articles, the provided metadata (e.g., titles, keywords, topics), the translated articles, the images (those that were available), and knowledge graph-based representations for entities and relations present in the articles. We, then, compute the semantic similarity between the different features and predict through regression the similarity scores. Our findings show that, while our proposed methods obtained promising results, exploiting the semantic textual similarity with sentence representations is unbeatable. Finally, in the official SemEval-2022 Task 8, we ranked fifth in the overall team ranking cross-lingual results, and second in the English-only results.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.156
remaining fields (n through note): null
__index_level_0__: 23,069
entry_type: inproceedings
citation_key: xu-etal-2022-hfl
title: {HFL} at {S}em{E}val-2022 Task 8: A Linguistics-inspired Regression Model with Data Augmentation for Multilingual News Similarity
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.157/
author: Xu, Zihang and Yang, Ziqing and Cui, Yiming and Chen, Zhigang
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1114--1120
abstract:
This paper describes our system designed for SemEval-2022 Task 8: Multilingual News Article Similarity. We proposed a linguistics-inspired model trained with a few task-specific strategies. The main techniques of our system are: 1) data augmentation, 2) multi-label loss, 3) adapted R-Drop, 4) samples reconstruction with the head-tail combination. We also present a brief analysis of some negative methods like two-tower architecture. Our system ranked 1st on the leaderboard while achieving a Pearson`s Correlation Coefficient of 0.818 on the official evaluation set.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.157
remaining fields (n through note): null
__index_level_0__: 23,070
entry_type: inproceedings
citation_key: singh-etal-2022-gatenlp
title: {G}ate{NLP}-{US}hef at {S}em{E}val-2022 Task 8: Entity-Enriched {S}iamese Transformer for Multilingual News Article Similarity
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.158/
author: Singh, Iknoor and Li, Yue and Thong, Melissa and Scarton, Carolina
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1121--1128
abstract:
This paper describes the second-placed system on the leaderboard of SemEval-2022 Task 8: Multilingual News Article Similarity. We propose an entity-enriched Siamese Transformer which computes news article similarity based on different sub-dimensions, such as the shared narrative, entities, location and time of the event discussed in the news article. Our system exploits a Siamese network architecture using a Transformer encoder to learn document-level representations for the purpose of capturing the narrative together with the auxiliary entity-based features extracted from the news articles. The intuition behind using all these features together is to capture the similarity between news articles at different granularity levels and to assess the extent to which different news outlets write about {\textquotedblleft}the same events{\textquotedblright}. Our experimental results and detailed ablation study demonstrate the effectiveness and the validity of our proposed method.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.158
remaining fields (n through note): null
__index_level_0__: 23,071
entry_type: inproceedings
citation_key: goel-bommidi-2022-semeval
title: Wolfies at {S}em{E}val-2022 Task 8: Feature extraction pipeline with transformers for Multi-lingual news article similarity
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.159/
author: Goel, Nikhil and Bommidi, Ranjith Reddy
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1129--1135
abstract:
This work is about finding the similarity between a pair of news articles. There are seven different objective similarity metrics provided in the dataset for each pair and the news articles are in multiple different languages. On top of the pre-trained embedding model, we calculated cosine similarity for baseline results and feed-forward neural network was then trained on top of it to improve the results. We also built separate pipelines for each similarity metric for feature extraction. We could see significant improvement from baseline results using feature extraction and feed-forward neural network.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.159
remaining fields (n through note): null
__index_level_0__: 23,072
entry_type: inproceedings
citation_key: kuimov-etal-2022-skoltechnlp
title: {S}koltech{NLP} at {S}em{E}val-2022 Task 8: Multilingual News Article Similarity via Exploration of News Texts to Vector Representations
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.160/
author: Kuimov, Mikhail and Dementieva, Daryna and Panchenko, Alexander
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1136--1144
abstract:
This paper describes our contribution to SemEval 2022 Task 8: Multilingual News Article Similarity. The aim was to test completely different approaches and distinguish the best performing. That is why we`ve considered systems based on Transformer-based encoders, NER-based, and NLI-based methods (and their combination with SVO dependency triplets representation). The results prove that Transformer models produce the best scores. However, there is space for research and approaches that give not yet comparable but more interpretable results.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.160
remaining fields (n through note): null
__index_level_0__: 23,073
entry_type: inproceedings
citation_key: joshi-etal-2022-iiit
title: {IIIT}-{MLNS} at {S}em{E}val-2022 Task 8: {S}iamese Architecture for Modeling Multilingual News Similarity
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.161/
author: Joshi, Sagar and Taunk, Dhaval and Varma, Vasudeva
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1145--1150
abstract:
The task of multilingual news article similarity entails determining the degree of similarity of a given pair of news articles in a language-agnostic setting. This task aims to determine the extent to which the articles deal with the entities and events in question without much consideration of the subjective aspects of the discourse. Considering the superior representations being given by these models as validated on other tasks in NLP across an array of high and low-resource languages and this task not having any restricted set of languages to focus on, we adopted using the encoder representations from these models as our choice throughout our experiments. For modeling the similarity task by using the representations given by these models, a Siamese architecture was used as the underlying architecture. In experimentation, we investigated on several fronts including features passed to the encoder model, data augmentation and ensembling among our major experiments. We found data augmentation to be the most effective working strategy among our experiments.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.161
remaining fields (n through note): null
__index_level_0__: 23,074
entry_type: inproceedings
citation_key: chittilla-khalil-2022-huaams
title: {H}ua{AMS} at {S}em{E}val-2022 Task 8: Combining Translation and Domain Pre-training for Cross-lingual News Article Similarity
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.162/
author: Chittilla, Sai Sandeep Sharma and Khalil, Talaat
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1151--1156
abstract:
This paper describes our submission to SemEval-2022 Multilingual News Article Similarity task. We experiment with different approaches that utilize a pre-trained language model fitted with a regression head to predict similarity scores for a given pair of news articles. Our best performing systems include 2 key steps: 1) pre-training with in-domain data 2) training data enrichment through machine translation. Our final submission is an ensemble of predictions from our top systems. While we show the significance of pre-training and augmentation, we believe the issue of language coverage calls for more attention.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.162
remaining fields (n through note): null
__index_level_0__: 23,075
entry_type: inproceedings
citation_key: hajjar-etal-2022-dartmouthcs
title: {D}artmouth{CS} at {S}em{E}val-2022 Task 8: Predicting Multilingual News Article Similarity with Meta-Information and Translation
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.163/
author: Hajjar, Joseph and Ma, Weicheng and Vosoughi, Soroush
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1157--1162
abstract:
This paper presents our approach for tackling SemEval-2022 Task 8: Multilingual News Article Similarity. Our experiments show that even by using multi-lingual pre-trained language models (LMs), translating the text into the same language yields the best evaluation performance. We also find that stylometric features of the text and meta-information of the news articles can be predicted based on the text with low error rates, and these predictions could be used to improve the predictions of the overall similarity scores. These findings suggest substantial correlations between authorship information and topical similarity estimation, which sheds light on future stylometric and topic modeling research.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.163
remaining fields (n through note): null
__index_level_0__: 23,076
entry_type: inproceedings
citation_key: bhavsar-etal-2022-team
title: Team Innovators at {S}em{E}val-2022 for Task 8: Multi-Task Training with Hyperpartisan and Semantic Relation for Multi-Lingual News Article Similarity
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.164/
author: Bhavsar, Nidhir and Devanathan, Rishikesh and Bhatnagar, Aakash and Singh, Muskaan and Motlicek, Petr and Ghosal, Tirthankar
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1163--1170
abstract:
This work represents the system proposed by team Innovators for SemEval 2022 Task 8: Multilingual News Article Similarity. Similar multilingual news articles should match irrespective of the style of writing, the language of conveyance, and subjective decisions and biases induced by medium/outlet. The proposed architecture includes a machine translation system that translates multilingual news articles into English and presents a multitask learning model trained simultaneously on three distinct datasets. The system leverages the PageRank algorithm for Long-form text alignment. Multitask learning approach allows simultaneous training of multiple tasks while sharing the same encoder during training, facilitating knowledge transfer between tasks. Our best model is ranked 16 with a Pearson score of 0.733.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.164
remaining fields (n through note): null
__index_level_0__: 23,077
entry_type: inproceedings
citation_key: jobanputra-martin-rodriguez-2022-oversampledml
title: {O}versampled{ML} at {S}em{E}val-2022 Task 8: When multilingual news similarity met Zero-shot approaches
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.165/
author: Jobanputra, Mayank and Mart{\'i}n Rodr{\'i}guez, Lorena
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1171--1177
abstract:
We investigate the capabilities of pre-trained models, without any fine-tuning, for a document-level multilingual news similarity task of SemEval-2022. We utilize title and news content with appropriate pre-processing techniques. Our system derives 14 different similarity features using a combination of state-of-the-art methods (MPNet) with well-known statistical methods (i.e. TF-IDF, Word Mover`s distance). We formulate multilingual news similarity task as a regression task and approximate the overall similarity between two news articles using these features. Our best-performing system achieved a correlation score of 70.1{\%} and was ranked 20th among the 34 participating teams. In this paper, in addition to a system description, we also provide further analysis of our results and an ablation study highlighting the strengths and limitations of our features. We make our code publicly available at \url{https://github.com/cicl-iscl/multinewssimilarity}
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.165
remaining fields (n through note): null
__index_level_0__: 23,078
entry_type: inproceedings
citation_key: stefanovitch-2022-team
title: Team {TMA} at {S}em{E}val-2022 Task 8: Lightweight and Language-Agnostic News Similarity Classifier
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.166/
author: Stefanovitch, Nicolas
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1178--1183
abstract:
We present our contribution to the SemEval 22 Share Task 8: Multilingual news article similarity. The approach is lightweight and language-agnostic, it is based on the computation of several lexicographic and embedding-based features, and the use of a simple ML approach: random forests. In a notable departure from the task formulation, which is a ranking task, we tackled this task as a classification one. We present a detailed analysis of the behaviour of our system under different settings.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.166
remaining fields (n through note): null
__index_level_0__: 23,079
entry_type: inproceedings
citation_key: chen-etal-2022-itnlp2022
title: {ITNLP}2022 at {S}em{E}val-2022 Task 8: Pre-trained Model with Data Augmentation and Voting for Multilingual News Similarity
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.167/
author: Chen, Zhongan and Chen, Weiwei and Sun, YunLong and Xu, Hongqing and Zhou, Shuzhe and Chen, Bohan and Sun, Chengjie and Liu, Yuanchao
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1184--1189
abstract:
This article introduces a system to solve the SemEval 2022 Task 8: Multilingual News Article Similarity. The task focuses on the consistency of events reported in two news articles. The system consists of a pre-trained model(e.g., INFOXLM and XLM-RoBERTa) to extract multilingual news features, following fully-connected networks to measure the similarity. In addition, data augmentation and Ten Fold Voting are used to enhance the model. Our final submitted model is an ensemble of three base models, with a Pearson value of 0.784 on the test dataset.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.167
remaining fields (n through note): null
__index_level_0__: 23,080
entry_type: inproceedings
citation_key: heil-etal-2022-lsx
title: {LSX}{\_}team5 at {S}em{E}val-2022 Task 8: Multilingual News Article Similarity Assessment based on Word- and Sentence Mover`s Distance
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.168/
author: Heil, Stefan and Kopp, Karina and Zehe, Albin and Kobs, Konstantin and Hotho, Andreas
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1190--1195
abstract:
This paper introduces our submission for the SemEval 2022 Task 8: Multilingual News Article Similarity. The task of the competition consisted of the development of a model, capable of determining the similarity between pairs of multilingual news articles. To address this challenge, we evaluated the Word Mover`s Distance in conjunction with word embeddings from ConceptNet Numberbatch and term frequencies of WorldLex, as well the Sentence Mover`s Distance based on sentence embeddings generated by pretrained transformer models of Sentence-BERT. To facilitate the comparison of multilingual articles with Sentence-BERT models, we deployed a Neural Machine Translation system. All our models achieve stable results in multilingual similarity estimation without learning parameters.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.168
remaining fields (n through note): null
__index_level_0__: 23,081
entry_type: inproceedings
citation_key: pisarevskaya-zubiaga-2022-team
title: Team dina at {S}em{E}val-2022 Task 8: Pre-trained Language Models as Baselines for Semantic Similarity
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.169/
author: Pisarevskaya, Dina and Zubiaga, Arkaitz
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1196--1201
abstract:
This paper describes the participation of the team {\textquotedblleft}dina{\textquotedblright} in the Multilingual News Similarity task at SemEval 2022. To build our system for the task, we experimented with several multilingual language models which were originally pre-trained for semantic similarity but were not further fine-tuned. We use these models in combination with state-of-the-art packages for machine translation and named entity recognition with the expectation of providing valuable input to the model. Our work assesses the applicability of such {\textquotedblleft}pure{\textquotedblright} models to solve the multilingual semantic similarity task in the case of news articles. Our best model achieved a score of 0.511, but shows that there is room for improvement.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.169
remaining fields (n through note): null
__index_level_0__: 23,082
entry_type: inproceedings
citation_key: luo-etal-2022-tcu
title: {TCU} at {S}em{E}val-2022 Task 8: A Stacking Ensemble Transformer Model for Multilingual News Article Similarity
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.170/
author: Luo, Xiang and Niu, Yanqing and Zhu, Boer
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1202--1207
abstract:
Previous studies focus on measuring the degree of similarity of textsby using traditional machine learning methods, such as Support Vector Regression (SVR). Based on Transformers, this paper describes our contribution to SemEval-2022 Task 8 Multilingual News Article Similarity. The similarity of multilingual news articles requires a regression prediction on the similarity of multilingual articles, rather than a classification for judging text similarity. This paper mainly describes the architecture of the model and how to adjust the parameters in the experiment and strengthen the generalization ability. In this paper, we implement and construct different models through transformer-based models. We applied different transformer-based models, as well as ensemble them together by using ensemble learning. To avoid the overfit, we focus on the adjustment of parameters and the increase of generalization ability in our experiments. In the last submitted contest, we achieve a score of 0.715 and rank the 21st place.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.170
remaining fields (n through note): null
__index_level_0__: 23,083
entry_type: inproceedings
citation_key: ishihara-shirai-2022-nikkei
title: {N}ikkei at {S}em{E}val-2022 Task 8: Exploring {BERT}-based Bi-Encoder Approach for Pairwise Multilingual News Article Similarity
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.171/
author: Ishihara, Shotaro and Shirai, Hono
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1208--1214
abstract:
This paper describes our system in SemEval-2022 Task 8, where participants were required to predict the similarity of two multilingual news articles. In the task of pairwise sentence and document scoring, there are two main approaches: Cross-Encoder, which inputs pairs of texts into a single encoder, and Bi-Encoder, which encodes each input independently. The former method often achieves higher performance, but the latter gave us a better result in SemEval-2022 Task 8. This paper presents our exploration of BERT-based Bi-Encoder approach for this task, and there are several findings such as pretrained models, pooling methods, translation, data separation, and the number of tokens. The weighted average ensemble of the four models achieved the competitive result and ranked in the top 12.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.171
remaining fields (n through note): null
__index_level_0__: 23,084
entry_type: inproceedings
citation_key: nai-etal-2022-ynu
title: {YNU}-{HPCC} at {S}em{E}val-2022 Task 8: Transformer-based Ensemble Model for Multilingual News Article Similarity
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.172/
author: Nai, Zihan and Wang, Jin and Zhang, Xuejie
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1215--1220
abstract:
This paper describes the system submitted by our team (YNU-HPCC) to SemEval-2022 Task 8: Multilingual news article similarity. This task requires participants to develop a system which could evaluate the similarity between multilingual news article pairs. We propose an approach that relies on Transformers to compute the similarity between pairs of news. We tried different models namely BERT, ALBERT, ELECTRA, RoBERTa, M-BERT and Compared their results. At last, we chose M-BERT as our System, which has achieved the best Pearson Correlation Coefficient score of 0.738.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.172
remaining fields (n through note): null
__index_level_0__: 23,085
entry_type: inproceedings
citation_key: dufour-etal-2022-bl
title: {BL}.{R}esearch at {S}em{E}val-2022 Task 8: Using various Semantic Information to evaluate document-level Semantic Textual Similarity
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.173/
author: Dufour, Sebastien and Mehdi Kandi, Mohamed and Boutamine, Karim and Gosse, Camille and Billami, Mokhtar Boumedyen and Bortolaso, Christophe and Miloudi, Youssef
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1221--1228
abstract:
This paper presents our system for document-level semantic textual similarity (STS) evaluation at SemEval-2022 Task 8: {\textquotedblleft}Multilingual News Article Similarity{\textquotedblright}. The semantic information used is obtained by using different semantic models ranging from the extraction of key terms and named entities to the document classification and obtaining similarity from automatic summarization of documents. All these semantic information`s are then used as features to feed a supervised system in order to evaluate the degree of similarity of a pair of documents. We obtained a Pearson correlation score of 0.706 compared to the best score of 0.818 from teams that participated in this task.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.173
remaining fields (n through note): null
__index_level_0__: 23,086
entry_type: inproceedings
citation_key: di-giovanni-etal-2022-datascience
title: {D}ata{S}cience-Polimi at {S}em{E}val-2022 Task 8: Stacking Language Models to Predict News Article Similarity
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.174/
author: Di Giovanni, Marco and Tasca, Thomas and Brambilla, Marco
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1229--1234
abstract:
In this paper, we describe the approach we designed to solve SemEval-2022 Task 8: Multilingual News Article Similarity. We collect and use exclusively textual features (title, description and body) of articles. Our best model is a stacking of 14 Transformer-based Language models fine-tuned on single or multiple fields, using data in the original language or translated to English. It placed fourth on the original leaderboard, sixth on the complete official one and fourth on the English-subset official one. We observe the data collection as our principal source of error due to a relevant fraction of missing or wrong fields.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.174
remaining fields (n through note): null
__index_level_0__: 23,087
entry_type: inproceedings
citation_key: wangsadirdja-etal-2022-wuedevils
title: {W}ue{D}evils at {S}em{E}val-2022 Task 8: Multilingual News Article Similarity via Pair-Wise Sentence Similarity Matrices
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.175/
author: Wangsadirdja, Dirk and Heinickel, Felix and Trapp, Simon and Zehe, Albin and Kobs, Konstantin and Hotho, Andreas
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1235--1243
abstract:
We present a system that creates pair-wise cosine and arccosine sentence similarity matrices using multilingual sentence embeddings obtained from pre-trained SBERT and Universal Sentence Encoder (USE) models respectively. For each news article sentence, it searches the most similar sentence from the other article and computes an average score. Further, a convolutional neural network calculates a total similarity score for the article pairs on these matrices. Finally, a random forest regressor merges the previous results to a final score that can optionally be extended with a publishing date score.
journal, volume: null
doi: 10.18653/v1/2022.semeval-1.175
remaining fields (n through note): null
__index_level_0__: 23,088
entry_type: inproceedings
citation_key: tu-etal-2022-semeval
title: {S}em{E}val-2022 Task 9: {R}2{VQ} {--} Competence-based Multimodal Question Answering
editor: Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.semeval-1.176/
author: Tu, Jingxuan and Holderness, Eben and Maru, Marco and Conia, Simone and Rim, Kyeongmin and Lynch, Kelley and Brutti, Richard and Navigli, Roberto and Pustejovsky, James
booktitle: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
pages: 1244--1255
abstract:
In this task, we identify a challenge that is reflective of linguistic and cognitive competencies that humans have when speaking and reasoning. Particularly, given the intuition that textual and visual information mutually inform each other for semantic reasoning, we formulate a Competence-based Question Answering challenge, designed to involve rich semantic annotation and aligned text-video objects. The task is to answer questions from a collection of cooking recipes and videos, where each question belongs to a {\textquotedblleft}question family{\textquotedblright} reflecting a specific reasoning competence. The data and task result is publicly available.
null
null
10.18653/v1/2022.semeval-1.176
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,089
inproceedings
zhai-etal-2022-hit
{HIT}{\&}{QMUL} at {S}em{E}val-2022 Task 9: Label-Enclosed Generative Question Answering ({LEG}-{QA})
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.177/
Zhai, Weihe and Feng, Mingqiang and Zubiaga, Arkaitz and Liu, Bingquan
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1256--1262
This paper presents the second-place system for the R2VQ: competence-based multimodal question answering shared task. The purpose of this task is to involve semantic {\&} cooking roles and text-image objects when querying how well a system understands the procedure of a recipe. We approach this task with a text-to-text generative model based on the transformer architecture. As a result, the model generalises well to soft-constrained and other competence-based question answering problems. We propose a label-enclosed input method which helps the model achieve a significant improvement from 65.34 (baseline) to 91.3. In addition to describing the submitted system, the impact of model architecture and label selection is investigated, along with remarks regarding error analysis. Finally, future work is presented.
null
null
10.18653/v1/2022.semeval-1.177
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,090
inproceedings
dryjanski-etal-2022-samsung
{S}amsung Research {P}oland ({SRPOL}) at {S}em{E}val-2022 Task 9: Hybrid Question Answering Using Semantic Roles
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.178/
Dryja{\'n}ski, Tomasz and Zaleska, Monika and Ku{\'z}ma, Bartek and B{\l}a{\.z}ejewski, Artur and Bordzicka, Zuzanna and Bujnowski, Pawe{\l} and Firlag, Klaudia and Goltz, Christian and Grabowski, Maciej and Jo{\'n}czyk, Jakub and K{\l}osi{\'n}ski, Grzegorz and Paziewski, Bart{\l}omiej and Paszkiewicz, Natalia and Piersa, Jaros{\l}aw and Andruszkiewicz, Piotr
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1263--1273
In this work we present an overview of our winning system for the R2VQ - Competence-based Multimodal Question Answering task, with a final exact match score of 92.53{\%}. The task is structured as question-answer pairs, querying how well a system is capable of competence-based comprehension of recipes. We propose a hybrid of a rule-based system, a Question Answering Transformer, and a neural classifier for N/A answer recognition. The rule-based system focuses on intent identification, data extraction and response generation.
null
null
10.18653/v1/2022.semeval-1.178
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,091
inproceedings
ruan-etal-2022-pingan
{PINGAN}{\_}{AI} at {S}em{E}val-2022 Task 9: Recipe knowledge enhanced model applied in Competence-based Multimodal Question Answering
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.179/
Ruan, Zhihao and Hou, Xiaolong and Jiang, Lianxin
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1274--1279
This paper describes our system used in SemEval-2022 Task 09: R2VQ - Competence-based Multimodal Question Answering. We propose a knowledge-enhanced model for answer prediction in the QA task; the model uses BERT as its backbone. We adopted two knowledge-enhancement methods in this model: the knowledge auxiliary text method and the knowledge embedding method. We also design an answer extraction task pipeline, which contains an extraction-based model, an automatic keyword labeling module, and an answer generation module. Our system ranked 3rd in Task 9 and achieved an exact match score of 78.21 and a word-level F1 score of 82.62.
null
null
10.18653/v1/2022.semeval-1.179
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,092
inproceedings
barnes-etal-2022-semeval
{S}em{E}val 2022 Task 10: Structured Sentiment Analysis
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.180/
Barnes, Jeremy and Oberlaender, Laura and Troiano, Enrica and Kutuzov, Andrey and Buchmann, Jan and Agerri, Rodrigo and {\O}vrelid, Lilja and Velldal, Erik
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1280--1295
In this paper, we introduce the first SemEval shared task on Structured Sentiment Analysis, for which participants are required to predict all sentiment graphs in a text, where a single sentiment graph is composed of a sentiment holder, target, expression and polarity. This new shared task includes two subtracks (monolingual and cross-lingual) with seven datasets available in five languages, namely Norwegian, Catalan, Basque, Spanish and English. Participants submitted their predictions on a held-out test set and were evaluated on Sentiment Graph F1. Overall, the task received over 200 submissions from 32 participating teams. We present the results of the 15 teams that provided system descriptions and our own expanded analysis of the test predictions.
null
null
10.18653/v1/2022.semeval-1.180
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,093
inproceedings
sarangi-etal-2022-amex
{AMEX} {AI} Labs at {S}em{E}val-2022 Task 10: Contextualized fine-tuning of {BERT} for Structured Sentiment Analysis
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.181/
Sarangi, Pratyush and Ganesan, Shamika and Arora, Piyush and Joshi, Salil
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1296--1304
We describe the work carried out by AMEX AI Labs on the structured sentiment analysis task at SemEval-2022. This task focuses on extracting fine-grained information w.r.t. source, target and polar expressions in a given text. We propose a BERT-based encoder, which utilizes a novel concatenation mechanism for combining syntactic and pretrained embeddings with BERT embeddings. Our system achieved an average rank of 14 out of 32 systems, based on the average scores across the seven datasets for five languages provided for the monolingual task. The proposed BERT-based approaches outperformed BiLSTM-based approaches used for the structured sentiment extraction problem. We provide an in-depth post-submission analysis.
null
null
10.18653/v1/2022.semeval-1.181
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,094
inproceedings
pfister-etal-2022-senpoi
{S}en{P}oi at {S}em{E}val-2022 Task 10: Point me to your Opinion, {S}en{P}oi
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.183/
Pfister, Jan and Wankerl, Sebastian and Hotho, Andreas
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1313--1323
Structured Sentiment Analysis is the task of extracting sentiment tuples in a graph structure commonly from review texts. We adapt the Aspect-Based Sentiment Analysis pointer network BARTABSA to model this tuple extraction as a sequence prediction task and extend their output grammar to account for the increased complexity of Structured Sentiment Analysis. To predict structured sentiment tuples in languages other than English we swap BART for a multilingual mT5 and introduce a novel Output Length Regularization to mitigate overfitting to common target sequence lengths, thereby improving the performance of the model by up to 70{\%}. We evaluate our approach on seven datasets in five languages including a zero shot crosslingual setting.
null
null
10.18653/v1/2022.semeval-1.183
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,096
inproceedings
anantharaman-etal-2022-ssn-mlrg1
{SSN}{\_}{MLRG}1 at {S}em{E}val-2022 Task 10: Structured Sentiment Analysis using 2-layer {B}i{LSTM}
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.184/
Anantharaman, Karun and K, Divyasri and Pt, Jayannthan and S, Angel and Sivanaiah, Rajalakshmi and Rajendram, Sakaya Milton and T T, Mirnalinee
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1324--1328
Task 10 in SemEval 2022 is a composite task which entails the analysis of opinion tuples and the recognition and demarcation of their nature. In this paper, we elaborate on how such a methodology is implemented for Structured Sentiment Analysis and on the results obtained. To achieve this objective, we have adopted a two-layer BiLSTM approach. To enhance accuracy, we depart from the norm by conditioning the label assigned to an individual element on those of its adjacent elements, using specialized algorithms to ensure that the output is the most accurate label sequence for the entire input. This strategy improves parsing accuracy and requires less time. It yields an SF1 of 0.33 in the highest-performing configuration.
null
null
10.18653/v1/2022.semeval-1.184
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,097
inproceedings
chen-etal-2022-mt
{MT}-Speech at {S}em{E}val-2022 Task 10: Incorporating Data Augmentation and Auxiliary Task with Cross-Lingual Pretrained Language Model for Structured Sentiment Analysis
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.185/
Chen, Cong and Chen, Jiansong and Liu, Cao and Yang, Fan and Wan, Guanglu and Xia, Jinxiong
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1329--1335
Sentiment analysis is a fundamental task, and structured sentiment analysis (SSA) is an important component of it. However, traditional SSA suffers from two important issues: (1) a lack of interactive knowledge across different languages; (2) a small amount of annotation data, or even none at all. To address the above problems, we incorporate data augmentation and auxiliary tasks within a cross-lingual pretrained language model into SSA. Specifically, we employ XLM-Roberta to enhance mutually interactive information when parallel data is available in the pretraining stage. Furthermore, we leverage two data augmentation strategies and auxiliary tasks to improve the performance on few-label data and zero-shot cross-lingual settings. Experiments demonstrate the effectiveness of our models. Our models rank first on the cross-lingual sub-task and second on the monolingual sub-task of SemEval-2022 Task 10.
null
null
10.18653/v1/2022.semeval-1.185
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,098
inproceedings
zhang-etal-2022-ecnu
{ECNU}{\_}{ICA} at {S}em{E}val-2022 Task 10: A Simple and Unified Model for Monolingual and Crosslingual Structured Sentiment Analysis
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.186/
Zhang, Qi and Zhou, Jie and Chen, Qin and Bai, Qingchun and Xiao, Jun and He, Liang
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1336--1342
Sentiment analysis is increasingly viewed as a vital task both from an academic and a commercial standpoint. In this paper, we focus on the structured sentiment analysis task released as SemEval-2022 Task 10. The task aims to extract the structured sentiment information (e.g., holder, target, expression and sentiment polarity) in a text. We propose a simple and unified model for both the monolingual and crosslingual structured sentiment analysis tasks. We translate this task into an event extraction task by regarding the expression as the trigger word and the other elements as the arguments of the event. Particularly, we first extract the expression by judging its start and end indices. Then, to take the expression into account, we design a conditional layer normalization algorithm to extract the holder and target based on the extracted expression. Finally, we infer the sentiment polarity based on the extracted structured information. Pre-trained language models are utilized to obtain the text representation. We conduct the experiments on seven datasets in five languages. The task attracted 233 submissions from 32 teams across the monolingual and crosslingual subtasks. Finally, we obtain a top-5 place on the crosslingual subtask.
null
null
10.18653/v1/2022.semeval-1.186
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,099
inproceedings
lin-etal-2022-zhixiaobao
{ZHIXIAOBAO} at {S}em{E}val-2022 Task 10: Apporoaching Structured Sentiment with Graph Parsing
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.187/
Lin, Yangkun and Liang, Chen and Xu, Jing and Yang, Chong and Wang, Yongliang
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1343--1348
This paper presents our submission to Task 10, Structured Sentiment Analysis, of the SemEval 2022 competition. The task aims to extract all elements of the fine-grained sentiment in a text. We cast structured sentiment analysis as the prediction of sentiment graphs following (Barnes et al., 2021), where nodes are spans of sentiment holders, targets and expressions, and directed edges denote the relation types between them. Our approach closely follows that of semantic dependency parsing (Dozat and Manning, 2018). The difference is that we use pre-trained language models (e.g., BERT and RoBERTa) as text encoders to solve the problem of limited annotated data. Additionally, we improve the computation of cross attention and present a suffix masking technique for further performance gains. Notably, our model achieved the Top-1 average Sentiment Graph F1 score on seven datasets in five different languages in the monolingual subtask.
null
null
10.18653/v1/2022.semeval-1.187
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,100
inproceedings
morio-etal-2022-hitachi
Hitachi at {S}em{E}val-2022 Task 10: Comparing Graph- and {S}eq2{S}eq-based Models Highlights Difficulty in Structured Sentiment Analysis
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.188/
Morio, Gaku and Ozaki, Hiroaki and Yamaguchi, Atsuki and Sogawa, Yasuhiro
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1349--1359
This paper describes our participation in SemEval-2022 Task 10, structured sentiment analysis. In this task, we have to parse opinions considering both structure- and context-dependent subjective aspects, which differs from typical dependency parsing. Some of the major parser types have recently been used for semantic and syntactic parsing, while it is still unknown which type can capture structured sentiments well due to their subjective aspects. To this end, we compared two different types of state-of-the-art parsers, namely graph-based and seq2seq-based. Our in-depth analyses suggest that, even though the graph-based parser generally outperforms the seq2seq-based one, with strong pre-trained language models both parsers can essentially output acceptable and reasonable predictions. The analyses highlight that the difficulty derived from subjective aspects in structured sentiment analysis remains an essential challenge.
null
null
10.18653/v1/2022.semeval-1.188
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,101
inproceedings
pessutto-moreira-2022-ufrgsent
{UFRGS}ent at {S}em{E}val-2022 Task 10: Structured Sentiment Analysis using a Question Answering Model
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.189/
Pessutto, Lucas and Moreira, Viviane
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1360--1365
This paper describes the system submitted by our team (UFRGSent) to SemEval-2022 Task 10: Structured Sentiment Analysis. We propose a multilingual approach that relies on a Question Answering model to find tuples consisting of aspect, opinion, and holder. The approach starts from general questions and uses the extracted tuple elements to find the remaining components. Finally, we employ an aspect sentiment classification model to classify the polarity of the entire tuple. Despite our method being in a mid-rank position in the SemEval competition, we show that the question-answering approach can achieve good coverage in retrieving sentiment tuples, leaving room for improvements in the technique.
null
null
10.18653/v1/2022.semeval-1.189
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,102
inproceedings
poswiata-2022-opi
{OPI} at {S}em{E}val-2022 Task 10: Transformer-based Sequence Tagging with Relation Classification for Structured Sentiment Analysis
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.190/
Po{\'s}wiata, Rafa{\l}
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1366--1372
This paper presents our solution for SemEval-2022 Task 10: Structured Sentiment Analysis. The solution consisted of two modules: the first for sequence tagging and the second for relation classification. In both modules we used transformer-based language models. In addition to utilizing language models specific to each of the five competition languages, we also adopted multilingual models. This approach allowed us to apply the solution to both monolingual and cross-lingual sub-tasks, where we obtained average Sentiment Graph F1 of 54.5{\%} and 53.1{\%}, respectively. The source code of the prepared solution is available at \url{https://github.com/rafalposwiata/structured-sentiment-analysis}.
null
null
10.18653/v1/2022.semeval-1.190
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,103
inproceedings
r-etal-2022-etms
{ETMS}@{IITKGP} at {S}em{E}val-2022 Task 10: Structured Sentiment Analysis Using A Generative Approach
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.191/
R, Raghav and Vemali, Adarsh and Mukherjee, Rajdeep
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1373--1381
Structured Sentiment Analysis (SSA) deals with extracting opinion tuples in a text, where each tuple (h, e, t, p) consists of h, the holder, who expresses a sentiment polarity p towards a target t through a sentiment expression e. While prior works explore graph-based or sequence labeling-based approaches for the task, we in this paper present a novel unified generative method to solve SSA, a SemEval2022 shared task. We leverage a BART-based encoder-decoder architecture and suitably modify it to generate, given a sentence, a sequence of opinion tuples. Each generated tuple consists of seven integers respectively representing the indices corresponding to the start and end positions of the holder, target, and expression spans, followed by the sentiment polarity class associated between the target and the sentiment expression. We perform rigorous experiments for both Monolingual and Cross-lingual subtasks, and achieve competitive Sentiment F1 scores on the leaderboard in both settings.
null
null
10.18653/v1/2022.semeval-1.191
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,104
inproceedings
barikbin-2022-slpl
{SLPL}-Sentiment at {S}em{E}val-2022 Task 10: Making Use of Pre-Trained Model's Attention Values in Structured Sentiment Analysis
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.192/
Barikbin, Sadrodin
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1382--1388
Sentiment analysis is a useful problem which could serve a variety of fields from business intelligence to social studies and even health studies. Using SemEval 2022 Task 10 formulation of this problem and taking sequence labeling as our approach, we propose a model which learns the task by finetuning a pretrained transformer, introducing as few parameters ({\textasciitilde}150k) as possible and making use of precomputed attention values in the transformer. Our model improves shared task baselines on all task datasets.
null
null
10.18653/v1/2022.semeval-1.192
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,105
inproceedings
alonso-alonso-etal-2022-lys
{L}y{S}{\_}{AC}oru{\~n}a at {S}em{E}val-2022 Task 10: Repurposing Off-the-Shelf Tools for Sentiment Analysis as Semantic Dependency Parsing
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.193/
Alonso-Alonso, Iago and Vilares, David and G{\'o}mez-Rodr{\'i}guez, Carlos
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1389--1400
This paper addressed the problem of structured sentiment analysis using a bi-affine semantic dependency parser, large pre-trained language models, and publicly available translation models. For the monolingual setup, we considered: (i) training on a single treebank, and (ii) relaxing the setup by training on treebanks coming from different languages that can be adequately processed by cross-lingual language models. For the zero-shot setup and a given target treebank, we relied on: (i) a word-level translation of available treebanks in other languages to get noisy, unlikely-grammatical, but annotated data (we release as much of it as licenses allow), and (ii) merging those translated treebanks to obtain training data. In the post-evaluation phase, we also trained cross-lingual models that simply merged all the English treebanks and did not use word-level translations, and yet obtained better results. According to the official results, we ranked 8th and 9th in the monolingual and cross-lingual setups.
null
null
10.18653/v1/2022.semeval-1.193
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,106
inproceedings
jia-etal-2022-spdb
{SPDB} Innovation Lab at {S}em{E}val-2022 Task 10: A Novel End-to-End Structured Sentiment Analysis Model based on the {ERNIE}-{M}
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.194/
Jia, Yalong and Ou, Zhenghui and Yang, Yang
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1401--1405
Sentiment analysis is a classical problem of natural language processing. SemEval 2022 poses a structured sentiment analysis problem in Task 10, which is also a study-worthy topic in the research area. In this paper, we propose a method which can predict structured sentiment information in multiple languages with limited data. The ERNIE-M pretrained language model is employed as a lingual feature extractor, which works well for multilingual processing, followed by a graph parser as an opinion extractor. The method can predict structured sentiment information with high interpretability. We apply data augmentation as the given datasets are small. Furthermore, we use K-fold cross-validation and the DeBERTaV3 pretrained model as an extra English embedding generator to train multiple models as our ensemble strategy. Experimental results show that the proposed model has considerable performance on both monolingual and cross-lingual tasks.
null
null
10.18653/v1/2022.semeval-1.194
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,107
inproceedings
li-etal-2022-hitsz
{HITSZ}-{HLT} at {S}em{E}val-2022 Task 10: A Span-Relation Extraction Framework for Structured Sentiment Analysis
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.195/
Li, Yihui and Yang, Yifan and Zhang, Yice and Xu, Ruifeng
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1406--1411
This paper describes our system that participated in SemEval-2022 Task 10: Structured Sentiment Analysis, which aims to extract opinion tuples from texts. A full opinion tuple generally contains an opinion holder, an opinion target, the sentiment expression, and the corresponding polarity. The complex structure of the opinion tuple makes the task challenging. To address this task, we formalize it as a span-relation extraction problem and propose a two-stage extraction framework accordingly. In the first stage, we employ the span module to enumerate spans and then recognize the type of every span. In the second stage, we employ the relation module to determine the relation between spans. Our system achieves competitive results and ranks among the top-10 systems in almost all subtasks.
null
null
10.18653/v1/2022.semeval-1.195
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,108
inproceedings
malmasi-etal-2022-semeval
{S}em{E}val-2022 Task 11: Multilingual Complex Named Entity Recognition ({M}ulti{C}o{NER})
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.196/
Malmasi, Shervin and Fang, Anjie and Fetahu, Besnik and Kar, Sudipta and Rokhlenko, Oleg
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1412--1437
We present the findings of SemEval-2022 Task 11 on Multilingual Complex Named Entity Recognition MULTICONER. Divided into 13 tracks, the task focused on methods to identify complex named entities (like names of movies, products and groups) in 11 languages in both monolingual and multilingual scenarios. Eleven tracks required building monolingual NER models for individual languages, one track focused on multilingual models able to work on all languages, and the last track featured code-mixed texts within any of these languages. The task is based on the MULTICONER dataset comprising 2.3 million instances in Bangla, Chinese, Dutch, English, Farsi, German, Hindi, Korean, Russian, Spanish, and Turkish. Results showed that methods fusing external knowledge into transformer models achieved the best results. However, identifying entities like creative works is still challenging even with external knowledge. MULTICONER was one of the most popular tasks in SemEval-2022 and it attracted 377 participants during the practice phase. 236 participants signed up for the final test phase and 55 teams submitted their systems.
null
null
10.18653/v1/2022.semeval-1.196
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,109
inproceedings
lai-2022-lmn
{LMN} at {S}em{E}val-2022 Task 11: A Transformer-based System for {E}nglish Named Entity Recognition
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.197/
Lai, Ngoc
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1438--1443
Processing complex and ambiguous named entities is a challenging research problem, but it has not received sufficient attention from the natural language processing community. In this short paper, we present our participation in the English track of SemEval-2022 Task 11: Multilingual Complex Named Entity Recognition. Inspired by the recent advances in pretrained Transformer language models, we propose a simple yet effective Transformer-based baseline for the task. Despite its simplicity, our proposed approach shows competitive results on the leaderboard, as we ranked 12th out of 30 teams. Our system achieved a macro F1 score of 72.50{\%} on the held-out test set. We have also explored a data augmentation approach using entity linking. While the approach does not improve the final performance, we also discuss it in this paper.
null
null
10.18653/v1/2022.semeval-1.197
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,110
inproceedings
lin-etal-2022-pa
{PA} Ph{\&}Tech at {S}em{E}val-2022 Task 11: {NER} Task with Ensemble Embedding from Reinforcement Learning
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.198/
Lin, Qizhi and Hou, Changyu and Wang, Xiaopeng and Wang, Jun and Qiao, Yixuan and Jiang, Peng and Jiang, Xiandi and Wang, Benqi and Xiao, Qifeng
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1444--1447
From pretrained contextual embeddings to document-level embeddings, the selection and construction of embeddings have drawn more and more attention in the NER domain in recent research. This paper aims to discuss the performance of ensemble embeddings on complex NER tasks. Inspired by Wang's methodology, we try to transfer the dominating power of ensemble models with a reinforcement learning optimizer from plain NER tasks to complex ones. Based on the composition of the SemEval dataset, the performance of the applied model is tested on lower-context, QA, and search-query scenarios, together with its zero-shot learning ability. Results show that with abundant training data, the model can achieve similar performance on lower-context cases compared to plain NER cases, but can barely transfer the performance to other scenarios in the test phase.
null
null
10.18653/v1/2022.semeval-1.198
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,111
inproceedings
schneider-etal-2022-uc3m
{UC}3{M}-{PUCPR} at {S}em{E}val-2022 Task 11: An Ensemble Method of Transformer-based Models for Complex Named Entity Recognition
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.199/
Schneider, Elisa and Rivera-Zavala, Renzo M. and Martinez, Paloma and Moro, Claudia and Paraiso, Emerson
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1448--1456
This study introduces the system submitted to SemEval 2022 Task 11: MultiCoNER (Multilingual Complex Named Entity Recognition) by the UC3M-PUCPR team. We proposed an ensemble of transformer-based models for entity recognition in cross-domain texts. Our deep learning method benefits from the transformer architecture, which adopts the attention mechanism to handle the long-range dependencies of the input text. Also, the ensemble approach for named entity recognition (NER) improved the results over baselines based on individual models on two of the three tracks we participated in. The ensemble model for the code-mixed task achieves an overall performance of 76.36{\%} F1-score, a 2.85 percentage point increase over our best individual model for this task, XLM-RoBERTa-large (73.51{\%}), outperforming the baseline provided for the shared task by 18.26 points. Our preliminary results suggest that ensembles of contextualized language models can, even if modestly, improve the results in extracting information from unstructured data.
null
null
10.18653/v1/2022.semeval-1.199
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,112
inproceedings
wang-etal-2022-damo
{DAMO}-{NLP} at {S}em{E}val-2022 Task 11: A Knowledge-based System for Multilingual Named Entity Recognition
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.200/
Wang, Xinyu and Shen, Yongliang and Cai, Jiong and Wang, Tao and Wang, Xiaobin and Xie, Pengjun and Huang, Fei and Lu, Weiming and Zhuang, Yueting and Tu, Kewei and Lu, Wei and Jiang, Yong
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1457--1468
The MultiCoNER shared task aims at detecting semantically ambiguous and complex named entities in short and low-context settings for multiple languages. The lack of contexts makes the recognition of ambiguous named entities challenging. To alleviate this issue, our team DAMO-NLP proposes a knowledge-based system, where we build a multilingual knowledge base based on Wikipedia to provide related context information to the named entity recognition (NER) model. Given an input sentence, our system effectively retrieves related contexts from the knowledge base. The original input sentences are then augmented with such context information, allowing significantly better contextualized token representations to be captured. Our system wins 10 out of 13 tracks in the MultiCoNER shared task.
null
null
10.18653/v1/2022.semeval-1.200
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,113
inproceedings
pandey-etal-2022-multilinguals
Multilinguals at {S}em{E}val-2022 Task 11: Complex {NER} in Semantically Ambiguous Settings for Low Resource Languages
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.201/
Pandey, Amit and Daw, Swayatta and Unnam, Narendra and Pudi, Vikram
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1469--1476
We leverage pre-trained language models to solve the task of complex NER for two low-resource languages: Chinese and Spanish. We use the technique of Whole Word Masking (WWM) to boost the performance of masked language modeling objective on large and unsupervised corpora. We experiment with multiple neural network architectures, incorporating CRF, BiLSTMs, and Linear Classifiers on top of a fine-tuned BERT layer. All our models outperform the baseline by a significant margin and our best performing model obtains a competitive position on the evaluation leaderboard for the blind test set.
null
null
10.18653/v1/2022.semeval-1.201
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,114
inproceedings
pietilainen-ji-2022-aaltonlp
{A}alto{NLP} at {S}em{E}val-2022 Task 11: Ensembling Task-adaptive Pretrained Transformers for Multilingual Complex {NER}
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.202/
Pietil{\"ainen, Aapo and Ji, Shaoxiong
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1477--1482
This paper presents the system description of team AaltoNLP for SemEval-2022 shared task 11: MultiCoNER. Transformer-based models have produced high scores on standard Named Entity Recognition (NER) tasks. However, accuracy on complex named entities is still low. Complex and ambiguous named entities have been identified as a major error source in NER tasks. The shared task is about multilingual complex named entity recognition. In this paper, we describe an ensemble approach, which increases accuracy across all tested languages. The system ensembles output from multiple same architecture task-adaptive pretrained transformers trained with different random seeds. We notice a large discrepancy between performance on development and test data. Model selection based on limited development data may not yield optimal results on large test data sets.
null
null
10.18653/v1/2022.semeval-1.202
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,115
inproceedings
nguyen-huynh-2022-dangnt
{DANGNT}-{SGU} at {S}em{E}val-2022 Task 11: Using Pre-trained Language Model for Complex Named Entity Recognition
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.203/
Nguyen, Dang and Huynh, Huy Khac Nguyen
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1483--1487
In this paper, we describe a system that we built to participate in SemEval 2022 Task 11: MultiCoNER Multilingual Complex Named Entity Recognition, specifically the monolingual English track. To construct this system, we used Pre-trained Language Models (PLMs). In particular, a pre-trained model based on BERT is applied to the task of recognizing named entities via fine-tuning. We performed the evaluation on two test datasets of the shared task: the Practice Phase and the Evaluation Phase of the competition.
null
null
10.18653/v1/2022.semeval-1.203
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,116
inproceedings
chen-etal-2022-opdai
{OPDAI} at {S}em{E}val-2022 Task 11: A hybrid approach for {C}hinese {NER} using outside {W}ikipedia knowledge
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.204/
Chen, Ze and Wang, Kangxu and Zheng, Jiewen and Cai, Zijian and He, Jiarong and Gao, Jin
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1488--1493
This article describes the OPDAI submission to SemEval-2022 Task 11 on Chinese complex NER. First, we explore the performance of model-based approaches and their ensemble, finding that fine-tuning the pre-trained Chinese RoBERTa-wwm model with word semantic representation and contextual gazetteer representation performs best among single models. However, the model-based approach performs poorly on the test data because of low-context and unseen-entity cases. We therefore extend our system into two stages: (1) generating entity candidates using a neural model, soft templates and a Wikipedia lexicon; (2) predicting the final entities with a feature-based ranking model. In the evaluation, our best submission achieves an $F_1$ score of 0.7954 and attains the third-best score in the Chinese sub-track.
null
null
10.18653/v1/2022.semeval-1.204
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,117
inproceedings
plank-2022-sliced
Sliced at {S}em{E}val-2022 Task 11: Bigger, Better? Massively Multilingual {LM}s for Multilingual Complex {NER} on an Academic {GPU} Budget
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.205/
Plank, Barbara
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1494--1500
Massively multilingual language models (MMLMs) have become a widely-used representation method, and multiple large MMLMs were proposed in recent years. A trend is to train MMLMs on larger text corpora or with more layers. In this paper we set out to test recent popular MMLMs on detecting semantically ambiguous and complex named entities with an academic GPU budget. Our submission of a single model for 11 languages on the SemEval Task 11 MultiCoNER shows that a vanilla transformer-CRF with XLM-R$_{large}$ outperforms the more recent RemBERT, ranking 9th out of 26 submissions in the multilingual track. Compared to RemBERT, the XLM-R model has the additional advantage of fitting on a slice of a multi-instance GPU. Contrary to expectations and recent findings, we found RemBERT not to be the best MMLM, so we further set out to investigate this discrepancy with additional experiments on multilingual Wikipedia NER data. While we expected RemBERT to have an edge on that dataset as it is closer to its pre-training data, surprisingly, our results show that this is not the case, suggesting that text domain match does not explain the discrepancy.
null
null
10.18653/v1/2022.semeval-1.205
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,118
inproceedings
he-etal-2022-infrrd
Infrrd.ai at {S}em{E}val-2022 Task 11: A system for named entity recognition using data augmentation, transformer-based sequence labeling model, and {E}nsemble{CRF}
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.206/
He, Jianglong and Uppal, Akshay and N, Mamatha and Vignesh, Shiv and Kumar, Deepak and Kumar Sarda, Aditya
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1501--1510
In low-resource languages, the amount of training data is limited. Hence, the model has to perform well on unseen sentences and syntax on which it has not been trained. We propose a method that addresses the problem through an encoder and an ensemble of language models. A language-specific language model performed poorly when compared to a multilingual language model. So, the multilingual language model checkpoint is fine-tuned to a specific language. A novel one-hot encoder approach is introduced between the model outputs and the CRF to combine the results in an ensemble format. Our team, \textbf{Infrrd.ai}, competed in the MultiCoNER competition. The results are encouraging, with the team positioned within the top 10. There is less than a 4{\%} difference from the third position in most of the tracks that we participated in. The proposed method shows that an ensemble of models with a multilingual language model as the base, with the help of an encoder, performs better than a single language-specific model.
null
null
10.18653/v1/2022.semeval-1.206
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,119
inproceedings
el-mekki-etal-2022-um6p
{UM}6{P}-{CS} at {S}em{E}val-2022 Task 11: Enhancing Multilingual and Code-Mixed Complex Named Entity Recognition via Pseudo Labels using Multilingual Transformer
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.207/
El Mekki, Abdellah and El Mahdaouy, Abdelkader and Akallouch, Mohammed and Berrada, Ismail and Khoumsi, Ahmed
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1511--1517
Building real-world complex Named Entity Recognition (NER) systems is a challenging task. This is due to the complexity and ambiguity of named entities that appear in various contexts such as short input sentences, emerging entities, and complex entities. Besides, real-world queries are mostly malformed, as they can be code-mixed or multilingual, among other scenarios. In this paper, we introduce our submitted system to the Multilingual Complex Named Entity Recognition (MultiCoNER) shared task. We approach complex NER for multilingual and code-mixed queries by relying on the contextualized representation provided by the multilingual Transformer XLM-RoBERTa. In addition to the CRF-based token classification layer, we incorporate a span classification loss to recognize named entity spans. Furthermore, we use a self-training mechanism to generate weakly-annotated data from a large unlabeled dataset. Our proposed system is ranked 6th and 8th in the multilingual and code-mixed MultiCoNER tracks, respectively.
null
null
10.18653/v1/2022.semeval-1.207
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,120
inproceedings
fu-etal-2022-casia
{CASIA} at {S}em{E}val-2022 Task 11: {C}hinese Named Entity Recognition for Complex and Ambiguous Entities
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.208/
Fu, Jia and Gan, Zhen and Li, Zhucong and Li, Sirui and Sui, Dianbo and Chen, Yubo and Liu, Kang and Zhao, Jun
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1518--1523
This paper describes our approach to developing a complex named entity recognition system for SemEval 2022 Task 11: MultiCoNER Multilingual Complex Named Entity Recognition, Track 9 - Chinese. In this task, we need to identify the entity boundaries and category labels for the six categories CW, LOC, PER, GRP, CORP, and PROD. The task focuses on detecting semantically ambiguous and complex entities in short and low-context settings. We constructed a hybrid system based on the RoBERTa-large model with three training mechanisms and a series of data augmentations. The three training mechanisms are adversarial training, Child-Tuning training, and continued pre-training. The core idea of the hybrid system is to improve the performance of the model in complex environments by introducing more domain knowledge through data augmentation and continued pre-training for domain adaptation. Our proposed method achieves a macro-F1 of 0.797 on the final test set, ranking second.
null
null
10.18653/v1/2022.semeval-1.208
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,121
inproceedings
tasnim-etal-2022-team
{TEAM}-Atreides at {S}em{E}val-2022 Task 11: On leveraging data augmentation and ensemble to recognize complex Named Entities in {B}angla
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.209/
Tasnim, Nazia and Shihab, Md. Istiak and Shahriyar Sushmit, Asif and Bethard, Steven and Sadeque, Farig
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1524--1530
Many areas, such as the biological and healthcare domains, artistic works, and organization names, have nested, overlapping, discontinuous entity mentions that may even be syntactically or semantically ambiguous in practice. Traditional sequence tagging algorithms are unable to recognize these complex mentions because they may violate the assumptions upon which sequence tagging schemes are founded. In this paper, we describe our contribution to SemEval 2022 Task 11 on identifying such complex named entities. We have leveraged an ensemble of multiple ELECTRA-based models pretrained exclusively on the Bangla language, combined with ELECTRA-based models pretrained on English, to achieve competitive performance on Track-11. Besides providing a system description, we also present the outcomes of our experiments on architectural decisions, dataset augmentations, and post-competition findings.
null
null
10.18653/v1/2022.semeval-1.209
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,122
inproceedings
martin-etal-2022-kddie
{KDDIE} at {S}em{E}val-2022 Task 11: Using {D}e{BERT}a for Named Entity Recognition
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.210/
Martin, Caleb and Yang, Huichen and Hsu, William
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1531--1535
In this work, we introduce our system for SemEval 2022 Task 11: the Multilingual Complex Named Entity Recognition (MultiCoNER) competition. Our team (KDDIE) attempted the sub-task of Named Entity Recognition (NER) for the English language in the challenge and reported our results. For this task, we use a transfer learning method: fine-tuning pre-trained language models (PLMs) on the competition dataset. Our two approaches are BERT-based PLMs and PLMs with an additional layer such as a Conditional Random Field. We report our findings and results in this paper.
null
null
10.18653/v1/2022.semeval-1.210
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,123
inproceedings
singh-etal-2022-silpa
silpa{\_}nlp at {S}em{E}val-2022 Tasks 11: Transformer based {NER} models for {H}indi and {B}angla languages
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.211/
Singh, Sumit and Jawale, Pawankumar and Tiwary, Uma
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1536--1542
We present Transformer-based pretrained models, which are fine-tuned for the Named Entity Recognition (NER) task. Our team participated in SemEval-2022 Task 11 MultiCoNER: Multilingual Complex Named Entity Recognition for Hindi and Bangla. A comparison of six models (mBERT, IndicBERT, MuRIL (Base), MuRIL (Large), XLM-RoBERTa (Base) and XLM-RoBERTa (Large)) has been performed. We find that among these models, MuRIL (Large) performs best for both the Hindi and Bangla languages. Its F1-scores for Hindi and Bangla are 0.69 and 0.59, respectively.
null
null
10.18653/v1/2022.semeval-1.211
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,124
inproceedings
rouhizadeh-teodoro-2022-ds4dh
{DS}4{DH} at {S}em{E}val-2022 Task 11: Multilingual Named Entity Recognition Using an Ensemble of Transformer-based Language Models
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.212/
Rouhizadeh, Hossein and Teodoro, Douglas
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1543--1548
In this paper, we describe our proposed method for SemEval 2022 Task 11: Multilingual Complex Named Entity Recognition (MultiCoNER). The goal of this task is to locate and classify named entities in unstructured short complex texts in 11 different languages. After training a variety of contextual language models on the NER dataset, we used an ensemble strategy based on a majority vote to finalize our model. We evaluated our proposed approach on the multilingual NER dataset at SemEval-2022. The ensemble model provided consistent improvements over the individual models on the multilingual track, achieving a macro F1 performance of 65.2{\%}. However, our results were significantly outperformed by the top-ranking systems, thus achieving only baseline performance.
null
null
10.18653/v1/2022.semeval-1.212
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,125
inproceedings
aziz-etal-2022-csecu-dsg
{CSECU}-{DSG} at {S}em{E}val-2022 Task 11: Identifying the Multilingual Complex Named Entity in Text Using Stacked Embeddings and Transformer based Approach
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.213/
Aziz, Abdul and Hossain, Md. Akram and Chy, Abu Nowshed
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1549--1555
Recognizing complex and ambiguous named entities (NEs) is one of the formidable tasks in the NLP domain. However, the diversity of linguistic constituents, syntactic structure, and semantic ambiguity, as well as differences from traditional NEs, make it challenging to identify complex NEs. To address these challenges, SemEval-2022 Task 11 introduced the shared task MultiCoNER focusing on complex named entity recognition in multilingual settings. This paper presents our participation in this task, where we propose two different approaches: a BiLSTM-CRF model with a stacked-embedding strategy and a transformer-based approach. Our proposed methods achieved competitive performance among the participants' methods in a few languages.
null
null
10.18653/v1/2022.semeval-1.213
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,126
inproceedings
dowlagar-mamidi-2022-cmnerone
{CMNERO}ne at {S}em{E}val-2022 Task 11: Code-Mixed Named Entity Recognition by leveraging multilingual data
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.214/
Dowlagar, Suman and Mamidi, Radhika
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1556--1561
Identifying named entities is, in general, a practical and challenging task in the field of Natural Language Processing. Named Entity Recognition on code-mixed text is further challenging due to the linguistic complexity resulting from the nature of the mixing. This paper describes the submission of team CMNEROne to the SemEval 2022 shared task 11, MultiCoNER. The code-mixed NER task aimed to identify named entities on the code-mixed dataset. Our work performs Named Entity Recognition (NER) on the code-mixed dataset by leveraging multilingual data. We achieved a weighted average F1 score of 0.7044, i.e., 6{\%} greater than the NER baseline.
null
null
10.18653/v1/2022.semeval-1.214
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,127
inproceedings
pais-2022-racai
{RACAI} at {S}em{E}val-2022 Task 11: Complex named entity recognition using a lateral inhibition mechanism
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.215/
Pais, Vasile
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1562--1569
This paper presents RACAI's system used for the shared task of {\textquotedblleft}Multilingual Complex Named Entity Recognition (MultiCoNER){\textquotedblright}, organized as part of {\textquotedblleft}The 16th International Workshop on Semantic Evaluation (SemEval 2022){\textquotedblright}. The system employs a novel layer inspired by the biological mechanism of lateral inhibition. This allowed the system to achieve good results without any additional resources apart from the provided training data. In addition to the system's architecture, results are provided as well as observations regarding the provided dataset.
null
null
10.18653/v1/2022.semeval-1.215
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,128
inproceedings
miftahova-etal-2022-namedentityrangers
{N}amed{E}ntity{R}angers at {S}em{E}val-2022 Task 11: Transformer-based Approaches for Multilingual Complex Named Entity Recognition
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.216/
Miftahova, Amina and Pugachev, Alexander and Skiba, Artem and Artemova, Ekaterina and Batura, Tatiana and Braslavski, Pavel and Ivanov, Vladimir
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1570--1575
This paper presents the two submissions of the NamedEntityRangers team to the MultiCoNER Shared Task, hosted at SemEval-2022. We evaluate two state-of-the-art approaches, both of which utilize pre-trained multi-lingual language models, but differently. The first approach follows the token classification schema, in which each token is assigned a tag. The second approach follows a recent template-free paradigm, in which an encoder-decoder model translates the input sequence of words to a special output, encoding named entities with predefined labels. We utilize RemBERT and mT5 as backbone models for these two approaches, respectively. Our results show that the oldie but goodie token classification outperforms the template-free method by a wide margin. Our code is available at: \url{https://github.com/Abiks/MultiCoNER}.
null
null
10.18653/v1/2022.semeval-1.216
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,129
inproceedings
dogra-etal-2022-raccoons
Raccoons at {S}em{E}val-2022 Task 11: Leveraging Concatenated Word Embeddings for Named Entity Recognition
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.217/
Dogra, Atharvan and Kaur, Prabsimran and Kohli, Guneet and Bedi, Jatin
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1576--1582
Named Entity Recognition (NER) is an essential subtask in NLP that identifies text belonging to predefined semantic categories such as person, location, organization, drug, time, clinical procedure, biological protein, etc. NER plays a vital role in various fields such as information extraction, question answering, and machine translation. This paper describes our participating system run for the named entity recognition and classification shared task at SemEval-2022. The task is motivated towards detecting semantically ambiguous and complex entities in short and low-context settings. Our team focused on improving entity recognition by improving the word embeddings. We concatenated the word representations from state-of-the-art language models and passed them to a reinforcement trainer to find the best representation. Our results highlight the improvements achieved by various embedding concatenations.
null
null
10.18653/v1/2022.semeval-1.217
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,130
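The Raccoons abstract above centers on concatenating word representations from several pre-trained models before tagging. A minimal, toy sketch of that concatenation step is given below; it is not the authors' implementation, the reinforcement-based selection they mention is omitted, and the two embedding tables merely stand in for the hidden states of two real encoders.

```python
import torch
import torch.nn as nn

class ConcatEmbedder(nn.Module):
    """Toy stand-in for concatenating word representations from two encoders.

    A real system would replace the two nn.Embedding tables with the hidden
    states of two pre-trained language models aligned to the same tokens.
    """
    def __init__(self, vocab_size, dim_a=768, dim_b=1024, num_tags=13):
        super().__init__()
        self.enc_a = nn.Embedding(vocab_size, dim_a)
        self.enc_b = nn.Embedding(vocab_size, dim_b)
        self.classifier = nn.Linear(dim_a + dim_b, num_tags)

    def forward(self, token_ids):
        # Feature-level concatenation: (batch, seq, dim_a + dim_b)
        feats = torch.cat([self.enc_a(token_ids), self.enc_b(token_ids)], dim=-1)
        return self.classifier(feats)  # per-token tag logits

logits = ConcatEmbedder(vocab_size=30000)(torch.randint(0, 30000, (2, 16)))
print(logits.shape)  # torch.Size([2, 16, 13])
```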
inproceedings
hassan-etal-2022-seql
{S}eq{L} at {S}em{E}val-2022 Task 11: An Ensemble of Transformer Based Models for Complex Named Entity Recognition Task
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.218/
Hassan, Fadi and Tufa, Wondimagegnhue and Collell, Guillem and Vossen, Piek and Beinborn, Lisa and Flanagan, Adrian and Eeik Tan, Kuan
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1583--1592
This paper presents our system used to participate in task 11 (MultiCONER) of the SemEval 2022 competition. Our system ranked fourth place in track 12 (Multilingual) and fifth place in track 13 (Code-Mixed). The goal of track 12 is to detect complex named entities in a multilingual setting, while track 13 is dedicated to detecting complex named entities in a code-mixed setting. Both systems were developed using transformer-based language models. We used an ensemble of XLM-RoBERTa-large and Microsoft/infoxlm-large with a Conditional Random Field (CRF) layer. In addition, we describe the algorithms employed to train our models and our hyper-parameter selection. We furthermore study the impact of different methods to aggregate the outputs of the individual models that compose our ensemble. Finally, we present an extensive analysis of the results and errors.
null
null
10.18653/v1/2022.semeval-1.218
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,131
inproceedings
hou-etal-2022-sfe
{SFE}-{AI} at {S}em{E}val-2022 Task 11: Low-Resource Named Entity Recognition using Large Pre-trained Language Models
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.219/
Hou, Changyu and Wang, Jun and Qiao, Yixuan and Jiang, Peng and Gao, Peng and Xie, Guotong and Lin, Qizhi and Wang, Xiaopeng and Jiang, Xiandi and Wang, Benqi and Xiao, Qifeng
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1593--1596
Large-scale pre-trained models have been widely used in named entity recognition (NER) tasks. However, model ensembling through parameter averaging or voting cannot fully exploit the complementary strengths of different models, especially in the open domain. This paper describes our NER system in SemEval 2022 Task 11: MultiCoNER. We propose an effective system that adaptively ensembles pre-trained language models through a Transformer layer. By assigning different weights to each model for different inputs, the Transformer layer integrates the advantages of diverse models effectively. Experimental results show that our method achieves superior performance in Farsi and Dutch.
null
null
10.18653/v1/2022.semeval-1.219
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,132
inproceedings
lee-etal-2022-ncuee
{NCUEE}-{NLP} at {S}em{E}val-2022 Task 11: {C}hinese Named Entity Recognition Using the {BERT}-{B}i{LSTM}-{CRF} Model
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.220/
Lee, Lung-Hao and Lu, Chien-Huan and Lin, Tzu-Mi
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1597--1602
This study describes the model design of the NCUEE-NLP system for the Chinese track of the SemEval-2022 MultiCoNER task. We use the BERT embedding for character representation and train the BiLSTM-CRF model to recognize complex named entities. A total of 21 teams participated in this track, with each team allowed a maximum of six submissions. Our best submission, with a macro-averaging F1-score of 0.7418, ranked the seventh position out of 21 teams.
null
null
10.18653/v1/2022.semeval-1.220
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,133
inproceedings
pu-etal-2022-cmb
{CMB} {AI} Lab at {S}em{E}val-2022 Task 11: A Two-Stage Approach for Complex Named Entity Recognition via Span Boundary Detection and Span Classification
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.221/
Pu, Keyu and Liu, Hongyi and Yang, Yixiao and Ji, Jiangzhou and Lv, Wenyi and He, Yaohan
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1603--1607
This paper presents a solution for SemEval-2022 Task 11, Multilingual Complex Named Entity Recognition. What is challenging in this task is detecting semantically ambiguous and complex entities in short and low-context settings. Our team (CMB AI Lab) proposes a two-stage method to recognize the named entities: first, a model based on a biaffine layer is built to predict span boundaries, and then a span classification model based on a pooling layer is built to predict the semantic tags of the spans. The basic pre-trained models we choose are XLM-RoBERTa and mT5. Our approach achieves an F1 score of 84.62 on sub-task 13, which ranks third on the leaderboard.
null
null
10.18653/v1/2022.semeval-1.221
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,134
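The CMB AI Lab abstract above describes a first-stage model that scores span boundaries with a biaffine layer. A generic biaffine span scorer is sketched below; it is not the authors' exact implementation, and the added bias feature and initialization are assumptions.

```python
import torch
import torch.nn as nn

class BiaffineSpanScorer(nn.Module):
    """Scores every (start, end) token pair as a candidate entity span."""
    def __init__(self, dim):
        super().__init__()
        # +1 adds a bias feature to both the start and end representations.
        self.U = nn.Parameter(torch.empty(dim + 1, dim + 1))
        nn.init.xavier_uniform_(self.U)

    def forward(self, start_repr, end_repr):
        # start_repr, end_repr: (batch, seq_len, dim), e.g. two projection
        # heads applied to the encoder's hidden states.
        ones = start_repr.new_ones(*start_repr.shape[:-1], 1)
        s = torch.cat([start_repr, ones], dim=-1)
        e = torch.cat([end_repr, ones], dim=-1)
        # (batch, seq_len, seq_len): entry [b, i, j] scores the span i..j.
        return torch.einsum("bid,de,bje->bij", s, self.U, e)

scores = BiaffineSpanScorer(dim=8)(torch.randn(1, 5, 8), torch.randn(1, 5, 8))
print(scores.shape)  # torch.Size([1, 5, 5])
```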
inproceedings
song-bethard-2022-ua
{UA}-{KO} at {S}em{E}val-2022 Task 11: Data Augmentation and Ensembles for {K}orean Named Entity Recognition
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.222/
Song, Hyunju and Bethard, Steven
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1608--1612
This paper presents the approaches and systems of the UA-KO team for the Korean portion of SemEval-2022 Task 11 on Multilingual Complex Named Entity Recognition. We fine-tuned Korean and multilingual BERT and RoBERTa models and conducted experiments on data augmentation, ensembles, and task-adaptive pretraining. Our final system ranked 8th out of 17 teams with an F1 score of 0.6749.
null
null
10.18653/v1/2022.semeval-1.222
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,135
inproceedings
chen-etal-2022-ustc
{USTC}-{NELSLIP} at {S}em{E}val-2022 Task 11: Gazetteer-Adapted Integration Network for Multilingual Complex Named Entity Recognition
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.223/
Chen, Beiduo and Ma, Jun-Yu and Qi, Jiajun and Guo, Wu and Ling, Zhen-Hua and Liu, Quan
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1613--1622
This paper describes the system developed by the USTC-NELSLIP team for SemEval-2022 Task 11 Multilingual Complex Named Entity Recognition (MultiCoNER). We propose a gazetteer-adapted integration network (GAIN) to improve the performance of language models for recognizing complex named entities. The method first adapts the representations of gazetteer networks to those of language models by minimizing the KL divergence between them. After adaptation, these two networks are then integrated for backend supervised named entity recognition (NER) training. The proposed method is applied to several state-of-the-art Transformer-based NER models with a gazetteer built from Wikidata, and shows great generalization ability across them. The final predictions are derived from an ensemble of these trained models. Experimental results and detailed analysis verify the effectiveness of the proposed method. The official results show that our system ranked 1st on three tracks (Chinese, Code-mixed and Bangla) and 2nd on the other ten tracks in this task.
null
null
10.18653/v1/2022.semeval-1.223
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,136
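The USTC-NELSLIP abstract above adapts gazetteer-network representations to language-model representations by minimizing a KL divergence before joint NER training. The adaptation objective alone might be written as below; this is only a sketch under the assumption that both networks emit token-level label distributions of the same size, and the direction of the divergence and the detached target are my assumptions, not details confirmed by the abstract.

```python
import torch
import torch.nn.functional as F

def gazetteer_adaptation_loss(gazetteer_logits, lm_logits):
    """KL divergence pulling the gazetteer network toward the LM's outputs.

    gazetteer_logits, lm_logits: (batch, seq_len, num_labels) token-level
    outputs; the language-model side is treated as a fixed target here.
    """
    return F.kl_div(
        F.log_softmax(gazetteer_logits, dim=-1),  # input must be log-probs
        F.softmax(lm_logits.detach(), dim=-1),    # target must be probs
        reduction="batchmean",
    )

loss = gazetteer_adaptation_loss(torch.randn(2, 6, 5), torch.randn(2, 6, 5))
print(loss.item())
```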
inproceedings
pandey-etal-2022-multilinguals-semeval
Multilinguals at {S}em{E}val-2022 Task 11: Transformer Based Architecture for Complex {NER}
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.224/
Pandey, Amit and Daw, Swayatta and Pudi, Vikram
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1623--1629
We investigate the task of complex NER for the English language. The task is non-trivial due to the semantic ambiguity of the textual structure and the rarity of occurrence of such entities in the prevalent literature. Using pre-trained language models such as BERT, we obtain a competitive performance on this task. We qualitatively analyze the performance of multiple architectures for this task. All our models are able to outperform the baseline by a significant margin. Our best performing model beats the baseline F1-score by over 9{\%}.
null
null
10.18653/v1/2022.semeval-1.224
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,137
inproceedings
boros-etal-2022-l3i
L3i at {S}em{E}val-2022 Task 11: Straightforward Additional Context for Multilingual Named Entity Recognition
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.225/
Boros, Emanuela and Gonz{\'a}lez-Gallardo, Carlos-Emiliano and Moreno, Jose and Doucet, Antoine
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1630--1638
This paper summarizes the participation of the L3i laboratory of the University of La Rochelle in SemEval-2022 Task 11, Multilingual Complex Named Entity Recognition (MultiCoNER). The task focuses on detecting semantically ambiguous and complex entities in short and low-context monolingual and multilingual settings. We argue that using a language-specific and a multilingual language model could improve the performance of multilingual and mixed NER. We also consider that using additional contexts from the training set could improve the performance of NER on short texts. Thus, we propose a straightforward technique for generating additional contexts with and without the presence of entities. Our findings suggest that, in our internal experimental setup, this approach is promising. However, we ranked above average for the high-resource languages and lower than average for low-resource and multilingual models.
null
null
10.18653/v1/2022.semeval-1.225
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,138
inproceedings
tavan-najafi-2022-marsan
{M}ar{S}an at {S}em{E}val-2022 Task 11: Multilingual complex named entity recognition using T5 and transformer encoder
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.226/
Tavan, Ehsan and Najafi, Maryam
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1639--1647
The multilingual complex named entity recognition task of SemEval-2022 required participants to detect semantically ambiguous and complex entities in 11 languages. To participate in this competition, a deep learning model is used with the T5 text-to-text language model and its multilingual version, mT5, along with the transformer's encoder module. A subtoken check is also introduced, resulting in a 4{\%} increase in the model's F1-score on English. We also examined the use of the BPEmb model for converting input tokens to representation vectors in this research. A performance evaluation of the proposed entity detection model is presented at the end of this paper. Six different scenarios were defined, and the proposed model was evaluated in each scenario on the English development set. Our model is also evaluated on other languages.
null
null
10.18653/v1/2022.semeval-1.226
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,139
inproceedings
carik-etal-2022-su
{SU}-{NLP} at {S}em{E}val-2022 Task 11: Complex Named Entity Recognition with Entity Linking
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.227/
{\c{C}}ar{\i}k, Buse and Beyhan, Fatih and Yeniterzi, Reyyan
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1648--1653
This paper describes the system proposed by Sabanc{\i} University Natural Language Processing Group in the SemEval-2022 MultiCoNER task. We developed an unsupervised entity linking pipeline that detects potential entity mentions with the help of Wikipedia and also uses the corresponding Wikipedia context to help the classifier in finding the named entity type of that mention. The proposed pipeline significantly improved the performance, especially for complex entities in low-context settings.
null
null
10.18653/v1/2022.semeval-1.227
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,140
inproceedings
gan-etal-2022-qtrade
Qtrade {AI} at {S}em{E}val-2022 Task 11: An Unified Framework for Multilingual {NER} Task
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.228/
Gan, Weichao and Lin, Yuanping and Yu, Guangbo and Chen, Guimin and Ye, Qian
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1654--1664
This paper describes our system, which placed third in the Multilingual Track (subtask 11), fourth in the Code-Mixed Track (subtask 12), and seventh in the Chinese Track (subtask 9) of SemEval 2022 Task 11: MultiCoNER Multilingual Complex Named Entity Recognition. Our system's key contributions are as follows: 1) for multilingual NER tasks, we offer a unified framework with which one can easily run single-language or multilingual NER tasks; 2) for the low-resource code-mixed NER task, one can easily enhance the dataset by implementing several simple data augmentation methods; and 3) for the Chinese task, we propose a model that can capture Chinese lexical semantic, lexical boundary, and lexical graph structural information. Finally, in the test phase, our system received macro-F1 scores of 77.66, 84.35, and 74 on task 12, task 13, and task 9.
null
null
10.18653/v1/2022.semeval-1.228
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,141
inproceedings
ma-etal-2022-pai
{PAI} at {S}em{E}val-2022 Task 11: Name Entity Recognition with Contextualized Entity Representations and Robust Loss Functions
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.229/
Ma, Long and Jian, Xiaorong and Li, Xuan
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1665--1670
This paper describes our system used in SemEval-2022 Task 11, Multilingual Complex Named Entity Recognition, achieving 3rd place for track 1 on the leaderboard. We propose Dictionary-fused BERT, a flexible approach for integrating entity dictionaries. The main ideas of our system are: 1) integrating external knowledge (an entity dictionary) into pre-trained models to obtain contextualized word and entity representations; 2) designing a robust loss function leveraging a logit matrix; and 3) adding an auxiliary task, an on-top binary classification that decides whether a token is a mention word or not, which makes the main task easier to learn. It is worth noting that our system achieves an F1 of 0.914 in the post-evaluation stage by updating the entity dictionary to the one of (CITATION), which is higher than the score of the 1st-place system on the leaderboard in the evaluation stage.
null
null
10.18653/v1/2022.semeval-1.229
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,142
inproceedings
lai-etal-2022-semeval
{S}em{E}val 2022 Task 12: Symlink - Linking Mathematical Symbols to their Descriptions
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.230/
Lai, Viet and Pouran Ben Veyseh, Amir and Dernoncourt, Franck and Nguyen, Thien
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1671--1678
We describe Symlink, a SemEval shared task of extracting mathematical symbols and their descriptions from LaTeX source of scientific documents. This is a new task in SemEval 2022, which attracted 180 individual registrations and 59 final submissions from 7 participant teams. We expect the data developed for this task and the findings reported to be valuable for the scientific knowledge extraction and automated knowledge base construction communities. The data used in this task is publicly accessible at \url{https://github.com/nlp-oregon/symlink}.
null
null
10.18653/v1/2022.semeval-1.230
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,143
inproceedings
lee-na-2022-jbnu
{JBNU}-{CCL}ab at {S}em{E}val-2022 Task 12: Machine Reading Comprehension and Span Pair Classification for Linking Mathematical Symbols to Their Descriptions
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.231/
Lee, Sung-Min and Na, Seung-Hoon
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1679--1686
This paper describes our system in the SemEval-2022 Task 12: {\textquoteleft}linking mathematical symbols to their descriptions{\textquoteright}, achieving first on the leaderboard for all the subtasks comprising named entity extraction (NER) and relation extraction (RE). Our system is a two-stage pipeline model based on SciBERT that detects symbols, descriptions, and their relationships in scientific documents. The system consists of 1) a machine reading comprehension (MRC)-based NER model, where each entity type is represented as a question and its entity mention span is extracted as an answer using an MRC model, and 2) span pair classification for RE, where two entity mentions and their type markers are encoded into span representations that are then fed to a Softmax classifier. In addition, we deploy a rule-based symbol tokenizer to improve the detection of the exact boundary of symbol entities. Regularization and ensemble methods are further explored to improve the RE model.
null
null
10.18653/v1/2022.semeval-1.231
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,144
inproceedings
popovic-etal-2022-aifb
{AIFB}-{W}eb{S}cience at {S}em{E}val-2022 Task 12: Relation Extraction First - Using Relation Extraction to Identify Entities
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.232/
Popovic, Nicholas and Laurito, Walter and F{\"a}rber, Michael
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1687--1694
In this paper, we present an end-to-end joint entity and relation extraction approach based on transformer-based language models. We apply the model to the task of linking mathematical symbols to their descriptions in LaTeX documents. In contrast to existing approaches, which perform entity and relation extraction in sequence, our system incorporates information from relation extraction into entity extraction. This means that the system can be trained even on data sets where only a subset of all valid entity spans is annotated. We provide an extensive evaluation of the proposed system and its strengths and weaknesses. Our approach, which can be scaled dynamically in computational complexity at inference time, produces predictions with high precision and reaches 3rd place in the leaderboard of SemEval-2022 Task 12. For inputs in the domain of physics and math, it achieves high relation extraction macro F1 scores of 95.43{\%} and 79.17{\%}, respectively. The code used for training and evaluating our models is available at: \url{https://github.com/nicpopovic/RE1st}
null
null
10.18653/v1/2022.semeval-1.232
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,145
inproceedings
goot-2022-machamp
{M}a{C}h{A}mp at {S}em{E}val-2022 Tasks 2, 3, 4, 6, 10, 11, and 12: Multi-task Multi-lingual Learning for a Pre-selected Set of Semantic Datasets
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.233/
van der Goot, Rob
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1695--1703
Previous work on multi-task learning in Natural Language Processing (NLP) often incorporated carefully selected tasks as well as careful tuning of architectures to share information across tasks. Recently, it has been shown that for autoregressive language models, a multi-task second pre-training step on a wide variety of NLP tasks leads to a set of parameters that more easily adapt to other NLP tasks. In this paper, we examine whether a similar setup can be used in autoencoder language models using a restricted set of semantically oriented NLP tasks, namely all SemEval 2022 tasks that are annotated at the word, sentence or paragraph level. We first evaluate a multi-task model trained on all SemEval 2022 tasks that contain annotation on the word, sentence or paragraph level (7 tasks, 11 sub-tasks), and then evaluate whether re-finetuning the resulting model for each task specifically leads to further improvements. Our results show that our mono-task baseline, our multi-task model and our re-finetuned multi-task model each outperform the other models for a subset of the tasks. Overall, huge gains can be observed by doing multi-task learning: for three tasks we observe an error reduction of more than 40{\%}.
null
null
10.18653/v1/2022.semeval-1.233
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,146
inproceedings
cohan-etal-2022-overview
Overview of the Third Workshop on Scholarly Document Processing
Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.sdp-1.1/
Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu
Proceedings of the Third Workshop on Scholarly Document Processing
1--6
With the ever-increasing pace of research and high volume of scholarly communication, scholars face a daunting task. Not only must they keep up with the growing literature in their own and related fields, scholars increasingly also need to rebut pseudo-science and disinformation. These needs have motivated an increasing focus on computational methods for enhancing search, summarization, and analysis of scholarly documents. However, the various strands of research on scholarly document processing remain fragmented. To reach out to the broader NLP and AI/ML community, pool distributed efforts in this area, and enable shared access to published research, we held the 3rd Workshop on Scholarly Document Processing (SDP) at COLING as a hybrid event (\url{https://sdproc.org/2022/}). The SDP workshop consisted of a research track, three invited talks and five Shared Tasks: 1) MSLR22: Multi-Document Summarization for Literature Reviews, 2) DAGPap22: Detecting automatically generated scientific papers, 3) SV-Ident 2022: Survey Variable Identification in Social Science Publications, 4) SKGG: Scholarly Knowledge Graph Generation, 5) MuP 2022: Multi Perspective Scientific Document Summarization. The program was geared towards NLP, information retrieval, and data mining for scholarly documents, with an emphasis on identifying and providing solutions to open challenges.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,148
inproceedings
bittermann-rieger-2022-finding
Finding Scientific Topics in Continuously Growing Text Corpora
Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.sdp-1.2/
Bittermann, Andr{\'e} and Rieger, Jonas
Proceedings of the Third Workshop on Scholarly Document Processing
7--18
The ever-growing number of research publications demands computational assistance for everyone trying to keep track of scientific processes. Topic modeling has become a popular approach for finding scientific topics in static collections of research papers. However, the reality of continuously growing corpora of scholarly documents poses a major challenge for traditional approaches. We introduce RollingLDA for the ongoing monitoring of research topics, which offers the possibility of sequential modeling of dynamically growing corpora with time consistency of the time series resulting from the modeled texts. We evaluate its capability to detect research topics and present a Shiny App as an easy-to-use interface. In addition, we illustrate usage scenarios for different user groups such as researchers, students, journalists, or policy-makers.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,149
inproceedings
medic-snajder-2022-large
Large-scale Evaluation of Transformer-based Article Encoders on the Task of Citation Recommendation
Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.sdp-1.3/
Medi{\'c}, Zoran and Snajder, Jan
Proceedings of the Third Workshop on Scholarly Document Processing
19--31
Recently introduced transformer-based article encoders (TAEs) designed to produce similar vector representations for mutually related scientific articles have demonstrated strong performance on benchmark datasets for scientific article recommendation. However, the existing benchmark datasets are predominantly focused on single domains and, in some cases, contain easy negatives in small candidate pools. Evaluating representations on such benchmarks might obscure the realistic performance of TAEs in setups with thousands of articles in candidate pools. In this work, we evaluate TAEs on large benchmarks with more challenging candidate pools. We compare the performance of TAEs with the lexical retrieval baseline BM25 on the task of citation recommendation, where the model produces a list of recommendations for citing in a given input article. We find that BM25 is still very competitive with state-of-the-art neural retrievers, which is surprising given the strong performance of TAEs on small benchmarks. As a remedy for the limitations of the existing benchmarks, we propose a new benchmark dataset for evaluating scientific article representations: the Multi-Domain Citation Recommendation dataset (MDCR), which covers different scientific fields and contains challenging candidate pools.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,150
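The abstract above finds plain BM25 retrieval surprisingly competitive with transformer-based article encoders for citation recommendation. For readers who want to reproduce that kind of lexical baseline, a minimal sketch with the third-party rank_bm25 package is shown below; the candidate texts and query are invented, and this is not the dataset or code released with the paper.

```python
from rank_bm25 import BM25Okapi  # pip install rank-bm25

# Candidate pool: title/abstract text of each candidate article (toy examples).
candidates = [
    "attention is all you need transformer architecture",
    "bm25 probabilistic relevance framework for retrieval",
    "graph neural networks for citation recommendation",
]
bm25 = BM25Okapi([doc.split() for doc in candidates])

# Query: text of the citing article (here just a short stand-in).
query = "lexical retrieval baseline for recommending citations".split()

ranked = bm25.get_top_n(query, candidates, n=2)
print(ranked)  # the two highest-scoring candidate articles
```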
inproceedings
lay-etal-2022-investigating
Investigating the detection of Tortured Phrases in Scientific Literature
Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.sdp-1.4/
Lay, Puthineath and Lentschat, Martin and Labbe, Cyril
Proceedings of the Third Workshop on Scholarly Document Processing
32--36
With the help of online tools, unscrupulous authors can today generate a pseudo-scientific article and attempt to publish it. Some of these tools work by replacing or paraphrasing existing texts to produce new content, but they have a tendency to generate nonsensical expressions. A recent study introduced the concept of the {\textquotedblleft}tortured phrase{\textquotedblright}, an unexpectedly odd phrase that appears in place of an established expression, e.g., counterfeit consciousness instead of artificial intelligence. The present study aims to investigate how tortured phrases that are not yet listed can be detected automatically. We conducted several experiments, including non-neural binary classification, neural binary classification and cosine similarity comparison of the phrase tokens, yielding noticeable results.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,151
inproceedings
huang-etal-2022-lightweight
Lightweight Contextual Logical Structure Recovery
Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.sdp-1.5/
Huang, Po-Wei and Ramesh Kashyap, Abhinav and Qin, Yanxia and Yang, Yajing and Kan, Min-Yen
Proceedings of the Third Workshop on Scholarly Document Processing
37--48
Logical structure recovery in scientific articles associates text with a semantic section of the article. Although previous work has disregarded the surrounding context of a line, we model this important information by employing line-level attention on top of a transformer-based scientific document processing pipeline. With the addition of loss function engineering and data augmentation techniques with semi-supervised learning, our method improves classification performance by 10{\%} compared to a recent state-of-the-art model. Our parsimonious, text-only method achieves a performance comparable to that of other works that use rich document features such as font and spatial position, using less data without sacrificing performance, resulting in a lightweight training pipeline.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,152
inproceedings
te-etal-2022-citation
Citation Context Classification: Critical vs Non-critical
Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.sdp-1.6/
Te, Sonita and Barhoumi, Amira and Lentschat, Martin and Bordignon, Fr{\'e}d{\'e}rique and Labb{\'e}, Cyril and Portet, Fran{\c{c}}ois
Proceedings of the Third Workshop on Scholarly Document Processing
49--53
Recently, there has been a substantial amount of research in Natural Language Processing on citation analysis in the scientific literature. Studies of citation behavior aim to find how researchers cite a paper in their work. In this paper, we are interested in identifying cited papers that are criticized. Recent research introduces the concept of Critical citations, which provides a useful theoretical framework, making criticism an important part of scientific progress. Indeed, identifying criticism could be a way to spot errors and thus encourage the self-correction of science. In this work, we investigate how to automatically classify critical citation contexts using Natural Language Processing (NLP). Our classification task consists of predicting critical or non-critical labels for citation contexts. For this, we experiment with and compare different methods, including rule-based and machine learning methods, to classify critical vs. non-critical citation contexts. Our experiments show that fine-tuning the pretrained transformer model RoBERTa achieved the highest performance among all systems.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,153
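The abstract above reports that fine-tuning RoBERTa performed best for critical vs. non-critical citation classification. A bare-bones forward/backward pass with the Hugging Face transformers sequence-classification API is sketched below; the example citation context, the label, and the label mapping are invented, and this is only an illustrative setup, not the authors' training pipeline.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2  # assumed mapping: 0 = non-critical, 1 = critical
)

context = "However, the method of [CIT] fails to generalize beyond newswire."
inputs = tokenizer(context, return_tensors="pt", truncation=True)
outputs = model(**inputs, labels=torch.tensor([1]))

outputs.loss.backward()            # an optimizer step would follow in training
print(outputs.logits.softmax(-1))  # predicted class probabilities
```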
inproceedings
sugimoto-aizawa-2022-incorporating
Incorporating the Rhetoric of Scientific Language into Sentence Embeddings using Phrase-guided Distant Supervision and Metric Learning
Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.sdp-1.7/
Sugimoto, Kaito and Aizawa, Akiko
Proceedings of the Third Workshop on Scholarly Document Processing
54--68
Communicative functions are an important rhetorical feature of scientific writing. Sentence embeddings that contain such features are highly valuable for the argumentative analysis of scientific documents, with applications in document alignment, recommendation, and academic writing assistance. Moreover, embeddings can provide a possible solution to the open-set problem, where models need to generalize to new communicative functions unseen at training time. However, existing sentence representation models are not suited for detecting functional similarity since they only consider lexical or semantic similarities. To remedy this, we propose a combined approach of distant supervision and metric learning to make a representation model more aware of the functional part of a sentence. We first leverage an existing academic phrase database to label sentences automatically with their functions. Then, we train an embedding model to capture similarities and dissimilarities from a rhetorical perspective. The experimental results demonstrate that the embeddings obtained from our model are more advantageous than existing models when retrieving functionally similar sentences. We also provide an extensive analysis of the performance differences between five metric learning objectives, revealing that traditional methods (e.g., softmax cross-entropy loss and triplet loss) outperform state-of-the-art techniques.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,154
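The abstract above reports that classic metric-learning objectives such as triplet loss worked best for capturing functional similarity between sentences. A bare-bones triplet-loss step over sentence embeddings could look like the following; the embedding dimension is arbitrary, and the random tensors stand in for an actual sentence encoder fed with anchor/positive/negative sentences sampled via the distant-supervision labels.

```python
import torch
import torch.nn as nn

embed_dim = 256
triplet = nn.TripletMarginLoss(margin=1.0)

# Placeholder sentence embeddings; in practice these would come from a
# sentence encoder applied to an anchor sentence, a sentence sharing its
# communicative function (positive), and one with a different function
# (negative).
anchor = torch.randn(8, embed_dim, requires_grad=True)
positive = torch.randn(8, embed_dim)
negative = torch.randn(8, embed_dim)

loss = triplet(anchor, positive, negative)
loss.backward()  # gradients pull functionally similar sentences together
print(loss.item())
```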
inproceedings
buhnila-2022-identifying
Identifying Medical Paraphrases in Scientific versus Popularization Texts in {F}rench for Laypeople Understanding
Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.sdp-1.8/
Buhnila, Ioana
Proceedings of the Third Workshop on Scholarly Document Processing
69--79
Scientific medical terms are difficult to understand for laypeople due to their technical formulas and etymology. Understanding medical concepts is important for laypeople as personal and public health is a lifelong concern. In this study, we present our methodology for building a French lexical resource annotated with paraphrases for the simplification of monolexical and multiword medical terms. In order to find medical paraphrases, we automatically searched for medical terms and specific lexical markers that help to paraphrase them. We annotated the medical terms, the paraphrase markers, and the paraphrases. We analysed the lexical relations and semantico-pragmatic functions that exist between the term and its paraphrase. We computed statistics for the medical paraphrase corpus, and we evaluated the readability of the medical paraphrases for a non-specialist coder. Our results show that medical paraphrases from popularization texts are easier to understand (62.66{\%}) than paraphrases extracted from scientific texts (50{\%}).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,155