bibtex_url | proceedings | bibtext | abstract | authors | title | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | paper_page_exists_pre_conf | Models | Datasets | Spaces |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2023.semeval-1.55.bib | https://aclanthology.org/2023.semeval-1.55/ | @inproceedings{wei-king-2023-stfx,
title = "{S}t{FX} {NLP} at {S}em{E}val-2023 Task 1: Multimodal Encoding-based Methods for Visual Word Sense Disambiguation",
author = "Wei, Yuchen and
King, Milton",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.55",
doi = "10.18653/v1/2023.semeval-1.55",
pages = "409--414",
abstract = "SemEval-2023{'}s Task 1, Visual Word Sense Disambiguation, a task about text semantics and visual semantics, selecting an image from a list of candidates, that best exhibits a given target word in a small context. We tried several methods, including the image captioning method and CLIP methods, and submitted our predictions in the competition for this task. This paper describes the methods we used and their performance and provides an analysis and discussion of the performance.",
}
| SemEval-2023{'}s Task 1, Visual Word Sense Disambiguation, is a task about text semantics and visual semantics: selecting an image from a list of candidates that best exhibits a given target word in a small context. We tried several methods, including the image captioning method and CLIP methods, and submitted our predictions in the competition for this task. This paper describes the methods we used and their performance and provides an analysis and discussion of the performance. | [
"Wei, Yuchen",
"King, Milton"
] | StFX NLP at SemEval-2023 Task 1: Multimodal Encoding-based Methods for Visual Word Sense Disambiguation | semeval-1.55 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.56.bib | https://aclanthology.org/2023.semeval-1.56/ | @inproceedings{tran-doan-2023-vtcc,
title = "{VTCC}-{NER} at {S}em{E}val-2023 Task 6: An Ensemble Pre-trained Language Models for Named Entity Recognition",
author = "Tran, Quang-Minh and
Doan, Xuan-Dung",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.56",
doi = "10.18653/v1/2023.semeval-1.56",
pages = "415--419",
abstract = "We propose an ensemble method that combines several pre-trained language models to enhance entity recognition in legal text. Our approach achieved a 90.9873{\%} F1 score on the private test set, ranking 2nd on the leaderboard for SemEval 2023 Task 6, Subtask B - Legal Named Entities Extraction.",
}
| We propose an ensemble method that combines several pre-trained language models to enhance entity recognition in legal text. Our approach achieved a 90.9873{\%} F1 score on the private test set, ranking 2nd on the leaderboard for SemEval 2023 Task 6, Subtask B - Legal Named Entities Extraction. | [
"Tran, Quang-Minh",
"Doan, Xuan-Dung"
] | VTCC-NER at SemEval-2023 Task 6: An Ensemble Pre-trained Language Models for Named Entity Recognition | semeval-1.56 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.57.bib | https://aclanthology.org/2023.semeval-1.57/ | @inproceedings{ginn-khamov-2023-ginn,
title = "Ginn-Khamov at {S}em{E}val-2023 Task 6, Subtask {B}: Legal Named Entities Extraction for Heterogenous Documents",
author = "Ginn, Michael and
Khamov, Roman",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.57",
doi = "10.18653/v1/2023.semeval-1.57",
pages = "420--425",
abstract = "This paper describes our submission to SemEval-2023 Task 6, Subtask B, a shared task on performing Named Entity Recognition in legal documents for specific legal entity types. Documents are divided into the preamble and judgement texts, and certain entity types should only be tagged in one of the two text sections. To address this challenge, our team proposes a token classification model that is augmented with information about the document type, which achieves greater performance than the non-augmented system.",
}
| This paper describes our submission to SemEval-2023 Task 6, Subtask B, a shared task on performing Named Entity Recognition in legal documents for specific legal entity types. Documents are divided into the preamble and judgement texts, and certain entity types should only be tagged in one of the two text sections. To address this challenge, our team proposes a token classification model that is augmented with information about the document type, which achieves greater performance than the non-augmented system. | [
"Ginn, Michael",
"Khamov, Roman"
] | Ginn-Khamov at SemEval-2023 Task 6, Subtask B: Legal Named Entities Extraction for Heterogenous Documents | semeval-1.57 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.58.bib | https://aclanthology.org/2023.semeval-1.58/ | @inproceedings{zhang-etal-2023-mao,
title = "Mao-Zedong at {S}em{E}val-2023 Task 4: Label Represention Multi-Head Attention Model with Contrastive Learning-Enhanced Nearest Neighbor Mechanism for Multi-Label Text Classification",
author = "Zhang, Che and
Liu, Ping{'}an and
Xiao, Zhenyang and
Fei, Haojun",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.58",
doi = "10.18653/v1/2023.semeval-1.58",
pages = "426--432",
abstract = "This is our system description paper for ValueEval task. The title is:Mao-Zedong At SemEval-2023 Task 4: Label Represention Multi-Head Attention Model With Contrastive Learning-Enhanced Nearest Neighbor Mechanism For Multi-Label Text Classification,and the author is Che Zhang and Pingan Liu and ZhenyangXiao and HaojunFei. In this paper, we propose a model that combinesthe label-specific attention network with the contrastive learning-enhanced nearest neighbor mechanism.",
}
| This is our system description paper for the ValueEval task. The title is: Mao-Zedong At SemEval-2023 Task 4: Label Represention Multi-Head Attention Model With Contrastive Learning-Enhanced Nearest Neighbor Mechanism For Multi-Label Text Classification, and the authors are Che Zhang, Pingan Liu, Zhenyang Xiao, and Haojun Fei. In this paper, we propose a model that combines the label-specific attention network with the contrastive learning-enhanced nearest neighbor mechanism. | [
"Zhang, Che",
"Liu, Ping{'}an",
"Xiao, Zhenyang",
"Fei, Haojun"
] | Mao-Zedong at SemEval-2023 Task 4: Label Represention Multi-Head Attention Model with Contrastive Learning-Enhanced Nearest Neighbor Mechanism for Multi-Label Text Classification | semeval-1.58 | Poster | 2307.05174 | [
"https://github.com/peterlau0626/semeval2023-task4-humanvalue"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.semeval-1.59.bib | https://aclanthology.org/2023.semeval-1.59/ | @inproceedings{pu-zhou-2023-pcj,
title = "{PCJ} at {S}em{E}val-2023 Task 10: A Ensemble Model Based on Pre-trained Model for Sexism Detection and Classification in {E}nglish",
author = "Pu, Chujun and
Zhou, Xiaobing",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.59",
doi = "10.18653/v1/2023.semeval-1.59",
pages = "433--438",
abstract = "This paper describes the system and the resulting model submitted by our team {``}PCJ{''} to the SemEval-2023 Task 10 sub-task A contest. In this task, we need to test the English text content in the posts to determine whether there is sexism, which involves emotional text classification. Our submission system utilizes methods based on RoBERTa, SimCSE-RoBERTa pre-training models, and model ensemble to classify and train on datasets provided by the organizers. In the final assessment, our submission achieved a macro average F1 score of 0.8449, ranking 28th out of 84 teams in Task A.",
}
| This paper describes the system and the resulting model submitted by our team {``}PCJ{''} to the SemEval-2023 Task 10 sub-task A contest. In this task, we need to test the English text content in the posts to determine whether there is sexism, which involves emotional text classification. Our submission system utilizes methods based on RoBERTa, SimCSE-RoBERTa pre-training models, and model ensemble to classify and train on datasets provided by the organizers. In the final assessment, our submission achieved a macro average F1 score of 0.8449, ranking 28th out of 84 teams in Task A. | [
"Pu, Chujun",
"Zhou, Xiaobing"
] | PCJ at SemEval-2023 Task 10: A Ensemble Model Based on Pre-trained Model for Sexism Detection and Classification in English | semeval-1.59 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.60.bib | https://aclanthology.org/2023.semeval-1.60/ | @inproceedings{zhang-etal-2023-srcb,
title = "{SRCB} at {S}em{E}val-2023 Task 1: Prompt Based and Cross-Modal Retrieval Enhanced Visual Word Sense Disambiguation",
author = "Zhang, Xudong and
Zhen, Tiange and
Zhang, Jing and
Wang, Yujin and
Liu, Song",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.60",
doi = "10.18653/v1/2023.semeval-1.60",
pages = "439--446",
abstract = "The Visual Word Sense Disambiguation (VWSD) shared task aims at selecting the image among candidates that best interprets the semantics of a target word with a short-length phrase for English, Italian, and Farsi. The limited phrase context, which only contains 2-3 words, challenges the model{'}s understanding ability, and the visual label requires image-text matching performance across different modalities. In this paper, we propose a prompt based and multimodal retrieval enhanced VWSD system, which uses the rich potential knowledge of large-scale pretrained models by prompting and additional text-image information from knowledge bases and open datasets. Under the English situation and given an input phrase, (1) the context retrieval module predicts the correct definition from sense inventory by matching phrase and context through a biencoder architecture. (2) The image retrieval module retrieves the relevant images from an image dataset.(3) The matching module decides that either text or image is used to pair with image labels by a rule-based strategy, then ranks the candidate images according to the similarity score. Our system ranks first in the English track and second in the average of all languages (English, Italian, and Farsi).",
}
| The Visual Word Sense Disambiguation (VWSD) shared task aims at selecting the image among candidates that best interprets the semantics of a target word with a short-length phrase for English, Italian, and Farsi. The limited phrase context, which only contains 2-3 words, challenges the model{'}s understanding ability, and the visual label requires image-text matching performance across different modalities. In this paper, we propose a prompt based and multimodal retrieval enhanced VWSD system, which uses the rich potential knowledge of large-scale pretrained models by prompting and additional text-image information from knowledge bases and open datasets. Under the English situation and given an input phrase, (1) the context retrieval module predicts the correct definition from sense inventory by matching phrase and context through a biencoder architecture. (2) The image retrieval module retrieves the relevant images from an image dataset.(3) The matching module decides that either text or image is used to pair with image labels by a rule-based strategy, then ranks the candidate images according to the similarity score. Our system ranks first in the English track and second in the average of all languages (English, Italian, and Farsi). | [
"Zhang, Xudong",
"Zhen, Tiange",
"Zhang, Jing",
"Wang, Yujin",
"Liu, Song"
] | SRCB at SemEval-2023 Task 1: Prompt Based and Cross-Modal Retrieval Enhanced Visual Word Sense Disambiguation | semeval-1.60 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.61.bib | https://aclanthology.org/2023.semeval-1.61/ | @inproceedings{alissa-abdullah-2023-just,
title = "{JUST}-{KM} at {S}em{E}val-2023 Task 7: Multi-evidence Natural Language Inference using Role-based Double Roberta-Large",
author = "Alissa, Kefah and
Abdullah, Malak",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.61",
doi = "10.18653/v1/2023.semeval-1.61",
pages = "447--452",
abstract = "In recent years, there has been a vast increase in the available clinical data. Variant Deep learning techniques are used to enhance the retrieval and interpretation of these data. This task deployed Natural language inference (NLI) in Clinical Trial Reports (CTRs) to provide individualized care that is supported by evidence. A collection of breast cancer clinical trial records, statements, annotations, and labels from experienced domain experts. NLI presents a chance to advance the widespread understanding and retrieval of medical evidence, leading to significant improvements in connecting the most recent evidence to personalized care. The primary objective is to identify the inference relationship (entailment or contradiction) between pairs of clinical trial records and statements. In this research, we used different transformer-based models, and The proposed model, {``}Role-based Double Roberta-Large,{''} achieved the best result on the testing dataset with F1-score equal to 67.0{\%}",
}
| In recent years, there has been a vast increase in the available clinical data. Variant Deep learning techniques are used to enhance the retrieval and interpretation of these data. This task deployed Natural language inference (NLI) in Clinical Trial Reports (CTRs) to provide individualized care that is supported by evidence. A collection of breast cancer clinical trial records, statements, annotations, and labels from experienced domain experts. NLI presents a chance to advance the widespread understanding and retrieval of medical evidence, leading to significant improvements in connecting the most recent evidence to personalized care. The primary objective is to identify the inference relationship (entailment or contradiction) between pairs of clinical trial records and statements. In this research, we used different transformer-based models, and The proposed model, {``}Role-based Double Roberta-Large,{''} achieved the best result on the testing dataset with F1-score equal to 67.0{\%} | [
"Alissa, Kefah",
"Abdullah, Malak"
] | JUST-KM at SemEval-2023 Task 7: Multi-evidence Natural Language Inference using Role-based Double Roberta-Large | semeval-1.61 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.62.bib | https://aclanthology.org/2023.semeval-1.62/ | @inproceedings{mehta-varma-2023-llm,
title = "{LLM}-{RM} at {S}em{E}val-2023 Task 2: Multilingual Complex {NER} Using {XLM}-{R}o{BERT}a",
author = "Mehta, Rahul and
Varma, Vasudeva",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.62",
doi = "10.18653/v1/2023.semeval-1.62",
pages = "453--456",
abstract = "Named Entity Recognition(NER) is a task ofrecognizing entities at a token level in a sen-tence. This paper focuses on solving NER tasksin a multilingual setting for complex named en-tities. Our team, LLM-RM participated in therecently organized SemEval 2023 task, Task 2:MultiCoNER II,Multilingual Complex NamedEntity Recognition. We approach the problemby leveraging cross-lingual representation pro-vided by fine-tuning XLM-Roberta base modelon datasets of all of the 12 languages provided - Bangla, Chinese, English, Farsi, French,German, Hindi, Italian, Portuguese, Spanish,Swedish and Ukrainian.",
}
| Named Entity Recognition (NER) is a task of recognizing entities at a token level in a sentence. This paper focuses on solving NER tasks in a multilingual setting for complex named entities. Our team, LLM-RM participated in the recently organized SemEval 2023 task, Task 2: MultiCoNER II, Multilingual Complex Named Entity Recognition. We approach the problem by leveraging cross-lingual representation provided by fine-tuning XLM-Roberta base model on datasets of all of the 12 languages provided - Bangla, Chinese, English, Farsi, French, German, Hindi, Italian, Portuguese, Spanish, Swedish and Ukrainian. | [
"Mehta, Rahul",
"Varma, Vasudeva"
] | LLM-RM at SemEval-2023 Task 2: Multilingual Complex NER Using XLM-RoBERTa | semeval-1.62 | Poster | 2305.03300 | [
""
] | https://huggingface.co/papers/2305.03300 | 1 | 0 | 0 | 2 | 1 | [] | [] | [] |
https://aclanthology.org/2023.semeval-1.63.bib | https://aclanthology.org/2023.semeval-1.63/ | @inproceedings{katyal-etal-2023-teampn,
title = "team{PN} at {S}em{E}val-2023 Task 1: Visual Word Sense Disambiguation Using Zero-Shot {M}ulti{M}odal Approach",
author = "Katyal, Nikita and
Rajpoot, Pawan and
Tamilarasu, Subhanandh and
Mustafi, Joy",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.63",
doi = "10.18653/v1/2023.semeval-1.63",
pages = "457--461",
abstract = "Visual Word Sense Disambiguation shared task at SemEval-2023 aims to identify an image corresponding to the intended meaning of a given ambiguous word (with related context) from a set of candidate images. The lack of textual description for the candidate image and the corresponding word{'}s ambiguity makes it a challenging problem. This paper describes teamPN{'}s multi-modal and modular approach to solving this in English track of the task. We efficiently used recent multi-modal pre-trained models backed by real-time multi-modal knowledge graphs to augment textual knowledge for the images and select the best matching image accordingly. We outperformed the baseline model by {\textasciitilde}5 points and proposed a unique approach that can further work as a framework for other modular and knowledge-backed solutions.",
}
| Visual Word Sense Disambiguation shared task at SemEval-2023 aims to identify an image corresponding to the intended meaning of a given ambiguous word (with related context) from a set of candidate images. The lack of textual description for the candidate image and the corresponding word{'}s ambiguity makes it a challenging problem. This paper describes teamPN{'}s multi-modal and modular approach to solving this in English track of the task. We efficiently used recent multi-modal pre-trained models backed by real-time multi-modal knowledge graphs to augment textual knowledge for the images and select the best matching image accordingly. We outperformed the baseline model by {\textasciitilde}5 points and proposed a unique approach that can further work as a framework for other modular and knowledge-backed solutions. | [
"Katyal, Nikita",
"Rajpoot, Pawan",
"Tamilarasu, Subhan",
"h",
"Mustafi, Joy"
] | teamPN at SemEval-2023 Task 1: Visual Word Sense Disambiguation Using Zero-Shot MultiModal Approach | semeval-1.63 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.64.bib | https://aclanthology.org/2023.semeval-1.64/ | @inproceedings{schneider-biemann-2023-lt,
title = "{LT} at {S}em{E}val-2023 Task 1: Effective Zero-Shot Visual Word Sense Disambiguation Approaches using External Knowledge Sources",
author = "Schneider, Florian and
Biemann, Chris",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.64",
doi = "10.18653/v1/2023.semeval-1.64",
pages = "462--468",
abstract = "The objective of the SemEval-2023 Task 1: Visual Word Sense Disambiguation (VWSD) is to identify the image illustrating the indented meaning of a target word and some minimal additional context. The omnipresence of textual and visual data in the task strongly suggests the utilization of the recent advances in multi-modal machine learning, i.e., pretrained visiolinguistic models (VLMs). Often referred to as foundation models due to their strong performance on many vision-language downstream tasks, these models further demonstrate powerful zero-shot capabilities. In this work, we utilize various pertained VLMs in a zero-shot fashion for multiple approaches using external knowledge sources to enrich the contextual information. Further, we evaluate our methods on the final test data and extensively analyze the suitability of different knowledge sources, the influence of training data, model sizes, multi-linguality, and different textual prompting strategies. Although we are not among the best-performing systems (rank 20 of 56), our experiments described in this work prove competitive results. Moreover, we aim to contribute meaningful insights and propel multi-modal machine learning tasks like VWSD.",
}
| The objective of the SemEval-2023 Task 1: Visual Word Sense Disambiguation (VWSD) is to identify the image illustrating the indented meaning of a target word and some minimal additional context. The omnipresence of textual and visual data in the task strongly suggests the utilization of the recent advances in multi-modal machine learning, i.e., pretrained visiolinguistic models (VLMs). Often referred to as foundation models due to their strong performance on many vision-language downstream tasks, these models further demonstrate powerful zero-shot capabilities. In this work, we utilize various pertained VLMs in a zero-shot fashion for multiple approaches using external knowledge sources to enrich the contextual information. Further, we evaluate our methods on the final test data and extensively analyze the suitability of different knowledge sources, the influence of training data, model sizes, multi-linguality, and different textual prompting strategies. Although we are not among the best-performing systems (rank 20 of 56), our experiments described in this work prove competitive results. Moreover, we aim to contribute meaningful insights and propel multi-modal machine learning tasks like VWSD. | [
"Schneider, Florian",
"Biemann, Chris"
] | LT at SemEval-2023 Task 1: Effective Zero-Shot Visual Word Sense Disambiguation Approaches using External Knowledge Sources | semeval-1.64 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.65.bib | https://aclanthology.org/2023.semeval-1.65/ | @inproceedings{guo-etal-2023-coco,
title = "Coco at {S}em{E}val-2023 Task 10: Explainable Detection of Online Sexism",
author = "Guo, Kangshuai and
Ma, Ruipeng and
Luo, Shichao and
Wang, Yan",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.65",
doi = "10.18653/v1/2023.semeval-1.65",
pages = "469--476",
abstract = "Sexism has become a growing concern on social media platforms as it impacts the health of the internet and can have negative impacts on society. This paper describes the coco system that participated in SemEval-2023 Task 10, Explainable Detection of Online Sexism (EDOS), which aims at sexism detection in various settings of natural language understanding. We develop a novel neural framework for sexism detection and misogyny that can combine text representations obtained using pre-trained language model models such as Bidirectional Encoder Representations from Transformers and using BiLSTM architecture to obtain the local and global semantic information. Further, considering that the EDOS dataset is relatively small and extremely unbalanced, we conducted data augmentation and introduced two datasets in the field of sexism detection. Moreover, we introduced Focal Loss which is a loss function in order to improve the performance of processing imbalanced data classification. Our system achieved an F1 score of 78.95{\textbackslash}{\%} on Task A - binary sexism.",
}
| Sexism has become a growing concern on social media platforms as it impacts the health of the internet and can have negative impacts on society. This paper describes the coco system that participated in SemEval-2023 Task 10, Explainable Detection of Online Sexism (EDOS), which aims at sexism detection in various settings of natural language understanding. We develop a novel neural framework for sexism detection and misogyny that can combine text representations obtained using pre-trained language model models such as Bidirectional Encoder Representations from Transformers and using BiLSTM architecture to obtain the local and global semantic information. Further, considering that the EDOS dataset is relatively small and extremely unbalanced, we conducted data augmentation and introduced two datasets in the field of sexism detection. Moreover, we introduced Focal Loss which is a loss function in order to improve the performance of processing imbalanced data classification. Our system achieved an F1 score of 78.95{\textbackslash}{\%} on Task A - binary sexism. | [
"Guo, Kangshuai",
"Ma, Ruipeng",
"Luo, Shichao",
"Wang, Yan"
] | Coco at SemEval-2023 Task 10: Explainable Detection of Online Sexism | semeval-1.65 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.66.bib | https://aclanthology.org/2023.semeval-1.66/ | @inproceedings{krog-agirrezabal-2023-diane,
title = "Diane Simmons at {S}em{E}val-2023 Task 5: Is it possible to make good clickbait spoilers using a Zero-Shot approach? Check it out!",
author = "Krog, Niels and
Agirrezabal, Manex",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.66",
doi = "10.18653/v1/2023.semeval-1.66",
pages = "477--481",
abstract = "In this paper, we present a possible solution to the SemEval23 shared task of generating spoilers for clickbait headlines. Using a Zero-Shot approach with two different Transformer architectures, BLOOM and RoBERTa, we generate three different types of spoilers: phrase, passage and multi. We found, RoBERTa pretrained for Question-Answering to perform better than BLOOM for causal language modelling, however both architectures proved promising for future attempts at such tasks.",
}
| In this paper, we present a possible solution to the SemEval23 shared task of generating spoilers for clickbait headlines. Using a Zero-Shot approach with two different Transformer architectures, BLOOM and RoBERTa, we generate three different types of spoilers: phrase, passage and multi. We found, RoBERTa pretrained for Question-Answering to perform better than BLOOM for causal language modelling, however both architectures proved promising for future attempts at such tasks. | [
"Krog, Niels",
"Agirrezabal, Manex"
] | Diane Simmons at SemEval-2023 Task 5: Is it possible to make good clickbait spoilers using a Zero-Shot approach? Check it out! | semeval-1.66 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.67.bib | https://aclanthology.org/2023.semeval-1.67/ | @inproceedings{grbowiec-2023-opi,
title = "{OPI} {PIB} at {S}em{E}val-2023 Task 1: A {CLIP}-based Solution Paired with an Additional Word Context Extension",
author = "Gr{\k{e}}bowiec, Ma{\l}gorzata",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.67",
doi = "10.18653/v1/2023.semeval-1.67",
pages = "482--487",
abstract = "This article presents our solution for SemEval-2023 Task 1: Visual Word Sense Disambiguation. The aim of the task was to select the most suitable from a list of ten images for a given word, extended by a small textual context. Our solution comprises two parts. The first focuses on an attempt to further extend the textual context, based on word definitions contained in WordNet and in Open English WordNet. The second focuses on selecting the most suitable image using the CLIP model with previously developed word context and additional information obtained from the BEiT image classification model. Our solution allowed us to achieve a result of 70.84{\%} on the official test dataset for the English language.",
}
| This article presents our solution for SemEval-2023 Task 1: Visual Word Sense Disambiguation. The aim of the task was to select the most suitable from a list of ten images for a given word, extended by a small textual context. Our solution comprises two parts. The first focuses on an attempt to further extend the textual context, based on word definitions contained in WordNet and in Open English WordNet. The second focuses on selecting the most suitable image using the CLIP model with previously developed word context and additional information obtained from the BEiT image classification model. Our solution allowed us to achieve a result of 70.84{\%} on the official test dataset for the English language. | [
"Gr{\\k{e}}bowiec, Ma{\\l}gorzata"
] | OPI PIB at SemEval-2023 Task 1: A CLIP-based Solution Paired with an Additional Word Context Extension | semeval-1.67 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.68.bib | https://aclanthology.org/2023.semeval-1.68/ | @inproceedings{wang-etal-2023-nlnde,
title = "{NLNDE} at {S}em{E}val-2023 Task 12: Adaptive Pretraining and Source Language Selection for Low-Resource Multilingual Sentiment Analysis",
author = {Wang, Mingyang and
Adel, Heike and
Lange, Lukas and
Str{\"o}tgen, Jannik and
Sch{\"u}tze, Hinrich},
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.68",
doi = "10.18653/v1/2023.semeval-1.68",
pages = "488--497",
abstract = "This paper describes our system developed for the SemEval-2023 Task 12 {``}Sentiment Analysis for Low-resource African Languages using Twitter Dataset{''}. Sentiment analysis is one of the most widely studied applications in natural language processing. However, most prior work still focuses on a small number of high-resource languages. Building reliable sentiment analysis systems for low-resource languages remains challenging, due to the limited training data in this task. In this work, we propose to leverage language-adaptive and task-adaptive pretraining on African texts and study transfer learning with source language selection on top of an African language-centric pretrained language model. Our key findings are: (1) Adapting the pretrained model to the target language and task using a small yet relevant corpus improves performance remarkably by more than 10 F1 score points. (2) Selecting source languages with positive transfer gains during training can avoid harmful interference from dissimilar languages, leading to better results in multilingual and cross-lingual settings. In the shared task, our system wins 8 out of 15 tracks and, in particular, performs best in the multilingual evaluation.",
}
| This paper describes our system developed for the SemEval-2023 Task 12 {``}Sentiment Analysis for Low-resource African Languages using Twitter Dataset{''}. Sentiment analysis is one of the most widely studied applications in natural language processing. However, most prior work still focuses on a small number of high-resource languages. Building reliable sentiment analysis systems for low-resource languages remains challenging, due to the limited training data in this task. In this work, we propose to leverage language-adaptive and task-adaptive pretraining on African texts and study transfer learning with source language selection on top of an African language-centric pretrained language model. Our key findings are: (1) Adapting the pretrained model to the target language and task using a small yet relevant corpus improves performance remarkably by more than 10 F1 score points. (2) Selecting source languages with positive transfer gains during training can avoid harmful interference from dissimilar languages, leading to better results in multilingual and cross-lingual settings. In the shared task, our system wins 8 out of 15 tracks and, in particular, performs best in the multilingual evaluation. | [
"Wang, Mingyang",
"Adel, Heike",
"Lange, Lukas",
"Str{\\\"o}tgen, Jannik",
"Sch{\\\"u}tze, Hinrich"
] | NLNDE at SemEval-2023 Task 12: Adaptive Pretraining and Source Language Selection for Low-Resource Multilingual Sentiment Analysis | semeval-1.68 | Poster | 2305.00090 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.semeval-1.69.bib | https://aclanthology.org/2023.semeval-1.69/ | @inproceedings{mahmoudi-2023-iust,
title = "{IUST}{\_}{NLP} at {S}em{E}val-2023 Task 10: Explainable Detecting Sexism with Transformers and Task-adaptive Pretraining",
author = "Mahmoudi, Hadiseh",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.69",
doi = "10.18653/v1/2023.semeval-1.69",
pages = "498--505",
abstract = "This paper describes our system on SemEval-2023 Task 10: Explainable Detection of Online Sexism (EDOS). This work aims to design an automatic system for detecting and classifying sexist content in online spaces. We propose a set of transformer-based pre-trained models with task-adaptive pretraining and ensemble learning. The main contributions of our system include analyzing the performance of different transformer-based pre-trained models and combining these models, as well as providing an efficient method using large amounts of unlabeled data for model adaptive pretraining. We have also explored several other strategies. On the test dataset, our system achieves F1-scores of 83{\%}, 64{\%}, and 47{\%} on subtasks A, B, and C, respectively.",
}
| This paper describes our system on SemEval-2023 Task 10: Explainable Detection of Online Sexism (EDOS). This work aims to design an automatic system for detecting and classifying sexist content in online spaces. We propose a set of transformer-based pre-trained models with task-adaptive pretraining and ensemble learning. The main contributions of our system include analyzing the performance of different transformer-based pre-trained models and combining these models, as well as providing an efficient method using large amounts of unlabeled data for model adaptive pretraining. We have also explored several other strategies. On the test dataset, our system achieves F1-scores of 83{\%}, 64{\%}, and 47{\%} on subtasks A, B, and C, respectively. | [
"Mahmoudi, Hadiseh"
] | IUST_NLP at SemEval-2023 Task 10: Explainable Detecting Sexism with Transformers and Task-adaptive Pretraining | semeval-1.69 | Poster | 2305.06892 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.semeval-1.70.bib | https://aclanthology.org/2023.semeval-1.70/ | @inproceedings{yang-etal-2023-tam,
title = "{TAM} of {SCNU} at {S}em{E}val-2023 Task 1: {FCLL}: A Fine-grained Contrastive Language-Image Learning Model for Cross-language Visual Word Sense Disambiguation",
author = "Yang, Qihao and
Li, Yong and
Wang, Xuelin and
Li, Shunhao and
Hao, Tianyong",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.70",
doi = "10.18653/v1/2023.semeval-1.70",
pages = "506--511",
abstract = "Visual Word Sense Disambiguation (WSD), as a fine-grained image-text retrieval task, aims to identify the images that are relevant to ambiguous target words or phrases. However, the difficulties of limited contextual information and cross-linguistic background knowledge in text processing make this task challenging. To alleviate this issue, we propose a Fine-grained Contrastive Language-Image Learning (FCLL) model, which learns fine-grained image-text knowledge by employing a new fine-grained contrastive learning mechanism and enriches contextual information by establishing relationship between concepts and sentences. In addition, a new multimodal-multilingual knowledge base involving ambiguous target words is constructed for visual WSD. Experiment results on the benchmark datasets from SemEval-2023 Task 1 show that our FCLL ranks at the first in overall evaluation with an average H@1 of 72.56{\textbackslash}{\%} and an average MRR of 82.22{\textbackslash}{\%}. The results demonstrate that FCLL is effective in inference on fine-grained language-vision knowledge. Source codes and the knowledge base are publicly available at \url{https://github.com/CharlesYang030/FCLL}.",
}
| Visual Word Sense Disambiguation (WSD), as a fine-grained image-text retrieval task, aims to identify the images that are relevant to ambiguous target words or phrases. However, the difficulties of limited contextual information and cross-linguistic background knowledge in text processing make this task challenging. To alleviate this issue, we propose a Fine-grained Contrastive Language-Image Learning (FCLL) model, which learns fine-grained image-text knowledge by employing a new fine-grained contrastive learning mechanism and enriches contextual information by establishing relationship between concepts and sentences. In addition, a new multimodal-multilingual knowledge base involving ambiguous target words is constructed for visual WSD. Experiment results on the benchmark datasets from SemEval-2023 Task 1 show that our FCLL ranks at the first in overall evaluation with an average H@1 of 72.56{\textbackslash}{\%} and an average MRR of 82.22{\textbackslash}{\%}. The results demonstrate that FCLL is effective in inference on fine-grained language-vision knowledge. Source codes and the knowledge base are publicly available at \url{https://github.com/CharlesYang030/FCLL}. | [
"Yang, Qihao",
"Li, Yong",
"Wang, Xuelin",
"Li, Shunhao",
"Hao, Tianyong"
] | TAM of SCNU at SemEval-2023 Task 1: FCLL: A Fine-grained Contrastive Language-Image Learning Model for Cross-language Visual Word Sense Disambiguation | semeval-1.70 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.71.bib | https://aclanthology.org/2023.semeval-1.71/ | @inproceedings{delil-kuyumcu-2023-sefamerve,
title = "Sefamerve at {S}em{E}val-2023 Task 12: Semantic Evaluation of Rarely Studied Languages",
author = "Delil, Selman and
Kuyumcu, Birol",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.71",
doi = "10.18653/v1/2023.semeval-1.71",
pages = "512--516",
abstract = "This paper describes our contribution to SemEval-23 Shared Task 12: ArfiSenti. The task consists of several sentiment classification subtasks for rarely studied African languages to predict positive, negative, or neutral classes of a given Twitter dataset. In our system we utilized three different models; FastText, MultiLang Transformers, and Language-Specific Transformers to find the best working model for the classification challenge. We experimented with mentioned models and mostly reached the best prediction scores using the Language Specific Transformers. Our best-submitted result was ranked 3rd among submissions for the Amharic language, obtaining an F1 score of 0.702 behind the second-ranked system.",
}
| This paper describes our contribution to SemEval-23 Shared Task 12: ArfiSenti. The task consists of several sentiment classification subtasks for rarely studied African languages to predict positive, negative, or neutral classes of a given Twitter dataset. In our system we utilized three different models; FastText, MultiLang Transformers, and Language-Specific Transformers to find the best working model for the classification challenge. We experimented with mentioned models and mostly reached the best prediction scores using the Language Specific Transformers. Our best-submitted result was ranked 3rd among submissions for the Amharic language, obtaining an F1 score of 0.702 behind the second-ranked system. | [
"Delil, Selman",
"Kuyumcu, Birol"
] | Sefamerve at SemEval-2023 Task 12: Semantic Evaluation of Rarely Studied Languages | semeval-1.71 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.72.bib | https://aclanthology.org/2023.semeval-1.72/ | @inproceedings{jin-wang-2023-teamshakespeare,
title = "{T}eam{S}hakespeare at {S}em{E}val-2023 Task 6: Understand Legal Documents with Contextualized Large Language Models",
author = "Jin, Xin and
Wang, Yuchen",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.72",
doi = "10.18653/v1/2023.semeval-1.72",
pages = "517--525",
abstract = "The growth of pending legal cases in populouscountries, such as India, has become a major is-sue. Developing effective techniques to processand understand legal documents is extremelyuseful in resolving this problem. In this pa-per, we present our systems for SemEval-2023Task 6: understanding legal texts (Modi et al., 2023). Specifically, we first develop the Legal-BERT-HSLN model that considers the com-prehensive context information in both intra-and inter-sentence levels to predict rhetoricalroles (subtask A) and then train a Legal-LUKEmodel, which is legal-contextualized and entity-aware, to recognize legal entities (subtask B).Our evaluations demonstrate that our designedmodels are more accurate than baselines, e.g.,with an up to 15.0{\%} better F1 score in subtaskB. We achieved notable performance in the taskleaderboard, e.g., 0.834 micro F1 score, andranked No.5 out of 27 teams in subtask A.",
}
| The growth of pending legal cases in populous countries, such as India, has become a major issue. Developing effective techniques to process and understand legal documents is extremely useful in resolving this problem. In this paper, we present our systems for SemEval-2023 Task 6: understanding legal texts (Modi et al., 2023). Specifically, we first develop the Legal-BERT-HSLN model that considers the comprehensive context information in both intra- and inter-sentence levels to predict rhetorical roles (subtask A) and then train a Legal-LUKE model, which is legal-contextualized and entity-aware, to recognize legal entities (subtask B). Our evaluations demonstrate that our designed models are more accurate than baselines, e.g., with an up to 15.0{\%} better F1 score in subtask B. We achieved notable performance in the task leaderboard, e.g., 0.834 micro F1 score, and ranked No.5 out of 27 teams in subtask A. | [
"Jin, Xin",
"Wang, Yuchen"
] | TeamShakespeare at SemEval-2023 Task 6: Understand Legal Documents with Contextualized Large Language Models | semeval-1.72 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.73.bib | https://aclanthology.org/2023.semeval-1.73/ | @inproceedings{obeidat-etal-2023-just,
title = "{JUST}{\_}{ONE} at {S}em{E}val-2023 Task 10: Explainable Detection of Online Sexism ({EDOS})",
author = "Obeidat, Doaa and
Shnaigat, Wala{'}a and
Nammas, Heba and
Abdullah, Malak",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.73",
doi = "10.18653/v1/2023.semeval-1.73",
pages = "526--531",
abstract = "The problem of online sexism, which refers to offensive content targeting women based on their gender or the intersection of their gender with one or more additional identity characteristics, such as race or religion, has become a widespread phenomenon on social media. This can include sexist comments and memes. To address this issue, the SemEval-2023 international workshop introduced the {``}Explainable Detection of Online Sexism Challenge{''}, which aims to explain the classifications given by AI models for detecting sexism. In this paper, we present the contributions of our team, JUSTONE, to all three sub-tasks of the challenge: subtask A, a binary classification task; subtask B, a four-class classification task; and subtask C, a fine-grained classification task. To accomplish this, we utilized pre-trained language models, specifically BERT and RoBERTa from Hugging Face, and a selective ensemble method in task 10 of the SemEval 2023 competition. As a result, our team achieved the following rankings and scores in different tasks: 19th out of 84 with a Macro-F1 score of 0.8538 in task A, 22nd out of 69 with a Macro-F1 score of 0.6417 in task B, and 14th out of 63 with a Macro-F1 score of 0.4774 in task C.",
}
| The problem of online sexism, which refers to offensive content targeting women based on their gender or the intersection of their gender with one or more additional identity characteristics, such as race or religion, has become a widespread phenomenon on social media. This can include sexist comments and memes. To address this issue, the SemEval-2023 international workshop introduced the {``}Explainable Detection of Online Sexism Challenge{''}, which aims to explain the classifications given by AI models for detecting sexism. In this paper, we present the contributions of our team, JUSTONE, to all three sub-tasks of the challenge: subtask A, a binary classification task; subtask B, a four-class classification task; and subtask C, a fine-grained classification task. To accomplish this, we utilized pre-trained language models, specifically BERT and RoBERTa from Hugging Face, and a selective ensemble method in task 10 of the SemEval 2023 competition. As a result, our team achieved the following rankings and scores in different tasks: 19th out of 84 with a Macro-F1 score of 0.8538 in task A, 22nd out of 69 with a Macro-F1 score of 0.6417 in task B, and 14th out of 63 with a Macro-F1 score of 0.4774 in task C. | [
"Obeidat, Doaa",
"Shnaigat, Wala{'}a",
"Nammas, Heba",
"Abdullah, Malak"
] | JUST_ONE at SemEval-2023 Task 10: Explainable Detection of Online Sexism (EDOS) | semeval-1.73 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.74.bib | https://aclanthology.org/2023.semeval-1.74/ | @inproceedings{schroter-etal-2023-adam,
title = "{A}dam-Smith at {S}em{E}val-2023 Task 4: Discovering Human Values in Arguments with Ensembles of Transformer-based Models",
author = "Schroter, Daniel and
Dementieva, Daryna and
Groh, Georg",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.74",
doi = "10.18653/v1/2023.semeval-1.74",
pages = "532--541",
abstract = "This paper presents the best-performing approach alias {``}Adam Smith{''} for the SemEval-2023 Task 4: {``}Identification of Human Values behind Arguments{''}. The goal of the task was to create systems that automatically identify the values within textual arguments. We train transformer-based models until they reach their loss minimum or f1-score maximum. Ensembling the models by selecting one global decision threshold that maximizes the f1-score leads to the best-performing system in the competition. Ensembling based on stacking with logistic regressions shows the best performance on an additional dataset provided to evaluate the robustness ({``}Nahj al-Balagha{''}). Apart from outlining the submitted system, we demonstrate that the use of the large ensemble model is not necessary and that the system size can be significantly reduced.",
}
| This paper presents the best-performing approach alias {``}Adam Smith{''} for the SemEval-2023 Task 4: {``}Identification of Human Values behind Arguments{''}. The goal of the task was to create systems that automatically identify the values within textual arguments. We train transformer-based models until they reach their loss minimum or f1-score maximum. Ensembling the models by selecting one global decision threshold that maximizes the f1-score leads to the best-performing system in the competition. Ensembling based on stacking with logistic regressions shows the best performance on an additional dataset provided to evaluate the robustness ({``}Nahj al-Balagha{''}). Apart from outlining the submitted system, we demonstrate that the use of the large ensemble model is not necessary and that the system size can be significantly reduced. | [
"Schroter, Daniel",
"Dementieva, Daryna",
"Groh, Georg"
] | Adam-Smith at SemEval-2023 Task 4: Discovering Human Values in Arguments with Ensembles of Transformer-based Models | semeval-1.74 | Poster | 2305.08625 | [
"https://github.com/danielschroter/human_value_detector"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.semeval-1.75.bib | https://aclanthology.org/2023.semeval-1.75/ | @inproceedings{papadopoulos-etal-2023-andronicus,
title = "Andronicus of Rhodes at {S}em{E}val-2023 Task 4: Transformer-Based Human Value Detection Using Four Different Neural Network Architectures",
author = "Papadopoulos, Georgios and
Kokol, Marko and
Dagioglou, Maria and
Petasis, Georgios",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.75",
doi = "10.18653/v1/2023.semeval-1.75",
pages = "542--548",
abstract = "This paper presents our participation to the {``}Human Value Detection shared task (Kiesel et al., 2023), as {``}Andronicus of Rhodes. We describe the approaches behind each entry in the official evaluation, along with the motivation behind each approach. Our best-performing approach has been based on BERT large, with 4 classification heads, implementing two different classification approaches (with different activation and loss functions), and two different partitioning of the training data, to handle class imbalance. Classification is performed through majority voting. The proposed approach outperforms the BERT baseline, ranking in the upper half of the competition.",
}
| This paper presents our participation to the {``}Human Value Detection shared task (Kiesel et al., 2023), as {``}Andronicus of Rhodes. We describe the approaches behind each entry in the official evaluation, along with the motivation behind each approach. Our best-performing approach has been based on BERT large, with 4 classification heads, implementing two different classification approaches (with different activation and loss functions), and two different partitioning of the training data, to handle class imbalance. Classification is performed through majority voting. The proposed approach outperforms the BERT baseline, ranking in the upper half of the competition. | [
"Papadopoulos, Georgios",
"Kokol, Marko",
"Dagioglou, Maria",
"Petasis, Georgios"
] | Andronicus of Rhodes at SemEval-2023 Task 4: Transformer-Based Human Value Detection Using Four Different Neural Network Architectures | semeval-1.75 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.76.bib | https://aclanthology.org/2023.semeval-1.76/ | @inproceedings{lepekhin-sharoff-2023-ftd,
title = "{FTD} at {S}em{E}val-2023 Task 3: News Genre and Propaganda Detection by Comparing Mono- and Multilingual Models with Fine-tuning on Additional Data",
author = "Lepekhin, Mikhail and
Sharoff, Serge",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.76",
doi = "10.18653/v1/2023.semeval-1.76",
pages = "549--555",
abstract = "We report our participation in the SemEval-2023 shared task on propaganda detection and describe our solutions with pre-trained models and their ensembles. For Subtask 1 (News Genre Categorisation), we report the impact of several settings, such as the choice of the classification models (monolingual or multilingual or their ensembles), the choice of the training sets (base or additional sources), the impact of detection certainty in making a classification decision as well as the impact of other hyper-parameters. In particular, we fine-tune models on additional data for other genre classification tasks, such as FTD. We also try adding texts from genre-homogenous corpora, such as Panorama, Babylon Bee for satire and Giganews for for reporting texts. We also make prepared models for Subtasks 2 and 3 with finetuning the corresponding models first for Subtask 1.The code needed to reproduce the experiments is available.",
}
| We report our participation in the SemEval-2023 shared task on propaganda detection and describe our solutions with pre-trained models and their ensembles. For Subtask 1 (News Genre Categorisation), we report the impact of several settings, such as the choice of the classification models (monolingual or multilingual or their ensembles), the choice of the training sets (base or additional sources), the impact of detection certainty in making a classification decision, as well as the impact of other hyper-parameters. In particular, we fine-tune models on additional data for other genre classification tasks, such as FTD. We also try adding texts from genre-homogeneous corpora, such as Panorama and Babylon Bee for satire, and Giganews for reporting texts. We also prepare models for Subtasks 2 and 3 by first fine-tuning the corresponding models for Subtask 1. The code needed to reproduce the experiments is available. | [
"Lepekhin, Mikhail",
"Sharoff, Serge"
] | FTD at SemEval-2023 Task 3: News Genre and Propaganda Detection by Comparing Mono- and Multilingual Models with Fine-tuning on Additional Data | semeval-1.76 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.77.bib | https://aclanthology.org/2023.semeval-1.77/ | @inproceedings{rizzi-etal-2023-mind,
title = "{MIND} at {S}em{E}val-2023 Task 11: From Uncertain Predictions to Subjective Disagreement",
author = "Rizzi, Giulia and
Astorino, Alessandro and
Scalena, Daniel and
Rosso, Paolo and
Fersini, Elisabetta",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.77",
doi = "10.18653/v1/2023.semeval-1.77",
pages = "556--564",
abstract = "This paper describes the participation of the research laboratory MIND, at the University of Milano-Bicocca, in the SemEval 2023 task related to Learning With Disagreements (Le-Wi-Di). The main goal is to identify the level of agreement/disagreement from a collection of textual datasets with different characteristics in terms of style, language and task. The proposed approach is grounded on the hypothesis that the disagreement between annotators could be grasped by the uncertainty that a model, based on several linguistic characteristics, could have on the prediction of a given gold label.",
}
| This paper describes the participation of the research laboratory MIND, at the University of Milano-Bicocca, in the SemEval 2023 task related to Learning With Disagreements (Le-Wi-Di). The main goal is to identify the level of agreement/disagreement from a collection of textual datasets with different characteristics in terms of style, language and task. The proposed approach is grounded on the hypothesis that the disagreement between annotators could be grasped by the uncertainty that a model, based on several linguistic characteristics, could have on the prediction of a given gold label. | [
"Rizzi, Giulia",
"Astorino, Aless",
"ro",
"Scalena, Daniel",
"Rosso, Paolo",
"Fersini, Elisabetta"
] | MIND at SemEval-2023 Task 11: From Uncertain Predictions to Subjective Disagreement | semeval-1.77 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.78.bib | https://aclanthology.org/2023.semeval-1.78/ | @inproceedings{sartipi-etal-2023-sartipi,
title = "Sartipi-Sedighin at {S}em{E}val-2023 Task 2: Fine-grained Named Entity Recognition with Pre-trained Contextual Language Models and Data Augmentation from {W}ikipedia",
author = "Sartipi, Amir and
Sedighin, Amirreza and
Fatemi, Afsaneh and
Baradaran Kashani, Hamidreza",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.78",
doi = "10.18653/v1/2023.semeval-1.78",
pages = "565--579",
abstract = "This paper presents the system developed by the Sartipi-Sedighin team for SemEval 2023 Task 2, which is a shared task focused on multilingual complex named entity recognition (NER), or MultiCoNER II. The goal of this task is to identify and classify complex named entities (NEs) in text across multiple languages. To tackle the MultiCoNER II task, we leveraged pre-trained language models (PLMs) fine-tuned for each language included in the dataset. In addition, we also applied a data augmentation technique to increase the amount of training data available to our models. Specifically, we searched for relevant NEs that already existed in the training data within Wikipedia, and we added new instances of these entities to our training corpus. Our team achieved an overall F1 score of 61.25{\%} in the English track and 71.79{\%} in the multilingual track across all 13 tracks of the shared task that we submitted to.",
}
| This paper presents the system developed by the Sartipi-Sedighin team for SemEval 2023 Task 2, which is a shared task focused on multilingual complex named entity recognition (NER), or MultiCoNER II. The goal of this task is to identify and classify complex named entities (NEs) in text across multiple languages. To tackle the MultiCoNER II task, we leveraged pre-trained language models (PLMs) fine-tuned for each language included in the dataset. In addition, we also applied a data augmentation technique to increase the amount of training data available to our models. Specifically, we searched for relevant NEs that already existed in the training data within Wikipedia, and we added new instances of these entities to our training corpus. Our team achieved an overall F1 score of 61.25{\%} in the English track and 71.79{\%} in the multilingual track across all 13 tracks of the shared task that we submitted to. | [
"Sartipi, Amir",
"Sedighin, Amirreza",
"Fatemi, Afsaneh",
"Baradaran Kashani, Hamidreza"
] | Sartipi-Sedighin at SemEval-2023 Task 2: Fine-grained Named Entity Recognition with Pre-trained Contextual Language Models and Data Augmentation from Wikipedia | semeval-1.78 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.79.bib | https://aclanthology.org/2023.semeval-1.79/ | @inproceedings{almuslim-etal-2023-uottawa,
title = "u{O}ttawa at {S}em{E}val-2023 Task 6: Deep Learning for Legal Text Understanding",
author = "Almuslim, Intisar and
Stilwell, Sean and
Kiran Suresh, Surya and
Inkpen, Diana",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.79",
doi = "10.18653/v1/2023.semeval-1.79",
pages = "580--588",
abstract = "We describe the methods we used for legal text understanding, specifically Task 6 Legal-Eval at SemEval 2023. The outcomes could assist law practitioners and help automate the working process of judicial systems. The shared task defined three main sub-tasks: sub-task A, Rhetorical Roles Prediction (RR); sub-task B, Legal Named Entities Extraction (L-NER); and sub-task C, Court Judgement Prediction with Explanation (CJPE). Our team addressed all three sub-tasks by exploring various Deep Learning (DL) based models. Overall, our team{'}s approaches achieved promising results on all three sub-tasks, demonstrating the potential of deep learning-based models in the judicial domain.",
}
| We describe the methods we used for legal text understanding, specifically Task 6 Legal-Eval at SemEval 2023. The outcomes could assist law practitioners and help automate the working process of judicial systems. The shared task defined three main sub-tasks: sub-task A, Rhetorical Roles Prediction (RR); sub-task B, Legal Named Entities Extraction (L-NER); and sub-task C, Court Judgement Prediction with Explanation (CJPE). Our team addressed all three sub-tasks by exploring various Deep Learning (DL) based models. Overall, our team{'}s approaches achieved promising results on all three sub-tasks, demonstrating the potential of deep learning-based models in the judicial domain. | [
"Almuslim, Intisar",
"Stilwell, Sean",
"Kiran Suresh, Surya",
"Inkpen, Diana"
] | uOttawa at SemEval-2023 Task 6: Deep Learning for Legal Text Understanding | semeval-1.79 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.80.bib | https://aclanthology.org/2023.semeval-1.80/ | @inproceedings{pan-etal-2023-umuteam,
title = "{UMUT}eam at {S}em{E}val-2023 Task 10: Fine-grained detection of sexism in {E}nglish",
author = "Pan, Ronghao and
Garc{\'\i}a-D{\'\i}az, Jos{\'e} Antonio and
Jim{\'e}nez Zafra, Salud Mar{\'\i}a and
Valencia-Garc{\'\i}a, Rafael",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.80",
doi = "10.18653/v1/2023.semeval-1.80",
pages = "589--594",
abstract = "In this manuscript, we describe the participation of UMUTeam in the Explainable Detection of Online Sexism shared task proposed at SemEval 2023. This task concerns the precise and explainable detection of sexist content on Gab and Reddit, i.e., developing detailed classifiers that not only identify what is sexist, but also explain why it is sexism. Our participation in the three EDOS subtasks is based on extending new unlabeled sexism data in the Masked Language Model task of a pre-trained model, such as RoBERTa-large to improve its generalization capacity and its performance on classification tasks. Once the model has been pre-trained with the new data, fine-tuning of this model is performed for different specific sexism classification tasks. Our system has achieved excellent results in this competitive task, reaching top 24 (84) in Task A, top 23 (69) in Task B, and top 13 (63) in Task C.",
}
| In this manuscript, we describe the participation of UMUTeam in the Explainable Detection of Online Sexism shared task proposed at SemEval 2023. This task concerns the precise and explainable detection of sexist content on Gab and Reddit, i.e., developing detailed classifiers that not only identify what is sexist, but also explain why it is sexism. Our participation in the three EDOS subtasks is based on extending new unlabeled sexism data in the Masked Language Model task of a pre-trained model, such as RoBERTa-large to improve its generalization capacity and its performance on classification tasks. Once the model has been pre-trained with the new data, fine-tuning of this model is performed for different specific sexism classification tasks. Our system has achieved excellent results in this competitive task, reaching top 24 (84) in Task A, top 23 (69) in Task B, and top 13 (63) in Task C. | [
"Pan, Ronghao",
"Garc{\\'\\i}a-D{\\'\\i}az, Jos{\\'e} Antonio",
"Jim{\\'e}nez Zafra, Salud Mar{\\'\\i}a",
"Valencia-Garc{\\'\\i}a, Rafael"
] | UMUTeam at SemEval-2023 Task 10: Fine-grained detection of sexism in English | semeval-1.80 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.81.bib | https://aclanthology.org/2023.semeval-1.81/ | @inproceedings{christodoulou-2023-nlp,
title = "{NLP}{\_}{CHRISTINE} at {S}em{E}val-2023 Task 10: Utilizing Transformer Contextual Representations and Ensemble Learning for Sexism Detection on Social Media Texts",
author = "Christodoulou, Christina",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.81",
doi = "10.18653/v1/2023.semeval-1.81",
pages = "595--602",
abstract = "The paper describes the SemEval-2023 Task 10: {``}Explainable Detection of Online Sexism (EDOS){''}, which investigates the detection of sexism on two social media sites, Gab and Reddit, by encouraging the development of machine learning models that perform binary and multi-class classification on English texts. The EDOS Task consisted of three hierarchical sub-tasks: binary sexism detection in sub-task A, category of sexism detection in sub-task B and fine-grained vector of sexism detection in sub-task C. My participation in EDOS comprised fine-tuning of different layer representations of Transformer-based pre-trained language models, namely BERT, AlBERT and RoBERTa, and ensemble learning via majority voting of the best performing models. Despite the low rank mainly due to a submission error, the system employed the largest version of the aforementioned Transformer models (BERT-Large, ALBERT-XXLarge-v1, ALBERT-XXLarge-v2, RoBERTa-Large), experimented with their multi-layer structure and aggregated their predictions so as to get the final result. My predictions on the test sets achieved 82.88{\%}, 63.77{\%} and 43.08{\%} Macro-F1 score in sub-tasks A, B and C respectively.",
}
| The paper describes the SemEval-2023 Task 10: {``}Explainable Detection of Online Sexism (EDOS){''}, which investigates the detection of sexism on two social media sites, Gab and Reddit, by encouraging the development of machine learning models that perform binary and multi-class classification on English texts. The EDOS Task consisted of three hierarchical sub-tasks: binary sexism detection in sub-task A, category of sexism detection in sub-task B and fine-grained vector of sexism detection in sub-task C. My participation in EDOS comprised fine-tuning of different layer representations of Transformer-based pre-trained language models, namely BERT, AlBERT and RoBERTa, and ensemble learning via majority voting of the best performing models. Despite the low rank mainly due to a submission error, the system employed the largest version of the aforementioned Transformer models (BERT-Large, ALBERT-XXLarge-v1, ALBERT-XXLarge-v2, RoBERTa-Large), experimented with their multi-layer structure and aggregated their predictions so as to get the final result. My predictions on the test sets achieved 82.88{\%}, 63.77{\%} and 43.08{\%} Macro-F1 score in sub-tasks A, B and C respectively. | [
"Christodoulou, Christina"
] | NLP_CHRISTINE at SemEval-2023 Task 10: Utilizing Transformer Contextual Representations and Ensemble Learning for Sexism Detection on Social Media Texts | semeval-1.81 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.82.bib | https://aclanthology.org/2023.semeval-1.82/ | @inproceedings{molazadeh-oskuee-etal-2023-scanlon,
title = "{T}.{M}. Scanlon at {S}em{E}val-2023 Task 4: Leveraging Pretrained Language Models for Human Value Argument Mining with Contrastive Learning",
author = "Molazadeh Oskuee, Milad and
Rahgouy, Mostafa and
Babaei Giglou, Hamed and
D Seals, Cheryl",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.82",
doi = "10.18653/v1/2023.semeval-1.82",
pages = "603--608",
abstract = "Human values are of great concern to social sciences which refer to when people have different beliefs and priorities of what is generally worth striving for and how to do so. This paper presents an approach for human value argument mining using contrastive learning to leverage the isotropy of language models. We fine-tuned DeBERTa-Large in a multi-label classification fashion and achieved an F1 score of 49{\%} for the task, resulting in a rank of 11. Our proposed model provides a valuable tool for analyzing arguments related to human values and highlights the significance of leveraging the isotropy of large language models for identifying human values.",
}
| Human values are of great concern to the social sciences; they refer to the differing beliefs and priorities people hold about what is generally worth striving for and how to do so. This paper presents an approach for human value argument mining using contrastive learning to leverage the isotropy of language models. We fine-tuned DeBERTa-Large in a multi-label classification fashion and achieved an F1 score of 49{\%} for the task, resulting in a rank of 11. Our proposed model provides a valuable tool for analyzing arguments related to human values and highlights the significance of leveraging the isotropy of large language models for identifying human values. | [
"Molazadeh Oskuee, Milad",
"Rahgouy, Mostafa",
"Babaei Giglou, Hamed",
"D Seals, Cheryl"
] | T.M. Scanlon at SemEval-2023 Task 4: Leveraging Pretrained Language Models for Human Value Argument Mining with Contrastive Learning | semeval-1.82 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.83.bib | https://aclanthology.org/2023.semeval-1.83/ | @inproceedings{pan-etal-2023-umuteam-semeval,
title = "{UMUT}eam at {S}em{E}val-2023 Task 3: Multilingual transformer-based model for detecting the Genre, the Framing, and the Persuasion Techniques in Online News",
author = "Pan, Ronghao and
Garc{\'\i}a-D{\'\i}az, Jos{\'e} Antonio and
{\'A}ngel Rodr{\'\i}guez-Garc{\'\i}a, Miguel and
Valencia-Garc{\'\i}a, Rafael",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.83",
doi = "10.18653/v1/2023.semeval-1.83",
pages = "609--615",
abstract = "In this manuscript, we describe the participation of the UMUTeam in SemEval-2023 Task 3, a shared task on detecting different aspects of news articles and other web documents, such as document category, framing dimensions, and persuasion technique in a multilingual setup. The task has been organized into three related subtasks, and we have been involved in the first two. Our approach is based on a fine-tuned multilingual transformer-based model that uses the dataset of all languages at once and a sentence transformer model to extract the most relevant chunk of a text for subtasks 1 and 2. The input data was truncated to 200 tokens with 50 overlaps using the sentence-transformer model to obtain the subset of text most related to the articles{'} titles. Our system has performed good results in subtask 1 in most languages, and in some cases, such as French and German, we have archived first place in the official leader board. As for task 2, our system has also performed very well in all languages, ranking in all the top 10.",
}
| In this manuscript, we describe the participation of the UMUTeam in SemEval-2023 Task 3, a shared task on detecting different aspects of news articles and other web documents, such as document category, framing dimensions, and persuasion technique in a multilingual setup. The task has been organized into three related subtasks, and we have been involved in the first two. Our approach is based on a fine-tuned multilingual transformer-based model that uses the dataset of all languages at once and a sentence transformer model to extract the most relevant chunk of a text for subtasks 1 and 2. The input data was truncated to 200 tokens with 50 overlaps using the sentence-transformer model to obtain the subset of text most related to the articles{'} titles. Our system has achieved good results in subtask 1 in most languages, and in some cases, such as French and German, we have achieved first place on the official leaderboard. As for task 2, our system has also performed very well in all languages, ranking in the top 10 for all of them. | [
"Pan, Ronghao",
"Garc{\\'\\i}a-D{\\'\\i}az, Jos{\\'e} Antonio",
"{\\'A}ngel Rodr{\\'\\i}guez-Garc{\\'\\i}a, Miguel",
"Valencia-Garc{\\'\\i}a, Rafael"
] | UMUTeam at SemEval-2023 Task 3: Multilingual transformer-based model for detecting the Genre, the Framing, and the Persuasion Techniques in Online News | semeval-1.83 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.84.bib | https://aclanthology.org/2023.semeval-1.84/ | @inproceedings{amihaesei-etal-2023-appeal,
title = "Appeal for Attention at {S}em{E}val-2023 Task 3: Data augmentation extension strategies for detection of online news persuasion techniques",
author = "Amihaesei, Sergiu and
Cornei, Laura and
Stoica, George",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.84",
doi = "10.18653/v1/2023.semeval-1.84",
pages = "616--623",
abstract = "In this paper, we proposed and explored the impact of four different dataset augmentation andextension strategies that we used for solving the subtask 3 of SemEval-2023 Task 3: multi-label persuasion techniques classification in a multi-lingual context. We consider two types of augmentation methods (one based on a modified version of synonym replacement and one based on translations) and two ways of extending the training dataset (using filtered data generated by GPT-3 and using a dataset from a previous competition). We studied the effects of the aforementioned techniques by using theaugmented and/or extended training dataset to fine-tune a pretrained XLM-RoBERTa-Large model. Using the augmentation methods alone, we managed to obtain 3rd place for English, 13th place for Italian and between the 5th to 9th places for the other 7 languages during the competition.",
}
| In this paper, we proposed and explored the impact of four different dataset augmentation and extension strategies that we used for solving the subtask 3 of SemEval-2023 Task 3: multi-label persuasion techniques classification in a multi-lingual context. We consider two types of augmentation methods (one based on a modified version of synonym replacement and one based on translations) and two ways of extending the training dataset (using filtered data generated by GPT-3 and using a dataset from a previous competition). We studied the effects of the aforementioned techniques by using the augmented and/or extended training dataset to fine-tune a pretrained XLM-RoBERTa-Large model. Using the augmentation methods alone, we managed to obtain 3rd place for English, 13th place for Italian and between the 5th to 9th places for the other 7 languages during the competition. | [
"Amihaesei, Sergiu",
"Cornei, Laura",
"Stoica, George"
] | Appeal for Attention at SemEval-2023 Task 3: Data augmentation extension strategies for detection of online news persuasion techniques | semeval-1.84 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.85.bib | https://aclanthology.org/2023.semeval-1.85/ | @inproceedings{pan-etal-2023-chick,
title = "Chick Adams at {S}em{E}val-2023 Task 5: Using {R}o{BERT}a and {D}e{BERT}a to Extract Post and Document-based Features for Clickbait Spoiling",
author = "Pan, Ronghao and
Garc{\'\i}a-D{\'\i}az, Jos{\'e} Antonio and
Garc{\'\i}a-S{\'a}nchez, Franciso and
Valencia-Garc{\'\i}a, Rafael",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.85",
doi = "10.18653/v1/2023.semeval-1.85",
pages = "624--628",
abstract = "In this manuscript, we describe the participation of the UMUTeam in SemEval-2023 Task 5, namely, Clickbait Spoiling, a shared task on identifying spoiler type (i.e., a phrase or a passage) and generating short texts that satisfy curiosity induced by a clickbait post, i.e. generating spoilers for the clickbait post. Our participation in Task 1 is based on fine-tuning pre-trained models, which consists in taking a pre-trained model and tuning it to fit the spoiler classification task. Our system has obtained excellent results in Task 1: we outperformed all proposed baselines, being within the Top 10 for most measures. Foremost, we reached Top 3 in F1 score in the passage spoiler ranking.",
}
| In this manuscript, we describe the participation of the UMUTeam in SemEval-2023 Task 5, namely, Clickbait Spoiling, a shared task on identifying spoiler type (i.e., a phrase or a passage) and generating short texts that satisfy curiosity induced by a clickbait post, i.e. generating spoilers for the clickbait post. Our participation in Task 1 is based on fine-tuning pre-trained models, which consists in taking a pre-trained model and tuning it to fit the spoiler classification task. Our system has obtained excellent results in Task 1: we outperformed all proposed baselines, being within the Top 10 for most measures. Foremost, we reached Top 3 in F1 score in the passage spoiler ranking. | [
"Pan, Ronghao",
"Garc{\\'\\i}a-D{\\'\\i}az, Jos{\\'e} Antonio",
"Garc{\\'\\i}a-S{\\'a}nchez, Franciso",
"Valencia-Garc{\\'\\i}a, Rafael"
] | Chick Adams at SemEval-2023 Task 5: Using RoBERTa and DeBERTa to Extract Post and Document-based Features for Clickbait Spoiling | semeval-1.85 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.86.bib | https://aclanthology.org/2023.semeval-1.86/ | @inproceedings{hromadka-etal-2023-kinitveraai,
title = "{KI}n{ITV}era{AI} at {S}em{E}val-2023 Task 3: Simple yet Powerful Multilingual Fine-Tuning for Persuasion Techniques Detection",
author = "Hromadka, Timo and
Smolen, Timotej and
Remis, Tomas and
Pecher, Branislav and
Srba, Ivan",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.86",
doi = "10.18653/v1/2023.semeval-1.86",
pages = "629--637",
abstract = "This paper presents the best-performing solution to the SemEval 2023 Task 3 on the subtask 3 dedicated to persuasion techniques detection. Due to a high multilingual character of the input data and a large number of 23 predicted labels (causing a lack of labelled data for some language-label combinations), we opted for fine-tuning pre-trained transformer-based language models. Conducting multiple experiments, we find the best configuration, which consists of large multilingual model (XLM-RoBERTa large) trained jointly on all input data, with carefully calibrated confidence thresholds for seen and surprise languages separately. Our final system performed the best on 6 out of 9 languages (including two surprise languages) and achieved highly competitive results on the remaining three languages.",
}
| This paper presents the best-performing solution to subtask 3 of SemEval-2023 Task 3, dedicated to persuasion techniques detection. Due to the highly multilingual character of the input data and the large number of 23 predicted labels (causing a lack of labelled data for some language-label combinations), we opted for fine-tuning pre-trained transformer-based language models. Conducting multiple experiments, we found the best configuration, which consists of a large multilingual model (XLM-RoBERTa large) trained jointly on all input data, with carefully calibrated confidence thresholds for seen and surprise languages separately. Our final system performed the best on 6 out of 9 languages (including two surprise languages) and achieved highly competitive results on the remaining three languages. | [
"Hromadka, Timo",
"Smolen, Timotej",
"Remis, Tomas",
"Pecher, Branislav",
"Srba, Ivan"
] | KInITVeraAI at SemEval-2023 Task 3: Simple yet Powerful Multilingual Fine-Tuning for Persuasion Techniques Detection | semeval-1.86 | Poster | 2304.11924 | [
"https://github.com/kinit-sk/semeval2023-task3-persuasion-techniques"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.semeval-1.87.bib | https://aclanthology.org/2023.semeval-1.87/ | @inproceedings{lazic-vujnovic-2023-jelenasteam,
title = "jelenasteam at {S}em{E}val-2023 Task 9: Quantification of Intimacy in Multilingual Tweets using Machine Learning Algorithms: A Comparative Study on the {MINT} Dataset",
author = "Lazi{\'c}, Jelena and
Vujnovi{\'c}, Sanja",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.87",
doi = "10.18653/v1/2023.semeval-1.87",
pages = "638--643",
abstract = "Intimacy is one of the fundamental aspects of our social life. It relates to intimate interactions with others, often including verbal self-disclosure. In this paper, we researched machine learning algorithms for quantification of the intimacy in the tweets. A new multilingual textual intimacy dataset named MINT was used. It contains tweets in 10 languages, including English, Spanish, French, Portuguese, Italian, and Chinese in both training and test datasets, and Dutch, Korean, Hindi, and Arabic in test data only. In the first experiment, linear regression models combine with the features and word embedding, and XLM-T deep learning model were compared. In the second experiment, cross-lingual learning between languanges was tested. In the third experiments, data was clustered using K-means. The results indicate that XLM-T pre-trained embedding might be a good choice for an unsupervised learning algorithm for intimacy detection.",
}
| Intimacy is one of the fundamental aspects of our social life. It relates to intimate interactions with others, often including verbal self-disclosure. In this paper, we researched machine learning algorithms for quantification of intimacy in tweets. A new multilingual textual intimacy dataset named MINT was used. It contains tweets in 10 languages, including English, Spanish, French, Portuguese, Italian, and Chinese in both training and test datasets, and Dutch, Korean, Hindi, and Arabic in test data only. In the first experiment, linear regression models combined with features and word embeddings, and the XLM-T deep learning model, were compared. In the second experiment, cross-lingual learning between languages was tested. In the third experiment, data was clustered using K-means. The results indicate that XLM-T pre-trained embeddings might be a good choice for an unsupervised learning algorithm for intimacy detection. | [
"Lazi{\\'c}, Jelena",
"Vujnovi{\\'c}, Sanja"
] | jelenasteam at SemEval-2023 Task 9: Quantification of Intimacy in Multilingual Tweets using Machine Learning Algorithms: A Comparative Study on the MINT Dataset | semeval-1.87 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.88.bib | https://aclanthology.org/2023.semeval-1.88/ | @inproceedings{lamsiyah-etal-2023-ul,
title = "{UL} {\&} {UM}6{P} at {S}em{E}val-2023 Task 10: Semi-Supervised Multi-task Learning for Explainable Detection of Online Sexism",
author = "Lamsiyah, Salima and
El Mahdaouy, Abdelkader and
Alami, Hamza and
Berrada, Ismail and
Schommer, Christoph",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.88",
doi = "10.18653/v1/2023.semeval-1.88",
pages = "644--650",
abstract = "This paper introduces our participating system to the Explainable Detection of Online Sexism (EDOS) SemEval-2023 - Task 10: Explainable Detection of Online Sexism. The EDOS shared task covers three hierarchical sub-tasks for sexism detection, coarse-grained and fine-grained categorization. We have investigated both single-task and multi-task learning based on RoBERTa transformer-based language models. For improving the results, we have performed further pre-training of RoBERTa on the provided unlabeled data. Besides, we have employed a small sample of the unlabeled data for semi-supervised learning using the minimum class-confusion loss. Our system has achieved macro F1 scores of 82.25{\textbackslash}{\%}, 67.35{\textbackslash}{\%}, and 49.8{\textbackslash}{\%} on Tasks A, B, and C, respectively.",
}
| This paper introduces our participating system to the Explainable Detection of Online Sexism (EDOS) SemEval-2023 - Task 10: Explainable Detection of Online Sexism. The EDOS shared task covers three hierarchical sub-tasks for sexism detection, coarse-grained and fine-grained categorization. We have investigated both single-task and multi-task learning based on RoBERTa transformer-based language models. For improving the results, we have performed further pre-training of RoBERTa on the provided unlabeled data. Besides, we have employed a small sample of the unlabeled data for semi-supervised learning using the minimum class-confusion loss. Our system has achieved macro F1 scores of 82.25{\%}, 67.35{\%}, and 49.8{\%} on Tasks A, B, and C, respectively. | [
"Lamsiyah, Salima",
"El Mahdaouy, Abdelkader",
"Alami, Hamza",
"Berrada, Ismail",
"Schommer, Christoph"
] | UL & UM6P at SemEval-2023 Task 10: Semi-Supervised Multi-task Learning for Explainable Detection of Online Sexism | semeval-1.88 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.89.bib | https://aclanthology.org/2023.semeval-1.89/ | @inproceedings{ma-etal-2023-ustc,
title = "{USTC}-{NELSLIP} at {S}em{E}val-2023 Task 2: Statistical Construction and Dual Adaptation of Gazetteer for Multilingual Complex {NER}",
author = "Ma, Jun-Yu and
Gu, Jia-Chen and
Qi, Jiajun and
Ling, Zhenhua and
Liu, Quan and
Zhao, Xiaoyi",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.89",
doi = "10.18653/v1/2023.semeval-1.89",
pages = "651--659",
abstract = "This paper describes the system developed by the USTC-NELSLIP team for SemEval-2023 Task 2 Multilingual Complex Named Entity Recognition (MultiCoNER II). We propose a method named Statistical Construction and Dual Adaptation of Gazetteer (SCDAG) for Multilingual Complex NER. The method first utilizes a statistics-based approach to construct a gazetteer. Secondly, the representations of gazetteer networks and language models are adapted by minimizing the KL divergence between them at the sentence-level and entity-level. Finally, these two networks are then integrated for supervised named entity recognition (NER) training. The proposed method is applied to several state-of-the-art Transformer-based NER models with a gazetteer built from Wikidata, and shows great generalization ability across them. The final predictions are derived from an ensemble of these trained models. Experimental results and detailed analysis verify the effectiveness of the proposed method. The official results show that our system ranked 1st on one track (Hindi) in this task.",
}
| This paper describes the system developed by the USTC-NELSLIP team for SemEval-2023 Task 2 Multilingual Complex Named Entity Recognition (MultiCoNER II). We propose a method named Statistical Construction and Dual Adaptation of Gazetteer (SCDAG) for Multilingual Complex NER. The method first utilizes a statistics-based approach to construct a gazetteer. Secondly, the representations of gazetteer networks and language models are adapted by minimizing the KL divergence between them at the sentence-level and entity-level. Finally, these two networks are then integrated for supervised named entity recognition (NER) training. The proposed method is applied to several state-of-the-art Transformer-based NER models with a gazetteer built from Wikidata, and shows great generalization ability across them. The final predictions are derived from an ensemble of these trained models. Experimental results and detailed analysis verify the effectiveness of the proposed method. The official results show that our system ranked 1st on one track (Hindi) in this task. | [
"Ma, Jun-Yu",
"Gu, Jia-Chen",
"Qi, Jiajun",
"Ling, Zhenhua",
"Liu, Quan",
"Zhao, Xiaoyi"
] | USTC-NELSLIP at SemEval-2023 Task 2: Statistical Construction and Dual Adaptation of Gazetteer for Multilingual Complex NER | semeval-1.89 | Poster | 2305.02517 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.semeval-1.90.bib | https://aclanthology.org/2023.semeval-1.90/ | @inproceedings{saha-srihari-2023-rudolf,
title = "Rudolf Christoph Eucken at {S}em{E}val-2023 Task 4: An Ensemble Approach for Identifying Human Values from Arguments",
author = "Saha, Sougata and
Srihari, Rohini",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.90",
doi = "10.18653/v1/2023.semeval-1.90",
pages = "660--663",
abstract = "The subtle human values we acquire through life experiences govern our thoughts and gets reflected in our speech. It plays an integral part in capturing the essence of our individuality and making it imperative to identify such values in computational systems that mimic human actions. Computational argumentation is a field that deals with the argumentation capabilities of humans and can benefit from identifying such values. Motivated by that, we present an ensemble approach for detecting human values from argument text. Our ensemble comprises three models: (i) An entailment-based model for determining the human values based on their descriptions, (ii) A Roberta-based classifier that predicts the set of human values from an argument. (iii) A Roberta-based classifier to predict a reduced set of human values from an argument. We experiment with different ways of combining the models and report our results. Furthermore, our best combination achieves an overall F1 score of 0.48 on the main test set.",
}
| The subtle human values we acquire through life experiences govern our thoughts and get reflected in our speech. They play an integral part in capturing the essence of our individuality, making it imperative to identify such values in computational systems that mimic human actions. Computational argumentation is a field that deals with the argumentation capabilities of humans and can benefit from identifying such values. Motivated by that, we present an ensemble approach for detecting human values from argument text. Our ensemble comprises three models: (i) an entailment-based model for determining the human values based on their descriptions, (ii) a RoBERTa-based classifier that predicts the set of human values from an argument, and (iii) a RoBERTa-based classifier that predicts a reduced set of human values from an argument. We experiment with different ways of combining the models and report our results. Furthermore, our best combination achieves an overall F1 score of 0.48 on the main test set. | [
"Saha, Sougata",
"Srihari, Rohini"
] | Rudolf Christoph Eucken at SemEval-2023 Task 4: An Ensemble Approach for Identifying Human Values from Arguments | semeval-1.90 | Poster | 2305.05335 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.semeval-1.91.bib | https://aclanthology.org/2023.semeval-1.91/ | @inproceedings{feng-etal-2023-ynu,
title = "{YNU}-{HPCC} at {S}em{E}val-2023 Task7: Multi-evidence Natural Language Inference for Clinical Trial Data Based a {B}io{BERT} Model",
author = "Feng, Chao and
Wang, Jin and
Zhang, Xuejie",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.91",
doi = "10.18653/v1/2023.semeval-1.91",
pages = "664--670",
abstract = "This paper describes the system for the YNU-HPCC team in subtask 1 of the SemEval-2023 Task 7: Multi-evidence Natural Language Inference for Clinical Trial Data (NLI4CT). This task requires judging the textual entailment relationship between the given CTR and the statement annotated by the expert annotator. This system is based on the fine-tuned Bi-directional Encoder Representation from Transformers for Biomedical Text Mining (BioBERT) model with supervised contrastive learning and back translation. Supervised contrastive learning is to enhance the classification, and back translation is to enhance the training data. Our system achieved relatively good results on the competition{'}s official leaderboard. The code of this paper is available at \url{https://github.com/facanhe/SemEval-2023-Task7}.",
}
| This paper describes the system for the YNU-HPCC team in subtask 1 of the SemEval-2023 Task 7: Multi-evidence Natural Language Inference for Clinical Trial Data (NLI4CT). This task requires judging the textual entailment relationship between the given CTR and the statement annotated by the expert annotator. This system is based on the fine-tuned Bi-directional Encoder Representation from Transformers for Biomedical Text Mining (BioBERT) model with supervised contrastive learning and back translation. Supervised contrastive learning is used to enhance the classification, and back translation is used to enhance the training data. Our system achieved relatively good results on the competition{'}s official leaderboard. The code of this paper is available at \url{https://github.com/facanhe/SemEval-2023-Task7}. | [
"Feng, Chao",
"Wang, Jin",
"Zhang, Xuejie"
] | YNU-HPCC at SemEval-2023 Task7: Multi-evidence Natural Language Inference for Clinical Trial Data Based a BioBERT Model | semeval-1.91 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.92.bib | https://aclanthology.org/2023.semeval-1.92/ | @inproceedings{zhang-etal-2023-srcb-semeval,
title = "{SRCB} at {S}em{E}val-2023 Task 2: A System of Complex Named Entity Recognition with External Knowledge",
author = "Zhang, Yuming and
Li, Hongyu and
Zhang, Yongwei and
Jiang, Shanshan and
Dong, Bin",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.92",
doi = "10.18653/v1/2023.semeval-1.92",
pages = "671--678",
abstract = "The MultiCoNER II shared task aims at detecting semantically ambiguous and complex named entities in short and low-context settings for multiple languages. The lack of context makes the recognition of ambiguous named entities challenging. To alleviate this issue, our team SRCB proposes an external knowledge based system, where we utilize 3 different types of external knowledge retrieved in different ways. Given an original text, our system retrieves the possible labels and the descriptions for each potential entity detected by a mention detection model. And we also retrieve a related document as extra context from Wikipedia for each original text. We concatenate the original text with the external knowledge as the input of NER models. The informative contextual representations with external knowledge significantly improve the NER performance in both Chinese and English tracks. Our system win the 3rd place in the Chinese track and the 6th place in the English track.",
}
| The MultiCoNER II shared task aims at detecting semantically ambiguous and complex named entities in short and low-context settings for multiple languages. The lack of context makes the recognition of ambiguous named entities challenging. To alleviate this issue, our team SRCB proposes an external knowledge based system, where we utilize 3 different types of external knowledge retrieved in different ways. Given an original text, our system retrieves the possible labels and the descriptions for each potential entity detected by a mention detection model. We also retrieve a related document from Wikipedia as extra context for each original text. We concatenate the original text with the external knowledge as the input of NER models. The informative contextual representations with external knowledge significantly improve the NER performance in both Chinese and English tracks. Our system won 3rd place in the Chinese track and 6th place in the English track. | [
"Zhang, Yuming",
"Li, Hongyu",
"Zhang, Yongwei",
"Jiang, Shanshan",
"Dong, Bin"
] | SRCB at SemEval-2023 Task 2: A System of Complex Named Entity Recognition with External Knowledge | semeval-1.92 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.93.bib | https://aclanthology.org/2023.semeval-1.93/ | @inproceedings{jin-etal-2023-pinganlifeinsurance,
title = "{P}ing{A}n{L}ife{I}nsurance at {S}em{E}val-2023 Task 12: Sentiment Analysis for Low-resource {A}frican Languages with Multi-Model Fusion",
author = "Jin, Meizhi and
Chen, Cheng and
Zhou, Mengyuan and
Yuan, Mengfei and
Hou, Xiaolong and
Du, Xiyang and
Jiang, Lianxin and
Li, Jianyu",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.93",
doi = "10.18653/v1/2023.semeval-1.93",
pages = "679--685",
abstract = "This paper describes our system used in the SemEval-2023 Task12: Sentiment Analysis for Low-resource African Languages using Twit- ter Dataset (Muhammad et al., 2023c). The AfriSenti-SemEval Shared Task 12 is based on a collection of Twitter datasets in 14 African languages for sentiment classification. It con- sists of three sub-tasks. Task A is a monolin- gual sentiment classification which covered 12 African languages. Task B is a multilingual sen- timent classification which combined training data from Task A (12 African languages). Task C is a zero-shot sentiment classification. We uti- lized various strategies, including monolingual training, multilingual mixed training, and trans- lation technology, and proposed a weighted vot- ing method that combined the results of differ- ent strategies. Substantially, in the monolingual subtask, our system achieved Top-1 in two lan- guages (Yoruba and Twi) and Top-2 in four languages (Nigerian Pidgin, Algerian Arabic, and Swahili, Multilingual). In the multilingual subtask, Our system achived Top-2 in publish leaderBoard.",
}
| This paper describes our system used in SemEval-2023 Task 12: Sentiment Analysis for Low-resource African Languages using Twitter Dataset (Muhammad et al., 2023c). The AfriSenti-SemEval Shared Task 12 is based on a collection of Twitter datasets in 14 African languages for sentiment classification. It consists of three sub-tasks. Task A is a monolingual sentiment classification which covered 12 African languages. Task B is a multilingual sentiment classification which combined training data from Task A (12 African languages). Task C is a zero-shot sentiment classification. We utilized various strategies, including monolingual training, multilingual mixed training, and translation technology, and proposed a weighted voting method that combined the results of different strategies. Substantially, in the monolingual subtask, our system achieved Top-1 in two languages (Yoruba and Twi) and Top-2 in four languages (Nigerian Pidgin, Algerian Arabic, and Swahili, Multilingual). In the multilingual subtask, our system achieved Top-2 on the published leaderboard. | [
"Jin, Meizhi",
"Chen, Cheng",
"Zhou, Mengyuan",
"Yuan, Mengfei",
"Hou, Xiaolong",
"Du, Xiyang",
"Jiang, Lianxin",
"Li, Jianyu"
] | PingAnLifeInsurance at SemEval-2023 Task 12: Sentiment Analysis for Low-resource African Languages with Multi-Model Fusion | semeval-1.93 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.94.bib | https://aclanthology.org/2023.semeval-1.94/ | @inproceedings{prasad-etal-2023-irit,
title = "{IRIT}{\_}{IRIS}{\_}{C} at {S}em{E}val-2023 Task 6: A Multi-level Encoder-based Architecture for Judgement Prediction of Legal Cases and their Explanation",
author = "Prasad, Nishchal and
Boughanem, Mohand and
Dkaki, Taoufiq",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.94",
doi = "10.18653/v1/2023.semeval-1.94",
pages = "686--692",
abstract = "This paper describes our system used for sub-task C (1 {\&} 2) in Task 6: LegalEval: Understanding Legal Texts. We propose a three-level encoder-based classification architecture that works by fine-tuning a BERT-based pre-trained encoder, and post-processing the embeddings extracted from its last layers, using transformer encoder layers and RNNs. We run ablation studies on the same and analyze itsperformance. To extract the explanations for the predicted class we develop an explanation extraction algorithm, exploiting the idea of a model{'}s occlusion sensitivity. We explored some training strategies with a detailed analysis of the dataset. Our system ranks 2nd (macro-F1 metric) for its sub-task C-1 and 7th (ROUGE-2 metric) for sub-task C-2.",
}
| This paper describes our system used for sub-task C (1 {\&} 2) in Task 6: LegalEval: Understanding Legal Texts. We propose a three-level encoder-based classification architecture that works by fine-tuning a BERT-based pre-trained encoder, and post-processing the embeddings extracted from its last layers, using transformer encoder layers and RNNs. We run ablation studies on the same and analyze its performance. To extract the explanations for the predicted class we develop an explanation extraction algorithm, exploiting the idea of a model{'}s occlusion sensitivity. We explored some training strategies with a detailed analysis of the dataset. Our system ranks 2nd (macro-F1 metric) for its sub-task C-1 and 7th (ROUGE-2 metric) for sub-task C-2. | [
"Prasad, Nishchal",
"Boughanem, Moh",
"",
"Dkaki, Taoufiq"
] | IRIT_IRIS_C at SemEval-2023 Task 6: A Multi-level Encoder-based Architecture for Judgement Prediction of Legal Cases and their Explanation | semeval-1.94 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.95.bib | https://aclanthology.org/2023.semeval-1.95/ | @inproceedings{villa-cueva-etal-2023-walter,
title = "Walter Burns at {S}em{E}val-2023 Task 5: {NLP}-{CIMAT} - Leveraging Model Ensembles for Clickbait Spoiling",
author = "Villa Cueva, Emilio and
Vallejo Aldana, Daniel and
S{\'a}nchez Vega, Fernando and
L{\'o}pez Monroy, Adri{\'a}n Pastor",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.95",
doi = "10.18653/v1/2023.semeval-1.95",
pages = "693--699",
abstract = "This paper describes our participation in the Clickbait challenge at SemEval 2023. In this work, we address the Clickbait classification task using transformers models in an ensemble configuration. We tackle the Spoiler Generation task using a two-level ensemble strategy of models trained for extractive QA, and selecting the best K candidates for multi-part spoilers. In the test partitions, our approaches obtained a classification accuracy of 0.716 for classification and a BLEU-4 score of 0.439 for spoiler generation.",
}
| This paper describes our participation in the Clickbait challenge at SemEval 2023. In this work, we address the Clickbait classification task using transformer models in an ensemble configuration. We tackle the Spoiler Generation task using a two-level ensemble strategy of models trained for extractive QA, and selecting the best K candidates for multi-part spoilers. In the test partitions, our approaches obtained an accuracy of 0.716 for classification and a BLEU-4 score of 0.439 for spoiler generation. | [
"Villa Cueva, Emilio",
"Vallejo Aldana, Daniel",
"S{\\'a}nchez Vega, Fern",
"o",
"L{\\'o}pez Monroy, Adri{\\'a}n Pastor"
] | Walter Burns at SemEval-2023 Task 5: NLP-CIMAT - Leveraging Model Ensembles for Clickbait Spoiling | semeval-1.95 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.96.bib | https://aclanthology.org/2023.semeval-1.96/ | @inproceedings{correa-dias-etal-2023-team,
title = "Team {INF}-{UFRGS} at {S}em{E}val-2023 Task 7: Supervised Contrastive Learning for Pair-level Sentence Classification and Evidence Retrieval",
author = "Corr{\^e}a Dias, Abel and
Dias, Filipe and
Moreira, Higor and
Moreira, Viviane and
Comba, Jo{\~a}o Luiz",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.96",
doi = "10.18653/v1/2023.semeval-1.96",
pages = "700--706",
abstract = "This paper describes the EvidenceSCL system submitted by our team (INF-UFRGS) to SemEval-2023 Task 7: Multi-Evidence Natural Language Inference for Clinical Trial Data (NLI4CT). NLI4CT is divided into two tasks, one for determining the inference relation between a pair of statements in clinical trials and a second for retrieving a set of supporting facts from the premises necessary to justify the label predicted in the first task. Our approach uses pair-level supervised contrastive learning to classify pairs of sentences. We trained EvidenceSCL on two datasets created from NLI4CT and additional data from other NLI datasets. We show that our approach can address both goals of NLI4CT, and although it reached an intermediate position, there is room for improvement in the technique.",
}
| This paper describes the EvidenceSCL system submitted by our team (INF-UFRGS) to SemEval-2023 Task 7: Multi-Evidence Natural Language Inference for Clinical Trial Data (NLI4CT). NLI4CT is divided into two tasks, one for determining the inference relation between a pair of statements in clinical trials and a second for retrieving a set of supporting facts from the premises necessary to justify the label predicted in the first task. Our approach uses pair-level supervised contrastive learning to classify pairs of sentences. We trained EvidenceSCL on two datasets created from NLI4CT and additional data from other NLI datasets. We show that our approach can address both goals of NLI4CT, and although it reached an intermediate position, there is room for improvement in the technique. | [
"Corr{\\^e}a Dias, Abel",
"Dias, Filipe",
"Moreira, Higor",
"Moreira, Viviane",
"Comba, Jo{\\~a}o Luiz"
] | Team INF-UFRGS at SemEval-2023 Task 7: Supervised Contrastive Learning for Pair-level Sentence Classification and Evidence Retrieval | semeval-1.96 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.97.bib | https://aclanthology.org/2023.semeval-1.97/ | @inproceedings{das-etal-2023-au,
title = "{AU}{\_}{NLP} at {S}em{E}val-2023 Task 10: Explainable Detection of Online Sexism Using Fine-tuned {R}o{BERT}a",
author = "Das, Amit and
Raychawdhary, Nilanjana and
Bhattacharya, Tathagata and
Dozier, Gerry and
Seals, Cheryl D.",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.97",
doi = "10.18653/v1/2023.semeval-1.97",
pages = "707--717",
abstract = "Social media is a concept developed to link people and make the globe smaller. But it has recently developed into a center for sexist memes that target especially women. As a result, there are more events of hostile actions and harassing remarks present online. In this paper, we introduce our system for the task of online sexism detection, a part of SemEval 2023 task 10. We introduce fine-tuned RoBERTa model to address this specific problem. The efficiency of the proposed strategy is demonstrated by the experimental results reported in this research.",
}
| Social media is a concept developed to link people and make the globe smaller. But it has recently developed into a center for sexist memes that especially target women. As a result, there are more events of hostile actions and harassing remarks present online. In this paper, we introduce our system for the task of online sexism detection, a part of SemEval 2023 task 10. We introduce a fine-tuned RoBERTa model to address this specific problem. The efficiency of the proposed strategy is demonstrated by the experimental results reported in this research. | [
"Das, Amit",
"Raychawdhary, Nilanjana",
"Bhattacharya, Tathagata",
"Dozier, Gerry",
"Seals, Cheryl D."
] | AU_NLP at SemEval-2023 Task 10: Explainable Detection of Online Sexism Using Fine-tuned RoBERTa | semeval-1.97 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.98.bib | https://aclanthology.org/2023.semeval-1.98/ | @inproceedings{nzeyimana-2023-kinlp,
title = "{KINLP} at {S}em{E}val-2023 Task 12: {K}inyarwanda Tweet Sentiment Analysis",
author = "Nzeyimana, Antoine",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.98",
doi = "10.18653/v1/2023.semeval-1.98",
pages = "718--723",
abstract = "This paper describes the system entered by the author to the SemEval-2023 Task 12: Sentiment analysis for African languages. The system focuses on the Kinyarwanda language and uses a language-specific model. Kinyarwanda morphology is modeled in a two tier transformer architecture and the transformer model is pre-trained on a large text corpus using multi-task masked morphology prediction. The model is deployed on an experimental platform that allows users to experiment with the pre-trained language model fine-tuning without the need to write machine learning code. Our final submission to the shared task achieves second ranking out of 34 teams in the competition, achieving 72.50{\%} weighted F1 score. Our analysis of the evaluation results highlights challenges in achieving high accuracy on the task and identifies areas for improvement.",
}
| This paper describes the system entered by the author in SemEval-2023 Task 12: Sentiment analysis for African languages. The system focuses on the Kinyarwanda language and uses a language-specific model. Kinyarwanda morphology is modeled in a two-tier transformer architecture and the transformer model is pre-trained on a large text corpus using multi-task masked morphology prediction. The model is deployed on an experimental platform that allows users to experiment with the pre-trained language model fine-tuning without the need to write machine learning code. Our final submission to the shared task achieves second ranking out of 34 teams in the competition, achieving 72.50{\%} weighted F1 score. Our analysis of the evaluation results highlights challenges in achieving high accuracy on the task and identifies areas for improvement. | [
"Nzeyimana, Antoine"
] | KINLP at SemEval-2023 Task 12: Kinyarwanda Tweet Sentiment Analysis | semeval-1.98 | Poster | 2304.12569 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.semeval-1.99.bib | https://aclanthology.org/2023.semeval-1.99/ | @inproceedings{rifat-etal-2023-acsmkrhr,
title = "{ACSMKRHR} at {S}em{E}val-2023 Task 10: Explainable Online Sexism Detection({EDOS})",
author = "Rifat, Rakib Hossain and
Shruti, Abanti and
Kamal, Marufa and
Sadeque, Farig",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.99",
doi = "10.18653/v1/2023.semeval-1.99",
pages = "724--732",
abstract = "People are expressing their opinions online for a lot of years now. Although these opinions and comments provide people an opportunity of expressing their views, there is a lot of hate speech that can be found online. More specifically, sexist comments are very popular affecting and creating a negative impact on a lot of women and girls online. This paper describes the approaches of the SemEval-2023 Task 10 competition for Explainable Online Sexism Detection (EDOS). The task has been divided into 3 subtasks, introducing different classes of sexist comments. We have approached these tasks using the bert-cased and uncased models which are trained on the annotated dataset that has been provided in the competition. Task A provided the best F1 score of 80{\%} on the test set, and tasks B and C provided 58{\%} and 40{\%} respectively.",
}
| People have been expressing their opinions online for many years now. Although these opinions and comments give people an opportunity to express their views, there is a lot of hate speech that can be found online. More specifically, sexist comments are very common, affecting and creating a negative impact on many women and girls online. This paper describes our approaches for the SemEval-2023 Task 10 competition on Explainable Online Sexism Detection (EDOS). The task has been divided into 3 subtasks, introducing different classes of sexist comments. We have approached these tasks using the BERT cased and uncased models, which are trained on the annotated dataset provided in the competition. Task A provided the best F1 score of 80{\%} on the test set, and tasks B and C provided 58{\%} and 40{\%} respectively. | [
"Rifat, Rakib Hossain",
"Shruti, Abanti",
"Kamal, Marufa",
"Sadeque, Farig"
] | ACSMKRHR at SemEval-2023 Task 10: Explainable Online Sexism Detection(EDOS) | semeval-1.99 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.100.bib | https://aclanthology.org/2023.semeval-1.100/ | @inproceedings{cai-etal-2023-ynu,
title = "{YNU}-{HPCC} at {S}em{E}val-2023 Task 9: Pretrained Language Model for Multilingual Tweet Intimacy Analysis",
author = "Cai, Qisheng and
Wang, Jin and
Zhang, Xuejie",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.100",
doi = "10.18653/v1/2023.semeval-1.100",
pages = "733--738",
abstract = "This paper describes our fine-tuned pretrained language model for task 9 (Multilingual Tweet Intimacy Analysis, MTIA) of the SemEval 2023 competition. MTIA aims to quantitatively analyze tweets in 6 languages for intimacy, giving a score from 1 to 5. The challenge of MTIA is in semantically extracting information from code-mixed texts. To alleviate this difficulty, we suggested a solution that combines attention and memory mechanisms. The preprocessed tweets are input to the XLM-T layer to get sentence embeddings and subsequently to the bidirectional GRU layer to obtain intimacy ratings. Experimental results show an improvement in the overall performance of our model in both seen and unseen languages.",
}
| This paper describes our fine-tuned pretrained language model for task 9 (Multilingual Tweet Intimacy Analysis, MTIA) of the SemEval 2023 competition. MTIA aims to quantitatively analyze tweets in 6 languages for intimacy, giving a score from 1 to 5. The challenge of MTIA is in semantically extracting information from code-mixed texts. To alleviate this difficulty, we suggested a solution that combines attention and memory mechanisms. The preprocessed tweets are input to the XLM-T layer to get sentence embeddings and subsequently to the bidirectional GRU layer to obtain intimacy ratings. Experimental results show an improvement in the overall performance of our model in both seen and unseen languages. | [
"Cai, Qisheng",
"Wang, Jin",
"Zhang, Xuejie"
] | YNU-HPCC at SemEval-2023 Task 9: Pretrained Language Model for Multilingual Tweet Intimacy Analysis | semeval-1.100 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.101.bib | https://aclanthology.org/2023.semeval-1.101/ | @inproceedings{luzzon-liebeskind-2023-jct,
title = "{JCT}{\_}{DM} at {S}em{E}val-2023 Task 10: Detection of Online Sexism: from Classical Models to Transformers",
author = "Luzzon, Efrat and
Liebeskind, Chaya",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.101",
doi = "10.18653/v1/2023.semeval-1.101",
pages = "739--743",
abstract = "This paper presents the experimentation of systems for detecting online sexism relying on classical models, deep learning models, and transformer-based models. The systems aim to provide a comprehensive approach to handling the intricacies of online language, including slang and neologisms. The dataset consists of labeled and unlabeled data from Gab and Reddit, which allows for the development of unsupervised or semi-supervised models. The system utilizes TF-IDF with classical models, bidirectional models with embedding, and pre-trained transformer models. The paper discusses the experimental setup and results, demonstrating the effectiveness of the system in detecting online sexism.",
}
| This paper presents experiments with systems for detecting online sexism relying on classical models, deep learning models, and transformer-based models. The systems aim to provide a comprehensive approach to handling the intricacies of online language, including slang and neologisms. The dataset consists of labeled and unlabeled data from Gab and Reddit, which allows for the development of unsupervised or semi-supervised models. The system utilizes TF-IDF with classical models, bidirectional models with embedding, and pre-trained transformer models. The paper discusses the experimental setup and results, demonstrating the effectiveness of the system in detecting online sexism. | [
"Luzzon, Efrat",
"Liebeskind, Chaya"
] | JCT_DM at SemEval-2023 Task 10: Detection of Online Sexism: from Classical Models to Transformers | semeval-1.101 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.102.bib | https://aclanthology.org/2023.semeval-1.102/ | @inproceedings{ma-etal-2023-pai-semeval,
title = "{PAI} at {S}em{E}val-2023 Task 2: A Universal System for Named Entity Recognition with External Entity Information",
author = "Ma, Long and
Lu, Kai and
Che, Tianbo and
Huang, Hailong and
Gao, Weiguo and
Li, Xuan",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.102",
doi = "10.18653/v1/2023.semeval-1.102",
pages = "744--750",
abstract = "The MultiCoNER II task aims to detect complex, ambiguous, and fine-grained named entities in low-context situations and noisy scenarios like the presence of spelling mistakes and typos for multiple languages. The task poses significant challenges due to the scarcity of contextual information, the high granularity of the entities(up to 33 classes), and the interference of noisy data. To address these issues, our team PAI proposes a universal Named Entity Recognition (NER) system that integrates external entity information to improve performance. Specifically, our system retrieves entities with properties from the knowledge base (i.e. Wikipedia) for a given text, then concatenates entity information with the input sentence and feeds it into Transformer-based models. Finally, our system wins 2 first places, 4 second places, and 1 third place out of 13 tracks. The code is publicly available at \url{https://github.com/diqiuzhuanzhuan/semeval-2023}.",
}
| The MultiCoNER II task aims to detect complex, ambiguous, and fine-grained named entities in low-context situations and noisy scenarios like the presence of spelling mistakes and typos for multiple languages. The task poses significant challenges due to the scarcity of contextual information, the high granularity of the entities (up to 33 classes), and the interference of noisy data. To address these issues, our team PAI proposes a universal Named Entity Recognition (NER) system that integrates external entity information to improve performance. Specifically, our system retrieves entities with properties from the knowledge base (i.e. Wikipedia) for a given text, then concatenates entity information with the input sentence and feeds it into Transformer-based models. Finally, our system wins 2 first places, 4 second places, and 1 third place out of 13 tracks. The code is publicly available at \url{https://github.com/diqiuzhuanzhuan/semeval-2023}. | [
"Ma, Long",
"Lu, Kai",
"Che, Tianbo",
"Huang, Hailong",
"Gao, Weiguo",
"Li, Xuan"
] | PAI at SemEval-2023 Task 2: A Universal System for Named Entity Recognition with External Entity Information | semeval-1.102 | Poster | 2305.06099 | [
"https://github.com/diqiuzhuanzhuan/semeval-2023"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.semeval-1.103.bib | https://aclanthology.org/2023.semeval-1.103/ | @inproceedings{jain-etal-2023-nits,
title = "{NITS}{\_}{L}egal at {S}em{E}val-2023 Task 6: Rhetorical Roles Prediction of {I}ndian Legal Documents via Sentence Sequence Labeling Approach",
author = "Jain, Deepali and
Borah, Malaya Dutta and
Biswas, Anupam",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.103",
doi = "10.18653/v1/2023.semeval-1.103",
pages = "751--757",
abstract = "Legal documents are notorious for their complexity and domain-specific language, making them challenging for legal practitioners as well as non-experts to comprehend. To address this issue, the LegalEval 2023 track proposed several shared tasks, including the task of Rhetorical Roles Prediction (Task A). We participated as NITS{\_}Legal team in Task A and conducted exploratory experiments to improve our understanding of the task. Our results suggest that sequence context is crucial in performing rhetorical roles prediction. Given the lengthy nature of legal documents, we propose a BiLSTM-based sentence sequence labeling approach that uses a local context-incorporated dataset created from the original dataset. To better represent the sentences during training, we extract legal domain-specific sentence embeddings from a Legal BERT model. Our experimental findings emphasize the importance of considering local context instead of treating each sentence independently to achieve better performance in this task. Our approach has the potential to improve the accessibility and usability of legal documents.",
}
| Legal documents are notorious for their complexity and domain-specific language, making them challenging for legal practitioners as well as non-experts to comprehend. To address this issue, the LegalEval 2023 track proposed several shared tasks, including the task of Rhetorical Roles Prediction (Task A). We participated as NITS{\_}Legal team in Task A and conducted exploratory experiments to improve our understanding of the task. Our results suggest that sequence context is crucial in performing rhetorical roles prediction. Given the lengthy nature of legal documents, we propose a BiLSTM-based sentence sequence labeling approach that uses a local context-incorporated dataset created from the original dataset. To better represent the sentences during training, we extract legal domain-specific sentence embeddings from a Legal BERT model. Our experimental findings emphasize the importance of considering local context instead of treating each sentence independently to achieve better performance in this task. Our approach has the potential to improve the accessibility and usability of legal documents. | [
"Jain, Deepali",
"Borah, Malaya Dutta",
"Biswas, Anupam"
] | NITS_Legal at SemEval-2023 Task 6: Rhetorical Roles Prediction of Indian Legal Documents via Sentence Sequence Labeling Approach | semeval-1.103 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.104.bib | https://aclanthology.org/2023.semeval-1.104/ | @inproceedings{pichardo-estevez-etal-2023-i2c,
title = "{I}2{C}-{H}uelva at {S}em{E}val-2023 Task 9: Analysis of Intimacy in Multilingual Tweets Using Resampling Methods and Transformers",
author = "Pichardo Estevez, Abel and
Mata V{\'a}zquez, Jacinto and
Pach{\'o}n {\'A}lvarez, Victoria and
El Balima Cordero, Nordin",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.104",
doi = "10.18653/v1/2023.semeval-1.104",
pages = "758--762",
abstract = "Nowadays, intimacy is a fundamental aspect of how we relate to other people in social settings. The most frequent way in which we can determine a high level of intimacy is in the use of certain emoticons, curse words, verbs, etc. This paper presents the approach developed to solve SemEval 2023 task 9: Multiligual Tweet Intimacy Analysis. To address the task, a transfer learning approach was conducted by fine tuning various pre-trained languagemodels. Since the dataset supplied by the organizer was highly imbalanced, our main strategy to obtain high prediction values was the implementation of different oversampling and undersampling techniques on the training set. Our final submission achieved an overall Pearson{'}s r of 0.497.",
}
| Nowadays, intimacy is a fundamental aspect of how we relate to other people in social settings. The most frequent way in which we can determine a high level of intimacy is in the use of certain emoticons, curse words, verbs, etc. This paper presents the approach developed to solve SemEval 2023 task 9: Multilingual Tweet Intimacy Analysis. To address the task, a transfer learning approach was conducted by fine-tuning various pre-trained language models. Since the dataset supplied by the organizer was highly imbalanced, our main strategy to obtain high prediction values was the implementation of different oversampling and undersampling techniques on the training set. Our final submission achieved an overall Pearson{'}s r of 0.497. | [
"Pichardo Estevez, Abel",
"Mata V{\\'a}zquez, Jacinto",
"Pach{\\'o}n {\\'A}lvarez, Victoria",
"El Balima Cordero, Nordin"
] | I2C-Huelva at SemEval-2023 Task 9: Analysis of Intimacy in Multilingual Tweets Using Resampling Methods and Transformers | semeval-1.104 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.105.bib | https://aclanthology.org/2023.semeval-1.105/ | @inproceedings{felicia-fudulu-etal-2023-i2c,
title = "{I}2{C}-{H}uelva at {S}em{E}val-2023 Task 10: Ensembling Transformers Models for the Detection of Online Sexism",
author = "Felicia Fudulu, Lavinia and
Rodriguez Tenorio, Alberto and
Pach{\'o}n {\'A}lvarez, Victoria and
Mata V{\'a}zquez, Jacinto",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.105",
doi = "10.18653/v1/2023.semeval-1.105",
pages = "763--769",
abstract = "This work details our approach for addressing Tasks A and B of the Semeval 2023 Task 10: Explainable Detection of Online Sexism (EDOS). For Task A a simple ensemble based of majority vote system was presented. To build our proposal, first a review of transformers was carried out and the 3 best performing models were selected to be part of the ensemble. Next, for these models, the best hyperpameters were searched using a reduced data set. Finally, we trained these models using more data. During the development phase, our ensemble system achieved an f1-score of 0.8403. For task B, we developed a model based on the deBERTa transformer, utilizing the hyperparameters identified for task A. During the development phase, our proposed model attained an f1-score of 0.6467. Overall, our methodology demonstrates an effective approach to the tasks, leveraging advanced machine learning techniques and hyperparameters searches to achieve high performance in detecting and classifying instances of sexism in online text.",
}
| This work details our approach for addressing Tasks A and B of the Semeval 2023 Task 10: Explainable Detection of Online Sexism (EDOS). For Task A, a simple ensemble system based on majority voting was presented. To build our proposal, first a review of transformers was carried out and the 3 best-performing models were selected to be part of the ensemble. Next, for these models, the best hyperparameters were searched for using a reduced data set. Finally, we trained these models using more data. During the development phase, our ensemble system achieved an f1-score of 0.8403. For task B, we developed a model based on the deBERTa transformer, utilizing the hyperparameters identified for task A. During the development phase, our proposed model attained an f1-score of 0.6467. Overall, our methodology demonstrates an effective approach to the tasks, leveraging advanced machine learning techniques and hyperparameter searches to achieve high performance in detecting and classifying instances of sexism in online text. | [
"Felicia Fudulu, Lavinia",
"Rodriguez Tenorio, Alberto",
"Pach{\\'o}n {\\'A}lvarez, Victoria",
"Mata V{\\'a}zquez, Jacinto"
] | I2C-Huelva at SemEval-2023 Task 10: Ensembling Transformers Models for the Detection of Online Sexism | semeval-1.105 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.106.bib | https://aclanthology.org/2023.semeval-1.106/ | @inproceedings{zhang-etal-2023-zbl2w,
title = "{ZBL}2{W} at {S}em{E}val-2023 Task 9: A Multilingual Fine-tuning Model with Data Augmentation for Tweet Intimacy Analysis",
author = "Zhang, Hao and
Wu, Youlin and
Lu, Junyu and
Bai, Zewen and
Wu, Jiangming and
Lin, Hongfei and
Zhang, Shaowu",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.106",
doi = "10.18653/v1/2023.semeval-1.106",
pages = "770--775",
abstract = "This paper describes our system used in the SemEval-2023 Task 9 Multilingual Tweet Intimacy Analysis. There are two key challenges in this task: the complexity of multilingual and zero-shot cross-lingual learning, and the difficulty of semantic mining of tweet intimacy. To solve the above problems, our system extracts contextual representations from the pretrained language models, XLM-T, and employs various optimization methods, including adversarial training, data augmentation, ordinal regression loss and special training strategy. Our system ranked 14th out of 54 participating teams on the leaderboard and ranked 10th on predicting languages not in the training data. Our code is available on Github.",
}
| This paper describes our system used in the SemEval-2023 Task 9 Multilingual Tweet Intimacy Analysis. There are two key challenges in this task: the complexity of multilingual and zero-shot cross-lingual learning, and the difficulty of semantic mining of tweet intimacy. To solve the above problems, our system extracts contextual representations from the pretrained language model XLM-T, and employs various optimization methods, including adversarial training, data augmentation, ordinal regression loss and a special training strategy. Our system ranked 14th out of 54 participating teams on the leaderboard and ranked 10th on predicting languages not in the training data. Our code is available on Github. | [
"Zhang, Hao",
"Wu, Youlin",
"Lu, Junyu",
"Bai, Zewen",
"Wu, Jiangming",
"Lin, Hongfei",
"Zhang, Shaowu"
] | ZBL2W at SemEval-2023 Task 9: A Multilingual Fine-tuning Model with Data Augmentation for Tweet Intimacy Analysis | semeval-1.106 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.107.bib | https://aclanthology.org/2023.semeval-1.107/ | @inproceedings{chen-etal-2023-ncuee-nlp,
title = "{NCUEE}-{NLP} at {S}em{E}val-2023 Task 7: Ensemble Biomedical {L}ink{BERT} Transformers in Multi-evidence Natural Language Inference for Clinical Trial Data",
author = "Chen, Chao-Yi and
Tien, Kao-Yuan and
Cheng, Yuan-Hao and
Lee, Lung-Hao",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.107",
doi = "10.18653/v1/2023.semeval-1.107",
pages = "776--781",
abstract = "This study describes the model design of the NCUEE-NLP system for the SemEval-2023 NLI4CT task that focuses on multi-evidence natural language inference for clinical trial data. We use the LinkBERT transformer in the biomedical domain (denoted as BioLinkBERT) as our main system architecture. First, a set of sentences in clinical trial reports is extracted as evidence for premise-statement inference. This identified evidence is then used to determine the inference relation (i.e., entailment or contradiction). Finally, a soft voting ensemble mechanism is applied to enhance the system performance. For Subtask 1 on textual entailment, our best submission had an F1-score of 0.7091, ranking sixth among all 30 participating teams. For Subtask 2 on evidence retrieval, our best result obtained an F1-score of 0.7940, ranking ninth of 19 submissions.",
}
| This study describes the model design of the NCUEE-NLP system for the SemEval-2023 NLI4CT task that focuses on multi-evidence natural language inference for clinical trial data. We use the LinkBERT transformer in the biomedical domain (denoted as BioLinkBERT) as our main system architecture. First, a set of sentences in clinical trial reports is extracted as evidence for premise-statement inference. This identified evidence is then used to determine the inference relation (i.e., entailment or contradiction). Finally, a soft voting ensemble mechanism is applied to enhance the system performance. For Subtask 1 on textual entailment, our best submission had an F1-score of 0.7091, ranking sixth among all 30 participating teams. For Subtask 2 on evidence retrieval, our best result obtained an F1-score of 0.7940, ranking ninth of 19 submissions. | [
"Chen, Chao-Yi",
"Tien, Kao-Yuan",
"Cheng, Yuan-Hao",
"Lee, Lung-Hao"
] | NCUEE-NLP at SemEval-2023 Task 7: Ensemble Biomedical LinkBERT Transformers in Multi-evidence Natural Language Inference for Clinical Trial Data | semeval-1.107 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.108.bib | https://aclanthology.org/2023.semeval-1.108/ | @inproceedings{xu-ding-2023-tsingriver,
title = "Tsingriver at {S}em{E}val-2023 Task 10: Labeled Data Augmentation in Consistency Training",
author = "Xu, Yehui and
Ding, Haiyan",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.108",
doi = "10.18653/v1/2023.semeval-1.108",
pages = "782--786",
abstract = "Semi-supervised learning has promising performance in deep learning, one of the approaches is consistency training on a large amount of unlabeled data to constrain model predictions to be invariant to input noise. However, The degree of correlation between unlabeled data and task objective directly affects model prediction performance. This paper describes our system designed for SemEval-2023 Task 10: Explainable Detection of Online Sexism. We utilize a consistency training framework and data augmentation as the main strategy to train a model. The score obtained by our method is 0.8180 in subtask A, ranking 57 in all the teams.",
}
| Semi-supervised learning has shown promising performance in deep learning; one of its approaches is consistency training on a large amount of unlabeled data to constrain model predictions to be invariant to input noise. However, the degree of correlation between the unlabeled data and the task objective directly affects model prediction performance. This paper describes our system designed for SemEval-2023 Task 10: Explainable Detection of Online Sexism. We utilize a consistency training framework and data augmentation as the main strategy to train a model. The score obtained by our method is 0.8180 in subtask A, ranking 57th among all the teams. | [
"Xu, Yehui",
"Ding, Haiyan"
] | Tsingriver at SemEval-2023 Task 10: Labeled Data Augmentation in Consistency Training | semeval-1.108 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.109.bib | https://aclanthology.org/2023.semeval-1.109/ | @inproceedings{rodrigo-gines-etal-2023-unedmediabiasteam,
title = "{U}ned{M}edia{B}ias{T}eam @ {S}em{E}val-2023 Task 3: Can We Detect Persuasive Techniques Transferring Knowledge From Media Bias Detection?",
author = "Rodrigo-Gin{\'e}s, Francisco-Javier and
Plaza, Laura and
Carrillo-de-Albornoz, Jorge",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.109",
doi = "10.18653/v1/2023.semeval-1.109",
pages = "787--793",
abstract = "How similar is the detection of media bias to the detection of persuasive techniques? We have explored how transferring knowledge from one task to the other may help to improve the performance. This paper presents the systems developed for participating in the SemEval-2023 Task 3: Detecting the Genre, the Framing, and the Persuasion Techniques in Online News in a Multi-lingual Setup. We have participated in both the subtask 1: News Genre Categorisation, and the subtask 3: Persuasion Techniques Detection. Our solutions are based on two-stage fine-tuned multilingual models. We evaluated our approach on the 9 languages provided in the task. Our results show that the use of transfer learning from media bias detection to persuasion techniques detection is beneficial for the subtask of detecting the genre (macro F1-score of 0.523 in the English test set) as it improves previous results, but not for the detection of persuasive techniques (micro F1-score of 0.24 in the English test set).",
}
| How similar is the detection of media bias to the detection of persuasive techniques? We have explored how transferring knowledge from one task to the other may help to improve the performance. This paper presents the systems developed for participating in the SemEval-2023 Task 3: Detecting the Genre, the Framing, and the Persuasion Techniques in Online News in a Multi-lingual Setup. We have participated in both the subtask 1: News Genre Categorisation, and the subtask 3: Persuasion Techniques Detection. Our solutions are based on two-stage fine-tuned multilingual models. We evaluated our approach on the 9 languages provided in the task. Our results show that the use of transfer learning from media bias detection to persuasion techniques detection is beneficial for the subtask of detecting the genre (macro F1-score of 0.523 in the English test set) as it improves previous results, but not for the detection of persuasive techniques (micro F1-score of 0.24 in the English test set). | [
"Rodrigo-Gin{\\'e}s, Francisco-Javier",
"Plaza, Laura",
"Carrillo-de-Albornoz, Jorge"
] | UnedMediaBiasTeam @ SemEval-2023 Task 3: Can We Detect Persuasive Techniques Transferring Knowledge From Media Bias Detection? | semeval-1.109 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.110.bib | https://aclanthology.org/2023.semeval-1.110/ | @inproceedings{pritzkau-2023-nl4ia,
title = "{NL}4{IA} at {S}em{E}val-2023 Task 3: A Comparison of Sequence Classification and Token Classification to Detect Persuasive Techniques",
author = "Pritzkau, Albert",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.110",
doi = "10.18653/v1/2023.semeval-1.110",
pages = "794--799",
abstract = "The following system description presents our approach to the detection of persuasion techniques in online news. The given task has been framed as a multi-label classification problem. In a multi-label classification problem, each input chunkin this case paragraphis assigned one of several class labels. Span level annotations were also provided. In order to assign class labels to the given documents, we opted for RoBERTa (A Robustly Optimized BERT Pretraining Approach) for both approachessequence and token classification. Starting off with a pre-trained model for language representation, we fine-tuned this model on the given classification task with the provided annotated data in supervised training steps.",
}
| The following system description presents our approach to the detection of persuasion techniques in online news. The given task has been framed as a multi-label classification problem. In a multi-label classification problem, each input chunk, in this case a paragraph, is assigned one of several class labels. Span-level annotations were also provided. In order to assign class labels to the given documents, we opted for RoBERTa (A Robustly Optimized BERT Pretraining Approach) for both approaches, sequence and token classification. Starting off with a pre-trained model for language representation, we fine-tuned this model on the given classification task with the provided annotated data in supervised training steps. | [
"Pritzkau, Albert"
] | NL4IA at SemEval-2023 Task 3: A Comparison of Sequence Classification and Token Classification to Detect Persuasive Techniques | semeval-1.110 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.111.bib | https://aclanthology.org/2023.semeval-1.111/ | @inproceedings{choudhary-etal-2023-iitd,
title = "{IITD} at {S}em{E}val-2023 Task 2: A Multi-Stage Information Retrieval Approach for Fine-Grained Named Entity Recognition",
author = "Choudhary, Shivani and
Chatterjee, Niladri and
Saha, Subir",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.111",
doi = "10.18653/v1/2023.semeval-1.111",
pages = "800--806",
abstract = "MultiCoNER-II is a fine-grained Named Entity Recognition (NER) task that aims to identify ambiguous and complex named entities in multiple languages, with a small amount of contextual information available. To address this task, we propose a multi-stage information retrieval (IR) pipeline that improves the performance of language models for fine-grained NER. Our approach involves leveraging a combination of a BM25-based IR model and a language model to retrieve relevant passages from a corpus. These passages are then used to train a model that utilizes a weighted average of losses. The prediction is generated by a decoder stack that includes a projection layer and conditional random field. To demonstrate the effectiveness of our approach, we participated in the English track of the MultiCoNER-II competition. Our approach yielded promising results, which we validated through detailed analysis.",
}
| MultiCoNER-II is a fine-grained Named Entity Recognition (NER) task that aims to identify ambiguous and complex named entities in multiple languages, with a small amount of contextual information available. To address this task, we propose a multi-stage information retrieval (IR) pipeline that improves the performance of language models for fine-grained NER. Our approach involves leveraging a combination of a BM25-based IR model and a language model to retrieve relevant passages from a corpus. These passages are then used to train a model that utilizes a weighted average of losses. The prediction is generated by a decoder stack that includes a projection layer and conditional random field. To demonstrate the effectiveness of our approach, we participated in the English track of the MultiCoNER-II competition. Our approach yielded promising results, which we validated through detailed analysis. | [
"Choudhary, Shivani",
"Chatterjee, Niladri",
"Saha, Subir"
] | IITD at SemEval-2023 Task 2: A Multi-Stage Information Retrieval Approach for Fine-Grained Named Entity Recognition | semeval-1.111 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.112.bib | https://aclanthology.org/2023.semeval-1.112/ | @inproceedings{gonzalez-gallardo-etal-2023-l3i,
title = "{L}3{I}++ at {S}em{E}val-2023 Task 2: Prompting for Multilingual Complex Named Entity Recognition",
author = "Gonzalez-Gallardo, Carlos-Emiliano and
Tran, Thi Hong Hanh and
Girdhar, Nancy and
Boros, Emanuela and
Moreno, Jose G. and
Doucet, Antoine",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.112",
doi = "10.18653/v1/2023.semeval-1.112",
pages = "807--814",
abstract = "This paper summarizes the participation of the L3i laboratory of the University of La Rochelle in the SemEval-2023 Task 2, Multilingual Complex Named Entity Recognition (MultiCoNER II). Similar to MultiCoNER I, the task seeks to develop methods to detect semantic ambiguous and complex entities in short and low-context settings. However, MultiCoNER II adds a fine-grained entity taxonomy with over 30 entity types and corrupted data on the test partitions. We approach these complications following prompt-based learning as (1) a ranking problem using a seq2seq framework, and (2) an extractive question-answering task. Our findings show that even if prompting techniques have a similar recall to fine-tuned hierarchical language model-based encoder methods, precision tends to be more affected.",
}
| This paper summarizes the participation of the L3i laboratory of the University of La Rochelle in the SemEval-2023 Task 2, Multilingual Complex Named Entity Recognition (MultiCoNER II). Similar to MultiCoNER I, the task seeks to develop methods to detect semantically ambiguous and complex entities in short and low-context settings. However, MultiCoNER II adds a fine-grained entity taxonomy with over 30 entity types and corrupted data on the test partitions. We approach these complications following prompt-based learning as (1) a ranking problem using a seq2seq framework, and (2) an extractive question-answering task. Our findings show that even if prompting techniques have a similar recall to fine-tuned hierarchical language model-based encoder methods, precision tends to be more affected. | [
"Gonzalez-Gallardo, Carlos-Emiliano",
"Tran, Thi Hong Hanh",
"Girdhar, Nancy",
"Boros, Emanuela",
"Moreno, Jose G.",
"Doucet, Antoine"
] | L3I++ at SemEval-2023 Task 2: Prompting for Multilingual Complex Named Entity Recognition | semeval-1.112 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.113.bib | https://aclanthology.org/2023.semeval-1.113/ | @inproceedings{vetagiri-etal-2023-cnlp,
title = "{CNLP}-{NITS} at {S}em{E}val-2023 Task 10: Online sexism prediction, {PREDHATE}!",
author = "Vetagiri, Advaitha and
Adhikary, Prottay and
Pakray, Partha and
Das, Amitava",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.113",
doi = "10.18653/v1/2023.semeval-1.113",
pages = "815--822",
abstract = "Online sexism is a rising issue that threatens women{'}s safety, fosters hostile situations, and upholds social inequities. We describe a task SemEval-2023 Task 10 for creating English-language models that can precisely identify and categorize sexist content on internet forums and social platforms like Gab and Reddit as well to provide an explainability in order to address this problem. The problem is divided into three hierarchically organized subtasks: binary sexism detection, sexism by category, and sexism by fine-grained vector. The dataset consists of 20,000 labelled entries. For Task A, pertained models like Convolutional Neural Network (CNN) and Bidirectional Long Short-Term Memory (BiLSTM), which is called CNN-BiLSTM and Generative Pretrained Transformer 2 (GPT-2) models were used, as well as the GPT-2 model for Task B and C, and have provided experimental configurations. According to our findings, the GPT-2 model performs better than the CNN-BiLSTM model for Task A, while GPT-2 is highly accurate for Tasks B and C on the training, validation and testing splits of the training data provided in the task. Our proposed models allow researchers to create more precise and understandable models for identifying and categorizing sexist content in online forums, thereby empowering users and moderators.",
}
| Online sexism is a rising issue that threatens women{'}s safety, fosters hostile situations, and upholds social inequities. We describe SemEval-2023 Task 10, which aims at creating English-language models that can precisely identify and categorize sexist content on internet forums and social platforms like Gab and Reddit, as well as provide explainability, in order to address this problem. The problem is divided into three hierarchically organized subtasks: binary sexism detection, sexism by category, and sexism by fine-grained vector. The dataset consists of 20,000 labelled entries. For Task A, pretrained models like a Convolutional Neural Network (CNN) combined with a Bidirectional Long Short-Term Memory (BiLSTM), called CNN-BiLSTM, and Generative Pretrained Transformer 2 (GPT-2) were used, as well as the GPT-2 model for Tasks B and C, and experimental configurations are provided. According to our findings, the GPT-2 model performs better than the CNN-BiLSTM model for Task A, while GPT-2 is highly accurate for Tasks B and C on the training, validation and testing splits of the training data provided in the task. Our proposed models allow researchers to create more precise and understandable models for identifying and categorizing sexist content in online forums, thereby empowering users and moderators. | [
"Vetagiri, Advaitha",
"Adhikary, Prottay",
"Pakray, Partha",
"Das, Amitava"
] | CNLP-NITS at SemEval-2023 Task 10: Online sexism prediction, PREDHATE! | semeval-1.113 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.114.bib | https://aclanthology.org/2023.semeval-1.114/ | @inproceedings{hossain-etal-2023-garner,
title = "gar{NER} at {S}em{E}val-2023: Simplified Knowledge Augmentation for Multilingual Complex Named Entity Recognition",
author = "Hossain, Md Zobaer and
So, Averie Ho Zoen and
Silwal, Silviya and
Gonzalez Gongora, H. Andres and
Samin, Ahnaf Mozib and
Junaed, Jahedul Alam and
Mazumder, Aritra and
Saha, Sourav and
Tahsin Soha, Sabiha",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.114",
doi = "10.18653/v1/2023.semeval-1.114",
pages = "823--835",
abstract = "This paper presents our solution, garNER, to the SemEval-2023 MultiConer task. We propose a knowledge augmentation approach by directly querying entities from the Wikipedia API and appending the summaries of the entities to the input sentence. These entities are either retrieved from the labeled training set (Gold Entity) or from off-the-shelf entity taggers (Entity Extractor). Ensemble methods are then applied across multiple models to get the final prediction. Our analysis shows that the added contexts are beneficial only when such contexts are relevant to the target-named entities, but detrimental when the contexts are irrelevant.",
}
| This paper presents our solution, garNER, to the SemEval-2023 MultiConer task. We propose a knowledge augmentation approach by directly querying entities from the Wikipedia API and appending the summaries of the entities to the input sentence. These entities are either retrieved from the labeled training set (Gold Entity) or from off-the-shelf entity taggers (Entity Extractor). Ensemble methods are then applied across multiple models to get the final prediction. Our analysis shows that the added contexts are beneficial only when such contexts are relevant to the target-named entities, but detrimental when the contexts are irrelevant. | [
"Hossain, Md Zobaer",
"So, Averie Ho Zoen",
"Silwal, Silviya",
"Gonzalez Gongora, H. Andres",
"Samin, Ahnaf Mozib",
"Junaed, Jahedul Alam",
"Mazumder, Aritra",
"Saha, Sourav",
"Tahsin Soha, Sabiha"
] | garNER at SemEval-2023: Simplified Knowledge Augmentation for Multilingual Complex Named Entity Recognition | semeval-1.114 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.115.bib | https://aclanthology.org/2023.semeval-1.115/ | @inproceedings{ehrhart-etal-2023-d2klab,
title = "{D}2{KL}ab at {S}em{E}val-2023 Task 2: Leveraging {T}-{NER} to Develop a Fine-Tuned Multilingual Model for Complex Named Entity Recognition",
author = "Ehrhart, Thibault and
Plu, Julien and
Troncy, Raphael",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.115",
doi = "10.18653/v1/2023.semeval-1.115",
pages = "836--840",
abstract = "This paper presents D2KLab{'}s system used for the shared task of {``}Multilingual Complex Named Entity Recognition (MultiCoNER II){''}, as part of SemEval 2023 Task 2. The system relies on a fine-tuned transformer based language model for extracting named entities. In addition to the architecture of the system, we discuss our results and observations.",
}
| This paper presents D2KLab{'}s system used for the shared task of {``}Multilingual Complex Named Entity Recognition (MultiCoNER II){''}, as part of SemEval 2023 Task 2. The system relies on a fine-tuned transformer based language model for extracting named entities. In addition to the architecture of the system, we discuss our results and observations. | [
"Ehrhart, Thibault",
"Plu, Julien",
"Troncy, Raphael"
] | D2KLab at SemEval-2023 Task 2: Leveraging T-NER to Develop a Fine-Tuned Multilingual Model for Complex Named Entity Recognition | semeval-1.115 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.116.bib | https://aclanthology.org/2023.semeval-1.116/ | @inproceedings{baswani-etal-2023-ltrc,
title = "{LTRC} at {S}em{E}val-2023 Task 6: Experiments with Ensemble Embeddings",
author = "Baswani, Pavan and
Sri Adibhatla, Hiranmai and
Shrivastava, Manish",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.116",
doi = "10.18653/v1/2023.semeval-1.116",
pages = "841--846",
abstract = "In this paper, we present our team{'}s involvement in Task 6: LegalEval: Understanding Legal Texts. The task comprised three subtasks, and we focus on subtask A: Rhetorical Roles prediction. Our approach included experimenting with pre-trained embeddings and refining them with statistical and neural classifiers. We provide a thorough examination ofour experiments, solutions, and analysis, culminating in our best-performing model and current progress. We achieved a micro F1 score of 0.6133 on the test data using fine-tuned LegalBERT embeddings.",
}
| In this paper, we present our team{'}s involvement in Task 6: LegalEval: Understanding Legal Texts. The task comprised three subtasks, and we focus on subtask A: Rhetorical Roles prediction. Our approach included experimenting with pre-trained embeddings and refining them with statistical and neural classifiers. We provide a thorough examination of our experiments, solutions, and analysis, culminating in our best-performing model and current progress. We achieved a micro F1 score of 0.6133 on the test data using fine-tuned LegalBERT embeddings. | [
"Baswani, Pavan",
"Sri Adibhatla, Hiranmai",
"Shrivastava, Manish"
] | LTRC at SemEval-2023 Task 6: Experiments with Ensemble Embeddings | semeval-1.116 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.117.bib | https://aclanthology.org/2023.semeval-1.117/ | @inproceedings{pauli-etal-2023-teamampa,
title = "{T}eam{A}mpa at {S}em{E}val-2023 Task 3: Exploring Multilabel and Multilingual {R}o{BERT}a Models for Persuasion and Framing Detection",
author = "Pauli, Amalie and
Sarabia, Rafael and
Derczynski, Leon and
Assent, Ira",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.117",
doi = "10.18653/v1/2023.semeval-1.117",
pages = "847--855",
abstract = "This paper describes our submission to theSemEval 2023 Task 3 on two subtasks: detectingpersuasion techniques and framing. Bothsubtasks are multi-label classification problems. We present a set of experiments, exploring howto get robust performance across languages usingpre-trained RoBERTa models. We test differentoversampling strategies, a strategy ofadding textual features from predictions obtainedwith related models, and present bothinconclusive and negative results. We achievea robust ranking across languages and subtaskswith our best ranking being nr. 1 for Subtask 3on Spanish.",
}
| This paper describes our submission to the SemEval 2023 Task 3 on two subtasks: detecting persuasion techniques and framing. Both subtasks are multi-label classification problems. We present a set of experiments, exploring how to get robust performance across languages using pre-trained RoBERTa models. We test different oversampling strategies, a strategy of adding textual features from predictions obtained with related models, and present both inconclusive and negative results. We achieve a robust ranking across languages and subtasks with our best ranking being nr. 1 for Subtask 3 on Spanish. | [
"Pauli, Amalie",
"Sarabia, Rafael",
"Derczynski, Leon",
"Assent, Ira"
] | TeamAmpa at SemEval-2023 Task 3: Exploring Multilabel and Multilingual RoBERTa Models for Persuasion and Framing Detection | semeval-1.117 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.118.bib | https://aclanthology.org/2023.semeval-1.118/ | @inproceedings{alami-etal-2023-um6p,
title = "{UM}6{P} at {S}em{E}val-2023 Task 3: News genre classification based on transformers, graph convolution networks and number of sentences",
author = "Alami, Hamza and
Benlahbib, Abdessamad and
El Mahdaouy, Abdelkader and
Berrada, Ismail",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.118",
doi = "10.18653/v1/2023.semeval-1.118",
pages = "856--861",
abstract = "This paper presents our proposed method for english documents genre classification in the context of SemEval 2023 task 3, subtask 1. Our method use ensemble technique to combine four distinct models predictions: Longformer, RoBERTa, GCN, and a sentences number-based model. Each model is optimized on simple objectives and easy to grasp. We provide snippets of code that define each model to make the reading experience better. Our method ranked 12th in documents genre classification for english texts.",
}
| This paper presents our proposed method for English document genre classification in the context of SemEval 2023 task 3, subtask 1. Our method uses an ensemble technique to combine the predictions of four distinct models: Longformer, RoBERTa, GCN, and a sentences number-based model. Each model is optimized on simple objectives and is easy to grasp. We provide snippets of code that define each model to make the reading experience better. Our method ranked 12th in document genre classification for English texts. | [
"Alami, Hamza",
"Benlahbib, Abdessamad",
"El Mahdaouy, Abdelkader",
"Berrada, Ismail"
] | UM6P at SemEval-2023 Task 3: News genre classification based on transformers, graph convolution networks and number of sentences | semeval-1.118 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.119.bib | https://aclanthology.org/2023.semeval-1.119/ | @inproceedings{hoang-etal-2023-viettel,
title = "Viettel-{AI} at {S}em{E}val-2023 Task 6: Legal Document Understanding with Longformer for Court Judgment Prediction with Explanation",
author = "Hoang, Thanh Dat and
Bui, Chi Minh and
Bui, Nam",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.119",
doi = "10.18653/v1/2023.semeval-1.119",
pages = "862--868",
abstract = "Court Judgement Prediction with Explanation (CJPE) is a task in the field of legal analysis and evaluation, which involves predicting the outcome of a court case based on the available legal text and providing a detailed explanation of the prediction. This is an important task in the legal system as it can aid in decision-making and improve the efficiency of the court process. In this paper, we present a new approach to understanding legal texts, which are normally long documents, based on data-oriented methods. Specifically, we first try to exploit the characteristic of data to understand the legal texts. The output is then used to train the model using the Longformer architecture. Regarding the experiment, the proposed method is evaluated on the sub-task CJPE of the SemEval-2023 Task 6. Accordingly, our method achieves top 1 and top 2 on the classification task and explanation task, respectively. Furthermore, we present several open research issues for further investigations in order to improve the performance in this research field.",
}
| Court Judgement Prediction with Explanation (CJPE) is a task in the field of legal analysis and evaluation, which involves predicting the outcome of a court case based on the available legal text and providing a detailed explanation of the prediction. This is an important task in the legal system as it can aid in decision-making and improve the efficiency of the court process. In this paper, we present a new approach to understanding legal texts, which are normally long documents, based on data-oriented methods. Specifically, we first try to exploit the characteristic of data to understand the legal texts. The output is then used to train the model using the Longformer architecture. Regarding the experiment, the proposed method is evaluated on the sub-task CJPE of the SemEval-2023 Task 6. Accordingly, our method achieves top 1 and top 2 on the classification task and explanation task, respectively. Furthermore, we present several open research issues for further investigations in order to improve the performance in this research field. | [
"Hoang, Thanh Dat",
"Bui, Chi Minh",
"Bui, Nam"
] | Viettel-AI at SemEval-2023 Task 6: Legal Document Understanding with Longformer for Court Judgment Prediction with Explanation | semeval-1.119 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.120.bib | https://aclanthology.org/2023.semeval-1.120/ | @inproceedings{arlim-etal-2023-gunadarmaxbrin,
title = "{G}unadarma{XBRIN} at {S}em{E}val-2023 Task 12: Utilization of {SVM} and {A}fri{BERT}a for Monolingual, Multilingual, and Zero-shot Sentiment Analysis in {A}frican Languages",
author = "Arlim, Novitasari and
Riyanto, Slamet and
Rodiah, Rodiah and
Siagian, Al Hafiz Akbar Maulana",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.120",
doi = "10.18653/v1/2023.semeval-1.120",
pages = "869--877",
abstract = "This paper describes our participation in Task 12: AfriSenti-SemEval 2023, i.e., track 12 of subtask A, track 16 of subtask B, and track 18 of subtask C. To deal with these three tracks, we utilize Support Vector Machine (SVM) + One vs Rest, SVM + One vs Rest with SMOTE, and AfriBERTa-large models. In particular, our SVM + One vs Rest with SMOTE model could obtain the highest weighted F1-Score for tracks 16 and 18 in the evaluation phase, that is, 65.14{\%} and 33.49{\%}, respectively. Meanwhile, our SVM + One vs Rest model could perform better than other models for track 12 in the evaluation phase.",
}
| This paper describes our participation in Task 12: AfriSenti-SemEval 2023, i.e., track 12 of subtask A, track 16 of subtask B, and track 18 of subtask C. To deal with these three tracks, we utilize Support Vector Machine (SVM) + One vs Rest, SVM + One vs Rest with SMOTE, and AfriBERTa-large models. In particular, our SVM + One vs Rest with SMOTE model could obtain the highest weighted F1-Score for tracks 16 and 18 in the evaluation phase, that is, 65.14{\%} and 33.49{\%}, respectively. Meanwhile, our SVM + One vs Rest model could perform better than other models for track 12 in the evaluation phase. | [
"Arlim, Novitasari",
"Riyanto, Slamet",
"Rodiah, Rodiah",
"Siagian, Al Hafiz Akbar Maulana"
] | GunadarmaXBRIN at SemEval-2023 Task 12: Utilization of SVM and AfriBERTa for Monolingual, Multilingual, and Zero-shot Sentiment Analysis in African Languages | semeval-1.120 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.121.bib | https://aclanthology.org/2023.semeval-1.121/ | @inproceedings{lovon-melgarejo-etal-2023-meerqat,
title = "{MEERQAT}-{IRIT} at {S}em{E}val-2023 Task 2: Leveraging Contextualized Tag Descriptors for Multilingual Named Entity Recognition",
author = "Lovon-Melgarejo, Jesus and
Moreno, Jose G. and
Besan{\c{c}}on, Romaric and
Ferret, Olivier and
Lechani, Lynda",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.121",
doi = "10.18653/v1/2023.semeval-1.121",
pages = "878--884",
abstract = "This paper describes the system we submitted to the SemEval 2023 Task 2 Multilingual Complex Named Entity Recognition (MultiCoNER II) in four monolingual tracks (English, Spanish, French, and Portuguese). Considering the low context setting and the fine-grained taxonomy presented in this task, we propose a system that leverages the language model representations using hand-crafted tag descriptors. We explored how integrating the contextualized representations of tag descriptors with a language model can help improve the model performance for this task. We performed our evaluations on the development and test sets used in the task for the Practice Phase and the Evaluation Phase respectively.",
}
| This paper describes the system we submitted to the SemEval 2023 Task 2 Multilingual Complex Named Entity Recognition (MultiCoNER II) in four monolingual tracks (English, Spanish, French, and Portuguese). Considering the low context setting and the fine-grained taxonomy presented in this task, we propose a system that leverages the language model representations using hand-crafted tag descriptors. We explored how integrating the contextualized representations of tag descriptors with a language model can help improve the model performance for this task. We performed our evaluations on the development and test sets used in the task for the Practice Phase and the Evaluation Phase respectively. | [
"Lovon-Melgarejo, Jesus",
"Moreno, Jose G.",
"Besan{\\c{c}}on, Romaric",
"Ferret, Olivier",
"Lechani, Lynda"
] | MEERQAT-IRIT at SemEval-2023 Task 2: Leveraging Contextualized Tag Descriptors for Multilingual Named Entity Recognition | semeval-1.121 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.122.bib | https://aclanthology.org/2023.semeval-1.122/ | @inproceedings{bangerter-etal-2023-unisa,
title = "Unisa at {S}em{E}val-2023 Task 3: A {SHAP}-based method for Propaganda Detection",
author = "Bangerter, Micaela and
Fenza, Giuseppe and
Gallo, Mariacristina and
Loia, Vincenzo and
Volpe, Alberto and
Maio, Carmen De and
Stanzione, Claudio",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.122",
doi = "10.18653/v1/2023.semeval-1.122",
pages = "885--891",
abstract = "This paper presents proposed solutions for addressing two subtasks in SemEval-2023 Task 3: {``}Detecting the Genre, the Framing, and the Persuasion techniques in online news in a multi-lingual setup. In subtask 1, {``}News Genre Categorisation, the goal is to classify a news article as an opinion, a report, or a satire. In subtask 3, {``}Detection of Persuasion Technique, the system must reveal persuasion techniques used in each news article paragraph choosing among23 defined methods. Solutions leverage the application of the eXplainable Artificial Intelligence (XAI) method, Shapley Additive Explanations (SHAP). In subtask 1, SHAP was used to understand what was driving the model to fail so that it could be improved accordingly. In contrast, in subtask 3, a re-calibration of the Attention Mechanism was realized by extracting critical tokens for each persuasion technique. The underlying idea is the exploitation of XAI for countering the overfitting of the resulting model and attempting to improve the performance when there are few samples in the training data. The achieved performance on English for subtask 1 ranked 6th with an F1-score of 58.6{\%} (despite 78.4{\%} of the 1st) and for subtask 3 ranked 12th with a micro-averaged F1-score of 29.8{\%} (despite 37.6{\%} of the 1st).",
}
| This paper presents proposed solutions for addressing two subtasks in SemEval-2023 Task 3: {``}Detecting the Genre, the Framing, and the Persuasion techniques in online news in a multi-lingual setup. In subtask 1, {``}News Genre Categorisation, the goal is to classify a news article as an opinion, a report, or a satire. In subtask 3, {``}Detection of Persuasion Technique, the system must reveal persuasion techniques used in each news article paragraph choosing among 23 defined methods. Solutions leverage the application of the eXplainable Artificial Intelligence (XAI) method, Shapley Additive Explanations (SHAP). In subtask 1, SHAP was used to understand what was driving the model to fail so that it could be improved accordingly. In contrast, in subtask 3, a re-calibration of the Attention Mechanism was realized by extracting critical tokens for each persuasion technique. The underlying idea is the exploitation of XAI for countering the overfitting of the resulting model and attempting to improve the performance when there are few samples in the training data. The achieved performance on English for subtask 1 ranked 6th with an F1-score of 58.6{\%} (despite 78.4{\%} of the 1st) and for subtask 3 ranked 12th with a micro-averaged F1-score of 29.8{\%} (despite 37.6{\%} of the 1st). | [
"Bangerter, Micaela",
"Fenza, Giuseppe",
"Gallo, Mariacristina",
"Loia, Vincenzo",
"Volpe, Alberto",
"Maio, Carmen De",
"Stanzione, Claudio"
] | Unisa at SemEval-2023 Task 3: A SHAP-based method for Propaganda Detection | semeval-1.122 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.123.bib | https://aclanthology.org/2023.semeval-1.123/ | @inproceedings{yu-etal-2023-dutir,
title = "{DUTIR} at {S}em{E}val-2023 Task 10: Semi-supervised Learning for Sexism Detection in {E}nglish",
author = "Yu, Bingjie and
Bai, Zewen and
Ji, Haoran and
Li, Shiyi and
Zhang, Hao and
Lin, Hongfei",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.123",
doi = "10.18653/v1/2023.semeval-1.123",
pages = "892--896",
abstract = "Sexism is an injustice afflicting women and has become a common form of oppression in social media. In recent years, the automatic detection of sexist instances has been utilized to combat this oppression. The Subtask A of SemEval-2023 Task 10, Explainable Detection of Online Sexism, aims to detect whether an English-language post is sexist. In this paper, we describe our system for the competition. The structure of the classification model is based on RoBERTa, and we further pre-train it on the domain corpus. For fine-tuning, we adopt Unsupervised Data Augmentation (UDA), a semi-supervised learning approach, to improve the robustness of the system. Specifically, we employ Easy Data Augmentation (EDA) method as the noising operation for consistency training. We train multiple models based on different hyperparameter settings and adopt the majority voting method to predict the labels of test entries. Our proposed system achieves a Macro-F1 score of 0.8352 and a ranking of 41/84 on the leaderboard of Subtask A.",
}
| Sexism is an injustice afflicting women and has become a common form of oppression in social media. In recent years, the automatic detection of sexist instances has been utilized to combat this oppression. The Subtask A of SemEval-2023 Task 10, Explainable Detection of Online Sexism, aims to detect whether an English-language post is sexist. In this paper, we describe our system for the competition. The structure of the classification model is based on RoBERTa, and we further pre-train it on the domain corpus. For fine-tuning, we adopt Unsupervised Data Augmentation (UDA), a semi-supervised learning approach, to improve the robustness of the system. Specifically, we employ Easy Data Augmentation (EDA) method as the noising operation for consistency training. We train multiple models based on different hyperparameter settings and adopt the majority voting method to predict the labels of test entries. Our proposed system achieves a Macro-F1 score of 0.8352 and a ranking of 41/84 on the leaderboard of Subtask A. | [
"Yu, Bingjie",
"Bai, Zewen",
"Ji, Haoran",
"Li, Shiyi",
"Zhang, Hao",
"Lin, Hongfei"
] | DUTIR at SemEval-2023 Task 10: Semi-supervised Learning for Sexism Detection in English | semeval-1.123 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.124.bib | https://aclanthology.org/2023.semeval-1.124/ | @inproceedings{lu-etal-2023-netease,
title = "{N}et{E}ase.{AI} at {S}em{E}val-2023 Task 2: Enhancing Complex Named Entities Recognition in Noisy Scenarios via Text Error Correction and External Knowledge",
author = "Lu, Ruixuan and
Tang, Zihang and
Hu, Guanglong and
Liu, Dong and
Li, Jiacheng",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.124",
doi = "10.18653/v1/2023.semeval-1.124",
pages = "897--904",
abstract = "Complex named entities (NE), like the titles of creative works, are not simple nouns and pose challenges for NER systems. In the SemEval 2023, Task 2: MultiCoNER II was proposed, whose goal is to recognize complex entities against out of knowledge-base entities and noisy scenarios. To address the challenges posed by MultiCoNER II, our team NetEase.AI proposed an entity recognition system that integrates text error correction system and external knowledge, which can recognize entities in scenes that contain entities out of knowledge base and text with noise. Upon receiving an input sentence, our systems will correct the sentence, extract the entities in the sentence as candidate set using the entity recognition model that incorporates the gazetteer information, and then use the external knowledge to classify the candidate entities to obtain entity type features. Finally, our system fused the multi-dimensional features of the candidate entities into a stacking model, which was used to select the correct entities from the candidate set as the final output. Our system exhibited good noise resistance and excellent entity recognition performance, resulting in our team{'}s first place victory in the Chinese track of MultiCoNER II.",
}
| Complex named entities (NE), like the titles of creative works, are not simple nouns and pose challenges for NER systems. In the SemEval 2023, Task 2: MultiCoNER II was proposed, whose goal is to recognize complex entities against out of knowledge-base entities and noisy scenarios. To address the challenges posed by MultiCoNER II, our team NetEase.AI proposed an entity recognition system that integrates text error correction system and external knowledge, which can recognize entities in scenes that contain entities out of knowledge base and text with noise. Upon receiving an input sentence, our systems will correct the sentence, extract the entities in the sentence as candidate set using the entity recognition model that incorporates the gazetteer information, and then use the external knowledge to classify the candidate entities to obtain entity type features. Finally, our system fused the multi-dimensional features of the candidate entities into a stacking model, which was used to select the correct entities from the candidate set as the final output. Our system exhibited good noise resistance and excellent entity recognition performance, resulting in our team{'}s first place victory in the Chinese track of MultiCoNER II. | [
"Lu, Ruixuan",
"Tang, Zihang",
"Hu, Guanglong",
"Liu, Dong",
"Li, Jiacheng"
] | NetEase.AI at SemEval-2023 Task 2: Enhancing Complex Named Entities Recognition in Noisy Scenarios via Text Error Correction and External Knowledge | semeval-1.124 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.125.bib | https://aclanthology.org/2023.semeval-1.125/ | @inproceedings{lima-etal-2023-irit,
title = "{IRIT}{\_}{IRIS}{\_}{A} at {S}em{E}val-2023 Task 6: Legal Rhetorical Role Labeling Supported by Dynamic-Filled Contextualized Sentence Chunks",
author = "Lima, Alexandre Gomes de and
Moreno, Jose G. and
H. da S. Aranha, Eduardo",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.125",
doi = "10.18653/v1/2023.semeval-1.125",
pages = "905--912",
abstract = "This work presents and evaluates an approach to efficiently leverage the context exploitation ability of pre-trained Transformer models as a way of boosting the performance of models tackling the Legal Rhetorical Role Labeling task. The core idea is to feed the model with sentence chunks that are assembled in a way that avoids the insertion of padding tokens and the truncation of sentences and, hence, obtain better sentence embeddings. The achieved results show that our proposal is efficient, despite its simplicity, since models based on it overcome strong baselines by 3.76{\%} in the worst case and by 8.71{\%} in the best case.",
}
| This work presents and evaluates an approach to efficiently leverage the context exploitation ability of pre-trained Transformer models as a way of boosting the performance of models tackling the Legal Rhetorical Role Labeling task. The core idea is to feed the model with sentence chunks that are assembled in a way that avoids the insertion of padding tokens and the truncation of sentences and, hence, obtain better sentence embeddings. The achieved results show that our proposal is efficient, despite its simplicity, since models based on it overcome strong baselines by 3.76{\%} in the worst case and by 8.71{\%} in the best case. | [
"Lima, Alex",
"re Gomes de",
"Moreno, Jose G.",
"H. da S. Aranha, Eduardo"
] | IRIT_IRIS_A at SemEval-2023 Task 6: Legal Rhetorical Role Labeling Supported by Dynamic-Filled Contextualized Sentence Chunks | semeval-1.125 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.126.bib | https://aclanthology.org/2023.semeval-1.126/ | @inproceedings{oica-etal-2023-togedemaru,
title = "Togedemaru at {S}em{E}val-2023 Task 8: Causal Medical Claim Identification and Extraction from Social Media Posts",
author = "Oica, Andra and
Gifu, Daniela and
Trandabat, Diana",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.126",
doi = "10.18653/v1/2023.semeval-1.126",
pages = "913--921",
abstract = "The {``}Causal Medical Claim Identification and Extraction from Social Media Posts task at SemEval 2023 competition focuses on identifying and validating medical claims in English, by posing two subtasks on causal claim identification and PIO (Population, Intervention, Outcome) frame extraction. In the context of SemEval, we present a method for sentence classification in four categories (claim, experience, experience{\_}based{\_}claim or a question) based on BioBERT model with a MLP layer. The website from which the dataset was gathered, Reddit, is a social news and content discussion site. The evaluation results show the effectiveness of the solution of this study (83.68{\%}).",
}
| The {``}Causal Medical Claim Identification and Extraction from Social Media Posts{''} task at the SemEval 2023 competition focuses on identifying and validating medical claims in English, by posing two subtasks on causal claim identification and PIO (Population, Intervention, Outcome) frame extraction. In the context of SemEval, we present a method for sentence classification in four categories (claim, experience, experience{\_}based{\_}claim or a question) based on the BioBERT model with an MLP layer. The website from which the dataset was gathered, Reddit, is a social news and content discussion site. The evaluation results show the effectiveness of the solution of this study (83.68{\%}). | [
"Oica, Andra",
"Gifu, Daniela",
"Tr",
"abat, Diana"
] | Togedemaru at SemEval-2023 Task 8: Causal Medical Claim Identification and Extraction from Social Media Posts | semeval-1.126 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.127.bib | https://aclanthology.org/2023.semeval-1.127/ | @inproceedings{baumann-deisenhofer-2023-framingfreaks,
title = "{F}raming{F}reaks at {S}em{E}val-2023 Task 3: Detecting the Category and the Framing of Texts as Subword Units with Traditional Machine Learning",
author = "Baumann, Rosina and
Deisenhofer, Sabrina",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.127",
doi = "10.18653/v1/2023.semeval-1.127",
pages = "922--926",
abstract = "This paper describes our participation as team FramingFreaks in the SemEval-2023 task 3 {``}Category and Framing Predictions in online news in a multi-lingual setup.{''} We participated in subtasks 1 and 2. Our approach was to classify texts by splitting them into subwords to reduce the feature set size and then using these tokens as input in Support Vector Machine (SVM) or logistic regression classifiers. Our results are similar to the baseline results.",
}
| This paper describes our participation as team FramingFreaks in the SemEval-2023 task 3 {``}Category and Framing Predictions in online news in a multi-lingual setup.{''} We participated in subtasks 1 and 2. Our approach was to classify texts by splitting them into subwords to reduce the feature set size and then using these tokens as input in Support Vector Machine (SVM) or logistic regression classifiers. Our results are similar to the baseline results. | [
"Baumann, Rosina",
"Deisenhofer, Sabrina"
] | FramingFreaks at SemEval-2023 Task 3: Detecting the Category and the Framing of Texts as Subword Units with Traditional Machine Learning | semeval-1.127 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.128.bib | https://aclanthology.org/2023.semeval-1.128/ | @inproceedings{yuan-chen-2023-lazybob,
title = "Lazybob at {S}em{E}val-2023 Task 9: Quantifying Intimacy of Multilingual Tweets with Multi-Task Learning",
author = "Yuan, Mengfei and
Chen, Cheng",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.128",
doi = "10.18653/v1/2023.semeval-1.128",
pages = "927--933",
abstract = "This study presents a systematic method for analyzing the level of intimacy in tweets across ten different languages, using multi-task learning for SemEval 2023 Task 9: Multilingual Tweet Intimacy Analysis. The system begins with the utilization of the official training data, and then we experiment with different fine-tuning tricks and effective strategies, such as data augmentation, multi-task learning, etc. Through additional experiments, the approach is shown to be effective for the task. To enhance the model{'}s robustness, different transformer-based language models and some widely-used plug-and-play priors are incorporated into our system. Our final submission achieved a Pearson R of 0.6160 for the intimacy score on the official test set, placing us at the top of the leader board among 45 teams.",
}
| This study presents a systematic method for analyzing the level of intimacy in tweets across ten different languages, using multi-task learning for SemEval 2023 Task 9: Multilingual Tweet Intimacy Analysis. The system begins with the utilization of the official training data, and then we experiment with different fine-tuning tricks and effective strategies, such as data augmentation, multi-task learning, etc. Through additional experiments, the approach is shown to be effective for the task. To enhance the model{'}s robustness, different transformer-based language models and some widely-used plug-and-play priors are incorporated into our system. Our final submission achieved a Pearson R of 0.6160 for the intimacy score on the official test set, placing us at the top of the leader board among 45 teams. | [
"Yuan, Mengfei",
"Chen, Cheng"
] | Lazybob at SemEval-2023 Task 9: Quantifying Intimacy of Multilingual Tweets with Multi-Task Learning | semeval-1.128 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.129.bib | https://aclanthology.org/2023.semeval-1.129/ | @inproceedings{yao-etal-2023-hitszq,
title = "{HITSZQ} at {S}em{E}val-2023 Task 10: Category-aware Sexism Detection Model with Self-training Strategy",
author = "Yao, Ziyi and
Chai, Heyan and
Cui, Jinhao and
Tang, Siyu and
Liao, Qing",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.129",
doi = "10.18653/v1/2023.semeval-1.129",
pages = "934--940",
abstract = "This paper describes our system used in the SemEval-2023 {\textbackslash}textit{Task 10 Explainable Detection of Online Sexism (EDOS)}. Specifically, we participated in subtask B: a 4-class sexism classification task, and subtask C: a more fine-grained (11-class) sexism classification task, where it is necessary to predict the category of sexism. We treat these two subtasks as one multi-label hierarchical text classification problem, and propose an integrated sexism detection model for improving the performance of the sexism detection task. More concretely, we use the pre-trained BERT model to encode the text and class label and a hierarchy-relevant structure encoder is employed to model the relationship between classes of subtasks B and C. Additionally, a self-training strategy is designed to alleviate the imbalanced problem of distribution classes. Extensive experiments on subtasks B and C demonstrate the effectiveness of our proposed approach.",
}
| This paper describes our system used in the SemEval-2023 Task 10 Explainable Detection of Online Sexism (EDOS). Specifically, we participated in subtask B: a 4-class sexism classification task, and subtask C: a more fine-grained (11-class) sexism classification task, where it is necessary to predict the category of sexism. We treat these two subtasks as one multi-label hierarchical text classification problem, and propose an integrated sexism detection model for improving the performance of the sexism detection task. More concretely, we use the pre-trained BERT model to encode the text and class label and a hierarchy-relevant structure encoder is employed to model the relationship between classes of subtasks B and C. Additionally, a self-training strategy is designed to alleviate the imbalanced problem of distribution classes. Extensive experiments on subtasks B and C demonstrate the effectiveness of our proposed approach. | [
"Yao, Ziyi",
"Chai, Heyan",
"Cui, Jinhao",
"Tang, Siyu",
"Liao, Qing"
] | HITSZQ at SemEval-2023 Task 10: Category-aware Sexism Detection Model with Self-training Strategy | semeval-1.129 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.130.bib | https://aclanthology.org/2023.semeval-1.130/ | @inproceedings{reiter-haas-etal-2023-mcpt,
title = "m{CPT} at {S}em{E}val-2023 Task 3: Multilingual Label-Aware Contrastive Pre-Training of Transformers for Few- and Zero-shot Framing Detection",
author = "Reiter-Haas, Markus and
Ertl, Alexander and
Innerhofer, Kevin and
Lex, Elisabeth",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.130",
doi = "10.18653/v1/2023.semeval-1.130",
pages = "941--949",
abstract = "This paper presents the winning system for the zero-shot Spanish framing detection task, which also achieves competitive places in eight additional languages. The challenge of the framing detection task lies in identifying a set of 14 frames when only a few or zero samples are available, i.e., a multilingual multi-label few- or zero-shot setting. Our developed solution employs a pre-training procedure based on multilingual Transformers using a label-aware contrastive loss function. In addition to describing the system, we perform an embedding space analysis and ablation study to demonstrate how our pre-training procedure supports framing detection to advance computational framing analysis.",
}
| This paper presents the winning system for the zero-shot Spanish framing detection task, which also achieves competitive places in eight additional languages. The challenge of the framing detection task lies in identifying a set of 14 frames when only a few or zero samples are available, i.e., a multilingual multi-label few- or zero-shot setting. Our developed solution employs a pre-training procedure based on multilingual Transformers using a label-aware contrastive loss function. In addition to describing the system, we perform an embedding space analysis and ablation study to demonstrate how our pre-training procedure supports framing detection to advance computational framing analysis. | [
"Reiter-Haas, Markus",
"Ertl, Alex",
"er",
"Innerhofer, Kevin",
"Lex, Elisabeth"
] | mCPT at SemEval-2023 Task 3: Multilingual Label-Aware Contrastive Pre-Training of Transformers for Few- and Zero-shot Framing Detection | semeval-1.130 | Poster | 2303.09901 | [
"https://github.com/socialcomplab/semeval23-mcpt"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.semeval-1.131.bib | https://aclanthology.org/2023.semeval-1.131/ | @inproceedings{noor-mohamed-srinivasan-2023-ssnsheerinkavitha,
title = "{SSNS}heerin{K}avitha at {S}em{E}val-2023 Task 7: Semantic Rule Based Label Prediction Using {TF}-{IDF} and {BM}25 Techniques",
author = "Noor Mohamed, Sheerin Sitara and
Srinivasan, Kavitha",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.131",
doi = "10.18653/v1/2023.semeval-1.131",
pages = "950--957",
abstract = "The advancement in the healthcare sector assures improved diagnosis and supports appropriate decision making in medical domain. The medical domain data can be either radiology images or clinical data. The clinical data plays a major role in the healthcare sector by preventing and treating the health problem based on the evidence learned from the trials. This paper is related to multi-evidence natural language inference for clinical trial data analysis and its solution for the given subtasks (SemEval 2023 Task 7 - NLI4CT). In subtask 1 of NLI4CT, the inference relationship (entailment or contradiction) between the Clinical Trial Reports (CTRs) statement pairs with respect to the Clinical Trial Data (CTD) statement are determined. In subtask 2 of NLI4CT, predicted label (inference relationship) are defined and justified using set of supporting facts extracted from the premises. The objective of this work is to derive the conclusion from premises (CTRs statement pairs) and extracting the supporting premises using proposed Semantic Rule based Clinical Data Analysis (SRCDA) approach. From the results, the proposed model attained an highest F1-score of 0.667 and 0.716 for subtasks 1 and 2 respectively. The novelty of this proposed approach includes, creation of External Knowledge Base (EKB) along with its suitable semantic rules based on the input statements.",
}
| The advancement in the healthcare sector assures improved diagnosis and supports appropriate decision making in the medical domain. The medical domain data can be either radiology images or clinical data. The clinical data plays a major role in the healthcare sector by preventing and treating the health problem based on the evidence learned from the trials. This paper is related to multi-evidence natural language inference for clinical trial data analysis and its solution for the given subtasks (SemEval 2023 Task 7 - NLI4CT). In subtask 1 of NLI4CT, the inference relationship (entailment or contradiction) between the Clinical Trial Reports (CTRs) statement pairs with respect to the Clinical Trial Data (CTD) statement is determined. In subtask 2 of NLI4CT, the predicted label (inference relationship) is defined and justified using a set of supporting facts extracted from the premises. The objective of this work is to derive the conclusion from premises (CTRs statement pairs) and to extract the supporting premises using the proposed Semantic Rule based Clinical Data Analysis (SRCDA) approach. From the results, the proposed model attained its highest F1-scores of 0.667 and 0.716 for subtasks 1 and 2, respectively. The novelty of this proposed approach includes the creation of an External Knowledge Base (EKB) along with its suitable semantic rules based on the input statements. | [
"Noor Mohamed, Sheerin Sitara",
"Srinivasan, Kavitha"
] | SSNSheerinKavitha at SemEval-2023 Task 7: Semantic Rule Based Label Prediction Using TF-IDF and BM25 Techniques | semeval-1.131 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.132.bib | https://aclanthology.org/2023.semeval-1.132/ | @inproceedings{li-etal-2023-janko,
title = "Janko at {S}em{E}val-2023 Task 2: Bidirectional {LSTM} Model Based on Pre-training for {C}hinese Named Entity Recognition",
author = "Li, Jiankuo and
Guan, Zhengyi and
Ding, Haiyan",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.132",
doi = "10.18653/v1/2023.semeval-1.132",
pages = "958--962",
abstract = "This paper describes the method we submitted as the Janko team in the SemEval-2023 Task 2,Multilingual Complex Named Entity Recognition (MultiCoNER 2). We only participated in the Chinese track. In this paper, we implement the BERT-BiLSTM-RDrop model. We use the fine-tuned BERT models, take the output of BERT as the input of the BiLSTM network, and finally use R-Drop technology to optimize the loss function. Our submission achieved a macro-averaged F1 score of 0.579 on the testset.",
}
| This paper describes the method we submitted as the Janko team in the SemEval-2023 Task 2, Multilingual Complex Named Entity Recognition (MultiCoNER 2). We only participated in the Chinese track. In this paper, we implement the BERT-BiLSTM-RDrop model. We use the fine-tuned BERT models, take the output of BERT as the input of the BiLSTM network, and finally use R-Drop technology to optimize the loss function. Our submission achieved a macro-averaged F1 score of 0.579 on the test set. | [
"Li, Jiankuo",
"Guan, Zhengyi",
"Ding, Haiyan"
] | Janko at SemEval-2023 Task 2: Bidirectional LSTM Model Based on Pre-training for Chinese Named Entity Recognition | semeval-1.132 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.133.bib | https://aclanthology.org/2023.semeval-1.133/ | @inproceedings{zhang-wang-2023-hhs,
title = "{HHS} at {S}em{E}val-2023 Task 10: A Comparative Analysis of Sexism Detection Based on the {R}o{BERT}a Model",
author = "Zhang, Yao and
Wang, Liqing",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.133",
doi = "10.18653/v1/2023.semeval-1.133",
pages = "963--968",
abstract = "This paper describes the methods and models applied by our team HHS in SubTask-A of SemEval-2023 Task 10 about sexism detection. In this task, we trained with the officially released data and analyzed the performance of five models, TextCNN, BERT, RoBERTa, XLNet, and Sup-SimCSE-RoBERTa. The experiments show that most of the models can achieve good results. Then, we tried data augmentation, model ensemble, dropout, and other operations on several of these models, and compared the results for analysis. In the end, the most effective approach that yielded the best results on the test set involved the following steps: enhancing the sexist data using dropout, feeding it as input to the Sup-SimCSE-RoBERTa model, and providing the raw data as input to the XLNet model. Then, combining the outputs of the two methods led to even better results. This method yielded a Macro-F1 score of 0.823 in the final evaluation phase of the SubTask-A of the competition.",
}
| This paper describes the methods and models applied by our team HHS in SubTask-A of SemEval-2023 Task 10 about sexism detection. In this task, we trained with the officially released data and analyzed the performance of five models, TextCNN, BERT, RoBERTa, XLNet, and Sup-SimCSE-RoBERTa. The experiments show that most of the models can achieve good results. Then, we tried data augmentation, model ensemble, dropout, and other operations on several of these models, and compared the results for analysis. In the end, the most effective approach that yielded the best results on the test set involved the following steps: enhancing the sexist data using dropout, feeding it as input to the Sup-SimCSE-RoBERTa model, and providing the raw data as input to the XLNet model. Then, combining the outputs of the two methods led to even better results. This method yielded a Macro-F1 score of 0.823 in the final evaluation phase of the SubTask-A of the competition. | [
"Zhang, Yao",
"Wang, Liqing"
] | HHS at SemEval-2023 Task 10: A Comparative Analysis of Sexism Detection Based on the RoBERTa Model | semeval-1.133 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.134.bib | https://aclanthology.org/2023.semeval-1.134/ | @inproceedings{birkenheuer-etal-2023-sabrina,
title = "Sabrina Spellman at {S}em{E}val-2023 Task 5: Discover the Shocking Truth Behind this Composite Approach to Clickbait Spoiling!",
author = "Birkenheuer, Simon and
Drechsel, Jonathan and
Justen, Paul and
Phlmann, Jimmy and
Gonsior, Julius and
Reusch, Anja",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.134",
doi = "10.18653/v1/2023.semeval-1.134",
pages = "969--977",
abstract = "This paper describes an approach to automat- ically close the knowledge gap of Clickbait- Posts via a transformer model trained for Question-Answering, augmented by a task- specific post-processing step. This was part of the SemEval 2023 Clickbait shared task (Frbe et al., 2023a) - specifically task 2. We devised strategies to improve the existing model to fit the task better, e.g. with different special mod- els and a post-processor tailored to different inherent challenges of the task. Furthermore, we explored the possibility of expanding the original training data by using strategies from Heuristic Labeling and Semi-Supervised Learn- ing. With those adjustments, we were able to improve the baseline by 9.8 percentage points to a BLEU-4 score of 48.0{\%}.",
}
| This paper describes an approach to automatically close the knowledge gap of Clickbait-Posts via a transformer model trained for Question-Answering, augmented by a task-specific post-processing step. This was part of the SemEval 2023 Clickbait shared task (Fröbe et al., 2023a) - specifically task 2. We devised strategies to improve the existing model to fit the task better, e.g. with different special models and a post-processor tailored to different inherent challenges of the task. Furthermore, we explored the possibility of expanding the original training data by using strategies from Heuristic Labeling and Semi-Supervised Learning. With those adjustments, we were able to improve the baseline by 9.8 percentage points to a BLEU-4 score of 48.0{\%}. | [
"Birkenheuer, Simon",
"Drechsel, Jonathan",
"Justen, Paul",
"Phlmann, Jimmy",
"Gonsior, Julius",
"Reusch, Anja"
] | Sabrina Spellman at SemEval-2023 Task 5: Discover the Shocking Truth Behind this Composite Approach to Clickbait Spoiling! | semeval-1.134 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.135.bib | https://aclanthology.org/2023.semeval-1.135/ | @inproceedings{sullivan-etal-2023-university,
title = "University at Buffalo at {S}em{E}val-2023 Task 11: {MASDA}{--}Modelling Annotator Sensibilities through {D}is{A}ggregation",
author = "Sullivan, Michael and
Yasin, Mohammed and
Jacobs, Cassandra L.",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.135",
doi = "10.18653/v1/2023.semeval-1.135",
pages = "978--985",
abstract = "Modeling the most likely label when an annotation task is perspective-dependent discards relevant sources of variation that come from the annotators themselves. We present three approaches to modeling the controversiality of a particular text. First, we explicitly represented annotators using annotator embeddings to predict the training signals of each annotator{'}s selections in addition to a majority class label. This method leads to reduction in error relative to models without these features, allowing the overall result to influence the weights of each annotator on the final prediction. In a second set of experiments, annotators were not modeled individually but instead annotator judgments were combined in a pairwise fashion that allowed us to implicitly combine annotators. Overall, we found that aggregating and explicitly comparing annotators{'} responses to a static document representation produced high-quality predictions in all datasets, though some systems struggle to account for large or variable numbers of annotators.",
}
| Modeling the most likely label when an annotation task is perspective-dependent discards relevant sources of variation that come from the annotators themselves. We present three approaches to modeling the controversiality of a particular text. First, we explicitly represented annotators using annotator embeddings to predict the training signals of each annotator{'}s selections in addition to a majority class label. This method leads to reduction in error relative to models without these features, allowing the overall result to influence the weights of each annotator on the final prediction. In a second set of experiments, annotators were not modeled individually but instead annotator judgments were combined in a pairwise fashion that allowed us to implicitly combine annotators. Overall, we found that aggregating and explicitly comparing annotators{'} responses to a static document representation produced high-quality predictions in all datasets, though some systems struggle to account for large or variable numbers of annotators. | [
"Sullivan, Michael",
"Yasin, Mohammed",
"Jacobs, Cass",
"ra L."
] | University at Buffalo at SemEval-2023 Task 11: MASDA–Modelling Annotator Sensibilities through DisAggregation | semeval-1.135 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.136.bib | https://aclanthology.org/2023.semeval-1.136/ | @inproceedings{vallecillo-rodrguez-etal-2023-sinai,
title = "{SINAI} at {S}em{E}val-2023 Task 10: Leveraging Emotions, Sentiments, and Irony Knowledge for Explainable Detection of Online Sexism",
author = "Vallecillo Rodrguez, Mar{\'\i}a Estrella and
Plaza Del Arco, Flor Miriam and
Ure{\~n}a L{\'o}pez, L. Alfonso and
Mart{\'\i}n Valdivia, M. Teresa",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.136",
doi = "10.18653/v1/2023.semeval-1.136",
pages = "986--994",
abstract = "This paper describes the participation of SINAI research team in the Explainable Detection of Online Sexism (EDOS) Shared Task at SemEval 2023. Specifically, we participate in subtask A (binary sexism detection), subtask B (category of sexism), and subtask C (fine-grained vector of sexism). For the three subtasks, we propose a system that integrates information related to emotions, sentiments, and irony in order to check whether these features help detect sexism content. Our team ranked 46th in subtask A, 37th in subtask B, and 29th in subtask C, achieving 0.8245, 0.6043, and 0.4376 of macro f1-score, respectively, among the participants.",
}
| This paper describes the participation of SINAI research team in the Explainable Detection of Online Sexism (EDOS) Shared Task at SemEval 2023. Specifically, we participate in subtask A (binary sexism detection), subtask B (category of sexism), and subtask C (fine-grained vector of sexism). For the three subtasks, we propose a system that integrates information related to emotions, sentiments, and irony in order to check whether these features help detect sexism content. Our team ranked 46th in subtask A, 37th in subtask B, and 29th in subtask C, achieving 0.8245, 0.6043, and 0.4376 of macro f1-score, respectively, among the participants. | [
"Vallecillo Rodrguez, Mar{\\'\\i}a Estrella",
"Plaza Del Arco, Flor Miriam",
"Ure{\\~n}a L{\\'o}pez, L. Alfonso",
"Mart{\\'\\i}n Valdivia, M. Teresa"
] | SINAI at SemEval-2023 Task 10: Leveraging Emotions, Sentiments, and Irony Knowledge for Explainable Detection of Online Sexism | semeval-1.136 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.137.bib | https://aclanthology.org/2023.semeval-1.137/ | @inproceedings{kanakarajan-sankarasubbu-2023-saama,
title = "Saama {AI} Research at {S}em{E}val-2023 Task 7: Exploring the Capabilities of Flan-T5 for Multi-evidence Natural Language Inference in Clinical Trial Data",
author = "Kanakarajan, Kamal Raj and
Sankarasubbu, Malaikannan",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.137",
doi = "10.18653/v1/2023.semeval-1.137",
pages = "995--1003",
abstract = "The goal of the NLI4CT task is to build a Natural Language Inference system for Clinical Trial Reports that will be used for evidence interpretation and retrieval. Large Language models have demonstrated state-of-the-art performance in various natural language processing tasks across multiple domains. We suggest using an instruction-finetuned Large Language Models (LLMs) to take on this particular task in light of these developments. We have evaluated the publicly available LLMs under zeroshot setting, and finetuned the best performing Flan-T5 model for this task. On the leaderboard, our system ranked second, with an F1 Score of 0.834 on the official test set.",
}
| The goal of the NLI4CT task is to build a Natural Language Inference system for Clinical Trial Reports that will be used for evidence interpretation and retrieval. Large Language models have demonstrated state-of-the-art performance in various natural language processing tasks across multiple domains. We suggest using an instruction-finetuned Large Language Models (LLMs) to take on this particular task in light of these developments. We have evaluated the publicly available LLMs under zeroshot setting, and finetuned the best performing Flan-T5 model for this task. On the leaderboard, our system ranked second, with an F1 Score of 0.834 on the official test set. | [
"Kanakarajan, Kamal Raj",
"Sankarasubbu, Malaikannan"
] | Saama AI Research at SemEval-2023 Task 7: Exploring the Capabilities of Flan-T5 for Multi-evidence Natural Language Inference in Clinical Trial Data | semeval-1.137 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.138.bib | https://aclanthology.org/2023.semeval-1.138/ | @inproceedings{el-mahdaouy-etal-2023-um6p,
title = "{UM}6{P} at {S}em{E}val-2023 Task 12: Out-Of-Distribution Generalization Method for {A}frican Languages Sentiment Analysis",
author = "El Mahdaouy, Abdelkader and
Alami, Hamza and
Lamsiyah, Salima and
Berrada, Ismail",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.138",
doi = "10.18653/v1/2023.semeval-1.138",
pages = "1004--1010",
abstract = "This paper presents our submitted system to AfriSenti SemEval-2023 Task 12: Sentiment Analysis for African Languages. The AfriSenti consists of three different tasks, covering monolingual, multilingual, and zero-shot sentiment analysis scenarios for African languages. To improve model generalization, we have explored the following steps: 1) further pre-training of the AfroXLM Pre-trained Language Model (PLM), 2) combining AfroXLM and MARBERT PLMs using a residual layer, and 3) studying the impact of metric learning and two out-of-distribution generalization training objectives. The overall evaluation results show that our system has achieved promising results on several sub-tasks of Task A. For Tasks B and C, our system is ranked among the top six participating systems.",
}
| This paper presents our submitted system to AfriSenti SemEval-2023 Task 12: Sentiment Analysis for African Languages. The AfriSenti consists of three different tasks, covering monolingual, multilingual, and zero-shot sentiment analysis scenarios for African languages. To improve model generalization, we have explored the following steps: 1) further pre-training of the AfroXLM Pre-trained Language Model (PLM), 2) combining AfroXLM and MARBERT PLMs using a residual layer, and 3) studying the impact of metric learning and two out-of-distribution generalization training objectives. The overall evaluation results show that our system has achieved promising results on several sub-tasks of Task A. For Tasks B and C, our system is ranked among the top six participating systems. | [
"El Mahdaouy, Abdelkader",
"Alami, Hamza",
"Lamsiyah, Salima",
"Berrada, Ismail"
] | UM6P at SemEval-2023 Task 12: Out-Of-Distribution Generalization Method for African Languages Sentiment Analysis | semeval-1.138 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.139.bib | https://aclanthology.org/2023.semeval-1.139/ | @inproceedings{tavan-najafi-2023-marsan,
title = "{M}ar{S}an at {S}em{E}val-2023 Task 10: Can Adversarial Training with help of a Graph Convolutional Network Detect Explainable Sexism?",
author = "Tavan, Ehsan and
Najafi, Maryam",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.139",
doi = "10.18653/v1/2023.semeval-1.139",
pages = "1011--1020",
abstract = "This paper describes SemEval-2022{'}s shared task {``}Explainable Detection of Online Sexism{''}. The fine-grained classification of sexist content plays a major role in building explainable frameworks for online sexism detection. We hypothesize that by encoding dependency information using Graph Convolutional Networks (GCNs) we may capture more stylistic information about sexist contents. Online sexism has the potential to cause significant harm to women who are the targets of such behavior. It not only creates unwelcoming and inaccessible spaces for women online but also perpetuates social asymmetries and injustices. We believed improving the robustness and generalization ability of neural networks during training will allow models to capture different belief distributions for sexism categories. So we proposed adversarial training with GCNs for explainable detection of online sexism. In the end, our proposed method achieved very competitive results in all subtasks and shows that adversarial training of GCNs is a promising method for the explainable detection of online sexism.",
}
| This paper describes SemEval-2022{'}s shared task {``}Explainable Detection of Online Sexism{''}. The fine-grained classification of sexist content plays a major role in building explainable frameworks for online sexism detection. We hypothesize that by encoding dependency information using Graph Convolutional Networks (GCNs) we may capture more stylistic information about sexist contents. Online sexism has the potential to cause significant harm to women who are the targets of such behavior. It not only creates unwelcoming and inaccessible spaces for women online but also perpetuates social asymmetries and injustices. We believed improving the robustness and generalization ability of neural networks during training will allow models to capture different belief distributions for sexism categories. So we proposed adversarial training with GCNs for explainable detection of online sexism. In the end, our proposed method achieved very competitive results in all subtasks and shows that adversarial training of GCNs is a promising method for the explainable detection of online sexism. | [
"Tavan, Ehsan",
"Najafi, Maryam"
] | MarSan at SemEval-2023 Task 10: Can Adversarial Training with help of a Graph Convolutional Network Detect Explainable Sexism? | semeval-1.139 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.140.bib | https://aclanthology.org/2023.semeval-1.140/ | @inproceedings{michail-etal-2023-uzh,
title = "{UZH}{\_}{CL}yp at {S}em{E}val-2023 Task 9: Head-First Fine-Tuning and {C}hat{GPT} Data Generation for Cross-Lingual Learning in Tweet Intimacy Prediction",
author = "Michail, Andrianos and
Konstantinou, Stefanos and
Clematide, Simon",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.140",
doi = "10.18653/v1/2023.semeval-1.140",
pages = "1021--1029",
abstract = "This paper describes the submission of UZH{\_}CLyp for the SemEval 2023 Task 9 {``}Multilingual Tweet Intimacy Analysis. We achieved second-best results in all 10 languages according to the official Pearson{'}s correlation regression evaluation measure. Our cross-lingual transfer learning approach explores the benefits of using a Head-First Fine-Tuning method (HeFiT) that first updates only the regression head parameters and then also updates the pre-trained transformer encoder parameters at a reduced learning rate. Additionally, we study the impact of using a small set of automatically generated examples (in our case, from ChatGPT) for low-resource settings where no human-labeled data is available. Our study shows that HeFiT stabilizes training and consistently improves results for pre-trained models that lack domain adaptation to tweets. Our study also shows a noticeable performance increase in cross-lingual learning when synthetic data is used, confirming the usefulness of current text generation systems to improve zeroshot baseline results. Finally, we examine how possible inconsistencies in the annotated data contribute to cross-lingual interference issues.",
}
| This paper describes the submission of UZH{\_}CLyp for the SemEval 2023 Task 9 {``}Multilingual Tweet Intimacy Analysis. We achieved second-best results in all 10 languages according to the official Pearson{'}s correlation regression evaluation measure. Our cross-lingual transfer learning approach explores the benefits of using a Head-First Fine-Tuning method (HeFiT) that first updates only the regression head parameters and then also updates the pre-trained transformer encoder parameters at a reduced learning rate. Additionally, we study the impact of using a small set of automatically generated examples (in our case, from ChatGPT) for low-resource settings where no human-labeled data is available. Our study shows that HeFiT stabilizes training and consistently improves results for pre-trained models that lack domain adaptation to tweets. Our study also shows a noticeable performance increase in cross-lingual learning when synthetic data is used, confirming the usefulness of current text generation systems to improve zeroshot baseline results. Finally, we examine how possible inconsistencies in the annotated data contribute to cross-lingual interference issues. | [
"Michail, Andrianos",
"Konstantinou, Stefanos",
"Clematide, Simon"
] | UZH_CLyp at SemEval-2023 Task 9: Head-First Fine-Tuning and ChatGPT Data Generation for Cross-Lingual Learning in Tweet Intimacy Prediction | semeval-1.140 | Poster | 2303.01194 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.semeval-1.141.bib | https://aclanthology.org/2023.semeval-1.141/ | @inproceedings{grotzinger-etal-2023-cicl,
title = "{CICL}{\_}{DMS} at {S}em{E}val-2023 Task 11: Learning With Disagreements (Le-Wi-Di)",
author = {Gr{\"o}tzinger, Dennis and
Heuschkel, Simon and
Drews, Matthias},
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.141",
doi = "10.18653/v1/2023.semeval-1.141",
pages = "1030--1036",
abstract = "In this system paper, we describe our submission for the 11th task of SemEval2023: Learning with Disagreements, or Le-Wi-Di for short. In the task, the assumption that there is a single gold label in NLP tasks such as hate speech or misogyny detection is challenged, and instead the opinions of multiple annotators are considered. The goal is instead to capture the agreements/disagreements of the annotators. For our system, we utilize the capabilities of modern large-language models as our backbone and investigate various techniques built on top, such as ensemble learning, multi-task learning, or Gaussian processes. Our final submission shows promising results and we achieve an upper-half finish.",
}
| In this system paper, we describe our submission for the 11th task of SemEval2023: Learning with Disagreements, or Le-Wi-Di for short. In the task, the assumption that there is a single gold label in NLP tasks such as hate speech or misogyny detection is challenged, and instead the opinions of multiple annotators are considered. The goal is instead to capture the agreements/disagreements of the annotators. For our system, we utilize the capabilities of modern large-language models as our backbone and investigate various techniques built on top, such as ensemble learning, multi-task learning, or Gaussian processes. Our final submission shows promising results and we achieve an upper-half finish. | [
"Gr{\\\"o}tzinger, Dennis",
"Heuschkel, Simon",
"Drews, Matthias"
] | CICL_DMS at SemEval-2023 Task 11: Learning With Disagreements (Le-Wi-Di) | semeval-1.141 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.142.bib | https://aclanthology.org/2023.semeval-1.142/ | @inproceedings{zaikis-etal-2023-aristoxenus,
title = "Aristoxenus at {S}em{E}val-2023 Task 4: A Domain-Adapted Ensemble Approach to the Identification of Human Values behind Arguments",
author = "Zaikis, Dimitrios and
Stefanidis, Stefanos D. and
Anagnostopoulos, Konstantinos and
Vlahavas, Ioannis",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.142",
doi = "10.18653/v1/2023.semeval-1.142",
pages = "1037--1043",
abstract = "This paper presents our system for the SemEval-2023 Task 4, which aims to identify human values behind arguments by classifying whether or not an argument draws on a specific category. Our approach leverages a second-phase pre-training method to adapt a RoBERTa Language Model (LM) and tackles the problem using a One-Versus-All strategy. Final predictions are determined by a majority voting module that combines the outputs of an ensemble of three sets of per-label models. We conducted experiments to evaluate the impact of different pre-trained LMs on the task, comparing their performance in both pre-trained and task-adapted settings. Our findings show that fine-tuning the RoBERTa LM on the task-specific dataset improves its performance, outperforming the best-performing baseline BERT approach. Overall, our approach achieved a macro-F1 score of 0.47 on the official test set, demonstrating its potential in identifying human values behind arguments.",
}
| This paper presents our system for the SemEval-2023 Task 4, which aims to identify human values behind arguments by classifying whether or not an argument draws on a specific category. Our approach leverages a second-phase pre-training method to adapt a RoBERTa Language Model (LM) and tackles the problem using a One-Versus-All strategy. Final predictions are determined by a majority voting module that combines the outputs of an ensemble of three sets of per-label models. We conducted experiments to evaluate the impact of different pre-trained LMs on the task, comparing their performance in both pre-trained and task-adapted settings. Our findings show that fine-tuning the RoBERTa LM on the task-specific dataset improves its performance, outperforming the best-performing baseline BERT approach. Overall, our approach achieved a macro-F1 score of 0.47 on the official test set, demonstrating its potential in identifying human values behind arguments. | [
"Zaikis, Dimitrios",
"Stefanidis, Stefanos D.",
"Anagnostopoulos, Konstantinos",
"Vlahavas, Ioannis"
] | Aristoxenus at SemEval-2023 Task 4: A Domain-Adapted Ensemble Approach to the Identification of Human Values behind Arguments | semeval-1.142 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.143.bib | https://aclanthology.org/2023.semeval-1.143/ | @inproceedings{ferrara-etal-2023-augustine,
title = "Augustine of Hippo at {S}em{E}val-2023 Task 4: An Explainable Knowledge Extraction Method to Identify Human Values in Arguments with {S}uper{ASKE}",
author = "Ferrara, Alfio and
Picascia, Sergio and
Rocchetti, Elisabetta",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.143",
doi = "10.18653/v1/2023.semeval-1.143",
pages = "1044--1053",
abstract = "In this paper we present and discuss the results achieved by the {``}Augustine of Hippo{''} team at SemEval-2023 Task 4 about human value detection. In particular, we provide a quantitative and qualitative reviews of the results obtained by SuperASKE, discussing respectively performance metrics and classification errors. Finally, we present our main contribution: an explainable and unsupervised approach mapping arguments to concepts, followed by a supervised classification model mapping concepts to human values.",
}
| In this paper we present and discuss the results achieved by the {``}Augustine of Hippo{''} team at SemEval-2023 Task 4 about human value detection. In particular, we provide a quantitative and qualitative reviews of the results obtained by SuperASKE, discussing respectively performance metrics and classification errors. Finally, we present our main contribution: an explainable and unsupervised approach mapping arguments to concepts, followed by a supervised classification model mapping concepts to human values. | [
"Ferrara, Alfio",
"Picascia, Sergio",
"Rocchetti, Elisabetta"
] | Augustine of Hippo at SemEval-2023 Task 4: An Explainable Knowledge Extraction Method to Identify Human Values in Arguments with SuperASKE | semeval-1.143 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.144.bib | https://aclanthology.org/2023.semeval-1.144/ | @inproceedings{ronningstad-2023-uio,
title = "{UIO} at {S}em{E}val-2023 Task 12: Multilingual fine-tuning for sentiment classification in low-resource Languages",
author = "R{\o}nningstad, Egil",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.144",
doi = "10.18653/v1/2023.semeval-1.144",
pages = "1054--1060",
abstract = "Our contribution to the 2023 AfriSenti-SemEval shared task 12: Sentiment Analysis for African Languages, provides insight into how a multilingual large language model can be a resource for sentiment analysis in languages not seen during pretraining. The shared task provides datasets of a variety of African languages from different language families. The languages are to various degrees related to languages used during pretraining, and the language data contain various degrees of code-switching. We experiment with both monolingual and multilingual datasets for the final fine-tuning, and find that with the provided datasets that contain samples in the thousands, monolingual fine-tuning yields the best results.",
}
| Our contribution to the 2023 AfriSenti-SemEval shared task 12: Sentiment Analysis for African Languages, provides insight into how a multilingual large language model can be a resource for sentiment analysis in languages not seen during pretraining. The shared task provides datasets of a variety of African languages from different language families. The languages are to various degrees related to languages used during pretraining, and the language data contain various degrees of code-switching. We experiment with both monolingual and multilingual datasets for the final fine-tuning, and find that with the provided datasets that contain samples in the thousands, monolingual fine-tuning yields the best results. | [
"R{\\o}nningstad, Egil"
] | UIO at SemEval-2023 Task 12: Multilingual fine-tuning for sentiment classification in low-resource Languages | semeval-1.144 | Poster | 2304.14189 | [
"https://github.com/egilron/afrisenti-semeval-2023"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.semeval-1.145.bib | https://aclanthology.org/2023.semeval-1.145/ | @inproceedings{garcia-diaz-etal-2023-umuteam-semeval,
title = "{UMUT}eam at {S}em{E}val-2023 Task 11: Ensemble Learning applied to Binary Supervised Classifiers with disagreements",
author = "Garc{\'\i}a-D{\'\i}az, Jos{\'e} Antonio and
Pan, Ronghao and
Alcar{\'a}z-M{\'a}rmol, Gema and
Mar{\'\i}n-P{\'e}rez, Mar{\'\i}a Jos{\'e} and
Valencia-Garc{\'\i}a, Rafael",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.145",
doi = "10.18653/v1/2023.semeval-1.145",
pages = "1061--1066",
abstract = "This paper describes the participation of the UMUTeam in the Learning With Disagreements (Le-Wi-Di) shared task proposed at SemEval 2023, which objective is the development of supervised automatic classifiers that consider, during training, the agreements and disagreements among the annotators of the datasets. Specifically, this edition includes a multilingual dataset. Our proposal is grounded on the development of ensemble learning classifiers that combine the outputs of several Large Language Models. Our proposal ranked position 18 of a total of 30 participants. However, our proposal did not incorporate the information about the disagreements. In contrast, we compare the performance of building several classifiers for each dataset separately with a merged dataset.",
}
| This paper describes the participation of the UMUTeam in the Learning With Disagreements (Le-Wi-Di) shared task proposed at SemEval 2023, which objective is the development of supervised automatic classifiers that consider, during training, the agreements and disagreements among the annotators of the datasets. Specifically, this edition includes a multilingual dataset. Our proposal is grounded on the development of ensemble learning classifiers that combine the outputs of several Large Language Models. Our proposal ranked position 18 of a total of 30 participants. However, our proposal did not incorporate the information about the disagreements. In contrast, we compare the performance of building several classifiers for each dataset separately with a merged dataset. | [
"Garc{\\'\\i}a-D{\\'\\i}az, Jos{\\'e} Antonio",
"Pan, Ronghao",
"Alcar{\\'a}z-M{\\'a}rmol, Gema",
"Mar{\\'\\i}n-P{\\'e}rez, Mar{\\'\\i}a Jos{\\'e}",
"Valencia-Garc{\\'\\i}a, Rafael"
] | UMUTeam at SemEval-2023 Task 11: Ensemble Learning applied to Binary Supervised Classifiers with disagreements | semeval-1.145 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.146.bib | https://aclanthology.org/2023.semeval-1.146/ | @inproceedings{tailor-mamidi-2023-matt,
title = "Matt {B}ai at {S}em{E}val-2023 Task 5: Clickbait spoiler classification via {BERT}",
author = "Tailor, Nukit and
Mamidi, Radhika",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.146",
doi = "10.18653/v1/2023.semeval-1.146",
pages = "1067--1068",
abstract = "The Clickbait Spoiling shared task aims at tackling two aspects of spoiling: classifying the spoiler type based on its length and generating the spoiler. This paper focuses on the task of classifying the spoiler type. Better classification of the spoiler type would eventually help in generating a better spoiler for the post. We use BERT-base (cased) to classify the clickbait posts. The model achieves a balanced accuracy of 0.63 as we give only the post content as the input to our model instead of the concatenation of the post title and post content to find out the differences that the post title might be bringing in.",
}
| The Clickbait Spoiling shared task aims at tackling two aspects of spoiling: classifying the spoiler type based on its length and generating the spoiler. This paper focuses on the task of classifying the spoiler type. Better classification of the spoiler type would eventually help in generating a better spoiler for the post. We use BERT-base (cased) to classify the clickbait posts. The model achieves a balanced accuracy of 0.63 as we give only the post content as the input to our model instead of the concatenation of the post title and post content to find out the differences that the post title might be bringing in. | [
"Tailor, Nukit",
"Mamidi, Radhika"
] | Matt Bai at SemEval-2023 Task 5: Clickbait spoiler classification via BERT | semeval-1.146 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.147.bib | https://aclanthology.org/2023.semeval-1.147/ | @inproceedings{pickard-etal-2023-shefnlp,
title = "shefnlp at {S}em{E}val-2023 Task 10: Compute-Efficient Category Adapters",
author = "Pickard, Thomas and
Loakman, Tyler and
Pandya, Mugdha",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.147",
doi = "10.18653/v1/2023.semeval-1.147",
pages = "1069--1075",
abstract = "As social media platforms grow, so too does the volume of hate speech and negative sentiment expressed towards particular social groups. In this paper, we describe our approach to SemEval-2023 Task 10, involving the detection and classification of online sexism (abuse directed towards women), with fine-grained categorisations intended to facilitate the development of a more nuanced understanding of the ideologies and processes through which online sexism is expressed. We experiment with several approaches involving language model finetuning, class-specific adapters, and pseudo-labelling. Our best-performing models involve the training of adapters specific to each subtask category (combined via fusion layers) using a weighted loss function, in addition to performing naive pseudo-labelling on a large quantity of unlabelled data. We successfully outperform the baseline models on all 3 subtasks, placing 56th (of 84) on Task A, 43rd (of 69) on Task B,and 37th (of 63) on Task C.",
}
| As social media platforms grow, so too does the volume of hate speech and negative sentiment expressed towards particular social groups. In this paper, we describe our approach to SemEval-2023 Task 10, involving the detection and classification of online sexism (abuse directed towards women), with fine-grained categorisations intended to facilitate the development of a more nuanced understanding of the ideologies and processes through which online sexism is expressed. We experiment with several approaches involving language model finetuning, class-specific adapters, and pseudo-labelling. Our best-performing models involve the training of adapters specific to each subtask category (combined via fusion layers) using a weighted loss function, in addition to performing naive pseudo-labelling on a large quantity of unlabelled data. We successfully outperform the baseline models on all 3 subtasks, placing 56th (of 84) on Task A, 43rd (of 69) on Task B,and 37th (of 63) on Task C. | [
"Pickard, Thomas",
"Loakman, Tyler",
"P",
"ya, Mugdha"
] | shefnlp at SemEval-2023 Task 10: Compute-Efficient Category Adapters | semeval-1.147 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.148.bib | https://aclanthology.org/2023.semeval-1.148/ | @inproceedings{cui-2023-xiacui,
title = "xiacui at {S}em{E}val-2023 Task 11: Learning a Model in Mixed-Annotator Datasets Using Annotator Ranking Scores as Training Weights",
author = "Cui, Xia",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.148",
doi = "10.18653/v1/2023.semeval-1.148",
pages = "1076--1084",
abstract = "This paper describes the development of a system for SemEval-2023 Shared Task 11 on Learning with Disagreements (Le-Wi-Di). Labelled data plays a vital role in the development of machine learning systems. The human-annotated labels are usually considered the truth for training or validation. To obtain truth labels, a traditional way is to hire domain experts to perform an expensive annotation process. Crowd-sourcing labelling is comparably cheap, whereas it raises a question on the reliability of annotators. A common strategy in a mixed-annotator dataset with various sets of annotators for each instance is to aggregate the labels among multiple groups of annotators to obtain the truth labels. However, these annotators might not reach an agreement, and there is no guarantee of the reliability of these labels either. With further problems caused by human label variation, subjective tasks usually suffer from the different opinions provided by the annotators. In this paper, we propose two simple heuristic functions to compute the annotator ranking scores, namely AnnoHard and AnnoSoft, based on the hard labels (i.e., aggregative labels) and soft labels (i.e., cross-entropy values). By introducing these scores, we adjust the weights of the training instances to improve the learning with disagreements among the annotators.",
}
| This paper describes the development of a system for SemEval-2023 Shared Task 11 on Learning with Disagreements (Le-Wi-Di). Labelled data plays a vital role in the development of machine learning systems. The human-annotated labels are usually considered the truth for training or validation. To obtain truth labels, a traditional way is to hire domain experts to perform an expensive annotation process. Crowd-sourcing labelling is comparably cheap, whereas it raises a question on the reliability of annotators. A common strategy in a mixed-annotator dataset with various sets of annotators for each instance is to aggregate the labels among multiple groups of annotators to obtain the truth labels. However, these annotators might not reach an agreement, and there is no guarantee of the reliability of these labels either. With further problems caused by human label variation, subjective tasks usually suffer from the different opinions provided by the annotators. In this paper, we propose two simple heuristic functions to compute the annotator ranking scores, namely AnnoHard and AnnoSoft, based on the hard labels (i.e., aggregative labels) and soft labels (i.e., cross-entropy values). By introducing these scores, we adjust the weights of the training instances to improve the learning with disagreements among the annotators. | [
"Cui, Xia"
] | xiacui at SemEval-2023 Task 11: Learning a Model in Mixed-Annotator Datasets Using Annotator Ranking Scores as Training Weights | semeval-1.148 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.149.bib | https://aclanthology.org/2023.semeval-1.149/ | @inproceedings{hancharova-etal-2023-team,
title = "Team {ISCL}{\_}{WINTER} at {S}em{E}val-2023 Task 12:{A}fri{S}enti-{S}em{E}val: Sentiment Analysis for Low-resource {A}frican Languages using {T}witter Dataset",
author = "Hancharova, Alina and
Wang, John and
Kumar, Mayank",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.149",
doi = "10.18653/v1/2023.semeval-1.149",
pages = "1085--1089",
abstract = "This paper presents a study on the effectiveness of various approaches for addressing the challenge of multilingual sentiment analysis in low-resource African languages. . The approaches evaluated in the study include Support Vector Machines (SVM), translation, and an ensemble of pre-trained multilingual sentimental models methods. The paper provides a detailed analysis of the performance of each approach based on experimental results. In our findings, we suggest that the ensemble method is the most effective with an F1-Score of 0.68 on the final testing. This system ranked 19 out of 33 participants in the competition.",
}
| This paper presents a study on the effectiveness of various approaches for addressing the challenge of multilingual sentiment analysis in low-resource African languages. The approaches evaluated in the study include Support Vector Machines (SVM), translation, and an ensemble of pre-trained multilingual sentimental models methods. The paper provides a detailed analysis of the performance of each approach based on experimental results. In our findings, we suggest that the ensemble method is the most effective with an F1-Score of 0.68 on the final testing. This system ranked 19 out of 33 participants in the competition. | [
"Hancharova, Alina",
"Wang, John",
"Kumar, Mayank"
] | Team ISCL_WINTER at SemEval-2023 Task 12:AfriSenti-SemEval: Sentiment Analysis for Low-resource African Languages using Twitter Dataset | semeval-1.149 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.150.bib | https://aclanthology.org/2023.semeval-1.150/ | @inproceedings{wangsadirdja-etal-2023-jack,
title = "Jack-Ryder at {S}em{E}val-2023 Task 5: Zero-Shot Clickbait Spoiling by Rephrasing Titles as Questions",
author = "Wangsadirdja, Dirk and
Pfister, Jan and
Kobs, Konstantin and
Hotho, Andreas",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.150",
doi = "10.18653/v1/2023.semeval-1.150",
pages = "1090--1095",
abstract = "In this paper, we describe our approach to the clickbait spoiling task of SemEval 2023.The core idea behind our system is to leverage pre-trained models capable of Question Answering (QA) to extract the spoiler from article texts based on the clickbait title without any task-specific training. Since oftentimes, these titles are not phrased as questions, we automatically rephrase the clickbait titles as questions in order to better suit the pretraining task of the QA-capable models. Also, to fit as much relevant context into the model{'}s limited input size as possible, we propose to reorder the sentences by their relevance using a semantic similarity model. Finally, we evaluate QA as well as text generation models (via prompting) to extract the spoiler from the text. Based on the validation data, our final model selects each of these components depending on the spoiler type and achieves satisfactory zero-shot results. The ideas described in this paper can easily be applied in fine-tuning settings.",
}
| In this paper, we describe our approach to the clickbait spoiling task of SemEval 2023.The core idea behind our system is to leverage pre-trained models capable of Question Answering (QA) to extract the spoiler from article texts based on the clickbait title without any task-specific training. Since oftentimes, these titles are not phrased as questions, we automatically rephrase the clickbait titles as questions in order to better suit the pretraining task of the QA-capable models. Also, to fit as much relevant context into the model{'}s limited input size as possible, we propose to reorder the sentences by their relevance using a semantic similarity model. Finally, we evaluate QA as well as text generation models (via prompting) to extract the spoiler from the text. Based on the validation data, our final model selects each of these components depending on the spoiler type and achieves satisfactory zero-shot results. The ideas described in this paper can easily be applied in fine-tuning settings. | [
"Wangsadirdja, Dirk",
"Pfister, Jan",
"Kobs, Konstantin",
"Hotho, Andreas"
] | Jack-Ryder at SemEval-2023 Task 5: Zero-Shot Clickbait Spoiling by Rephrasing Titles as Questions | semeval-1.150 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.151.bib | https://aclanthology.org/2023.semeval-1.151/ | @inproceedings{khanchandani-etal-2023-mlmodeler5,
title = "{MLM}odeler5 at {S}em{E}val-2023 Task 3: Detecting the Category and the Framing Techniques in Online News in a Multi-lingual Setup",
author = "Khanchandani, Arjun and
Jain, Nitansh and
Bedi, Jatin",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.151",
doi = "10.18653/v1/2023.semeval-1.151",
pages = "1096--1101",
abstract = "System Description Paper for Task 3 Subtask 1 and 2 of Semeval 2023. The paper describes our approach to handling the News Genre Categorisation and Framing Detection using RoBERTa and ALBERT models.",
}
| System Description Paper for Task 3 Subtask 1 and 2 of Semeval 2023. The paper describes our approach to handling the News Genre Categorisation and Framing Detection using RoBERTa and ALBERT models. | [
"Khanch",
"ani, Arjun",
"Jain, Nitansh",
"Bedi, Jatin"
] | MLModeler5 at SemEval-2023 Task 3: Detecting the Category and the Framing Techniques in Online News in a Multi-lingual Setup | semeval-1.151 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.152.bib | https://aclanthology.org/2023.semeval-1.152/ | @inproceedings{padmavathi-2023-ds,
title = "{DS} at {S}em{E}val-2023 Task 10: Explaining Online Sexism using Transformer based Approach",
author = "Padmavathi, Madisetty",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.152",
doi = "10.18653/v1/2023.semeval-1.152",
pages = "1102--1106",
abstract = "In this paper, I describe the approach used in the SemEval 2023 - Task 10 Explainable Detection of Online Sexism (EDOS) competition (Kirk et al., 2023). I use different transformermodels, including BERT and RoBERTa which were fine-tuned on the EDOS dataset to classify text into different categories of sexism. I participated in three subtasks: subtask A is to classify given text as either sexist or not, while subtask B is to identify the specific category of sexism, such as (1) threats, (2) derogation, (3) animosity, (4) prejudiced discussions. Finally, subtask C involves predicting a finegrained vector representation of sexism, which included information about the severity, target and type of sexism present in the text. The use of transformer models allows the system to learn from the input data and make predictions on unseen text. By fine-tuning the models on the EDOS dataset, the system can improve its performance on the specific task of detecting online sexism. I got the following macro F1 scores: subtask A:77.16, subtask B: 46.11, and subtask C: 30.2.",
}
| In this paper, I describe the approach used in the SemEval 2023 - Task 10 Explainable Detection of Online Sexism (EDOS) competition (Kirk et al., 2023). I use different transformermodels, including BERT and RoBERTa which were fine-tuned on the EDOS dataset to classify text into different categories of sexism. I participated in three subtasks: subtask A is to classify given text as either sexist or not, while subtask B is to identify the specific category of sexism, such as (1) threats, (2) derogation, (3) animosity, (4) prejudiced discussions. Finally, subtask C involves predicting a finegrained vector representation of sexism, which included information about the severity, target and type of sexism present in the text. The use of transformer models allows the system to learn from the input data and make predictions on unseen text. By fine-tuning the models on the EDOS dataset, the system can improve its performance on the specific task of detecting online sexism. I got the following macro F1 scores: subtask A:77.16, subtask B: 46.11, and subtask C: 30.2. | [
"Padmavathi, Madisetty"
] | DS at SemEval-2023 Task 10: Explaining Online Sexism using Transformer based Approach | semeval-1.152 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.153.bib | https://aclanthology.org/2023.semeval-1.153/ | @inproceedings{lupancu-etal-2023-fii,
title = "{FII}{\_}{B}etter at {S}em{E}val-2023 Task 2: {M}ulti{C}o{NER} {II} Multilingual Complex Named Entity Recognition",
author = "Lupancu, Viorica-Camelia and
Platica, Alexandru-Gabriel and
Rosu, Cristian-Mihai and
Gifu, Daniela and
Trandabat, Diana",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.153",
doi = "10.18653/v1/2023.semeval-1.153",
pages = "1107--1113",
abstract = "This task focuses on identifying complex named entities (NEs) in several languages. In the context of SemEval-2023 competition, our team presents an exploration of a base transformer model{'}s capabilities regarding the task, focused more specifically on five languages (English, Spanish, Swedish, German, Italian). We take DistilBERT and BERT as two examples of basic transformer models, using DistilBERT as a baseline and BERT as the platform to create an improved model. The dataset that we are using, MultiCoNER II, is a large multilingual dataset used for NER, that covers domains like: Wiki sentences, questions and search queries across 12 languages. This dataset contains 26M tokens and it is assembled from public resources. MultiCoNER II defines a NER tag-set with 6 classes and 67 tags. We have managed to get moderate results in the English track (we ranked 17th out of 34), while our results in the other tracks could be further improved in the future (overall third to last).",
}
| This task focuses on identifying complex named entities (NEs) in several languages. In the context of SemEval-2023 competition, our team presents an exploration of a base transformer model{'}s capabilities regarding the task, focused more specifically on five languages (English, Spanish, Swedish, German, Italian). We take DistilBERT and BERT as two examples of basic transformer models, using DistilBERT as a baseline and BERT as the platform to create an improved model. The dataset that we are using, MultiCoNER II, is a large multilingual dataset used for NER, that covers domains like: Wiki sentences, questions and search queries across 12 languages. This dataset contains 26M tokens and it is assembled from public resources. MultiCoNER II defines a NER tag-set with 6 classes and 67 tags. We have managed to get moderate results in the English track (we ranked 17th out of 34), while our results in the other tracks could be further improved in the future (overall third to last). | [
"Lupancu, Viorica-Camelia",
"Platica, Alex",
"ru-Gabriel",
"Rosu, Cristian-Mihai",
"Gifu, Daniela",
"Tr",
"abat, Diana"
] | FII_Better at SemEval-2023 Task 2: MultiCoNER II Multilingual Complex Named Entity Recognition | semeval-1.153 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.154.bib | https://aclanthology.org/2023.semeval-1.154/ | @inproceedings{mahibha-etal-2023-brainstormers,
title = "Brainstormers{\_}msec at {S}em{E}val-2023 Task 10: Detection of sexism related comments in social media using deep learning",
author = "Mahibha, C. Jerin and
Swaathi, C. M and
Jeevitha, R. and
Martina, R. Princy and
Thenmozhi, Durairaj",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.154",
doi = "10.18653/v1/2023.semeval-1.154",
pages = "1114--1120",
abstract = "Social media is the media through which people share their thoughts and opinions. This has both its pros and cons which depends on the type of information being conveyed. If any information conveyed over social media hurts or affects a person, such information can be removed as it may disturb their mental health and may decrease their self confidence. During the last decade, hateful and sexist content towards women in being increasingly spread on social networks. The exposure to sexist speech has serious consequences to women{'}s life and limits their freedom of speech. Sexism is expressed in very different forms: it includes subtle stereotypes and attitudes that, although frequently unnoticed, are extremely harmful for both women and society. Sexist comments have a major impact on women being subjected to it. We as a team participated in the shared task Explainable Detection of Online Sexism (EDOS) at SemEval 2023 and have proposed a model which identifies the sexist comments and its type from English social media posts using the data set shared for the task. Different transformer model like BERT , DistilBERT and RoBERT are used by the proposed model for implementing all the three tasks shared by EDOS. On using the BERT model, macro F1 score of 0.8073, 0.5876 and 0.3729 are achieved for Task A, Task B and Task C respectively.",
}
| Social media is the media through which people share their thoughts and opinions. This has both its pros and cons which depends on the type of information being conveyed. If any information conveyed over social media hurts or affects a person, such information can be removed as it may disturb their mental health and may decrease their self confidence. During the last decade, hateful and sexist content towards women in being increasingly spread on social networks. The exposure to sexist speech has serious consequences to women{'}s life and limits their freedom of speech. Sexism is expressed in very different forms: it includes subtle stereotypes and attitudes that, although frequently unnoticed, are extremely harmful for both women and society. Sexist comments have a major impact on women being subjected to it. We as a team participated in the shared task Explainable Detection of Online Sexism (EDOS) at SemEval 2023 and have proposed a model which identifies the sexist comments and its type from English social media posts using the data set shared for the task. Different transformer model like BERT , DistilBERT and RoBERT are used by the proposed model for implementing all the three tasks shared by EDOS. On using the BERT model, macro F1 score of 0.8073, 0.5876 and 0.3729 are achieved for Task A, Task B and Task C respectively. | [
"Mahibha, C. Jerin",
"Swaathi, C. M",
"Jeevitha, R.",
"Martina, R. Princy",
"Thenmozhi, Durairaj"
] | Brainstormers_msec at SemEval-2023 Task 10: Detection of sexism related comments in social media using deep learning | semeval-1.154 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |