Dataset schema (field name: type, observed values):

entry_type: string, 4 distinct values
citation_key: string, length 10 to 110
title: string, length 6 to 276
editor: string, 723 distinct values
month: string, 69 distinct values
year: date, 1963-01-01 to 2022-01-01
address: string, 202 distinct values
publisher: string, 41 distinct values
url: string, length 34 to 62
author: string, length 6 to 2.07k
booktitle: string, 861 distinct values
pages: string, length 1 to 12
abstract: string, length 302 to 2.4k
journal: string, 5 distinct values
volume: string, 24 distinct values
doi: string, length 20 to 39
n: string, 3 distinct values
wer: string, 1 distinct value
uas: null
language: string, 3 distinct values
isbn: string, 34 distinct values
recall: null
number: string, 8 distinct values
a: null
b: null
c: null
k: null
f1: string, 4 distinct values
r: string, 2 distinct values
mci: string, 1 distinct value
p: string, 2 distinct values
sd: string, 1 distinct value
female: string, 0 distinct values
m: string, 0 distinct values
food: string, 1 distinct value
f: string, 1 distinct value
note: string, 20 distinct values
__index_level_0__: int64, 22k to 106k
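A minimal sketch of how a dump with this schema could be loaded and inspected, assuming it is hosted as a Hugging Face dataset; the identifier `someuser/acl-anthology-bib` is a placeholder, not a real dataset name:

```python
# Load the dataset and peek at a row. The features mirror the schema above.
from datasets import load_dataset

ds = load_dataset("someuser/acl-anthology-bib", split="train")  # hypothetical ID

print(ds.features["entry_type"])
row = ds[0]
print(row["citation_key"], "->", row["title"])
```

Most of the trailing columns (wer, uas, recall, a, b, c, k, and so on) are sparse metric fields and are None for nearly every bibliographic row.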
% __index_level_0__: 23,156
@inproceedings{parisot-zavrel-2022-multi,
    title = "Multi-objective Representation Learning for Scientific Document Retrieval",
    author = "Parisot, Mathias and Zavrel, Jakub",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.9/",
    pages = "80--88",
    abstract = "Existing dense retrieval models for scientific documents have been optimized for either retrieval by short queries, or for document similarity, but usually not for both. In this paper, we explore the space of combining multiple objectives to achieve a single representation model that presents a good balance between both modes of dense retrieval, combining the relevance judgements from MS MARCO with the citation similarity of SPECTER, and the self-supervised objective of independent cropping. We also consider the addition of training data from document co-citation in a sentence context and domain-specific synthetic data. We show that combining multiple objectives yields models that generalize well across different benchmark tasks, improving up to 73{\%} over models trained on a single objective.",
}
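Each row maps one-to-one onto a BibTeX entry, as in the records here. A sketch of rendering a row dict back into BibTeX source, assuming the field names from the schema above (`to_bibtex` and `FIELD_ORDER` are illustrative helpers, not part of any library):

```python
# Render one dataset row as a BibTeX entry; ordering mirrors
# ACL Anthology .bib files. `row` is any mapping like ds[0] from
# the previous sketch; columns that are None are simply skipped.
FIELD_ORDER = [
    "title", "author", "editor", "booktitle", "month", "year",
    "address", "publisher", "url", "pages", "abstract", "language",
]

def to_bibtex(row: dict) -> str:
    lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
    for field in FIELD_ORDER:
        value = row.get(field)
        if value is None:
            continue
        # month values like "oct" are BibTeX macros and stay unquoted
        if field == "month":
            lines.append(f"    month = {value},")
        else:
            lines.append(f'    {field} = "{value}",')
    lines.append("}")
    return "\n".join(lines)
```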
% __index_level_0__: 23,157
@inproceedings{kazi-etal-2022-visualisation,
    title = "Visualisation Methods for Diachronic Semantic Shift",
    author = "Kazi, Raef and Amato, Alessandra and Wang, Shenghui and Bucur, Doina",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.10/",
    pages = "89--94",
    abstract = "The meaning and usage of a concept or a word changes over time. These diachronic semantic shifts reflect the change of societal and cultural consensus as well as the evolution of science. The availability of large-scale corpora and recent success in language models have enabled researchers to analyse semantic shifts in great detail. However, current research lacks intuitive ways of presenting diachronic semantic shifts and making them comprehensive. In this paper, we study the PubMed dataset and compute semantic shifts across six decades. We develop three visualisation methods that can show, given a root word: the temporal change in its linguistic context, word re-occurrence, degree of similarity, time continuity, and separate trends per publisher location. We also propose a taxonomy that classifies visualisation methods for diachronic semantic shifts with respect to different purposes.",
}
% __index_level_0__: 23,158
@inproceedings{ricci-etal-2022-unsupervised,
    title = "Unsupervised Partial Sentence Matching for Cited Text Identification",
    author = "Ricci, Kathryn and Chang, Haw-Shiuan and Goyal, Purujit and McCallum, Andrew",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.11/",
    pages = "95--104",
    abstract = "Given a citation in the body of a research paper, cited text identification aims to find the sentences in the cited paper that are most relevant to the citing sentence. The task is fundamentally one of sentence matching, where affinity is often assessed by a cosine similarity between sentence embeddings. However, (a) sentences may not be well-represented by a single embedding because they contain multiple distinct semantic aspects, and (b) good matches may not require a strong match in all aspects. To overcome these limitations, we propose a simple and efficient unsupervised method for cited text identification that adapts an asymmetric similarity measure to allow partial matches of multiple aspects in both sentences. On the CL-SciSumm dataset we find that our method outperforms a baseline symmetric approach, and, surprisingly, also outperforms all supervised and unsupervised systems submitted to past editions of CL-SciSumm Shared Task 1a.",
}
% __index_level_0__: 23,159
@inproceedings{toney-dunham-2022-multi,
    title = "Multi-label Classification of Scientific Research Documents Across Domains and Languages",
    author = "Toney, Autumn and Dunham, James",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.12/",
    pages = "105--114",
    abstract = "Automatically organizing scholarly literature is a necessary and challenging task. By assigning scientific research publications key concepts, researchers, policymakers, and the general public are able to search for and discover relevant research literature. The organization of scientific research evolves with new discoveries and publications, requiring an up-to-date and scalable text classification model. Additionally, scientific research publications benefit from multi-label classification, particularly with more fine-grained sub-domains. Prior work has focused on classifying scientific publications from one research area (e.g., computer science), referencing static concept descriptions, and implementing an English-only classification model. We propose a multi-label classification model that can be implemented in non-English languages, across all of scientific literature, with updatable concept descriptions.",
}
% __index_level_0__: 23,160
@inproceedings{yang-wan-2022-investigating,
    title = "Investigating Metric Diversity for Evaluating Long Document Summarisation",
    author = "Yang, Cai and Wan, Stephen",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.13/",
    pages = "115--125",
    abstract = "Long document summarisation, a challenging summarisation scenario, is the focus of the recently proposed LongSumm shared task. One of the limitations of this shared task has been its use of a single family of metrics for evaluation (the ROUGE metrics). In contrast, other fields, like text generation, employ multiple metrics. We replicated the LongSumm evaluation using multiple test set samples (vs. the single test set of the official shared task) and investigated how different metrics might complement each other in this evaluation framework. We show that under this more rigorous evaluation, (1) some of the key learnings from LongSumm 2020 and 2021 still hold, but the relative ranking of systems changes; (2) the use of additional metrics reveals additional high-quality summaries missed by ROUGE; and (3) SPICE is a candidate metric for summarisation evaluation for LongSumm.",
}
% __index_level_0__: 23,161
@inproceedings{zhuang-etal-2022-exploiting,
    title = "Exploiting Unary Relations with Stacked Learning for Relation Extraction",
    author = "Zhuang, Yuan and Riloff, Ellen and Wagstaff, Kiri L. and Francis, Raymond and Golombek, Matthew P. and Tamppari, Leslie K.",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.14/",
    pages = "126--137",
    abstract = "Relation extraction models typically cast the problem of determining whether there is a relation between a pair of entities as a single decision. However, these models can struggle with long or complex language constructions in which two entities are not directly linked, as is often the case in scientific publications. We propose a novel approach that decomposes a binary relation into two unary relations that capture each argument's role in the relation separately. We create a stacked learning model that incorporates information from unary and binary relation extractors to determine whether a relation holds between two entities. We present experimental results showing that this approach outperforms several competitive relation extractors on a new corpus of planetary science publications as well as a benchmark dataset in the biology domain.",
}
% __index_level_0__: 23,162
@inproceedings{ebadi-etal-2022-mitigating,
    title = "Mitigating Data Shift of Biomedical Research Articles for Information Retrieval and Semantic Indexing",
    author = "Ebadi, Nima and Rios, Anthony and Najafirad, Peyman",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.15/",
    pages = "138--151",
    abstract = "Researchers have explored novel methods for both semantic indexing and information retrieval of biomedical research articles. However, most solutions treat the two tasks independently, even though they are related: for instance, semantic indexes are generally used to filter results from an information retrieval system. Hence, one task can potentially improve the performance of models trained for the other. Thus, this study proposes a unified retriever-ranker-based model to tackle the tasks of information retrieval (IR) and semantic indexing (SI). In particular, our proposed model can adapt to rapid shifts in scientific research. Our results show that the model effectively leverages task similarity to improve robustness to dataset shift. For SI, the Micro F1 score increases by 8{\%} and the LCA-F score improves by 5{\%}. For IR, the MAP increases by 5{\%} on average.",
}
% __index_level_0__: 23,163
@inproceedings{yamauchi-etal-2022-japanese,
    title = "A {J}apanese Masked Language Model for Academic Domain",
    author = "Yamauchi, Hiroki and Kajiwara, Tomoyuki and Katsurai, Marie and Ohmukai, Ikki and Ninomiya, Takashi",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.16/",
    pages = "152--157",
    abstract = "We release a pretrained Japanese masked language model for an academic domain. Pretrained masked language models have recently improved the performance of various natural language processing applications. In domains such as medical and academic, which include a lot of technical terms, domain-specific pretraining is effective. While domain-specific masked language models for medical and SNS domains are widely used in Japanese, along with domain-independent ones, pretrained models specific to the academic domain are not publicly available. In this study, we pretrained a RoBERTa-based Japanese masked language model on paper abstracts from the academic database CiNii Articles. Experimental results on Japanese text classification in the academic domain revealed the effectiveness of the proposed model over existing pretrained models.",
}
% __index_level_0__: 23,164
@inproceedings{berezin-batura-2022-named,
    title = "Named Entity Inclusion in Abstractive Text Summarization",
    author = "Berezin, Sergey and Batura, Tatiana",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.17/",
    pages = "158--162",
    abstract = "We address named entity omission, a drawback of many current abstractive text summarizers. We suggest a custom pretraining objective to enhance the model's attention on the named entities in a text. First, a RoBERTa-based named entity recognition model is trained to determine the named entities in the text. This model is then used to mask named entities in the text, and a BART model is trained to reconstruct them. Next, the BART model is fine-tuned on the summarization task. Our experiments showed that this pretraining approach drastically improves named entity inclusion precision and recall metrics.",
}
% __index_level_0__: 23,165
@inproceedings{rehman-etal-2022-named,
    title = "Named Entity Recognition Based Automatic Generation of Research Highlights",
    author = "Rehman, Tohida and Sanyal, Debarshi Kumar and Majumder, Prasenjit and Chattopadhyay, Samiran",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.18/",
    pages = "163--169",
    abstract = "A scientific paper is traditionally prefaced by an abstract that summarizes the paper. Recently, research highlights that focus on the main findings of the paper have emerged as a complementary summary in addition to an abstract. However, highlights are not yet as common as abstracts, and are absent in many papers. In this paper, we aim to automatically generate research highlights using different sections of a research paper as input. We investigate whether the use of named entity recognition on the input improves the quality of the generated highlights. In particular, we have used two deep learning-based models: the first is a pointer-generator network, and the second augments the first model with a coverage mechanism. We then augment each of the above models with named entity recognition features. The proposed method can be used to produce highlights for papers with missing highlights. Our experiments show that adding named entity information improves the performance of the deep learning-based summarizers in terms of ROUGE, METEOR and BERTScore measures.",
}
% __index_level_0__: 23,166
@inproceedings{arita-etal-2022-citation,
    title = "Citation Sentence Generation Leveraging the Content of Cited Papers",
    author = "Arita, Akito and Sugiyama, Hiroaki and Dohsaka, Kohji and Tanaka, Rikuto and Taira, Hirotoshi",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.19/",
    pages = "170--174",
    abstract = "We address automatic citation sentence generation, which reduces the burden of writing scientific papers. For highly accurate citation sentence generation, appropriate language must be learned using information such as the relationship between the citing source and the cited paper, as well as the context in which the paper is cited. Although the abstracts of papers have been used for generation in the past, they often contain information extraneous to the citation sentence, which might negatively impact the generation of citation sentences. Therefore, this study attempts to learn a highly accurate citation sentence generation model using sentences from cited articles that resemble the sentence preceding the citation location, thereby utilizing information that is more useful for citation sentence generation.",
}
% __index_level_0__: 23,167
@inproceedings{wang-etal-2022-overview,
    title = "Overview of {MSLR}2022: A Shared Task on Multi-document Summarization for Literature Reviews",
    author = "Wang, Lucy Lu and DeYoung, Jay and Wallace, Byron",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.20/",
    pages = "175--180",
    abstract = "We provide an overview of the MSLR2022 shared task on multi-document summarization for literature reviews. The shared task was hosted at the Third Scholarly Document Processing (SDP) Workshop at COLING 2022. For this task, we provided data consisting of gold summaries extracted from review papers along with the groups of input abstracts that were synthesized into these summaries, split into two subtasks. In total, six teams participated, making 10 public submissions, 6 to the Cochrane subtask and 4 to the MS{\textasciicircum}2 subtask. The top scoring systems reported over 2 points ROUGE-L improvement on the Cochrane subtask, though performance improvements are not consistently reported across all automated evaluation metrics; qualitative examination of the results also suggests the inadequacy of current evaluation metrics for capturing factuality and consistency on this task. Significant work is needed to improve system performance, and more importantly, to develop better methods for automatically evaluating performance on this task.",
}
% __index_level_0__: 23,168
@inproceedings{otmakhova-etal-2022-led,
    title = "{LED} down the rabbit hole: exploring the potential of global attention for biomedical multi-document summarisation",
    author = "Otmakhova, Yulia and Truong, Thinh Hung and Baldwin, Timothy and Cohn, Trevor and Verspoor, Karin and Lau, Jey Han",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.21/",
    pages = "181--187",
    abstract = "In this paper we report the experiments performed for our submission to the Multi-document Summarisation for Literature Review (MSLR) shared task. In particular, we adapt the PRIMERA model to the biomedical domain by placing global attention on important biomedical entities in several ways. We analyse the outputs of the 23 resulting models and report patterns related to the presence of additional global attention, the number of training steps, and the input configuration.",
}
% __index_level_0__: 23,169
@inproceedings{yu-2022-evaluating,
    title = "Evaluating Pre-Trained Language Models on Multi-Document Summarization for Literature Reviews",
    author = "Yu, Benjamin",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.22/",
    pages = "188--192",
    abstract = "Systematic literature reviews in the biomedical space are often expensive to conduct. Automation through machine learning and large language models could improve the accuracy and research outcomes from such reviews. In this study, we evaluate a pre-trained LongT5 model on the MSLR22: Multi-Document Summarization for Literature Reviews Shared Task datasets. We weren't able to make any improvements on the dataset benchmark, but we do establish some evidence that current summarization metrics are insufficient in measuring summarization accuracy. A multi-document summarization web tool was also built to demonstrate the viability of summarization models for future investigators: \url{https://ben-yu.github.io/summarizer}",
}
% __index_level_0__: 23,170
@inproceedings{obonyo-etal-2022-exploring,
    title = "Exploring the limits of a base {BART} for multi-document summarization in the medical domain",
    author = "Obonyo, Ishmael and Casola, Silvia and Saggion, Horacio",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.23/",
    pages = "193--198",
    abstract = "This paper describes our participation in the Multi-document Summarization for Literature Review (MSLR) shared task, in which we explore summarization models to create an automatic review of scientific results. Rather than maximizing the metrics using expensive computational models, we place ourselves in a situation of scarce computational resources and explore the limits of a base sequence-to-sequence model (and thus a limited input length) on the task. Although we explore methods to feed the abstractive model with salient sentences only (using a first extractive step), we find the results still need improvement.",
}
% __index_level_0__: 23,171
@inproceedings{tangsali-etal-2022-abstractive,
    title = "Abstractive Approaches To Multidocument Summarization Of Medical Literature Reviews",
    author = "Tangsali, Rahul and Vyawahare, Aditya Jagdish and Mandke, Aditya Vyankatesh and Litake, Onkar Rupesh and Kadam, Dipali Dattatray",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.24/",
    pages = "199--203",
    abstract = "Text summarization has been a trending domain of research in NLP in the past few decades. The medical domain is no exception. Medical documents often contain a lot of domain-specific jargon, and performing abstractive summarization on them remains a challenge. This paper presents a summary of the findings that we obtained in the Multidocument Summarization for Literature Review (MSLR) shared task. We stood fourth on the leaderboards for evaluation on the MS{\textasciicircum}2 and Cochrane datasets. We fine-tuned pre-trained models such as BART-large, DistilBART and T5-base on both of these datasets, and their accuracy was later tested on a part of the same dataset using ROUGE scores as the evaluation metric.",
}
% __index_level_0__: 23,172
@inproceedings{shinde-etal-2022-extractive,
    title = "An Extractive-Abstractive Approach for Multi-document Summarization of Scientific Articles for Literature Review",
    author = "Shinde, Kartik and Roy, Trinita and Ghosal, Tirthankar",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.25/",
    pages = "204--209",
    abstract = "Research in the biomedical domain is constantly challenged by its large amount of ever-evolving textual information. Biomedical researchers are usually required to conduct a literature review before any medical intervention to assess the effectiveness of the concerned research. However, the process is time-consuming, and therefore, automation to some extent would help reduce the accompanying information overload. Multi-document summarization of scientific articles for literature reviews is one approximation of such automation. Here in this paper, we describe our pipelined approach for the aforementioned task. We design a BERT-based extractive method followed by a BigBird PEGASUS-based abstractive pipeline for generating literature review summaries from the abstracts of biomedical trial reports as part of the Multi-document Summarization for Literature Review (MSLR) shared task in the Scholarly Document Processing (SDP) workshop 2022. Our proposed model achieves the best performance on the MSLR-Cochrane leaderboard on the majority of the evaluation metrics. Human scrutiny of our automatically generated summaries indicates that our approach is promising to yield readable multi-article summaries for conducting such literature reviews.",
}
% __index_level_0__: 23,173
@inproceedings{kashnitsky-etal-2022-overview,
    title = "Overview of the {DAGP}ap22 Shared Task on Detecting Automatically Generated Scientific Papers",
    author = "Kashnitsky, Yury and Herrmannova, Drahomira and de Waard, Anita and Tsatsaronis, George and Fennell, Catriona and Labbe, Cyril",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.26/",
    pages = "210--213",
    abstract = "This paper provides an overview of the DAGPap22 shared task on the detection of automatically generated scientific papers at the Scholarly Document Processing workshop co-located with COLING. We frame the detection problem as a binary classification task: given an excerpt of text, label it as either human-written or machine-generated. We shared a dataset containing excerpts from human-written papers as well as artificially generated content and suspicious documents collected by Elsevier publishing and editorial teams. As a test set, participants were provided with a 5x larger corpus of openly accessible human-written as well as generated papers from the same scientific domains. The shared task saw 180 submissions across 14 participating teams and resulted in two published technical reports. We discuss our findings from the shared task in this overview paper.",
}
% __index_level_0__: 23,174
@inproceedings{rosati-2022-synscipass,
    title = "{S}yn{S}ci{P}ass: detecting appropriate uses of scientific text generation",
    author = "Rosati, Domenic",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.27/",
    pages = "214--222",
    abstract = "Approaches to machine-generated text detection tend to focus on binary classification of human versus machine written text. In the scientific domain, where publishers might use these models to examine manuscripts under submission, misclassification has the potential to cause harm to authors. Additionally, authors may appropriately use text generation models, for instance through assistive technologies like translation tools. In this setting, a binary classification scheme might flag appropriate uses of assistive text generation technology as simply machine-generated, which is a cause for concern. In our work, we simulate this scenario by presenting a state-of-the-art detector trained on the DAGPap22 dataset with machine-translated passages from Scielo, and find that the model performs at random. Given this finding, we develop a framework for dataset development that provides a more nuanced approach to detecting machine-generated text, with labels for the type of technology used (such as translation or paraphrase), resulting in the construction of SynSciPass. By training the same model that performed well on DAGPap22 on SynSciPass, we show that it is not only more robust to domain shifts but also able to uncover the type of technology used for machine-generated text. Despite this, we conclude that current datasets are neither comprehensive nor realistic enough to understand how these models would perform in the wild, where manuscript submissions can come from many unknown or novel distributions, how they would perform on scientific full texts rather than small passages, and what might happen when there is a mix of appropriate and inappropriate uses of natural language generation.",
}
% __index_level_0__: 23,175
@inproceedings{glazkova-glazkov-2022-detecting,
    title = "Detecting generated scientific papers using an ensemble of transformer models",
    author = "Glazkova, Anna and Glazkov, Maksim",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.28/",
    pages = "223--228",
    abstract = "The paper describes neural models developed for the DAGPap22 shared task hosted at the Third Workshop on Scholarly Document Processing. This shared task targets the automatic detection of generated scientific papers. Our work focuses on comparing different transformer-based models as well as using additional datasets and techniques to deal with imbalanced classes. As a final submission, we utilized an ensemble of SciBERT, RoBERTa, and DeBERTa fine-tuned using a random oversampling technique. Our model achieved 99.24{\%} in terms of F1-score. The official evaluation results placed our system in third place.",
}
% __index_level_0__: 23,176
@inproceedings{tsereteli-etal-2022-overview,
    title = "Overview of the {SV}-Ident 2022 Shared Task on Survey Variable Identification in Social Science Publications",
    author = "Tsereteli, Tornike and Kartal, Yavuz Selim and Ponzetto, Simone Paolo and Zielinski, Andrea and Eckert, Kai and Mayr, Philipp",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.29/",
    pages = "229--246",
    abstract = "In this paper, we provide an overview of the SV-Ident shared task as part of the 3rd Workshop on Scholarly Document Processing (SDP) at COLING 2022. In the shared task, participants were provided with a sentence and a vocabulary of variables, and asked to identify which variables, if any, are mentioned in individual sentences from scholarly documents in full text. Two teams made a total of 9 submissions to the shared task leaderboard. While none of the teams improve on the baseline systems, we still draw insights from their submissions. Furthermore, we provide a detailed evaluation. Data and baselines for our shared task are freely available at \url{https://github.com/vadis-project/sv-ident}.",
}
% __index_level_0__: 23,177
@inproceedings{hovelmeyer-kartal-2022-varanalysis,
    title = "Varanalysis@{SV}-Ident 2022: Variable Detection and Disambiguation Based on Semantic Similarity",
    author = "H{\"o}velmeyer, Alica and Kartal, Yavuz Selim",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.30/",
    pages = "247--252",
    abstract = "This paper describes an approach to the SV-Ident Shared Task which requires the detection and disambiguation of survey variables in sentences taken from social science publications. It deals with both subtasks as problems of semantic textual similarity (STS) and relies on the use of sentence transformers. Sentences and variables are examined for semantic similarity for both detecting sentences containing variables and disambiguating the respective variables. The focus is placed on analyzing the effects of including different parts of the variables and observing the differences between English and German instances. Additionally, for the variable detection task, a bag-of-words model is used to single out sentences which are likely to contain a variable mention, as a preselection of sentences on which to perform the semantic similarity comparison.",
}
% __index_level_0__: 23,178
@inproceedings{e-mendoza-etal-2022-benchmark,
    title = "Benchmark for Research Theme Classification of Scholarly Documents",
    author = "E. Mendoza, {\'O}scar and Kusa, Wojciech and El-Ebshihy, Alaa and Wu, Ronin and Pride, David and Knoth, Petr and Herrmannova, Drahomira and Piroi, Florina and Pasi, Gabriella and Hanbury, Allan",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.31/",
    pages = "253--262",
    abstract = "We present a new gold-standard dataset and a benchmark for the Research Theme Identification task, a sub-task of the Scholarly Knowledge Graph Generation shared task, at the 3rd Workshop on Scholarly Document Processing. The objective of the shared task was to label given research papers with research themes from a total of 36 themes. The benchmark was compiled using data drawn from the largest overall assessment of university research output ever undertaken globally (the Research Excellence Framework 2014). We provide a performance comparison of a transformer-based ensemble, which obtains multiple predictions for a research paper, given its multiple textual fields (e.g. title, abstract, reference), with traditional machine learning models. The ensemble involves enriching the initial data with additional information from open-access digital libraries and Argumentative Zoning techniques (CITATION). It uses a weighted sum aggregation for the multiple predictions to obtain a final single prediction for the given research paper. Both data and the ensemble are publicly available on \url{https://www.kaggle.com/competitions/sdp2022-scholarly-knowledge-graph-generation/data?select=task1_test_no_label.csv} and \url{https://github.com/ProjectDoSSIER/sdp2022}, respectively.",
}
% __index_level_0__: 23,179
@inproceedings{cohan-etal-2022-overview-first,
    title = "Overview of the First Shared Task on Multi Perspective Scientific Document Summarization ({M}u{P})",
    author = "Cohan, Arman and Feigenblat, Guy and Ghosal, Tirthankar and Shmueli-Scheuer, Michal",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.32/",
    pages = "263--267",
    abstract = "We present the main findings of MuP 2022 shared task, the first shared task on multi-perspective scientific document summarization. The task provides a testbed representing challenges for summarization of scientific documents, and facilitates development of better models to leverage summaries generated from multiple perspectives. We received 139 total submissions from 9 teams. We evaluated submissions both by automated metrics (i.e., Rouge) and human judgments on faithfulness, coverage, and readability which provided a more nuanced view of the differences between the systems. While we observe encouraging results from the participating teams, we conclude that there is still significant room left for improving summarization leveraging multiple references. Our dataset is available at \url{https://github.com/allenai/mup}.",
}
% __index_level_0__: 23,180
@inproceedings{akkasi-2022-multi,
    title = "Multi Perspective Scientific Document Summarization With Graph Attention Networks ({GATS})",
    author = "Akkasi, Abbas",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.33/",
    pages = "268--272",
    abstract = "It is well recognized that creating summaries of scientific texts can be difficult. For each given document, the majority of summarization research assumes there is only one best gold summary. Having just one gold summary limits our capacity to assess the effectiveness of summarization algorithms, because creating summaries is an art. Likewise, because it takes subject-matter experts a lot of time to read and comprehend lengthy scientific publications, annotating several gold summaries for scientific documents can be very expensive. The Multi-Perspective Scientific Document Summarization (MuP) shared task is an exploration of various methods to produce multi-perspective scientific summaries. Utilizing Graph Attention Networks (GATs), we take an extractive text summarization approach to the problem, as a kind of sentence ranking task. Although the results produced by the suggested model are not particularly impressive, comparing them to the state of the art demonstrates the model's potential for improvement.",
}
% __index_level_0__: 23,181
@inproceedings{sotudeh-goharian-2022-guir,
    title = "{GUIR} @ {M}u{P} 2022: Towards Generating Topic-aware Multi-perspective Summaries for Scientific Documents",
    author = "Sotudeh, Sajad and Goharian, Nazli",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.34/",
    pages = "273--278",
    abstract = "This paper presents our approach for the MuP 2022 shared task on Multi-Perspective Scientific Document Summarization, where the objective is to enable summarization models to explore methods for generating multi-perspective summaries for scientific papers. We explore two orthogonal ways to cope with this task. The first approach involves incorporating a neural topic model (i.e., NTM) into the state-of-the-art abstractive summarizer (LED); the second approach involves adding a two-step summarizer that extracts the salient sentences from the document and then writes abstractive summaries from those sentences. Our latter model outperformed our other submissions on the official test set. Specifically, among the 10 participants (including the organizers' baseline) who made their results public, with 163 total runs, our best system ranks first in Rouge-1 (F), and second in Rouge-1 (R), Rouge-2 (F) and Average Rouge (F) scores.",
}
% __index_level_0__: 23,182
@inproceedings{urlana-etal-2022-ltrc,
    title = "{LTRC} @{M}u{P} 2022: Multi-Perspective Scientific Document Summarization Using Pre-trained Generation Models",
    author = "Urlana, Ashok and Surange, Nirmal and Shrivastava, Manish",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.35/",
    pages = "279--284",
    abstract = "The MuP-2022 shared task focuses on multi-perspective scientific document summarization. Given a scientific document with multiple reference summaries, our goal was to develop a model that can produce a generic summary covering as many aspects of the document as are covered by all of its reference summaries. This paper describes our best official model, a fine-tuned BART-large, along with a discussion of the challenges of this task and some of our unofficial models, including SOTA generation models. Our submitted model outperformed the given MuP 2022 shared task baselines on ROUGE-2, ROUGE-L and average ROUGE F1-scores. The code of our submission can be accessed here.",
}
% __index_level_0__: 23,183
@inproceedings{kumar-etal-2022-team,
    title = "Team {AINLPML} @ {M}u{P} in {SDP} 2021: Scientific Document Summarization by End-to-End Extractive and Abstractive Approach",
    author = "Kumar, Sandeep and Kohli, Guneet Singh and Shinde, Kartik and Ekbal, Asif",
    editor = "Cohan, Arman and Feigenblat, Guy and Freitag, Dayne and Ghosal, Tirthankar and Herrmannova, Drahomira and Knoth, Petr and Lo, Kyle and Mayr, Philipp and Shmueli-Scheuer, Michal and de Waard, Anita and Wang, Lucy Lu",
    booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sdp-1.36/",
    pages = "285--290",
    abstract = "This paper introduces the proposed summarization system of the AINLPML team for the First Shared Task on Multi-Perspective Scientific Document Summarization at SDP 2022. We present a method to produce abstractive summaries of scientific documents. First, we perform an extractive summarization step to identify the essential part of the paper. The extraction step includes utilizing a contributing sentence identification model to determine the contributing sentences in selected sections and portions of the text. In the next step, the extracted relevant information is used to condition the transformer language model to generate an abstractive summary. In particular, we fine-tuned the pre-trained BART model on the extracted summary from the previous step. Our proposed model successfully outperformed the baseline provided by the organizers by a significant margin. Our approach achieves the best average Rouge F1 Score, Rouge-2 F1 Score, and Rouge-L F1 Score among all submissions.",
}
% __index_level_0__: 23,213
@inproceedings{stranisci-etal-2022-o,
    title = "{O}-Dang! The Ontology of Dangerous Speech Messages",
    author = "Stranisci, Marco Antonio and Frenda, Simona and Lai, Mirko and Araque, Oscar and Cignarella, Alessandra Teresa and Basile, Valerio and Bosco, Cristina and Patti, Viviana",
    editor = "Kernerman, Ilan and Carvalho, Sara and Iglesias, Carlos A. and Sprugnoli, Rachele",
    booktitle = "Proceedings of the 2nd Workshop on Sentiment Analysis and Linguistic Linked Data",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.salld-1.2/",
    pages = "2--8",
    abstract = "Inside the NLP community, a considerable amount of language resources is created, annotated and released every day with the aim of studying specific linguistic phenomena. Although a variety of attempts to organize such resources have been carried out, systematic methods and possible interoperability between resources are still lacking. Furthermore, when storing linguistic information, the most common practice is still the concept of {\textquotedblleft}gold standard{\textquotedblright}, which is in contrast with recent trends in NLP that stress the importance of different subjectivities and points of view when training machine learning and deep learning methods. In this paper we present O-Dang!: The Ontology of Dangerous Speech Messages, a systematic and interoperable Knowledge Graph (KG) for the collection of linguistically annotated data. O-Dang! is designed to gather and organize Italian datasets into a structured KG, according to the principles shared within the Linguistic Linked Open Data community. The ontology is also designed to accommodate a perspectivist approach, since it provides a model for encoding both gold standard and single-annotator labels in the KG. The paper is structured as follows. In Section 1 the motivations for our work are outlined. Section 2 describes the O-Dang! ontology, which provides a common semantic model for the integration of datasets in the KG. The ontology population stage, with information about corpora, users, and annotations, is presented in Section 3. Finally, in Section 4 an analysis of offensiveness across corpora is provided as a first case study for the resource.",
}
% __index_level_0__: 23,214
@inproceedings{ramos-etal-2022-movie,
    title = "Movie Rating Prediction using Sentiment Features",
    author = "Ramos, Jo{\~a}o and Ap{\'o}stolo, Diogo and Gon{\c{c}}alo Oliveira, Hugo",
    editor = "Kernerman, Ilan and Carvalho, Sara and Iglesias, Carlos A. and Sprugnoli, Rachele",
    booktitle = "Proceedings of the 2nd Workshop on Sentiment Analysis and Linguistic Linked Data",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.salld-1.3/",
    pages = "9--18",
    abstract = "We analyze the impact of using sentiment features in the prediction of movie review scores. The effort included the creation of a new lexicon, Expanded OntoSenticNet (EON), by merging OntoSenticNet and SentiWordNet, and experiments were made on the {\textquotedblleft}IMDB movie review{\textquotedblright} dataset with the three main approaches to sentiment analysis: lexicon-based, supervised machine learning, and hybrids of the previous two. Hybrid approaches performed best, demonstrating the potential of merging knowledge bases and machine learning, but supervised approaches based on review embeddings were not far behind.",
}
% __index_level_0__: 23,215
@inproceedings{schneidermann-pedersen-2022-evaluating,
    title = "Evaluating a New {D}anish Sentiment Resource: the {D}anish Sentiment Lexicon, {DSL}",
    author = "Schneidermann, Nina and Pedersen, Bolette",
    editor = "Kernerman, Ilan and Carvalho, Sara and Iglesias, Carlos A. and Sprugnoli, Rachele",
    booktitle = "Proceedings of the 2nd Workshop on Sentiment Analysis and Linguistic Linked Data",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.salld-1.4/",
    pages = "19--24",
    abstract = "In this paper, we evaluate a new sentiment lexicon for Danish, the Danish Sentiment Lexicon (DSL), to gain input regarding how to carry out the final adjustments of the lexicon. A feature of the lexicon that differentiates it from other sentiment resources for Danish is that it is linked to a large number of other Danish lexical resources via the DDO lemma and sense inventory and the LLOD via the Danish wordnet, DanNet. We perform our evaluation on four datasets labeled with sentiments. In addition, we compare the lexicon against two existing benchmarks for Danish: the Afinn and the Sentida resources. We observe that DSL performs mostly comparably to the existing resources, but that more fine-grained explorations need to be done in order to fully exploit its possibilities given its linking properties.",
}
% __index_level_0__: 23,216
@inproceedings{mcnamee-etal-2022-correlating,
    title = "Correlating Facts and Social Media Trends on Environmental Quantities Leveraging Commonsense Reasoning and Human Sentiments",
    author = "McNamee, Brad and Varde, Aparna and Razniewski, Simon",
    editor = "Kernerman, Ilan and Carvalho, Sara and Iglesias, Carlos A. and Sprugnoli, Rachele",
    booktitle = "Proceedings of the 2nd Workshop on Sentiment Analysis and Linguistic Linked Data",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.salld-1.5/",
    pages = "25--30",
    abstract = "As climate change alters the physical world we inhabit, opinions surrounding this hot-button issue continue to fluctuate. This is apparent on social media, particularly Twitter. In this paper, we explore concrete climate change data concerning the Air Quality Index (AQI), and its relationship to tweets. We incorporate commonsense connotations for appeal to the masses. Earlier work focuses primarily on accuracy and performance of sentiment analysis tools / models, much geared towards experts. We present commonsense interpretations of results, such that they are not impervious to the masses. Moreover, our study uses real data on multiple environmental quantities comprising AQI. We address human sentiments gathered from linked data on hashtagged tweets with geolocations. Tweets are analyzed using VADER, subtly entailing commonsense reasoning. Interestingly, correlations between climate change tweets and air quality data vary not only based upon the year, but also the specific environmental quantity. It is hoped that this study will shed light on possible areas to increase awareness of climate change, and methods to address it, by the scientists as well as the common public. In line with Linked Data initiatives, we aim to make this work openly accessible on a network, published with the Creative Commons license.",
}
% __index_level_0__: 23,217
@inproceedings{stankovic-etal-2022-sentiment,
    title = "Sentiment Analysis of {S}erbian Old Novels",
    author = "Stankovi{\'c}, Ranka and Ko{\v{s}}prdi{\'c}, Milo{\v{s}} and Ikoni{\'c} Ne{\v{s}}i{\'c}, Milica and Radovi{\'c}, Tijana",
    editor = "Kernerman, Ilan and Carvalho, Sara and Iglesias, Carlos A. and Sprugnoli, Rachele",
    booktitle = "Proceedings of the 2nd Workshop on Sentiment Analysis and Linguistic Linked Data",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.salld-1.6/",
    pages = "31--38",
    abstract = "In this paper we present the first study of Sentiment Analysis (SA) of Serbian novels from the 1840-1920 period. The preparation of the sentiment lexicon was based on three existing lexicons, \textit{NRC}, \textit{AFINN} and \textit{Bing}, with additional extensive corrections. The first phase of dataset refinement included filtering out words that are not found in the Serbian morphological dictionary; in the second phase, automatic POS tags and lemmas were manually corrected. The polarity lexicon was extracted, transformed into \textit{ontolex-lemon} and published as an initial version. The complex inflection system of the Serbian language required expansion of the sentiment lexicon with inflected forms from Serbian morphological dictionaries. A set of sentences for SA was extracted from 120 novels of the Serbian part of the ELTeC collection, labelled for polarity and used to train several models. Several approaches to SA are compared, starting with four variations of lexicon-based methods, followed by Logistic Regression, Naive Bayes, Decision Tree, Random Forest, SVM and k-NN. A comparison with models trained on a labelled movie reviews dataset indicates that such models cannot successfully be used for sentiment analysis of sentences in old novels.",
}
% __index_level_0__: 23,219
@inproceedings{wang-etal-2022-language,
    title = "Language Model Based {C}hinese Handwriting Address Recognition",
    author = "Wang, Chieh-Jen and Tien, Yung-Ping and Hung, Yun-Wei",
    editor = "Chang, Yung-Chun and Huang, Yi-Chin",
    booktitle = "Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)",
    month = nov,
    year = "2022",
    address = "Taipei, Taiwan",
    publisher = "The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)",
    url = "https://aclanthology.org/2022.rocling-1.1/",
    pages = "1--6",
    abstract = "Chinese handwritten address recognition on consignment notes is an important challenge in smart logistics automation. Detection and recognition of Chinese handwritten characters is the key technology for this application. Since the writing style of handwritten characters is more complex and diverse than that of printed characters, they are easily misrecognized. Moreover, the address text occupies a small proportion of the consignment note image and is arranged closely, which easily causes difficulties in detection. Therefore, how to accurately detect the address text on the consignment note is a focus of this paper. The automatic consignment note address detection and recognition system proposed in this paper detects and recognizes address characters, reduces the probability of misjudgment in Chinese handwriting recognition through a language model, and improves accuracy.",
    language = "zho",
}
inproceedings
luo-etal-2022-chinese
{C}hinese Movie Dialogue Question Answering Dataset
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.2/
Luo, Shang-Bao and Fan, Cheng-Chung and Chen, Kuan-Yu and Tsao, Yu and Wang, Hsin-Min and Su, Keh-Yih
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
7--14
This paper constructs CMDQA, a Chinese dialogue-based information-seeking question answering dataset, mainly targeting the scenario of obtaining Chinese movie-related information. It contains 10K QA dialogs (40K turns in total). All questions and background documents are compiled from Wikipedia via a web crawler. The answers to the questions are obtained by extracting the corresponding answer spans from the related text passage. In CMDQA, in addition to searching related documents, pronouns are added to the questions to better mimic real dialog scenarios. The dataset can test the individual performance of the information retrieval, question answering and question rewriting modules. This paper also provides a baseline system and reports its performance on the dataset. The experiments show that a large gap to human performance remains, so the dataset provides sufficient challenges for researchers to conduct related work.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,220
inproceedings
huang-etal-2022-unsupervised
Unsupervised Text Summarization of Long Documents using Dependency-based Noun Phrases and Contextual Order Arrangement
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.3/
Huang, Yen-Hao and Lan, Hsiao-Yen and Chen, Yi-Shin
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
15--24
Unsupervised extractive summarization has recently gained importance since it does not require labeled data. Among unsupervised methods, graph-based approaches have achieved outstanding results. These methods represent each document by a graph, with sentences as nodes and word-level similarity among sentences as edges. Common words can easily lead to a strong connection between sentence nodes. Thus, sentences with many common words can be misinterpreted as salient sentences for a summary. This work addresses the common word issue with a phrase-level graph that (1) focuses on the noun phrases of a document based on grammar dependencies and (2) initializes edge weights by term-frequency within the target document and inverse document frequency over the entire corpus. The importance scores of noun phrases extracted from the graph are then used to select the most salient sentences. To preserve summary coherence, the order of the selected sentences is re-arranged by a flow-aware orderBERT. The results reveal that our unsupervised framework outperformed other extractive methods on ROUGE as well as two human evaluations for semantic similarity and summary coherence.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,221
inproceedings
huang-etal-2022-enhancing
Enhancing {C}hinese Multi-Label Text Classification Performance with Response-based Knowledge Distillation
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.4/
Huang, Szu-Chi and Cao, Cheng-Fu and Liao, Po-Hsun and Lee, Lung-Hao and Lee, Po-Lei and Shyu, Kuo-Kai
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
25--31
It is difficult to optimize individual label performance in multi-label text classification, especially on imbalanced data containing long-tailed labels. Therefore, this study proposes a response-based knowledge distillation mechanism comprising a teacher model that optimizes binary classifiers for the corresponding labels and a student model, a standalone multi-label classifier, that learns from the distilled knowledge passed by the teacher model. A total of 2,724 Chinese healthcare texts were collected and manually annotated across nine defined labels, resulting in 8,731 label assignments, an average of 3.2 labels per text. We used 5-fold cross-validation to compare the performance of several multi-label models, including TextRNN, TextCNN, HAN, and GRU-att. Experimental results indicate that the proposed knowledge distillation mechanism effectively improved performance regardless of which model was used, with gains of about 2-3{\%} in micro-F1, 4-6{\%} in macro-F1, 3-4{\%} in weighted-F1 and 1-2{\%} in subset accuracy.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,222
inproceedings
lee-etal-2022-combining
Combining Word Vector Technique and Clustering Algorithm for Credit Card Merchant Detection
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.5/
Lee, Fang-Ju and Lo, Ying-Chun and Wu, Jheng-Long
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
32--39
Extracting relevant user behaviors from customers' transaction descriptions is one way to collect customer information. In the current text mining field, most research studies text classification, and only a few studies address text clustering. This work finds the relationships between characters and words in unstructured transaction descriptions, using word embedding and text mining technology to break through the limitation of classification methods, which require categories to be defined in advance, thereby establishing automatic identification and analysis methods and improving grouping accuracy. In this study, we use Jieba to segment the Chinese words in credit card transaction descriptions. Word2Vec feature extraction is cross-combined with Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and Hierarchical Agglomerative Clustering in our experiments. The prediction results reach a significant average F1 of 67.58{\%} over the MUC, B3 and CEAF metrics.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,223
inproceedings
lin-etal-2022-taiwanese
{T}aiwanese-Accented {M}andarin and {E}nglish Multi-Speaker Talking-Face Synthesis System
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.6/
Lin, Chia-Hsuan and Liao, Jian-Peng and Hsieh, Cho-Chun and Liao, Kai-Chun and Wu, Chun-Hsin
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
40--48
This paper proposes a multi-speaker talking-face synthesis system. The system incorporates voice cloning and lip-syncing technology to achieve text-to-talking-face generation by acquiring audio and video clips of any speaker and using zero-shot transfer learning. In addition, we used open-source corpora to train several Taiwanese-accented models and proposed using Mandarin Phonetic Symbols (Bopomofo) as the character embedding of the synthesizer to improve the system's ability to synthesize Chinese-English code-switched sentences. Our system enables users to create rich applications, and the research on this technology is novel in the audiovisual speech synthesis field.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,224
inproceedings
smolka-etal-2022-character
Is Character Trigram Overlapping Ratio Still the Best Similarity Measure for Aligning Sentences in a Paraphrased Corpus?
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.7/
Smolka, Aleksandra and Wang, Hsin-Min and Chang, Jason S. and Su, Keh-Yih
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
49--60
Sentence alignment is an essential step in studying the mapping among different language expressions, and the character trigram overlapping ratio was reported to be the most effective similarity measure for aligning sentences in a text simplification dataset. However, the appropriateness of each similarity measure depends on the characteristics of the corpus to be aligned. This paper studies whether the character trigram is still a suitable similarity measure for the task of aligning sentences in a paragraph paraphrasing corpus. We compare several embedding-based and non-embedding model-agnostic similarity measures, including some that have not been studied previously. The evaluation is conducted on parallel paragraphs sampled from the Webis-CPC-11 corpus, a paragraph paraphrasing dataset. Our results show that modern BERT-based measures such as Sentence-BERT or BERTScore can lead to significant improvement in this task.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,225
inproceedings
su-etal-2022-roberta
{R}o{BERT}a-based Traditional {C}hinese Medicine Named Entity Recognition Model
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.8/
Su, Ming-Hsiang and Lee, Chin-Wei and Hsu, Chi-Lun and Su, Ruei-Cyuan
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
61--66
In this study, a named entity recognition system was constructed and applied to the identification of Chinese medicine names and disease names. The results can be further used in a human-machine dialogue system to provide people with correct Chinese medicine medication reminders. First, this study uses web crawlers to organize web resources into a Chinese medicine named entity corpus, collecting 1,097 articles, 1,412 disease names and 38,714 Chinese medicine names. Then, we annotated each article with TCM names using the BIO tagging scheme. Finally, this study trains and evaluates BERT, ALBERT, RoBERTa and GPT2 with BiLSTM and CRF. The experimental results show that the RoBERTa-based NER system combining BiLSTM and CRF achieves the best performance, with a precision of 0.96, a recall of 0.96, and an F1-score of 0.96.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,226
inproceedings
chang-hu-2022-study
A Study on Using Different Audio Lengths in Transfer Learning for Improving Chainsaw Sound Recognition
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.9/
Chang, Jia-Wei and Hu, Zhong-Yun
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
67--74
Chainsaw sound recognition is a challenging task because of the complexity of the sound and the excessive noise in mountain environments. This study discusses the influence of different audio lengths on the accuracy of model training. We used LeNet, a simple model with few parameters, and adopted average pooling so that the proposed models can receive audio of any length. In the performance comparison, we mainly compared the influence of different audio lengths and further tested transfer learning from short-to-long and long-to-short audio. In the experiments, we used the ESC-10 dataset to train the models and validated their performance on a self-collected chainsaw-audio dataset. The experimental results show that (a) the models trained with different audio lengths (1s, 3s, and 5s) reach accuracies of 74{\%}-78{\%}, 74{\%}-77{\%}, and 79{\%}-83{\%} on the self-collected dataset; (b) the generalization of these models is significantly improved by transfer learning, with the models achieving 85.28{\%}, 88.67{\%}, and 91.8{\%} accuracy; and (c) in transfer learning, the model trained from short-to-long audio achieves better results than the one trained from long-to-short audio, with an accuracy difference of 14{\%} on 5s chainsaw audio.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,227
inproceedings
li-etal-2022-using-grammatical
Using Grammatical and Semantic Correction Model to Improve {C}hinese-to-{T}aiwanese Machine Translation Fluency
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.10/
Li, Yuan-Han and Young, Chung-Ping and Lu, Wen-Hsiang
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
75--83
Currently, there are three major issues to tackle in Chinese-to-Taiwanese machine translation: multi-pronunciation Taiwanese words, unknown words, and Chinese-to-Taiwanese grammatical and semantic transformation. Recent studies have mostly focused on the issues of multi-pronunciation Taiwanese words and unknown words, while very few research papers focus on grammatical and semantic transformation. However, there exist grammatical rules exclusive to Taiwanese that, if not translated properly, would cause the result to feel unnatural to native speakers and potentially twist the original meaning of the sentence, even with the right words and pronunciations. Therefore, this study collects and organizes a few common Taiwanese sentence structures and grammar rules, then creates a grammar and semantic correction model for Chinese-to-Taiwanese machine translation, which would detect and correct grammatical and semantic discrepancies between the two languages, thus improving translation fluency.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,228
inproceedings
chen-etal-2022-investigation
Investigation of feature processing modules and attention mechanisms in speaker verification system
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.11/
Chen, Ting-Wei and Lin, Wei-Ting and Chen, Chia-Ping and Lu, Chung-Li and Chan, Bo-Cheng and Cheng, Yu-Han and Chuang, Hsiang-Feng and Chen, Wei-Yu
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
84--91
In this paper, we use several combinations of feature front-end modules and attention mechanisms to improve the performance of our speaker verification system. An updated version of ECAPA-TDNN is chosen as the baseline. We replace and integrate different feature front-end and attention mechanism modules to compare and find the most effective model design, and the resulting model serves as our final system. We use the VoxCeleb 2 dataset as our training set and test the performance of our models on several test sets. With our final proposed model, we improve performance by 16{\%} over the baseline on the VoxSRC2022 validation set, achieving better results for our speaker verification system.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,229
inproceedings
chen-etal-2022-preliminary
A Preliminary Study of the Application of Discrete Wavelet Transform Features in Conv-{T}as{N}et Speech Enhancement Model
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.12/
Chen, Yan-Tong and Wu, Zong-Tai and Hung, Jeih-Weih
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
92--99
Nowadays, time-domain features have been widely used in speech enhancement (SE) networks, like frequency-domain features, to achieve excellent performance in eliminating noise from input utterances. This study primarily investigates how to extract information from time-domain utterances to create more effective features for speech enhancement. We propose employing sub-signals residing in multiple acoustic frequency bands in the time domain and integrating them into a unified feature set. We use the discrete wavelet transform (DWT) to decompose each input frame signal into sub-band signals, and a projection fusion process is performed on these signals to create the ultimate features. The corresponding fusion strategy is bi-projection fusion (BPF). In short, BPF exploits the sigmoid function to create ratio masks for two feature sources. The concatenation of fused DWT features and time features serves as the encoder output of a celebrated SE framework, the fully-convolutional time-domain audio separation network (Conv-TasNet), to estimate the mask and then produce the enhanced time-domain utterances. The evaluation experiments are conducted on the VoiceBank-DEMAND and VoiceBank-QUT tasks. The experimental results reveal that the proposed method achieves higher speech quality and intelligibility than the original Conv-TasNet that uses time features only, indicating that the fusion of DWT features created from the input utterances can help time features learn a superior Conv-TasNet for speech enhancement.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,230
inproceedings
dai-etal-2022-exploiting
Exploiting the compressed spectral loss for the learning of the {DEMUCS} speech enhancement network
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.13/
Dai, Chi-En and Hong, Qi-Wei and Hung, Jeih-Weih
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
100--106
This study aims to improve a highly effective speech enhancement technique, DEMUCS, by revising its loss function. DEMUCS, developed by the Facebook team, is built on the Wave-UNet and consists of convolutional encoding and decoding blocks with an LSTM layer in between. Although DEMUCS processes the input speech utterance purely in the time (wave) domain, the applied loss function consists of a wave-domain L1 distance and a multi-scale short-time Fourier transform (STFT) loss. That is, both time- and frequency-domain features are taken into consideration in the learning of DEMUCS. In this study, we propose revising the STFT loss in DEMUCS by employing the compressed magnitude spectrogram. The compression is done either by a power-law operation with a positive exponent less than one, or by a logarithmic operation. We evaluate the presented framework on the VoiceBank-DEMAND database and task. The preliminary experimental results suggest that DEMUCS with the power-law compressed magnitude spectral loss outperforms the original DEMUCS by providing the test utterances with higher objective quality and intelligibility scores (PESQ and STOI). In contrast, the logarithm-compressed magnitude spectral loss does not benefit DEMUCS. Therefore, we reveal that DEMUCS can be further improved by properly revising the STFT terms of its loss function.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,231
inproceedings
lin-etal-2022-using
Using Machine Learning and Pattern-Based Methods for Identifying Elements in {C}hinese Judgment Documents of Civil Cases
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.14/
Lin, Hong-Ren and Liu, Wei-Zhi and Liu, Chao-Lin and Yang, Chieh
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
107--115
Providing structural information about civil cases to judgement prediction systems or recommendation systems can enhance the efficiency of the inference procedures and the justifiability of the produced results. In this research, we focus on civil cases about alimony, a relatively uncommon choice in current applications of artificial intelligence to law. We attempt to identify the statements serving four types of legal functions in judgement documents, i.e., the pleadings of the applicants, the responses of the opposing parties, the opinions of the courts, and the uses of laws to reach the final decisions. In addition, we also try to identify the conflicting issues between the plaintiffs and the defendants in the judgement documents.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,232
inproceedings
lien-etal-2022-development
Development of {M}andarin-{E}nglish code-switching speech synthesis system
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.15/
Lien, Hsin-Jou and Huang, Li-Yu and Chen, Chia-Ping
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
116--120
In this paper, a Mandarin-English code-switching speech synthesis system is proposed. To focus on learning the content information shared between the two languages, the training dataset is a multilingual artificial dataset with a unified speaker style. Adding a language embedding to the system helps it adapt to the multilingual dataset. Besides, language-dependent text preprocessing is applied. Word segmentation and text-to-pinyin conversion are the preprocessing steps for Mandarin, which not only improve fluency but also reduce learning complexity. Number normalization decides how the Arabic numerals in a sentence should be read. The preprocessing for English is acronym conversion, which decides the pronunciation of acronyms.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,233
inproceedings
liu-etal-2022-predicting
Predicting Judgments and Grants for Civil Cases of Alimony for the Elderly
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.16/
Liu, Wei-Zhi and Wu, Po-Hsien and Lin, Hong-Ren and Liu, Chao-Lin
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
121--128
The need for mediation has been increasing rapidly along with the growing number of cases concerning alimony for the elderly in recent years. Offering a mechanism for predicting the outcomes of prospective lawsuits may alleviate the workload of the mediation courts. This research aims to predict the judgments and the granted alimony for the plaintiffs of such civil cases in Chinese, based on our analysis of the results of past lawsuits. We hope that the results can be helpful for both the involved parties and the courts. To build the current system, we segment and vectorize the texts of the judgement documents, and apply logistic regression and model tree models for predicting the judgments and estimating the granted alimony of the cases, respectively.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,234
inproceedings
liu-etal-2022-lightweight
Lightweight Sound Event Detection Model with {R}ep{VGG} Architecture
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.17/
Liu, Chia-Chuan and Huang, Sung-Jen and Chen, Chia-Ping and Lu, Chung-Li and Chan, Bo-Cheng and Cheng, Yu-Han and Chuang, Hsiang-Feng and Chen, Wei-Yu
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
129--135
In this paper, we propose RepVGGRNN, a lightweight sound event detection model. We use RepVGG convolution blocks in the convolutional part to improve performance, and re-parameterize the RepVGG blocks after the model is trained to reduce the number of parameters in the convolutional layers. To further improve the accuracy of the model, we incorporate both the mean teacher method and knowledge distillation to train the lightweight model. The proposed system achieves PSDS (polyphonic sound event detection score) scenario 1 and 2 scores of 40.8{\%} and 67.7{\%}, outperforming the baseline system's 34.4{\%} and 57.2{\%} on the DCASE 2022 Task 4 validation dataset. The proposed system has about 49.6K parameters, only 44.6{\%} of the baseline system.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,235
inproceedings
chen-etal-2022-analyzing-discourse
Analyzing discourse functions with acoustic features and phone embeddings: non-lexical items in {T}aiwan {M}andarin
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.18/
Chen, Pin-Er and Tseng, Yu-Hsiang and Wang, Chi-Wei and Yeh, Fang-Chi and Hsieh, Shu-Kai
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
136--146
Non-lexical items are expressive devices used in conversations that are not words but are nevertheless meaningful. These items play crucial roles, such as signaling turn-taking or marking stances in interactions. However, as the non-lexical items do not stably correspond to written or phonological forms, past studies tend to focus on studying their acoustic properties, such as pitches and durations. In this paper, we investigate the discourse functions of non-lexical items through their acoustic properties and the phone embeddings extracted from a deep learning model. Firstly, we create a non-lexical item dataset based on the interpellation video clips from Taiwan's Legislative Yuan. Then, we manually identify the non-lexical items and their discourse functions in the videos. Next, we analyze the acoustic properties of those items through statistical modeling and building classifiers based on phone embeddings extracted from a phone recognition model. We show that (1) the discourse functions have significant effects on the acoustic features; and (2) the classifiers built on phone embeddings perform better than the ones on conventional acoustic properties. These results suggest that phone embeddings may reflect the phonetic variations crucial in differentiating the discourse functions of non-lexical items.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,236
inproceedings
huang-etal-2022-dimensional
A Dimensional Valence-Arousal-Irony Dataset for {C}hinese Sentence and Context
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.19/
Huang, Sheng-Wei and Chung, Wei-Yi and Wu, Yu-Hsuan and Yu, Chen-Chia and Wu, Jheng-Long
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
147--154
Chinese multi-dimensional sentiment detection is a challenging task with a considerable impact on semantic understanding. Past irony datasets annotate the sentiment type of whole ironic sentences, but do not provide the corresponding intensities of valence and arousal for the sentences and their context. However, an ironic statement is defined as a statement whose apparent meaning is the opposite of its actual meaning, which means that contextual information is needed to understand the actual meaning of a sentence. Therefore, the dimensional sentiment intensities of ironic sentences and context are important issues in the natural language processing field. This paper creates an extended NTU irony corpus called the Chinese Dimensional Valence-Arousal-Irony (CDVAI) dataset, which includes valence, arousal and irony intensities at the sentence level, and valence and arousal intensities at the context level. This paper then analyzes the annotation differences among the human annotators and uses a deep learning model such as BERT to evaluate the prediction performance on the CDVAI corpus.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,237
inproceedings
wu-etal-2022-intelligent
Intelligent Future Recreation Harbor Application Service: Taking {K}aohsiung {A}sia New Bay as an Example to Construct a Composite Recreational Knowledge Graph
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.20/
Wu, Dian-Zhi and Lu, Yu-De and Tung, Chia-Ming and Huang, Bo-Yang and Huang, Hsun-Hui and Lin, Chien-Der and Lu, Wen-Hsiang
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
155--163
In view of the current lack of specialized overall design services for harbour recreation in Taiwan, various marine recreational activities and marine scenic spots have not yet been planned and developed as integrated services around the city and harbour. There are not many state-of-the-art products and application services, and Taiwan's harbour leisure service industries are facing the challenge of digital transformation. The Institute for Information Industry therefore proposed an innovative {\textquotedblleft}Smart Future Recreational Harbour Application Service{\textquotedblright} project, taking Kaohsiung's Asia New Bay Area as the main demonstration field. By using multi-source knowledge graph integration and inference technology to recommend appropriate recreational service information, tourists can enjoy the best virtual reality intelligent human-machine interactive service experience during their trip.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,238
inproceedings
lin-ma-2022-hantrans
{H}an{T}rans: An Empirical Study on Cross-Era Transferability of {C}hinese Pre-trained Language Model
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.21/
Lin, Chin-Tung and Ma, Wei-Yun
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
164--173
Pre-trained language models have recently dominated most downstream tasks in the NLP area. In particular, Bidirectional Encoder Representations from Transformers (BERT) is the most iconic pre-trained language model for NLP tasks, and its masked-language modeling (MLM) objective is an indispensable part of existing pre-trained language models. Models that excel on downstream tasks benefit directly from the large training corpus in the pre-training stage. However, the training corpus for modern Traditional Chinese is small, and an ancient Chinese corpus is still absent from the pre-training stage. Therefore, we aim to address this problem by transforming annotated data of ancient Chinese into a BERT-style training corpus, and we propose a pre-trained Oldhan Chinese BERT model for the NLP community. Our proposed model outperforms the original BERT model by significantly reducing perplexity in masked-language modeling (MLM). Our fine-tuned models also improve F1 scores on word segmentation and part-of-speech tasks. We then comprehensively study the zero-shot cross-era ability of the BERT model. Finally, we visualize and investigate personal pronouns in the embedding space of ancient Chinese records from four eras. We have released our code at \url{https://github.com/ckiplab/han-transformers}.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,239
inproceedings
wu-etal-2022-preliminary
A Preliminary Study on Automated Speaking Assessment of {E}nglish as a Second Language ({ESL}) Students
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.22/
Wu, Tzu-I and Lo, Tien-Hong and Chao, Fu-An and Sung, Yao-Ting and Chen, Berlin
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
174--183
Due to the surge in global demand for English as a second language (ESL), the development of automated methods for grading speaking proficiency has gained considerable attention. This paper presents a computerized regime for grading the spontaneous spoken language of ESL learners. Based on a speech corpus of ESL learners recently collected in Taiwan, we first extract multi-view features (e.g., pronunciation, fluency, and prosody features) from either automatic speech recognition (ASR) transcriptions or audio signals. These extracted features are, in turn, fed into a tree-based classifier to produce a new set of indicative features as the input of the automated assessment system, viz. the grader. Finally, we use different machine learning models to predict ESL learners' respective speaking proficiency and map the result into the corresponding CEFR level. The experimental results and analysis conducted on the speech corpus of ESL learners in Taiwan show that our approach holds great potential for automated speaking assessment, while offering more reliable predictions than human experts.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,240
inproceedings
liu-etal-2022-clustering
Clustering Issues in Civil Judgments for Recommending Similar Cases
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.23/
Liu, Yi-Fan and Liu, Chao-Lin and Yang, Chieh
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
184--192
Searching for similar judgments is an important task in legal practice, from which valuable legal insights can be obtained. Issues are disputes between the two parties in civil litigation, and they represent the core topics to be considered in the trials. Many studies calculate the similarity between judgments using different perspectives and methods. We first cluster the issues in the judgments, and then encode each judgment as a vector indicating whether or not it contains issues in the corresponding clusters. The similarity between judgments is evaluated based on these encodings. We verify the effectiveness of the system with a human scoring process carried out by an assistant with a legal background, while comparing the effects of several combinations of preprocessing steps and selections of clustering strategies.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,241
inproceedings
yeh-etal-2022-multifaceted
Multifaceted Assessments of Traditional {C}hinese Word Segmentation Tool on Large Corpora
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.24/
Yeh, Wen-Chao and Hsieh, Yu-Lun and Chang, Yung-Chun and Hsu, Wen-Lian
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
193--199
This study evaluates the three most popular word segmentation tools for large Traditional Chinese corpora in terms of efficiency, resource consumption, and cost. Specifically, we compare the performance of Jieba, CKIP, and MONPA on word segmentation, part-of-speech tagging and named entity recognition through extensive experiments. Experimental results show that MONPA using GPU batch segmentation can greatly reduce the processing time of massive datasets. In addition, its word segmentation, part-of-speech tagging, and named entity recognition features are beneficial to downstream applications.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,242
inproceedings
chiou-etal-2022-mandarin
{M}andarin-{E}nglish Code-Switching Speech Recognition System for Specific Domain
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.25/
Chiou, Chung-Pu and Lin, Hou-An and Chen, Chia-Ping
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
200--204
This paper introduces the use of Automatic Speech Recognition (ASR) technology to process speech content in a specific domain. We use the Conformer end-to-end model as the system architecture and use pure Chinese data for initial training. Next, transfer learning is used to fine-tune the system with Mandarin-English code-switching data. Finally, domain-specific Mandarin-English code-switching data is used for the final fine-tuning of the model so that it performs well on speech recognition in the target domain. Experiments with different fine-tuning methods reduce the final error rate from 82.0{\%} to 34.8{\%}.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,243
inproceedings
jayasinghe-etal-2022-legal
Legal Case Winning Party Prediction With Domain Specific Auxiliary Models
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.26/
Jayasinghe, Sahan and Rambukkanage, Lakith and Silva, Ashan and de Silva, Nisansa and Perera, Amal Shehan
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
205--213
Sifting through hundreds of old case documents to obtain information pertinent to the case at hand has been a major part of the legal profession for centuries. However, with the expansion of court systems and the compounding nature of case law, this task has become more and more intractable given time and resource constraints, and automation by Natural Language Processing presents itself as a viable solution. In this paper, we discuss a novel approach for predicting the winning party of a court case by training an analytical model on a corpus of prior court cases, which is then run on the prepared text of the current court case. This allows legal professionals to efficiently and precisely prepare their cases to maximize the chance of victory. The model is built and experimented with using legal-domain-specific sub-models, along with other variations, to provide more visibility into the final model. We show that our model with critical sentence annotation, using a transformer encoder with RoBERTa-based sentence embeddings, obtains an accuracy of 75.75{\%}, outperforming other models.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,244
inproceedings
chan-etal-2022-early
Early Speech Production in Infants and Toddlers Later Diagnosed with Cerebral Palsy: A Retrospective Study
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.27/
Chan, Chien Ju and Chen, Li-Mei and Chen, Li-Wen
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
214--220
In this retrospective study, we compared early speech development between infants with cerebral palsy (CP) and typically developing (TD) infants. Recordings of utterances were collected from two CP infants and two TD infants at approximately 8 and 24 months of age. The data were analyzed in terms of volubility, consonant emergence, canonical babbling ratio (CBR), and mean babbling level (MBL). The major findings show that, compared with the TD group, the CP group is characterized by: 1) lower volubility; 2) an utterance-based CBR below 0.15 at 2 years old; 3) an MBL score below 2 at the age of 2, with above 95{\%} of utterances at level 1; and 4) using consonants mainly at two oral places (bilabials and velars) and with three manners of articulation (nasal, fricative, and stop) at 2 years old.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,245
inproceedings
kumarasinghe-de-silva-2022-automatic
Automatic Generation of Abstracts for Research Papers
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.28/
Kumarasinghe, Dushan and de Silva, Nisansa
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
221--229
Summarizing has always been an important utility for reading long documents. Research papers are unique in this regard, as they have a compulsory summary, the abstract, at the beginning of the document, which gives the gist of the entire study, often within a set upper limit on the word count. Writing an abstract that is sufficiently succinct while being descriptive enough is a hard task even for native English speakers. This study is the first step in automatically generating abstracts for research papers in the computational linguistics domain using the domain-specific abstractive summarization power of the GPT-Neo model.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,246
inproceedings
lew-etal-2022-speech
Speech Timing in Typically Developing {M}andarin-Speaking Children From Ages 3 To 4
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.29/
Lew, Jeng Man and Chen, Li-Mei and Lin, Yu Ching
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
230--235
This study aims to develop a better understanding of speech timing development in Mandarin-speaking children from 3 to 4 years of age. Data were selected from two typically developing children. Four 50-minute recordings were collected at ages 3 and 4, based on natural conversation among the observers, participants, and parents, and on a picture-naming task. Speech timing was measured with Praat, including speaking rate, articulation rate, mean length of utterance (MLU), mean utterance duration, mean word duration, pause ratio, and volubility. The major findings of the current study are: 1) five measurements (speaking rate, mean length of utterance, mean utterance duration, mean word duration and volubility) decreased with age in both children; 2) the articulation rate of both children increased with age; 3) compared with findings from previous studies, the pause ratio of both children slightly increased with age. These findings not only contribute more comprehensive data for assessment, but can also serve as a reference in speech intervention.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,247
inproceedings
huang-2022-right
Right-Dominant Tones in Zhangzhou: On and Through Phonetic Surface
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.30/
Huang, Yishan
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
236--245
This study conducts a systematic acoustic exploration into the phonetic nature of rightmost tones in a right-dominant tone sandhi system, based on empirical data from 21 native speakers of Zhangzhou Southern Min, which presents eight tonal contrasts at the underlying level. The results reveal that (a) the F0 contour shape realisation of rightmost tones in Zhangzhou appears not to be categorically affected by their preceding tones; (b) seven out of eight rightmost tones have two statistically significantly different variants in their F0 onset realisation, indicating their regressive sensitivity to the offset phonetics of preceding tones; (c) the forms of rightmost tones are not straightforwardly related to their counterparts in citation; instead, two versions of the F0 system can be identified, with the unmarked forms resembling their citation values and the marked forms occurring as a consequence of the phonetic impact of their preceding tones and the F0-declining effect of utterance-final position; and (d) the phonetic variation of rightmost tones reflects the cross-linguistic tendency of tonal articulation in connected speech but contradicts the default principle for identifying the right dominance of tone sandhi in Sinitic languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,248
inproceedings
wang-etal-2022-web
Web-{API}-Based Chatbot Generation with Analysis and Expansion for Training Sentences
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.31/
Wang, Sheng-Kai and You, Wan-Lin and Ma, Shang-Pin
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
246--255
With Web API technology becoming increasingly mature, how to integrate Web APIs with chatbot technology has become an issue of great interest. This study builds a semi-automatic method and tool, BOTEN, that allows application developers to quickly build chatbot interfaces on top of specified Web APIs. To ensure that the chatbot has sufficient natural language understanding (NLU) capability, this research evaluates the training sentences written by the developer using TF-IDF, WordNet, and SpaCy techniques, and suggests that the developer modify training sentences of poor quality. This technique can also be used to automatically increase the number of training sentences to improve intent recognition.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,249
inproceedings
haung-etal-2022-design
The Design and Development of a System for {C}hinese Character Difficulty and Features
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.32/
Haung, Jung-En and Tseng, Hou-Chiang and Chang, Li-Yun and Chen, Hsueh-Chih and Sung, Yao-Ting
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
256--262
Feature analysis of Chinese characters plays a prominent role in {\textquotedblleft}character-based{\textquotedblright} education. However, there is an urgent need for a text analysis system that processes the difficulty of the components composing characters, based primarily on Chinese learners' performance. To meet this need, this research provides such a system by adopting a data-driven approach. Based on Chen et al.'s (2011) Chinese Orthography Database, this research has designed and developed a system, Character Difficulty - Research on Multi-features (CD-ROM), that provides three functions: (1) analyzing a text and reporting the difficulty of its Chinese characters; (2) decomposing characters into components and calculating component frequencies based on the analyzed text; and (3) offering component-deriving characters based on the analyzed text, with downloadable images as teaching materials. With these functions highlighting multi-level features of characters, the system has the potential to benefit the fields of Chinese character instruction, Chinese orthographic learning, and Chinese natural language processing.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,250
inproceedings
nath-etal-2022-image
Image Caption Generation for Low-Resource {A}ssamese Language
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.33/
Nath, Prachurya and Adhikary, Prottay Kumar and Dadure, Pankaj and Pakray, Partha and Manna, Riyanka and Bandyopadhyay, Sivaji
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
263--272
Image captioning is a prominent Artificial Intelligence (AI) research area that deals with visual recognition and the linguistic description of images. It is an interdisciplinary field concerning how computers can see and understand digital images {\&} videos, and describe them in a language known to humans. Constructing a meaningful sentence needs both structural and semantic information about the language. This paper highlights our contribution to image caption generation for the Assamese language. The unavailability of an image caption generation system for Assamese is an open problem for AI-NLP researchers, and this work is just an early stage of that research. To achieve our objective, we have used the encoder-decoder framework, which combines Convolutional Neural Networks and Recurrent Neural Networks. The experiments have been conducted on the Flickr30k and COCO Captions datasets, which are originally in English; we translated these datasets into Assamese using a state-of-the-art Machine Translation (MT) system.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,251
inproceedings
wang-etal-2022-building
Building an Enhanced Autoregressive Document Retriever Leveraging Supervised Contrastive Learning
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.34/
Wang, Yi-Cheng and Yang, Tzu-Ting and Wang, Hsin-Wei and Hsu, Yung-Chang and Chen, Berlin
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
273--282
The goal of an information retrieval system is to retrieve the documents most relevant to a given user query from a huge collection of documents, which usually requires time-consuming multiple comparisons between the query and candidate documents to find the most relevant ones. Recently, a novel retrieval modeling approach, dubbed the Differentiable Search Index (DSI), has been proposed. DSI dramatically simplifies the whole retrieval process by encoding all information about the document collection into the parameter space of a single Transformer model, on top of which DSI can in turn generate the relevant document identities (IDs) in an autoregressive manner in response to a user query. Although DSI addresses the shortcomings of traditional retrieval systems, previous studies have pointed out that DSI might fail to retrieve relevant documents because DSI uses the document IDs as the pivotal mechanism to establish the relationship between queries and documents, whereas not every document in the collection has corresponding relevant and irrelevant queries for training purposes. In view of this, we propose leveraging supervised contrastive learning to better render the relationship between queries and documents in the latent semantic space. Furthermore, an approximate nearest neighbor search strategy is employed at retrieval time to further assist the Transformer model in generating document IDs relevant to a posed query more efficiently. A series of experiments conducted on the Natural Questions benchmark dataset confirms the effectiveness and practical feasibility of our approach in relation to some strong baseline systems.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,252
inproceedings
chang-2022-quantitative
A Quantitative Analysis of Comparison of Emoji Sentiment: {T}aiwan {M}andarin Users and {E}nglish Users
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.35/
Chang, Fang-Yu
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
283--288
Emojis have become essential components of our digital communication. Emojis, especially smiley face emojis and heart emojis, are considered the ones conveying more emotion. In this paper, two functions of emoji usage are discussed across two languages, Taiwanese Mandarin and English. The first function discussed here is sentiment enhancement, and the other is sentiment modification. A multilingual language model is adopted to obtain the probability distribution of the text sentiment, and relative entropy is used to quantify the degree of change. The results support previous research showing that emojis are used more frequently in positive contexts and that smileys tend to be used for expressing emotions, and they confirm the language-independent nature of emojis.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,253
inproceedings
kao-chang-2022-applying
Applying Information Extraction to Storybook Question and Answer Generation
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.36/
Kao, Kai-Yen and Chang, Chia-Hui
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
289--298
For educators, generating high-quality question-answer pairs from story text is a time-consuming and labor-intensive task. The purpose is not to make students unable to answer, but to ensure, through the generated question-answer pairs, that students understand the story text. In this paper, we improve the FairyTaleQA question generation method by incorporating the question type and its definition into the input for fine-tuning the BART (Lewis et al., 2020) model. Furthermore, we make use of the entity and relation extraction from (Zhong and Chen, 2021) as an element of template-based question generation.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,254
inproceedings
huang-chang-2022-improving
Improving Response Diversity through Commonsense-Aware Empathetic Response Generation
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.37/
Huang, Tzu-Hsien and Chang, Chia-Hui
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
299--306
Due to the lack of conversation practice, the main challenge for second-language learners is speaking. Our goal is to develop a chatbot that encourages individuals to reflect on, describe, analyse and communicate what they read, as well as improve students' English expression skills. In this paper, we exploit COMET, an inferential commonsense knowledge generator, as background knowledge to improve generation diversity. We consider two approaches to increase the diversity of empathetic response generation. For non-pretrained models, we apply AdaLabel (Wang et al., 2021) to the Commonsense-aware Empathetic model (Sabour et al., 2022) and improve the Distinct-2 score from 2.99 to 4.08 on EMPATHETIC DIALOGUES (ED). Furthermore, we augment the pretrained BART model with various commonsense knowledge to generate more informative empathetic responses. Not only does the automatic evaluation of Distinct-2 scores improve from 9.11 to 11.21, but the manual case study also shows that CE-BART significantly outperforms CEM-AdaLabel.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,255
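Distinct-2, the diversity metric reported in the entry above, is conventionally the ratio of unique bigrams to total generated bigrams. A minimal sketch under that convention (not the authors' evaluation script):

```python
def distinct_n(responses, n=2):
    """Ratio of unique n-grams to all n-grams across generated responses."""
    total, unique = 0, set()
    for tokens in responses:
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total else 0.0

# Toy generated responses; higher values mean more diverse output.
responses = [["i", "am", "so", "sorry"], ["i", "am", "here", "for", "you"]]
print(f"Distinct-2 = {distinct_n(responses):.3f}")
```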
inproceedings
hung-huang-2022-preliminary
A Preliminary Study on {M}andarin-{H}akka neural machine translation using small-sized data
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.38/
Hung, Yi-Hsiang and Huang, Yi-Chin
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
307--315
In this study, we implemented a machine translation system using a Convolutional Neural Network with an attention mechanism for translating Mandarin to Sixian-accent Hakka. Specifically, to cope with the different idioms and terms used between the Northern and Southern Sixian accents, we analyzed the corpus differences and lexicon definitions, and then separated the various word usages to train an exclusive model for each accent. Besides, since the collected Hakka corpora are relatively limited, unseen words frequently occur in real-world translation. In our system, we selected suitable thresholds for each model based on model verification to reject unsuitable translated words. Then, by applying the proposed algorithm, which adopts forced Hakka idiom/term segmentation and common Mandarin word substitution, the resulting translated sentences become more intelligible. The proposed system therefore achieved promising results using small-sized data. This system could be used for Hakka language teaching and as the front end of Mandarin-Hakka code-switching speech synthesis systems.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,256
inproceedings
feng-etal-2022-ncu1415
{NCU}1415 at {ROCLING} 2022 Shared Task: A light-weight transformer-based approach for Biomedical Name Entity Recognition
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.39/
Feng, Zhi-Quan and Chen, Po-Kai and Wang, Jia-Ching
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
316--320
Named Entity Recognition (NER) is an important and fundamental task in NLP. In the biomedical field, NER has been widely used in products developed by various manufacturers, including parsing, QA systems, key information extraction or replacement in dialogue systems, and practical applications of knowledge parsing. In different fields, including biomedicine, communication technology, and e-commerce, NER technology is needed to identify drugs, diseases, commodities, and other objects. This implementation focuses on the NER task of the ROCLING 2022 shared task (Lee et al., 2022) in the biomedical field, with some tuning and experimentation based on language models.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,257
inproceedings
zhang-etal-2022-crowner
{C}row{NER} at Rocling 2022 Shared Task: {NER} using {M}ac{BERT} and Adversarial Training
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.40/
Zhang, Qiu-Xia and Chi, Te-Yu and Yang, Te-Lun and Jang, Jyh-Shing Roger
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
321--328
This study uses training and validation data from the {\textquotedblleft}ROCLING 2022 Chinese Health Care Named Entity Recognition Task{\textquotedblright} for modeling. The modeling process adopts techniques such as data augmentation and data post-processing, and uses the MacBERT pre-training model to build a dedicated NER recognizer for the Chinese medical field. During fine-tuning, we also added adversarial training methods such as FGM and PGD, and the results of the final tuned model were close to those of the best team in the task evaluation. In addition, by introducing mixed-precision training, we also greatly reduced the time cost of training.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,258
inproceedings
yang-etal-2022-scu
{SCU}-{MESCL}ab at {ROCLING}-2022 Shared Task: Named Entity Recognition Using {BERT} Classifier
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.41/
Yang, Tsung-Hsien and Su, Ruei-Cyuan and Su, Tzu-En and Chong, Sing-Seong and Su, Ming-Hsiang
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
329--334
In this study, named entity recognition is constructed and applied in the medical domain. Data is labeled in BIO format; for example, the characters of {\textquotedblleft}muscle{\textquotedblright} would be labeled {\textquotedblleft}B-BODY{\textquotedblright} and {\textquotedblleft}I-BODY{\textquotedblright}, those of {\textquotedblleft}cough{\textquotedblright} would be {\textquotedblleft}B-SYMP{\textquotedblright} and {\textquotedblleft}I-SYMP{\textquotedblright}, and all words outside the categories are marked {\textquotedblleft}O{\textquotedblright} (a toy BIO conversion follows this entry). The Chinese HealthNER Corpus contains 30,692 sentences, of which 2,531 sentences are divided into the validation set (dev) for this evaluation, and the conference finally provides another 3,204 sentences as the test set (test). We used BLSTM{\_}CRF, RoBERTa+BLSTM{\_}CRF, and a BERT classifier to submit three prediction results, respectively. The BERT classifier system submitted as RUN3 achieved the best prediction performance, with a precision of 80.18{\%}, a recall of 78.3{\%}, and an F1-score of 79.23{\%}.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,259
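The BIO scheme described in the entry above can be produced mechanically from labelled spans. A toy conversion, with a hypothetical character sequence and spans rather than the shared-task data:

```python
def spans_to_bio(tokens, spans):
    """Convert (start, end, type) entity spans into per-token BIO tags; end is exclusive."""
    tags = ["O"] * len(tokens)
    for start, end, etype in spans:
        tags[start] = f"B-{etype}"
        for i in range(start + 1, end):
            tags[i] = f"I-{etype}"
    return tags

tokens = list("肌肉常常咳嗽")                 # hypothetical character sequence
spans = [(0, 2, "BODY"), (4, 6, "SYMP")]     # "muscle" = BODY, "cough" = SYMP
print(list(zip(tokens, spans_to_bio(tokens, spans))))
```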
inproceedings
luo-etal-2022-ynu
{YNU}-{HPCC} at {ROCLING} 2022 Shared Task: A Transformer-based Model with Focal Loss and Regularization Dropout for {C}hinese Healthcare Named Entity Recognition
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.42/
Luo, Xiang and Wang, Jin and Zhang, Xuejie
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
335--342
Named Entity Recognition (NER) is a fundamental task in information extraction that locates mentions of named entities and classifies them in unstructured texts. Previous studies typically used hidden Markov models (HMM) and conditional random fields (CRF) for NER. To learn long-distance dependencies in text, recurrent neural networks such as LSTM and GRU can extract semantic features for each token in a sequential manner. This paper describes our Transformer-based contribution to the ROCLING-2022 shared task: a model with focal loss and regularization dropout. Focal loss is used to overcome the uneven label distribution (a minimal focal-loss sketch follows this entry). Regularization dropout (R-Drop) addresses vocabulary and descriptions that are too domain-specific. Ensemble learning is used to improve the performance of the model. Comparative experiments were conducted on the dev set to select the best-performing model for submission: a BERT model with BiLSTM-CRF, focal loss, and R-Drop achieved the best F1-score of 0.7768 and ranked 4th.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,260
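Focal loss, used in the entry above against the uneven label distribution, scales cross-entropy by (1 - p_t)^gamma so that easy examples contribute less. A minimal PyTorch sketch of the standard formulation (not the team's exact implementation):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Multi-class focal loss: cross-entropy down-weighted by (1 - p_t) ** gamma."""
    ce = F.cross_entropy(logits, targets, reduction="none")  # per-example -log(p_t)
    p_t = torch.exp(-ce)                                     # probability of the true class
    return ((1.0 - p_t) ** gamma * ce).mean()

logits = torch.randn(4, 10)             # toy batch: 4 tokens, 10 entity labels
targets = torch.tensor([0, 3, 3, 9])
print(focal_loss(logits, targets))
```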
inproceedings
lin-etal-2022-nerve
{NERVE} at {ROCLING} 2022 Shared Task: A Comparison of Three Named Entity Recognition Frameworks Based on Language Model and Lexicon Approach
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.43/
Lin, Bo-Shau and Chen, Jian-He and Chang, Tao-Hsing
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
343--349
The ROCLING 2022 shared task is to design a method that can tag medical entities in sentences and then classify them into categories. This paper proposes three models to deal with the NER task. The first is a BERT model combined with a classifier. The second is a two-stage model: the first stage uses a BERT model combined with a classifier to detect whether medical entities exist in a sentence, and the second stage classifies the detected entities into categories. The third approach combines the first two models with a lexicon-based model, integrating the outputs of the three models to make predictions. The prediction results on the validation and testing datasets show little difference in performance among the three models, with the best F1 score, 0.7569, achieved by the first model.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,261
inproceedings
chiou-etal-2022-scu
{SCU}-{NLP} at {ROCLING} 2022 Shared Task: Experiment and Error Analysis of Biomedical Entity Detection Model
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.44/
Chiou, Sung-Ting and Huang, Sheng-Wei and Lo, Ying-Chun and Wu, Yu-Hsuan and Wu, Jheng-Long
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
350--355
Named entity recognition generally targets entities with specific meanings in unstructured text, including names of people, places, organizations, dates, times, quantities, proper nouns, and other words; in the medical field, these may be drug names, organ names, test items, nutritional supplements, etc. The purpose of named entity recognition in this study is to extract such items from unstructured input text. Taking healthcare as the research domain, and predicting named entity boundaries and categories in sentences over ten entity types, we explore multiple fundamental NER approaches to this task, including Hidden Markov Models, Conditional Random Fields, a Random Forest classifier, and BERT (a minimal CRF tagger sketch follows this entry). The prediction results are most significant for the F-score of the CRF model, which achieved the best results.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,262
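Among the classical approaches compared in the entry above, the CRF performed best. A minimal sequence-tagging sketch, assuming the sklearn-crfsuite package and invented features and data:

```python
import sklearn_crfsuite  # pip install sklearn-crfsuite

def token_features(tokens, i):
    """Simple per-token features: the token itself and its neighbours."""
    feats = {"token": tokens[i], "is_first": i == 0, "is_last": i == len(tokens) - 1}
    if i > 0:
        feats["prev"] = tokens[i - 1]
    if i < len(tokens) - 1:
        feats["next"] = tokens[i + 1]
    return feats

# Toy training data: one character-tokenized sentence with BIO labels.
X = [[token_features(["他", "常", "咳", "嗽"], i) for i in range(4)]]
y = [["O", "O", "B-SYMP", "I-SYMP"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```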
inproceedings
ma-etal-2022-migbaseline
{MIGB}aseline at {ROCLING} 2022 Shared Task: Report on Named Entity Recognition Using {C}hinese Healthcare Datasets
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.45/
Ma, Hsing-Yuan and Li, Wei-Jie and Liu, Chao-Lin
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
356--362
Named Entity Recognition (NER) tools have been in development for years, yet few have been aimed at medical documents. The increasing need to analyze medical data makes it crucial to build a sophisticated NER model for this missing area. In this paper, W2NER, a state-of-the-art NER model that has excelled in English and Chinese tasks, is run with selected inputs, several pretrained language models, and training strategies. The objective was to build an NER model suitable for healthcare corpora in Chinese. The best model achieved an F1 score of 81.93{\%}, which ranked first in the ROCLING 2022 shared task.
null
null
null
null
null
null
zho
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,263
inproceedings
lee-etal-2022-overview
Overview of the {ROCLING} 2022 Shared Task for {C}hinese Healthcare Named Entity Recognition
Chang, Yung-Chun and Huang, Yi-Chin
nov
2022
Taipei, Taiwan
The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
https://aclanthology.org/2022.rocling-1.46/
Lee, Lung-Hao and Chen, Chao-Yi and Yu, Liang-Chih and Tseng, Yuen-Hsien
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
363--368
This paper describes the ROCLING-2022 shared task for Chinese healthcare named entity recognition, including task description, data preparation, performance metrics, and evaluation results. Among ten registered teams, seven participating teams submitted a total of 20 runs. This shared task reveals present NLP techniques for dealing with Chinese named entity recognition in the healthcare domain. All data sets with gold standards and evaluation scripts used in this shared task are publicly available for future research.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,264
inproceedings
munoz-sanchez-etal-2022-first
A First Attempt at Unreliable News Detection in {S}wedish
Monti, Johanna and Basile, Valerio and Buono, Maria Pia Di and Manna, Raffaele and Pascucci, Antonio and Tonelli, Sara
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.restup-1.1/
Mu{\~n}oz S{\'a}nchez, Ricardo and Johansson, Eric and Tayefeh, Shakila and Kad, Shreyash
Proceedings of the Second International Workshop on Resources and Techniques for User Information in Abusive Language Analysis
1--7
Throughout the COVID-19 pandemic, a parallel infodemic has also been going on, such that information has been spreading faster than the virus itself. During this time, every individual needs access to accurate news in order to take corresponding protective measures, regardless of their country of origin or the language they speak, as misinformation can cause significant harm not only to individuals but also to society. In this paper we train several machine learning models (ranging from traditional machine learning to deep learning) to determine whether news articles come from a reliable or an unreliable source, using just the body of the article. Moreover, we use a previously introduced corpus of news in Swedish related to the COVID-19 pandemic for the classification task. Given that our dataset is both unbalanced and small, we use subsampling and easy data augmentation (EDA) to address these issues (a sketch of two EDA operations follows this entry). In the end, we find that, due to the small size of our dataset, using traditional machine learning along with data augmentation yields results that rival those of transformer models such as BERT.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,266
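Easy data augmentation (EDA), used in the entry above against the small dataset size, consists of simple token-level operations. A sketch of two of them; synonym replacement, which needs a lexicon, is omitted, and the example sentence is invented:

```python
import random

def random_swap(tokens, n_swaps=1):
    """Randomly swap the positions of two tokens, n_swaps times."""
    tokens = tokens[:]
    for _ in range(n_swaps):
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p=0.1):
    """Drop each token with probability p, keeping at least one token."""
    kept = [t for t in tokens if random.random() > p]
    return kept or [random.choice(tokens)]

sentence = "nyheten sprids snabbare än viruset".split()  # toy Swedish sentence
print(random_swap(sentence), random_deletion(sentence))
```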
inproceedings
jahan-etal-2022-banglahatebert
{B}angla{H}ate{BERT}: {BERT} for Abusive Language Detection in {B}engali
Monti, Johanna and Basile, Valerio and Buono, Maria Pia Di and Manna, Raffaele and Pascucci, Antonio and Tonelli, Sara
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.restup-1.2/
Jahan, Md Saroar and Haque, Mainul and Arhab, Nabil and Oussalah, Mourad
Proceedings of the Second International Workshop on Resources and Techniques for User Information in Abusive Language Analysis
8--15
This paper introduces BanglaHateBERT, a retrained BERT model for abusive language detection in Bengali. The model was trained with a large-scale Bengali offensive, abusive, and hateful corpus that we collected from different sources and made available to the public. Furthermore, we collected and manually annotated a balanced 15K Bengali hate speech dataset and made it publicly available for the research community. We used the existing pre-trained BanglaBERT model and retrained it with 1.5 million offensive posts. We present the results of a detailed comparison between the generic pre-trained language model and the version retrained on abuse-inclined data. On all datasets, BanglaHateBERT outperformed the corresponding available BERT model.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,267
inproceedings
soykan-etal-2022-comparison
A Comparison of Machine Learning Techniques for {T}urkish Profanity Detection
Monti, Johanna and Basile, Valerio and Buono, Maria Pia Di and Manna, Raffaele and Pascucci, Antonio and Tonelli, Sara
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.restup-1.3/
Soykan, Levent and Karsak, Cihan and Durgar Elkahlout, Ilknur and Aytan, Burak
Proceedings of the Second International Workshop on Resources and Techniques for User Information in Abusive Language Analysis
16--24
Profanity detection became an important task with the increase of social media usage. Most users prefer a clean, profanity-free environment in which to communicate with others. In order to provide such an environment, service providers use various profanity detection tools. In this paper, we investigate Turkish profanity detection in our search engine. We collected and labeled a dataset of search engine queries with one of two classes: profane and not profane. We experimented with several classical machine learning and deep learning methods and compared them in terms of speed and accuracy (a minimal classical baseline pipeline follows this entry). We achieved our best score, an F1 of 0.93, with the transformer-based ELECTRA model. We also compared our models with the state-of-the-art Turkish profanity detection tool and observed that ours outperforms it in all aspects.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,268
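A classical baseline of the kind the entry above compares against transformers can be a TF-IDF representation fed to a linear classifier. A minimal scikit-learn sketch; the queries and labels are invented placeholders, and character n-grams are one plausible choice for short, noisy queries:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for labelled search-engine queries (1 = profane, 0 = not).
queries = ["hava durumu", "en yakın eczane", "<profane query>", "<profane query 2>"]
labels = [0, 0, 1, 1]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(queries, labels)
print(clf.predict(["eczane nerede"]))
```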
inproceedings
ignat-vogel-2022-features
Features and Categories of Hyperbole in Cyberbullying Discourse on Social Media
Monti, Johanna and Basile, Valerio and Buono, Maria Pia Di and Manna, Raffaele and Pascucci, Antonio and Tonelli, Sara
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.restup-1.4/
Ignat, Simona and Vogel, Carl
Proceedings of the Second International Workshop on Resources and Techniques for User Information in Abusive Language Analysis
25--31
Cyberbullying discourse is achieved through multiple linguistic conveyances. Hyperboles witnessed in a corpus of cyberbullying utterances are studied, and their linguistic features are analyzed using the traditional grammatical indications of exaggeration. The method relies on data selected from a larger corpus of utterances identified and labelled as {\textquotedblleft}bullying{\textquotedblright}, drawn from Twitter between October 2020 and March 2022. One outcome is a lexicon of 250 entries. A small number of lexical-level features have been isolated, and chi-squared contingency tests applied to evaluate their information value in identifying hyperbole (a minimal example of such a test follows this entry). Words or affixes indicating superlatives or extremes of scales, with positive but not negative valency items, interact with hyperbole classification in this data set. All extracted utterances have been considered exaggerations, and the stylistic status of {\textquotedblleft}hyperbole{\textquotedblright} has been discussed within the frame of new meanings in the context of social media.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,269
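The chi-squared contingency tests mentioned in the entry above run directly on a 2x2 feature-by-class table of counts. A minimal SciPy sketch with invented counts:

```python
from scipy.stats import chi2_contingency

# Rows: utterances with / without a superlative marker.
# Columns: labelled hyperbole / not hyperbole (counts are invented).
table = [[45, 15],
         [30, 60]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")
```

A small p-value suggests the feature and the hyperbole label are not independent, i.e. the feature carries information about the class.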
inproceedings
valerio-miceli-barone-etal-2022-distributionally
Distributionally Robust Recurrent Decoders with Random Network Distillation
Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.repl4nlp-1.1/
Valerio Miceli Barone, Antonio and Birch, Alexandra and Sennrich, Rico
Proceedings of the 7th Workshop on Representation Learning for NLP
1--8
Neural machine learning models can successfully model language that is similar to their training distribution, but they are highly susceptible to degradation under distribution shift, which occurs in many practical applications when processing out-of-domain (OOD) text. This has been attributed to {\textquotedblleft}shortcut learning{\textquotedblright}: relying on weak correlations over arbitrarily large contexts. We propose a method based on OOD detection with Random Network Distillation to allow an autoregressive language model to automatically disregard OOD context during inference, smoothly transitioning towards a less expressive but more robust model as the data becomes more OOD, while retaining its full context capability when operating in-distribution. We apply our method to a GRU architecture, demonstrating improvements on multiple language modeling (LM) datasets.
null
null
10.18653/v1/2022.repl4nlp-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,271
inproceedings
meshgi-etal-2022-q
{Q}-Learning Scheduler for Multi Task Learning Through the use of Histogram of Task Uncertainty
Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.repl4nlp-1.2/
Meshgi, Kourosh and Sadat Mirzaei, Maryam and Sekine, Satoshi
Proceedings of the 7th Workshop on Representation Learning for NLP
9--19
Simultaneous training of a multi-task learning network on different domains or tasks is not always straightforward. It can lead to inferior performance or generalization compared to the corresponding single-task networks. An effective training scheduling method is deemed necessary to maximize the benefits of multi-task learning. Traditional schedulers follow a heuristic or prefixed strategy, ignoring the relation of the tasks, their sample complexities, and the state of the emergent shared features. We propose a deep Q-Learning Scheduler (QLS) that monitors the state of the tasks and the shared features using a novel histogram of task uncertainty and, through trial and error, learns an optimal policy for task scheduling. Extensive experiments were conducted on multi-domain and multi-task settings with various task difficulty profiles; the proposed method was benchmarked against other schedulers, demonstrating its superior performance, and the results are discussed.
null
null
10.18653/v1/2022.repl4nlp-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,272
inproceedings
bielawski-etal-2022-clip
When does {CLIP} generalize better than unimodal models? When judging human-centric concepts
Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.repl4nlp-1.4/
Bielawski, Romain and Devillers, Benjamin and Van De Cruys, Tim and Vanrullen, Rufin
Proceedings of the 7th Workshop on Representation Learning for NLP
29--38
CLIP, a vision-language network trained with a multimodal contrastive learning objective on a large dataset of images and captions, has demonstrated impressive zero-shot ability in various tasks. However, recent work showed that, in comparison to unimodal (visual) networks, CLIP's multimodal training does not benefit generalization (e.g. few-shot or transfer learning) for standard visual classification tasks such as object, street-number, or animal recognition. Here, we hypothesize that CLIP's improved unimodal generalization abilities may be most prominent in domains that involve human-centric concepts (cultural, social, aesthetic, affective...); this is because CLIP's training dataset is mainly composed of image annotations made by humans for other humans. To evaluate this, we use 3 tasks that require judging human-centric concepts: sentiment analysis on tweets and genre classification on books or movies. We introduce and publicly release a new multimodal dataset for movie genre classification. We compare CLIP's visual stream against two visually trained networks and CLIP's textual stream against two linguistically trained networks, as well as multimodal combinations of these networks. We show that CLIP generally outperforms other networks, whether using one or two modalities. We conclude that CLIP's multimodal training is beneficial for both unimodal and multimodal tasks that require classification of human-centric concepts.
null
null
10.18653/v1/2022.repl4nlp-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,273
inproceedings
assylbekov-etal-2022-hyperbolic
From Hyperbolic Geometry Back to Word Embeddings
Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.repl4nlp-1.5/
Assylbekov, Zhenisbek and Nurmukhamedov, Sultan and Sheverdin, Arsen and Mach, Thomas
Proceedings of the 7th Workshop on Representation Learning for NLP
39--45
We choose random points in the hyperbolic disc and claim that these points are already word representations; however, it is yet to be uncovered which point corresponds to which word of the human language of interest. This correspondence can be approximately established using pointwise mutual information between words and recent alignment techniques (a toy PMI computation follows this entry).
null
null
10.18653/v1/2022.repl4nlp-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,274
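Pointwise mutual information, invoked in the entry above, is PMI(w, c) = log [ p(w, c) / (p(w) p(c)) ]. A toy computation from raw co-occurrence counts (invented data, not the paper's corpus):

```python
import math
from collections import Counter

# Toy co-occurrence counts of (word, context) pairs.
pairs = [("cat", "purr"), ("cat", "purr"), ("cat", "sleep"),
         ("dog", "bark"), ("dog", "sleep")]
pair_c, word_c, ctx_c = Counter(pairs), Counter(), Counter()
for w, c in pairs:
    word_c[w] += 1
    ctx_c[c] += 1
n = len(pairs)

def pmi(w, c):
    """PMI from maximum-likelihood probability estimates."""
    return math.log((pair_c[(w, c)] / n) / ((word_c[w] / n) * (ctx_c[c] / n)))

print(f"PMI(cat, purr) = {pmi('cat', 'purr'):.3f}")  # positive: co-occur more than chance
```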
inproceedings
chen-etal-2022-comparative
A Comparative Study of Pre-trained Encoders for Low-Resource Named Entity Recognition
Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.repl4nlp-1.6/
Chen, Yuxuan and Mikkelsen, Jonas and Binder, Arne and Alt, Christoph and Hennig, Leonhard
Proceedings of the 7th Workshop on Representation Learning for NLP
46--59
Pre-trained language models (PLM) are effective components of few-shot named entity recognition (NER) approaches when augmented with continued pre-training on task-specific out-of-domain data or fine-tuning on in-domain data. However, their performance in low-resource scenarios, where such data is not available, remains an open question. We introduce an encoder evaluation framework, and use it to systematically compare the performance of state-of-the-art pre-trained representations on the task of low-resource NER. We analyze a wide range of encoders pre-trained with different strategies, model architectures, intermediate-task fine-tuning, and contrastive learning. Our experimental results across ten benchmark NER datasets in English and German show that encoder performance varies significantly, suggesting that the choice of encoder for a specific low-resource scenario needs to be carefully evaluated.
null
null
10.18653/v1/2022.repl4nlp-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,275
inproceedings
lovenia-etal-2022-clozer
Clozer: Adaptable Data Augmentation for Cloze-style Reading Comprehension
Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.repl4nlp-1.7/
Lovenia, Holy and Wilie, Bryan and Chung, Willy and Min, Zeng and Cahyawijaya, Samuel and Su, Dan and Fung, Pascale
Proceedings of the 7th Workshop on Representation Learning for NLP
60--66
Task-adaptive pre-training (TAPT) alleviates the lack of labelled data and provides a performance lift by adapting unlabelled data to the downstream task. Unfortunately, existing adaptations mainly involve deterministic rules that cannot generalize well. Here, we propose Clozer, a sequence-tagging-based cloze answer extraction method used in TAPT that is extendable to any cloze-style machine reading comprehension (MRC) downstream task. We experiment on multiple-choice cloze-style MRC tasks and show that Clozer performs significantly better than the oracle and the state-of-the-art in escalating TAPT effectiveness for lifting model performance, and that Clozer is able to recognize the gold answers independently of any heuristics.
null
null
10.18653/v1/2022.repl4nlp-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,276
inproceedings
gonen-etal-2022-analyzing
Analyzing Gender Representation in Multilingual Models
Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.repl4nlp-1.8/
Gonen, Hila and Ravfogel, Shauli and Goldberg, Yoav
Proceedings of the 7th Workshop on Representation Learning for NLP
67--77
Multilingual language models were shown to allow for nontrivial transfer across scripts and languages. In this work, we study the structure of the internal representations that enable this transfer. We focus on the representations of gender distinctions as a practical case study, and examine the extent to which the gender concept is encoded in shared subspaces across different languages. Our analysis shows that gender representations consist of several prominent components that are shared across languages, alongside language-specific components. The existence of language-independent and language-specific components provides an explanation for an intriguing empirical observation we make: while gender classification transfers well across languages, interventions for gender removal trained on a single language do not transfer easily to others.
null
null
10.18653/v1/2022.repl4nlp-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,277
inproceedings
liu-etal-2022-detecting
Detecting Textual Adversarial Examples Based on Distributional Characteristics of Data Representations
Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.repl4nlp-1.9/
Liu, Na and Dras, Mark and Emma Zhang, Wei
Proceedings of the 7th Workshop on Representation Learning for NLP
78--90
Although deep neural networks have achieved state-of-the-art performance in various machine learning tasks, adversarial examples, constructed by adding small non-random perturbations to correctly classified inputs, successfully fool highly expressive deep classifiers into incorrect predictions. Approaches to adversarial attacks in natural language tasks have boomed in the last five years using character-level, word-level, phrase-level, or sentence-level textual perturbations. While there is some work in NLP on defending against such attacks through proactive methods, like adversarial training, there is to our knowledge no effective general reactive approach to defence via detection of textual adversarial examples such as is found in the image processing literature. In this paper, we propose two new reactive methods for NLP to fill this gap, which, unlike the few limited application baselines from NLP, are based entirely on distributional characteristics of learned representations: we adapt one from the image processing literature (Local Intrinsic Dimensionality (LID); a sketch of the LID estimator follows this entry), and propose a novel one (MultiDistance Representation Ensemble Method (MDRE)). Adapted LID and MDRE obtain state-of-the-art results on character-level, word-level, and phrase-level attacks on the IMDB dataset, as well as on the latter two with respect to the MultiNLI dataset. For future research, we publish our code.
null
null
10.18653/v1/2022.repl4nlp-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,278
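Local Intrinsic Dimensionality (LID), adapted in the entry above, is commonly estimated with the maximum-likelihood estimator LID = -(1/k * sum_i log(r_i / r_k))^(-1) over the k nearest-neighbour distances r_1 <= ... <= r_k. A minimal NumPy sketch of that generic estimator, not the paper's full detector:

```python
import numpy as np

def lid_mle(query, reference, k=10):
    """MLE estimate of local intrinsic dimensionality at `query`."""
    dists = np.linalg.norm(reference - query, axis=1)
    dists = np.sort(dists[dists > 0])[:k]        # k nearest non-zero distances
    return -1.0 / np.mean(np.log(dists / dists[-1]))

rng = np.random.default_rng(0)
points = rng.normal(size=(200, 8))               # stand-in for learned representations
print(f"LID estimate: {lid_mle(points[0], points):.2f}")
```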
inproceedings
mofijul-islam-etal-2022-vocabulary
A Vocabulary-Free Multilingual Neural Tokenizer for End-to-End Task Learning
Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.repl4nlp-1.10/
Mofijul Islam, Md and Aguilar, Gustavo and Ponnusamy, Pragaash and Solomon Mathialagan, Clint and Ma, Chengyuan and Guo, Chenlei
Proceedings of the 7th Workshop on Representation Learning for NLP
91--99
Subword tokenization is a commonly used input pre-processing step in most recent NLP models. However, it limits the models' ability to leverage end-to-end task learning. Its frequency-based vocabulary creation compromises tokenization in low-resource languages, leading models to produce suboptimal representations. Additionally, the dependency on a fixed vocabulary limits the subword models' adaptability across languages and domains. In this work, we propose a vocabulary-free neural tokenizer by distilling segmentation information from heuristic-based subword tokenization. We pre-train our character-based tokenizer by processing unique words from a multilingual corpus, thereby extensively increasing word diversity across languages. Unlike the predefined and fixed vocabularies in subword methods, our tokenizer allows end-to-end task learning, resulting in optimal task-specific tokenization. The experimental results show that replacing the subword tokenizer with our neural tokenizer consistently improves performance on multilingual (NLI) and code-switching (sentiment analysis) tasks, with larger gains in low-resource languages. Additionally, our neural tokenizer exhibits a robust performance on downstream tasks when adversarial noise is present (typos and misspelling), further increasing the initial improvements over statistical subword tokenizers.
null
null
10.18653/v1/2022.repl4nlp-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,279
inproceedings
wu-etal-2022-identifying
Identifying the Limits of Cross-Domain Knowledge Transfer for Pretrained Models
Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.repl4nlp-1.11/
Wu, Zhengxuan and F. Liu, Nelson and Potts, Christopher
Proceedings of the 7th Workshop on Representation Learning for NLP
100--110
There is growing evidence that pretrained language models improve task-specific fine-tuning even where the task examples are radically different from those seen in training. We study an extreme case of transfer learning by providing a systematic exploration of how much transfer occurs when models are denied any information about word identity via random scrambling. In four classification tasks and two sequence labeling tasks, we evaluate LSTMs using GloVe embeddings, BERT, and baseline models. Among these models, we find that only BERT shows high rates of transfer into our scrambled domains, and for classification but not sequence labeling tasks. Our analyses seek to explain why transfer succeeds for some tasks but not others, to isolate the separate contributions of pretraining versus fine-tuning, to show that the fine-tuning process is not merely learning to unscramble the scrambled inputs, and to quantify the role of word frequency. Furthermore, our results suggest that current benchmarks may overestimate the degree to which current models actually understand language.
null
null
10.18653/v1/2022.repl4nlp-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,280
inproceedings
dikeoulias-etal-2022-temporal
Temporal Knowledge Graph Reasoning with Low-rank and Model-agnostic Representations
Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.repl4nlp-1.12/
Dikeoulias, Ioannis and Amin, Saadullah and Neumann, G{\"u}nter
Proceedings of the 7th Workshop on Representation Learning for NLP
111--120
Temporal knowledge graph completion (TKGC) has become a popular approach for reasoning over event and temporal knowledge graphs, targeting the completion of knowledge with accurate but missing information. In this context, tensor decomposition has successfully modeled interactions between entities and relations. Its effectiveness in static knowledge graph completion motivates us to introduce Time-LowFER, a family of parameter-efficient and time-aware extensions of the low-rank tensor factorization model LowFER. Noting several limitations in current approaches to representing time, we propose a cycle-aware time-encoding scheme for time features, which is model-agnostic and offers a more generalized representation of time (a generic cyclical-encoding sketch follows this entry). We implement our methods in a unified temporal knowledge graph embedding framework, focusing on time-sensitive data processing. The experiments show that our proposed methods perform on par with or better than the state-of-the-art semantic matching models on two benchmarks.
null
null
10.18653/v1/2022.repl4nlp-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,281
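A cycle-aware encoding of the kind motivated in the entry above maps a periodic time feature onto the unit circle, so the end of a cycle stays close to its start. A generic sine/cosine sketch illustrating the idea; the paper's Time-LowFER scheme is not reproduced here:

```python
import numpy as np

def cyclical_encoding(value, period):
    """Map a periodic feature (e.g. month 1-12) to a point on the unit circle."""
    angle = 2.0 * np.pi * (value % period) / period
    return np.array([np.sin(angle), np.cos(angle)])

# December and January land near each other, unlike in a raw 1..12 encoding.
dec, jan = cyclical_encoding(12, 12), cyclical_encoding(1, 12)
print(f"December-January distance: {np.linalg.norm(dec - jan):.3f}")
```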
inproceedings
jun-etal-2022-anna
{ANNA}: Enhanced Language Representation for Question Answering
Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.repl4nlp-1.13/
Jun, Changwook and Jang, Hansol and Sim, Myoseop and Kim, Hyun and Choi, Jooyoung and Min, Kyungkoo and Bae, Kyunghoon
Proceedings of the 7th Workshop on Representation Learning for NLP
121--132
Pre-trained language models have brought significant improvements in performance across a variety of natural language processing tasks. Most existing models achieving state-of-the-art results have presented their approaches from the separate perspectives of data processing, pre-training tasks, neural network modeling, or fine-tuning. In this paper, we demonstrate how these approaches affect performance individually, and show that a language model performs best on a specific question answering task when those approaches are jointly considered in pre-training. In particular, we propose an extended pre-training task and a new neighbor-aware mechanism that attends more to neighboring tokens to capture the richness of context for pre-training language modeling. Our best model achieves new state-of-the-art results of 95.7{\%} F1 and 90.6{\%} EM on SQuAD 1.1 and also outperforms existing pre-trained language models such as RoBERTa, ALBERT, ELECTRA, and XLNet on the SQuAD 2.0 benchmark.
null
null
10.18653/v1/2022.repl4nlp-1.13
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,282
inproceedings
abdessaied-etal-2022-video
Video Language Co-Attention with Multimodal Fast-Learning Feature Fusion for {V}ideo{QA}
Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.repl4nlp-1.15/
Abdessaied, Adnen and Sood, Ekta and Bulling, Andreas
Proceedings of the 7th Workshop on Representation Learning for NLP
143--155
We propose the Video Language Co-Attention Network (VLCN) {--} a novel memory-enhanced model for Video Question Answering (VideoQA). Our model combines two original contributions: a multi-modal fast-learning feature fusion (FLF) block and a mechanism that uses self-attended language features to separately guide neural attention on both static and dynamic visual features extracted from individual video frames and short video clips. When trained from scratch, VLCN achieves competitive results with the state of the art on both MSVD-QA and MSRVTT-QA, with 38.06{\%} and 36.01{\%} test accuracies, respectively. Through an ablation study, we further show that FLF improves generalization across different VideoQA datasets and performance for question types that are notoriously challenging in current datasets, such as long questions that require deeper reasoning as well as questions with rare answers.
null
null
10.18653/v1/2022.repl4nlp-1.15
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,283
inproceedings
mosca-etal-2022-detecting
Detecting Word-Level Adversarial Text Attacks via {SH}apley Additive ex{P}lanations
Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.repl4nlp-1.16/
Huber, Lukas and K{\"u}hn, Marc Alexander and Mosca, Edoardo and Groh, Georg
Proceedings of the 7th Workshop on Representation Learning for NLP
156--166
State-of-the-art machine learning models are prone to adversarial attacks: maliciously crafted inputs fool the model into making a wrong prediction, often with high confidence. While defense strategies have been extensively explored in the computer vision domain, research in natural language processing still lacks techniques to make models resilient to adversarial text inputs. We adapt a technique from computer vision to detect word-level attacks targeting text classifiers. This method relies on training an adversarial detector leveraging Shapley additive explanations and outperforms the current state-of-the-art on two benchmarks. Furthermore, we show that the detector requires only a small number of training samples and, in some cases, generalizes to different datasets without needing to be retrained.
null
null
10.18653/v1/2022.repl4nlp-1.16
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,284
inproceedings
johnson-2022-binary
Binary Encoded Word Mover's Distance
Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.repl4nlp-1.17/
Johnson, Christian
Proceedings of the 7th Workshop on Representation Learning for NLP
167--172
Word Mover's Distance is a textual distance metric which calculates the minimum transport cost between two sets of word embeddings. This metric achieves impressive results on semantic similarity tasks, but is slow and difficult to scale due to the large number of floating-point calculations. This paper demonstrates that by combining pre-existing lower bounds with binary encoded word vectors, the metric can be rendered highly efficient in terms of computation time and memory while still maintaining accuracy on several textual similarity tasks (a minimal binarization-and-Hamming sketch follows this entry).
null
null
10.18653/v1/2022.repl4nlp-1.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,285
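Binarizing embeddings as in the entry above turns word-to-word costs into cheap bit operations: sign-binarize each vector and use normalized Hamming distance in place of a Euclidean cost. A minimal NumPy sketch; binarization by sign is one simple choice, not necessarily the paper's encoder:

```python
import numpy as np

def binarize(vectors):
    """Sign-binarize real-valued embeddings into packed uint8 bit arrays."""
    return np.packbits(vectors > 0, axis=1)

def hamming(a, b, n_bits):
    """Normalized Hamming distance between two packed bit vectors."""
    return np.unpackbits(np.bitwise_xor(a, b)).sum() / n_bits

rng = np.random.default_rng(0)
emb = rng.normal(size=(2, 64))            # two toy 64-d word embeddings
bits = binarize(emb)
print(f"Hamming distance: {hamming(bits[0], bits[1], 64):.3f}")
```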
inproceedings
haim-meirom-bobrowski-2022-unsupervised
Unsupervised Geometric and Topological Approaches for Cross-Lingual Sentence Representation and Comparison
Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.repl4nlp-1.18/
Haim Meirom, Shaked and Bobrowski, Omer
Proceedings of the 7th Workshop on Representation Learning for NLP
173--183
We propose novel structure-based approaches for the generation and comparison of cross-lingual sentence representations. We do so by applying geometric and topological methods to analyze the structure of sentences, as captured by their word embeddings. The key properties of our methods are: (a) they are designed to be isometry-invariant, in order to provide language-agnostic representations; (b) they are fully unsupervised and use no cross-lingual signal. The quality of our representations, and their preservation across languages, are evaluated in similarity comparison tasks, achieving competitive results. Furthermore, we show that our structure-based representations can be combined with existing methods for improved results.
null
null
10.18653/v1/2022.repl4nlp-1.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,286
inproceedings
soliman-etal-2022-study
A Study on Entity Linking Across Domains: Which Data is Best for Fine-Tuning?
Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.repl4nlp-1.19/
Soliman, Hassan and Adel, Heike and H. Gad-Elrab, Mohamed and Milchevski, Dragan and Str{\"o}tgen, Jannik
Proceedings of the 7th Workshop on Representation Learning for NLP
184--190
Entity linking disambiguates mentions by mapping them to entities in a knowledge graph (KG). One important question in today's research is how to extend neural entity linking systems to new domains. In this paper, we aim at a system that enables linking mentions to entities from a general-domain KG and a domain-specific KG at the same time. In particular, we represent the entities of different KGs in a joint vector space and address the questions of which data is best suited for creating and fine-tuning that space, and whether fine-tuning harms performance on the general domain. We find that a combination of data from both the general and the special domain is most helpful. The first is especially necessary for avoiding performance loss on the general domain. While additional supervision on entities that appear in both KGs performs best in an intrinsic evaluation of the vector space, it has less impact on the downstream task of entity linking.
null
null
10.18653/v1/2022.repl4nlp-1.19
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,287