Dataset column summary (column name, type, value statistics):

entry_type           string, 4 distinct values
citation_key         string, lengths 10 to 110
title                string, lengths 6 to 276
editor               string, 723 distinct values
month                string, 69 distinct values
year                 date string, 1963-01-01 to 2022-01-01
address              string, 202 distinct values
publisher            string, 41 distinct values
url                  string, lengths 34 to 62
author               string, lengths 6 to 2.07k
booktitle            string, 861 distinct values
pages                string, lengths 1 to 12
abstract             string, lengths 302 to 2.4k
journal              string, 5 distinct values
volume               string, 24 distinct values
doi                  string, lengths 20 to 39
n                    string, 3 distinct values
wer                  string, 1 distinct value
uas                  always null
language             string, 3 distinct values
isbn                 string, 34 distinct values
recall               always null
number               string, 8 distinct values
a                    always null
b                    always null
c                    always null
k                    always null
f1                   string, 4 distinct values
r                    string, 2 distinct values
mci                  string, 1 distinct value
p                    string, 2 distinct values
sd                   string, 1 distinct value
female               string, 0 distinct values
m                    string, 0 distinct values
food                 string, 1 distinct value
f                    string, 1 distinct value
note                 string, 20 distinct values
__index_level_0__    int64, 22k to 106k
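The column summary above follows the Hugging Face datasets convention: string columns reported either by number of distinct values or by length range, always-null columns, and an int64 row index. Below is a minimal sketch of loading such a dataset and inspecting its schema with the Hugging Face datasets library; the repository id used here is a placeholder, since the actual dataset name is not given in this card.

from datasets import load_dataset

# Placeholder repository id; substitute the real dataset name.
ds = load_dataset("username/acl-anthology-bib", split="train")

# Print every column name together with its declared feature type.
for name, feature in ds.features.items():
    print(f"{name}: {feature}")

# Most of the sparse columns (uas, recall, a, b, c, k, ...) are null;
# count how many rows actually carry a DOI.
n_with_doi = sum(1 for row in ds if row["doi"] is not None)
print(f"{n_with_doi} of {len(ds)} rows have a DOI")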
inproceedings
ri-etal-2022-finding
Finding Sub-task Structure with Natural Language Instruction
Andreas, Jacob and Narasimhan, Karthik and Nematzadeh, Aida
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lnls-1.1/
Ri, Ryokan and Hou, Yufang and Marinescu, Radu and Kishimoto, Akihiro
Proceedings of the First Workshop on Learning with Natural Language Supervision
1--9
When mapping a natural language instruction to a sequence of actions, it is often useful to identify sub-tasks in the instruction. Such sub-task segmentation, however, is not necessarily provided in the training data. We present the A2LCTC (Action-to-Language Connectionist Temporal Classification) algorithm to automatically discover a sub-task segmentation of an action sequence. A2LCTC does not require annotations of correct sub-task segments and learns to find them from pairs of instruction and action sequence in a weakly-supervised manner. We experiment with the ALFRED dataset and show that A2LCTC accurately finds the sub-task structures. With the discovered sub-task segments, we also train agents that work on the downstream task and empirically show that our algorithm improves the performance.
null
null
10.18653/v1/2022.lnls-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,254
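Each row above flattens one BibTeX record: the populated columns hold the entry's fields, and the remaining columns are null. Below is a minimal sketch of rebuilding a BibTeX entry from such a row, assuming the row is available as a plain Python dict keyed by the column names from the schema; the helper name row_to_bibtex is illustrative, not part of the dataset.

# Rebuild a BibTeX entry string from one flattened row (illustrative helper).
BIB_FIELDS = ["title", "author", "editor", "booktitle", "month", "year",
              "address", "publisher", "pages", "url", "doi", "abstract"]

def row_to_bibtex(row: dict) -> str:
    lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
    for field in BIB_FIELDS:
        value = row.get(field)
        if value is not None:  # skip the many null-valued columns
            lines.append(f"  {field} = {{{value}}},")
    lines.append("}")
    return "\n".join(lines)

Applied to the row above, this would yield an @inproceedings entry keyed ri-etal-2022-finding.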
inproceedings
mosca-etal-2022-grammarshap
{G}rammar{SHAP}: An Efficient Model-Agnostic and Structure-Aware {NLP} Explainer
Andreas, Jacob and Narasimhan, Karthik and Nematzadeh, Aida
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lnls-1.2/
Mosca, Edoardo and Demirt{\"u}rk, Defne and M{\"u}lln, Luca and Raffagnato, Fabio and Groh, Georg
Proceedings of the First Workshop on Learning with Natural Language Supervision
10--16
Interpreting NLP models is fundamental for their development as it can shed light on hidden properties and unexpected behaviors. However, while transformer architectures exploit contextual information to enhance their predictive capabilities, most of the available methods to explain such predictions only provide importance scores at the word level. This work addresses the lack of feature attribution approaches that also take into account the sentence structure. We extend the SHAP framework by proposing GrammarSHAP{---}a model-agnostic explainer leveraging the sentence's constituency parsing to generate hierarchical importance scores.
null
null
10.18653/v1/2022.lnls-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,255
inproceedings
parrish-etal-2022-single
Single-Turn Debate Does Not Help Humans Answer Hard Reading-Comprehension Questions
Andreas, Jacob and Narasimhan, Karthik and Nematzadeh, Aida
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lnls-1.3/
Parrish, Alicia and Trivedi, Harsh and Perez, Ethan and Chen, Angelica and Nangia, Nikita and Phang, Jason and Bowman, Samuel
Proceedings of the First Workshop on Learning with Natural Language Supervision
17--28
Current QA systems can generate reasonable-sounding yet false answers without explanation or evidence for the generated answer, which is especially problematic when humans cannot readily check the model's answers. This presents a challenge for building trust in machine learning systems. We take inspiration from real-world situations where difficult questions are answered by considering opposing sides (see Irving et al., 2018). For multiple-choice QA examples, we build a dataset of single arguments for both a correct and incorrect answer option in a debate-style set-up as an initial step in training models to produce explanations for two candidate answers. We use long contexts{---}humans familiar with the context write convincing explanations for pre-selected correct and incorrect answers, and we test if those explanations allow humans who have not read the full context to more accurately determine the correct answer. We do not find that explanations in our set-up improve human accuracy, but a baseline condition shows that providing human-selected text snippets does improve accuracy. We use these findings to suggest ways of improving the debate set-up for future data collection efforts.
null
null
10.18653/v1/2022.lnls-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,256
inproceedings
hase-bansal-2022-models
When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data
Andreas, Jacob and Narasimhan, Karthik and Nematzadeh, Aida
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lnls-1.4/
Hase, Peter and Bansal, Mohit
Proceedings of the First Workshop on Learning with Natural Language Supervision
29--39
Many methods now exist for conditioning models on task instructions and user-provided explanations for individual data points. These methods show great promise for improving task performance of language models beyond what can be achieved by learning from individual (x,y) pairs. In this paper, we (1) provide a formal framework for characterizing approaches to learning from explanation data, and (2) propose a synthetic task for studying how models learn from explanation data. In the first direction, we give graphical models for the available modeling approaches, in which explanation data can be used as model inputs, as targets, or as a prior. In the second direction, we introduce a carefully designed synthetic task with several properties making it useful for studying a model's ability to learn from explanation data. Each data point in this binary classification task is accompanied by a string that is essentially an answer to the \textit{why} question: {\textquotedblleft}why does data point x have label y?{\textquotedblright} We aim to encourage research into this area by identifying key considerations for the modeling problem and providing an empirical testbed for theories of how models can best learn from explanation data.
null
null
10.18653/v1/2022.lnls-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,257
inproceedings
hartmann-sonntag-2022-survey
A survey on improving {NLP} models with human explanations
Andreas, Jacob and Narasimhan, Karthik and Nematzadeh, Aida
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lnls-1.5/
Hartmann, Mareike and Sonntag, Daniel
Proceedings of the First Workshop on Learning with Natural Language Supervision
40--47
Training a model with access to human explanations can improve data efficiency and model performance on in- and out-of-domain data. Adding to these empirical findings, similarity with the process of human learning makes learning from explanations a promising way to establish a fruitful human-machine interaction. Several methods have been proposed for improving natural language processing (NLP) models with human explanations; they rely on different explanation types and mechanisms for integrating these explanations into the learning process. These methods are rarely compared with each other, making it hard for practitioners to choose the best combination of explanation type and integration mechanism for a specific use-case. In this paper, we give an overview of different methods for learning from human explanations, and discuss different factors that can inform the decision of which method to choose for a specific use-case.
null
null
10.18653/v1/2022.lnls-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,258
inproceedings
field-etal-2022-sentiment
Sentiment Analysis and Topic Modeling for Public Perceptions of Air Travel: {COVID} Issues and Policy Amendments
Siegert, Ingo and Rigault, Mickael and Arranz, Victoria
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.legal-1.2/
Field, Avery and Varde, Aparna and Lal, Pankaj
Proceedings of the Workshop on Ethical and Legal Issues in Human Language Technologies and Multilingual De-Identification of Sensitive Data In Language Resources within the 13th Language Resources and Evaluation Conference
2--8
Among many industries, air travel is impacted by the COVID pandemic. Airlines and airports rely on public sector information to enforce guidelines for ensuring health and safety of travelers. Such guidelines can be policy amendments or laws during the pandemic. In response to the inception of COVID preventive policies, travelers have exercised freedom of expression via the avenue of online reviews. This avenue facilitates voicing public concern while anonymizing / concealing user identity as needed. It is important to assess opinions on policy amendments to ensure transparency and openness, while also preserving confidentiality and ethics. Hence, this study leverages data science to analyze, with identity protection, the online reviews of airlines and airports since 2017, considering impacts of COVID issues and relevant policy amendments since 2020. Supervised learning with VADER sentiment analysis is deployed to predict changes in opinion from 2017 to date. Unsupervised learning with LDA topic modeling is employed to discover air travelers' major areas of concern before and after the pandemic. This study reveals that COVID policies have worsened public perceptions of air travel and aroused notable new concerns, affecting economics, environment and health.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,261
inproceedings
dipersio-2022-data
Data Protection, Privacy and {US} Regulation
Siegert, Ingo and Rigault, Mickael and Arranz, Victoria
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.legal-1.3/
DiPersio, Denise
Proceedings of the Workshop on Ethical and Legal Issues in Human Language Technologies and Multilingual De-Identification of Sensitive Data In Language Resources within the 13th Language Resources and Evaluation Conference
9--16
This paper examines the state of data protection and privacy in the United States. There is no comprehensive federal data protection or data privacy law despite bipartisan and popular support. There are several data protection bills pending in the 2022 session of the US Congress, five of which are examined in Section 2 below. Although it is not likely that any will be enacted, the growing number reflects the concerns of citizens and lawmakers about the power of big data. Recent actions against data abuses, including data breaches, litigation and settlements, are reviewed in Section 3 of this paper. These reflect the real harm caused when personal data is misused. Section 4 contains a brief US copyright law update on the fair use exemption, highlighting a recent court decision and indications of a re-thinking of the fair use analysis. In Section 5, some observations are made on the role of privacy in data protection regulation. It is argued that privacy should be considered from the start of the data collection and technology development process. Enhanced awareness of ethical issues, including privacy, through university-level data science programs will also lay the groundwork for best practices throughout the data and development cycles.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,262
inproceedings
kamocki-siegert-2022-pseudonymisation
Pseudonymisation of Speech Data as an Alternative Approach to {GDPR} Compliance
Siegert, Ingo and Rigault, Mickael and Arranz, Victoria
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.legal-1.4/
Kamocki, Pawel and Siegert, Ingo
Proceedings of the Workshop on Ethical and Legal Issues in Human Language Technologies and Multilingual De-Identification of Sensitive Data In Language Resources within the 13th Language Resources and Evaluation Conference
17--21
The debate on the use of personal data in language resources usually focuses {---} and rightfully so {---} on anonymisation. However, this very same debate usually ends quickly with the conclusion that proper anonymisation would necessarily cause loss of linguistically valuable information. This paper discusses an alternative approach {---} pseudonymisation. While pseudonymisation does not solve all the problems (inasmuch as pseudonymised data are still to be regarded as personal data and therefore their processing should still comply with the GDPR principles), it does provide a significant relief, especially {---} but not only {---} for those who process personal data for research purposes. This paper describes pseudonymisation as a measure to safeguard rights and interests of data subjects under the GDPR (with a special focus on the right to be informed). It also provides a concrete example of pseudonymisation carried out within a research project at the Institute of Information Technology and Communications of the Otto von Guericke University Magdeburg.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,263
inproceedings
rigault-etal-2022-categorizing
Categorizing legal features in a metadata-oriented task: defining the conditions of use
Siegert, Ingo and Rigault, Mickael and Arranz, Victoria
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.legal-1.5/
Rigault, Micka{\"e}l and Arranz, Victoria and Mapelli, Val{\'e}rie and Labropoulou, Penny and Piperidis, Stelios
Proceedings of the Workshop on Ethical and Legal Issues in Human Language Technologies and Multilingual De-Identification of Sensitive Data In Language Resources within the 13th Language Resources and Evaluation Conference
22--26
In recent times, more attention has been brought by the Human Language Technology (HLT) community to the legal framework for making available and reusing Language Resources (LR) and tools. Licensing is now an issue that is foreseen in most research projects and that is essential to provide legal certainty for repositories when distributing resources. Some repositories such as Zenodo or Quantum Stat do not offer the possibility to search for resources by licenses, which can turn the search for relevant resources into a very complex task. Other repositories such as Hugging Face propose a search feature by license which may make it difficult to figure out what use can be made of such resources. During the European Language Grid (ELG) project, we moved a step forward to link metadata with the terms and conditions of use. In this paper, we document the process we undertook to categorize legal features of licenses listed in the SPDX license list and widely used in the HLT community as well as those licenses used within the ELG platform.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,264
inproceedings
gottschalk-pichierri-2022-migration
About Migration Flows and Sentiment Analysis on {T}witter data: Building the Bridge between Technical and Legal Approaches to Data Protection
Siegert, Ingo and Rigault, Mickael and Arranz, Victoria
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.legal-1.6/
Gottschalk, Thilo and Pichierri, Francesca
Proceedings of the Workshop on Ethical and Legal Issues in Human Language Technologies and Multilingual De-Identification of Sensitive Data In Language Resources within the 13th Language Resources and Evaluation Conference
27--37
Sentiment analysis has always been an important driver of political decisions and campaigns across all fields. Novel technologies allow automatizing analysis of sentiments on a big scale and hence provide allegedly more accurate outcomes. With user numbers in the billions and their increasingly important role in societal discussions, social media platforms become a glaring data source for these types of analysis. Due to its public availability, the relative ease of access and the sheer amount of available data, the Twitter API has become a particularly important source to researchers and data analysts alike. Despite the evident value of these data sources, the analysis of such data comes with legal, ethical and societal risks that should be taken into consideration when analysing data from Twitter. This paper describes these risks along the technical processing pipeline and proposes related mitigation measures.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,265
inproceedings
delecraz-etal-2022-transparency
Transparency and Explainability of a Machine Learning Model in the Context of Human Resource Management
Siegert, Ingo and Rigault, Mickael and Arranz, Victoria
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.legal-1.7/
Delecraz, Sebastien and Eltarr, Loukman and Oullier, Olivier
Proceedings of the Workshop on Ethical and Legal Issues in Human Language Technologies and Multilingual De-Identification of Sensitive Data In Language Resources within the 13th Language Resources and Evaluation Conference
38--43
We introduce how the proprietary machine learning algorithms developed by Gojob, an HR Tech company, to match candidates to a job offer are as transparent and explainable as possible to users (i.e., our recruiters) and our clients (e.g. companies looking to fill jobs). We detail how our matching algorithm (which identifies the best candidates for a job offer) controls the fairness of its outcome. We have described the steps we have taken to ensure that the decisions made by our mathematical models not only inform but improve the performance of our recruiters.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,266
inproceedings
siegert-etal-2022-public
Public Interactions with Voice Assistant {--} Discussion of Different One-Shot Solutions to Preserve Speaker Privacy
Siegert, Ingo and Rigault, Mickael and Arranz, Victoria
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.legal-1.8/
Siegert, Ingo and Sinha, Yamini and Winkelmann, Gino and Jokisch, Oliver and Wendemuth, Andreas
Proceedings of the Workshop on Ethical and Legal Issues in Human Language Technologies and Multilingual De-Identification of Sensitive Data In Language Resources within the 13th Language Resources and Evaluation Conference
44--47
In recent years, the use of voice assistants has rapidly grown. Hereby, above all, the user's speech data is stored and processed on a cloud platform, being the decisive factor for a good performance in speech processing and understanding. Although usually, they can be found in private households, a lot of business cases are also employed using voice assistants for public places, be it as an information service, a tour guide, or a booking system. As long as the systems are used in private spaces, it could be argued that the usage is voluntary and that the users themselves are responsible for what is processed by the voice assistant system. When leaving the private space, the voluntary use is not the case anymore, as users may be made aware that their voice is processed in the cloud and background voices can be unintentionally recorded and processed as well. Thus, the usage of voice assistants in public environments raises a lot of privacy concerns. In this contribution, we discuss possible anonymization solutions to hide the speakers' identity, thus allowing a safe cloud processing of speech data. Thereby, we promote the public use of voice assistants.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,267
inproceedings
bridal-etal-2022-cross
Cross-Clinic De-Identification of {S}wedish Electronic Health Records: Nuances and Caveats
Siegert, Ingo and Rigault, Mickael and Arranz, Victoria
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.legal-1.10/
Bridal, Olle and Vakili, Thomas and Santini, Marina
Proceedings of the Workshop on Ethical and Legal Issues in Human Language Technologies and Multilingual De-Identification of Sensitive Data In Language Resources within the 13th Language Resources and Evaluation Conference
49--52
Privacy preservation of sensitive information is one of the main concerns in clinical text mining. Due to the inherent privacy risks of handling clinical data, the clinical corpora used to create the clinical Named Entity Recognition (NER) models underlying clinical de-identification systems cannot be shared. This situation implies that clinical NER models are trained and tested on data originating from the same institution since it is rarely possible to evaluate them on data belonging to a different organization. These restrictions on sharing make it very difficult to assess whether a clinical NER model has overfitted the data or if it has learned any undetected biases. This paper presents the results of the first-ever cross-institution evaluation of a Swedish de-identification system on Swedish clinical data. Alongside the encouraging results, we discuss differences and similarities across EHR naming conventions and NER tagsets.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,269
inproceedings
bruera-etal-2022-generating
Generating Realistic Synthetic Curricula Vitae for Machine Learning Applications under Differential Privacy
Siegert, Ingo and Rigault, Mickael and Arranz, Victoria
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.legal-1.11/
Bruera, Andrea and Alda, Francesco and Di Cerbo, Francesco
Proceedings of the Workshop on Ethical and Legal Issues in Human Language Technologies and Multilingual De-Identification of Sensitive Data In Language Resources within the 13th Language Resources and Evaluation Conference
53--63
Applications involving machine learning in Human Resources (HR, the management of human talent in order to accomplish organizational goals) must respect the privacy of the individuals whose data is being used. This is a difficult aim, given the extremely personal nature of text data handled by HR departments, such as Curricula Vitae (CVs).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,270
inproceedings
arranz-etal-2022-mapa
{MAPA} Project: Ready-to-Go Open-Source Datasets and Deep Learning Technology to Remove Identifying Information from Text Documents
Siegert, Ingo and Rigault, Mickael and Arranz, Victoria
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.legal-1.12/
Arranz, Victoria and Choukri, Khalid and Cuadros, Montse and Garc{\'i}a Pablos, Aitor and Gianola, Lucie and Grouin, Cyril and Herranz, Manuel and Paroubek, Patrick and Zweigenbaum, Pierre
Proceedings of the Workshop on Ethical and Legal Issues in Human Language Technologies and Multilingual De-Identification of Sensitive Data In Language Resources within the 13th Language Resources and Evaluation Conference
64--72
This paper presents the outcomes of the MAPA project, a set of annotated corpora for 24 languages of the European Union and an open-source customisable toolkit able to detect and substitute sensitive information in text documents from any domain, using state-of-the-art, deep learning-based named entity recognition techniques. In the context of the project, the toolkit has been developed and tested on administrative, legal and medical documents, obtaining state-of-the-art results. As a result of the project, 24 dataset packages have been released and the de-identification toolkit is available as open source.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,271
inproceedings
clos-etal-2022-pripa
{P}ri{PA}: A Tool for Privacy-Preserving Analytics of Linguistic Data
Siegert, Ingo and Rigault, Mickael and Arranz, Victoria
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.legal-1.13/
Clos, Jeremie and McClaughlin, Emma and Barnard, Pepita and Nichele, Elena and Knight, Dawn and McAuley, Derek and Adolphs, Svenja
Proceedings of the Workshop on Ethical and Legal Issues in Human Language Technologies and Multilingual De-Identification of Sensitive Data In Language Resources within the 13th Language Resources and Evaluation Conference
73--78
The days of large amorphous corpora collected with armies of Web crawlers and stored indefinitely are, or should be, coming to an end. There is a wealth of hidden linguistic information that is increasingly difficult to access, hidden in personal data that would be unethical and technically challenging to collect using traditional methods such as Web crawling and mass surveillance of online discussion spaces. Advances in privacy regulations such as GDPR and changes in the public perception of privacy bring into question the problematic ethical dimension of extracting information from unaware if not unwilling participants. Modern corpora need to adapt, be focused on testing specific hypotheses, and be respectful of the privacy of the people who generated their data. Our work focuses on using a distributed participatory approach and continuous informed consent to solve these issues, by allowing participants to voluntarily contribute their own censored personal data at a granular level. We evaluate our approach in a three-pronged manner, testing the accuracy of measurement of statistical measures of language with respect to standard corpus linguistics tools, evaluating the usability of our application with a participant involvement panel, and using the tool for a case study on health communication.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,272
inproceedings
rigault-etal-2022-legal
Legal and Ethical Challenges in Recording Air Traffic Control Speech
Siegert, Ingo and Rigault, Mickael and Arranz, Victoria
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.legal-1.14/
Rigault, Micka{\"e}l and Cevenini, Claudia and Choukri, Khalid and Kocour, Martin and Vesel{\'y}, Karel and Szoke, Igor and Motlicek, Petr and Zuluaga-Gomez, Juan Pablo and Blatt, Alexander and Klakow, Dietrich and Tart, Allan and Kol{\v{c}}{\'a}rek, Pavel and {\v{C}}ernock{\'y}, Jan
Proceedings of the Workshop on Ethical and Legal Issues in Human Language Technologies and Multilingual De-Identification of Sensitive Data In Language Resources within the 13th Language Resources and Evaluation Conference
79--83
In this paper the authors detail the various legal and ethical issues faced during the ATCO2 project. This project is aimed at developing tools to automatically collect and transcribe air traffic conversations, especially conversations between pilots and air control towers. The authors discuss issues related to intellectual property, public data, privacy, and general ethical questions arising from the collection of air-traffic control speech.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,273
inproceedings
yanez-fraisse-2022-dance
It is not Dance, is Data: Gearing Ethical Circulation of Intangible Cultural Heritage practices in the Digital Space
Siegert, Ingo and Rigault, Mickael and Arranz, Victoria
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.legal-1.15/
Y{\'a}nez, Jorge and Fraisse, Amel
Proceedings of the Workshop on Ethical and Legal Issues in Human Language Technologies and Multilingual De-Identification of Sensitive Data In Language Resources within the 13th Language Resources and Evaluation Conference
84--91
The documentation, protection and dissemination of Intangible Cultural Heritage (ICH) in the digital age pose significant theoretical, technological and legal challenges. Through a multidisciplinary lens, this paper presents new approaches for collecting, documenting, encrypting and protecting ICH-related data for more ethical circulation. Human-movement recognition technologies such as motion capture allow for the recording, extraction and reproduction of human movement with unprecedented precision. The once indistinguishable or hard-to-trace reproduction of dance steps between their creators and unauthorized third parties becomes patent through the transmission of embodied knowledge, but in the form of data. This new battlefield prompted by digital technologies only adds to the disputes within the creative industries, in terms of authorship, ownership and commodification of body language. In this paper, we aim to disentangle the various layers present in the process of digitisation of the dancing body, to identify its by-products as well as the ownership rights that might arise. {\textquotedblleft}Who owns what?{\textquotedblright}, the basic premise of intellectual property law, is transposed, in this case, onto the various types of data generated when intangible cultural heritage, in the form of dance, is digitised through motion capture and encrypted with blockchain technologies.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,274
inproceedings
abromeit-2022-annohub
The Annohub Web Portal
Declerck, Thierry and McCrae, John P. and Montiel, Elena and Chiarcos, Christian and Ionov, Maxim
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.ldl-1.1/
Abromeit, Frank
Proceedings of the 8th Workshop on Linked Data in Linguistics within the 13th Language Resources and Evaluation Conference
1--6
We introduce the Annohub web portal, specialized on metadata for annotated language resources like corpora, lexica and linguistic terminologies. The new portal provides easy access to our previously released Annohub Linked Data set, by allowing users to explore the annotation metadata in the web browser. In addition, we added features that will allow users to contribute to Annohub by means of uploading language data, in RDF, CoNLL or XML formats, for annotation scheme and language analysis. The generated metadata is finally available for personal use, or for release in Annohub.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,276
inproceedings
ikonic-nesic-etal-2022-eltec
From {ELT}e{C} Text Collection Metadata and Named Entities to Linked-data (and Back)
Declerck, Thierry and McCrae, John P. and Montiel, Elena and Chiarcos, Christian and Ionov, Maxim
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.ldl-1.2/
Ikoni{\'c} Ne{\v{s}}i{\'c}, Milica and Stankovi{\'c}, Ranka and Sch{\"o}ch, Christof and Skoric, Mihailo
Proceedings of the 8th Workshop on Linked Data in Linguistics within the 13th Language Resources and Evaluation Conference
7--16
In this paper we present the wikification of the ELTeC (European Literary Text Collection), developed within the COST Action {\textquotedblleft}Distant Reading for European Literary History{\textquotedblright} (CA16204). ELTeC is a multilingual corpus of novels written in the time period 1840{---}1920, built to apply distant reading methods and tools to explore the European literary history. We present the pipeline that led to the production of the linked dataset, the novels' metadata retrieval and named entity recognition, transformation, mapping and Wikidata population, followed by named entity linking and export to NIF (NLP Interchange Format). The speeding up of the process of data preparation and import to Wikidata is presented on the use case of seven sub-collections of ELTeC (English, Portuguese, French, Slovenian, German, Hungarian and Serbian). Our goal was to automate the process of preparing and importing information, so OpenRefine and QuickStatements were chosen as the best options. The paper also includes examples of SPARQL queries for retrieval of authors, novel titles, publication places and other metadata with different visualisation options as well as statistical overviews.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,277
inproceedings
nordhoff-kramer-2022-imtvault
{IMTV}ault: Extracting and Enriching Low-resource Language Interlinear Glossed Text from Grammatical Descriptions and Typological Survey Articles
Declerck, Thierry and McCrae, John P. and Montiel, Elena and Chiarcos, Christian and Ionov, Maxim
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.ldl-1.3/
Nordhoff, Sebastian and Kr{\"a}mer, Thomas
Proceedings of the 8th Workshop on Linked Data in Linguistics within the 13th Language Resources and Evaluation Conference
17--25
Many NLP resources and programs focus on a handful of major languages. But there are thousands of languages with low or no resources available as structured data. This paper shows the extraction of 40k examples with interlinear morpheme translation in 280 different languages from LaTeX-based publications of the open access publisher Language Science Press. These examples are transformed into Linked Data. We use LIGT for modelling and enrich the data with Wikidata and Glottolog. The data is made available as HTML, JSON, JSON-LD and N-quads, and query facilities for humans (Elasticsearch) and machines (API) are provided.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,278
inproceedings
fantoli-etal-2022-linking
Linking the {LASLA} Corpus in the {L}i{L}a Knowledge Base of Interoperable Linguistic Resources for {L}atin
Declerck, Thierry and McCrae, John P. and Montiel, Elena and Chiarcos, Christian and Ionov, Maxim
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.ldl-1.4/
Fantoli, Margherita and Passarotti, Marco and Mambrini, Francesco and Moretti, Giovanni and Ruffolo, Paolo
Proceedings of the 8th Workshop on Linked Data in Linguistics within the 13th Language Resources and Evaluation Conference
26--34
This paper describes the process of interlinking the 130 Classical Latin texts provided by an annotated corpus developed at the LASLA laboratory with the LiLa Knowledge Base, which makes linguistic resources for Latin interoperable by following the principles of the Linked Data paradigm and making reference to classes and properties of widely adopted ontologies to model the relevant information. After introducing the overall architecture of the LiLa Knowledge Base and the LASLA corpus, the paper details the phases of the process of linking the corpus with the collection of lemmas of LiLa and presents a federated query to exemplify the added value of interoperability of LASLA's texts with other resources for Latin.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,279
inproceedings
barbu-mititelu-etal-2022-use
Use Case: {R}omanian Language Resources in the {LOD} Paradigm
Declerck, Thierry and McCrae, John P. and Montiel, Elena and Chiarcos, Christian and Ionov, Maxim
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.ldl-1.5/
Barbu Mititelu, Verginica and Irimia, Elena and Pais, Vasile and Avram, Andrei-Marius and Mitrofan, Maria
Proceedings of the 8th Workshop on Linked Data in Linguistics within the 13th Language Resources and Evaluation Conference
35--44
In this paper, we report on (i) the conversion of Romanian language resources to the Linked Open Data specifications and requirements, on (ii) their publication and (iii) interlinking with other language resources (for Romanian or for other languages). The pool of converted resources is made up of the Romanian Wordnet, the morphosyntactic and phonemic lexicon RoLEX, four treebanks, one for the general language (the Romanian Reference Treebank) and others for specialised domains (SiMoNERo for medicine, LegalNERo for the legal domain, PARSEME-Ro for verbal multiword expressions), frequency information on lemmas and tokens and word embeddings as extracted from the reference corpus for contemporary Romanian (CoRoLa) and a bi-modal (text and speech) corpus. We also present the limitations coming from the representation of the resources in Linked Data format. The metadata of LOD resources have been published in the LOD Cloud. The resources are available for download on our website and a SPARQL endpoint is also available for querying them.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,280
inproceedings
bobillo-etal-2022-fuzzy
Fuzzy Lemon: Making Lexical Semantic Relations More Juicy
Declerck, Thierry and McCrae, John P. and Montiel, Elena and Chiarcos, Christian and Ionov, Maxim
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.ldl-1.6/
Bobillo, Fernando and Bosque-Gil, Julia and Gracia, Jorge and Lanau-Coronas, Marta
Proceedings of the 8th Workshop on Linked Data in Linguistics within the 13th Language Resources and Evaluation Conference
45--51
The OntoLex-Lemon model provides a vocabulary to enrich ontologies with linguistic information that can be exploited by Natural Language Processing applications. The increasing uptake of Lemon illustrates the growing interest in combining linguistic information and Semantic Web technologies. In this paper, we present Fuzzy Lemon, an extension of Lemon that makes it possible to assign an uncertainty degree to lexical semantic relations. Our approach is based on an OWL ontology that defines a hierarchy of data properties encoding different types of uncertainty. We also illustrate the usefulness of Fuzzy Lemon by showing that it can be used to represent the confidence degrees of automatically discovered translations between pairs of bilingual dictionaries from the Apertium family.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,281
inproceedings
chiarcos-serasset-2022-cheap
A Cheap and Dirty Cross-Lingual Linking Service in the Cloud
Declerck, Thierry and McCrae, John P. and Montiel, Elena and Chiarcos, Christian and Ionov, Maxim
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.ldl-1.7/
Chiarcos, Christian and S{\'e}rasset, Gilles
Proceedings of the 8th Workshop on Linked Data in Linguistics within the 13th Language Resources and Evaluation Conference
52--60
In this paper, we describe the application of Linguistic Linked Open Data (LLOD) technology for dynamic cross-lingual querying on demand. Whereas most related research focuses on providing a static linking, i.e., cross-lingual inference, and then storing the resulting links, we demonstrate the application of the federation capabilities of SPARQL to perform lexical linking on the fly. In the end, we provide a baseline functionality that uses the connection of two web services {--} a SPARQL end point for multilingual lexical data and another SPARQL end point for querying an English language knowledge graph {--} in order to query an English language knowledge graph using foreign-language labels. We argue that, for low-resource languages where substantial native knowledge graphs are lacking, this functionality can be used to lower the language barrier by allowing users to formulate cross-linguistically applicable queries mediated by a multilingual dictionary.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,282
inproceedings
fath-chiarcos-2022-spicy
Spicy Salmon: Converting between 50+ Annotation Formats with Fintan, Pepper, Salt and Powla
Declerck, Thierry and McCrae, John P. and Montiel, Elena and Chiarcos, Christian and Ionov, Maxim
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.ldl-1.8/
F{\"ath, Christian and Chiarcos, Christian
Proceedings of the 8th Workshop on Linked Data in Linguistics within the 13th Language Resources and Evaluation Conference
61--68
Heterogeneity of formats, models and annotations has always been a primary hindrance for exploiting the ever increasing amount of existing linguistic resources for real world applications in and beyond NLP. Fintan, the Flexible INtegrated Transformation and Annotation eNgineering platform introduced in 2020, is designed to rapidly convert, combine and manipulate language resources both in and outside the Semantic Web by transforming them into segmented RDF representations which can be processed in parallel in a multithreaded environment and integrated with ontologies and taxonomies. Fintan has recently been extended with a set of additional modules increasing the number of supported non-RDF formats and the interoperability with existing non-JAVA conversion tools, and parts of this work are demonstrated in this paper. In particular, we focus on a novel recipe for resource transformation in which Fintan works in tandem with the Pepper toolset to allow computational linguists to transform their data between over 50 linguistic corpus formats with a graphical workflow manager.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,283
inproceedings
khan-etal-2022-survey
A Survey of Guidelines and Best Practices for the Generation, Interlinking, Publication, and Validation of Linguistic Linked Data
Declerck, Thierry and McCrae, John P. and Montiel, Elena and Chiarcos, Christian and Ionov, Maxim
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.ldl-1.9/
Khan, Fahad and Chiarcos, Christian and Declerck, Thierry and Di Buono, Maria Pia and Dojchinovski, Milan and Gracia, Jorge and Oleskeviciene, Giedre Valunaite and Gifu, Daniela
Proceedings of the 8th Workshop on Linked Data in Linguistics within the 13th Language Resources and Evaluation Conference
69--77
This article discusses a survey carried out within the NexusLinguarum COST Action which aimed to give an overview of existing guidelines (GLs) and best practices (BPs) in linguistic linked data. In particular it focused on four core tasks in the production/publication of linked data: generation, interlinking, publication, and validation. We discuss the importance of GLs and BPs for LLD before describing the survey and its results in full. Finally we offer a number of directions for future work in order to address the findings of the survey.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,284
inproceedings
chiarcos-etal-2022-computational
Computational Morphology with {O}nto{L}ex-Morph
Declerck, Thierry and McCrae, John P. and Montiel, Elena and Chiarcos, Christian and Ionov, Maxim
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.ldl-1.10/
Chiarcos, Christian and Gkirtzou, Katerina and Khan, Fahad and Labropoulou, Penny and Passarotti, Marco and Pellegrini, Matteo
Proceedings of the 8th Workshop on Linked Data in Linguistics within the 13th Language Resources and Evaluation Conference
78--86
This paper describes the current status of the emerging OntoLex module for linguistic morphology. It serves as an update to the previous version of the vocabulary (Klimek et al. 2019). Whereas this earlier model focused exclusively on descriptive morphology and on applications in lexicography, we now present a novel part of the vocabulary and a novel application of it in language technology, i.e., the rule-based generation of lexicons, introducing a dynamic component into OntoLex.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,285
inproceedings
menini-etal-2022-multilingual
A Multilingual Benchmark to Capture Olfactory Situations over Time
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.1/
Menini, Stefano and Paccosi, Teresa and Tonelli, Sara and Van Erp, Marieke and Leemans, Inger and Lisena, Pasquale and Troncy, Raphael and Tullett, William and H{\"u}rriyeto{\u{g}}lu, Ali and Dijkstra, Ger and Gordijn, Femke and J{\"u}rgens, Elias and Koopman, Josephine and Ouwerkerk, Aron and Steen, Sanne and Novalija, Inna and Brank, Janez and Mladenic, Dunja and Zidar, Anja
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
1--10
We present a benchmark in six European languages containing manually annotated information about olfactory situations and events following a FrameNet-like approach. The document selection covers ten domains of interest to cultural historians in the olfactory domain and includes texts published between 1620 and 1920, allowing a diachronic analysis of smell descriptions. With this work, we aim to foster the development of olfactory information extraction approaches as well as the analysis of changes in smell descriptions over time.
null
null
10.18653/v1/2022.lchange-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,287
inproceedings
kali-kodner-2022-language
Language Acquisition, Neutral Change, and Diachronic Trends in Noun Classifiers
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.2/
Kali, Aniket and Kodner, Jordan
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
11--22
Languages around the world employ classifier systems as a method of semantic organization and categorization. These systems are rife with variability, violability, and ambiguity, and are prone to constant change over time. We explicitly model change in classifier systems as the population-level outcome of child language acquisition over time in order to shed light on the factors that drive change to classifier systems. Our research consists of two parts: a contrastive corpus study of Cantonese and Mandarin child-directed speech to determine the role that ambiguity and homophony avoidance may play in classifier learning and change, followed by a series of population-level learning simulations of an abstract classifier system. We find that acquisition without reference to ambiguity avoidance is sufficient to drive broad trends in classifier change and suggest an additional role for adults and discourse factors in classifier death.
null
null
10.18653/v1/2022.lchange-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,288
inproceedings
franco-etal-2022-deconstructing
Deconstructing destruction: A Cognitive Linguistics perspective on a computational analysis of diachronic change
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.3/
Franco, Karlien and Montes, Mariana and Heylen, Kris
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
23--32
In this paper, we aim to introduce a Cognitive Linguistics perspective into a computational analysis of near-synonyms. We focus on a single set of Dutch near-synonyms, vernielen and vernietigen, roughly translated as {\textquoteleft}to destroy', replicating the analysis from Geeraerts (1997) with distributional models. Our analysis, which tracks the meaning of both words in a corpus of 16th-20th century prose data, shows that both lexical items have undergone semantic change, led by differences in their prototypical semantic core.
null
null
10.18653/v1/2022.lchange-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,289
inproceedings
periti-etal-2022-done
What is Done is Done: an Incremental Approach to Semantic Shift Detection
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.4/
Periti, Francesco and Ferrara, Alfio and Montanelli, Stefano and Ruskov, Martin
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
33--43
Contextual word embedding techniques for semantic shift detection are receiving more and more attention. In this paper, we present What is Done is Done (WiDiD), an incremental approach to semantic shift detection based on incremental clustering techniques and contextual embedding methods to capture the changes over the meanings of a target word along a diachronic corpus. In WiDiD, the word contexts observed in the past are consolidated as a set of clusters that constitute the {\textquotedblleft}memory{\textquotedblright} of the word meanings observed so far. Such a memory is exploited as a basis for subsequent word observations, so that the meanings observed in the present are stratified over the past ones.
null
null
10.18653/v1/2022.lchange-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,290
inproceedings
feltgen-2022-qualifiers
From qualifiers to quantifiers: semantic shift at the paradigm level
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.5/
Feltgen, Quentin
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
44--53
Language change has often been conceived as a competition between linguistic variants. However, language units may be complex organizations in themselves, e.g. in the case of schematic constructions, featuring a free slot. Such a slot is filled by words forming a set or {\textquoteleft}paradigm' and engaging in inter-related dynamics within this constructional environment. To tackle this complexity, a simple computational method is offered to automatically characterize their interactions, and visualize them through networks of cooperation and competition. Applying this method to the French paradigm of quantifiers, I show that this method efficiently captures phenomena regarding the evolving organization of constructional paradigms, in particular the constitution of competing clusters of fillers that promote different semantic strategies overall.
null
null
10.18653/v1/2022.lchange-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,291
inproceedings
giulianelli-etal-2022-fire
Do Not Fire the Linguist: Grammatical Profiles Help Language Models Detect Semantic Change
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.6/
Giulianelli, Mario and Kutuzov, Andrey and Pivovarova, Lidia
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
54--67
Morphological and syntactic changes in word usage {---} as captured, e.g., by grammatical profiles {---} have been shown to be good predictors of a word's meaning change. In this work, we explore whether large pre-trained contextualised language models, a common tool for lexical semantic change detection, are sensitive to such morphosyntactic changes. To this end, we first compare the performance of grammatical profiles against that of a multilingual neural language model (XLM-R) on 10 datasets, covering 7 languages, and then combine the two approaches in ensembles to assess their complementarity. Our results show that ensembling grammatical profiles with XLM-R improves semantic change detection performance for most datasets and languages. This indicates that language models do not fully cover the fine-grained morphological and syntactic signals that are explicitly represented in grammatical profiles. An interesting exception is the test sets where the time spans under analysis are much longer than the time gap between them (for example, century-long spans with a one-year gap between them). Morphosyntactic change is slow, so grammatical profiles do not detect it in such cases. In contrast, language models, thanks to their access to lexical information, are able to detect fast topical changes.
null
null
10.18653/v1/2022.lchange-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,292
inproceedings
rastas-etal-2022-explainable
Explainable Publication Year Prediction of Eighteenth Century Texts with the {BERT} Model
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.7/
Rastas, Iiro and Ciar{\'a}n Ryan, Yann and Tiihonen, Iiro and Qaraei, Mohammadreza and Repo, Liina and Babbar, Rohit and M{\"a}kel{\"a}, Eetu and Tolonen, Mikko and Ginter, Filip
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
68--77
In this paper, we describe a BERT model trained on the Eighteenth Century Collections Online (ECCO) dataset of digitized documents. The ECCO dataset poses unique modelling challenges due to the presence of Optical Character Recognition (OCR) artifacts. We establish the performance of the BERT model on a publication year prediction task against linear baseline models and human judgement, finding the BERT model to be superior to both and able to date the works, on average, with less than 7 years absolute error. We also explore how language change over time affects the model by analyzing the features the model uses for publication year predictions as given by the Integrated Gradients model explanation method.
null
null
10.18653/v1/2022.lchange-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,293
inproceedings
samohi-etal-2022-using
Using Cross-Lingual Part of Speech Tagging for Partially Reconstructing the Classic Language Family Tree Model
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.8/
Samohi, Anat and Weisberg Mitelman, Daniel and Bar, Kfir
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
78--88
The tree model is well known for expressing the historic evolution of languages. This model has been considered as a method of describing genetic relationships between languages. Nevertheless, some researchers question the model's ability to predict the proximity between two languages, since it represents genetic relatedness rather than linguistic resemblance. Defining other language proximity models has been an active research area for many years. In this paper we explore a part-of-speech model for defining proximity between languages using a multilingual language model that was fine-tuned on the task of cross-lingual part-of-speech tagging. We train the model on one language and evaluate it on another; the measured performance is then used to define the proximity between the two languages. By further developing the model, we show that it can reconstruct some parts of the tree model.
null
null
10.18653/v1/2022.lchange-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,294
inproceedings
list-etal-2022-new
A New Framework for Fast Automated Phonological Reconstruction Using Trimmed Alignments and Sound Correspondence Patterns
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.9/
List, Johann-Mattis and Forkel, Robert and Hill, Nathan
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
89--96
Computational approaches in historical linguistics have been increasingly applied during the past decade and many new methods that implement parts of the traditional comparative method have been proposed. Despite these increased efforts, there are not many easy-to-use and fast approaches for the task of phonological reconstruction. Here we present a new framework that combines state-of-the-art techniques for automated sequence comparison with novel techniques for phonetic alignment analysis and sound correspondence pattern detection to allow for the supervised reconstruction of word forms in ancestral languages. We test the method on a new dataset covering six groups from three different language families. The results show that our method yields promising results while at the same time being not only fast but also easy to apply and expand.
null
null
10.18653/v1/2022.lchange-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,295
inproceedings
fourrier-montariol-2022-caveats
Caveats of Measuring Semantic Change of Cognates and Borrowings using Multilingual Word Embeddings
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.10/
Fourrier, Cl{\'e}mentine and Montariol, Syrielle
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
97--112
Cognates and borrowings carry different aspects of etymological evolution. In this work, we study semantic change of such items using multilingual word embeddings, both static and contextualised. We underline caveats identified while building and evaluating these embeddings. We release both said embeddings and a newly-built historical words lexicon, containing typed relations between words of varied Romance languages.
null
null
10.18653/v1/2022.lchange-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,296
inproceedings
chen-etal-2022-lexicon
Lexicon of Changes: Towards the Evaluation of Diachronic Semantic Shift in {C}hinese
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.11/
Chen, Jing and Chersoni, Emmanuele and Huang, Chu-ren
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
113--118
Recent research has brought a wave of computational approaches to the classic topic of semantic change, aiming to tackle one of the most challenging issues in the evolution of human language. While several methods for detecting semantic change have been proposed, such studies are limited to a few languages, where evaluation datasets are available. This paper presents the first dataset for evaluating Chinese semantic change in contexts preceding and following the Reform and Opening-up, covering a 50-year period in Modern Chinese. Following the DURel framework, we collected 6,000 human judgments for the dataset. We also reported the performance of alignment-based word embedding models on this evaluation dataset, achieving high and significant correlation scores.
null
null
10.18653/v1/2022.lchange-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,297
inproceedings
siewert-etal-2022-low
Low {S}axon dialect distances at the orthographic and syntactic level
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.12/
Siewert, Janine and Scherrer, Yves and Wieling, Martijn
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
119--124
We compare five Low Saxon dialects from the 19th and 21st century from Germany and the Netherlands with each other as well as with modern Standard Dutch and Standard German. Our comparison is based on character n-grams on the one hand and PoS n-grams on the other and we show that these two lead to different distances. Particularly in the PoS-based distances, one can observe all of the 21st century Low Saxon dialects shifting towards the modern majority languages.
null
null
10.18653/v1/2022.lchange-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,298
inproceedings
timmermans-etal-2022-vaderland
{\textquotedblleft}Vaderland{\textquotedblright}, {\textquotedblleft}Volk{\textquotedblright} and {\textquotedblleft}Natie{\textquotedblright}: Semantic Change Related to Nationalism in {D}utch Literature Between 1700 and 1880 Captured with Dynamic {B}ernoulli Word Embeddings
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.13/
Timmermans, Marije and Vanmassenhove, Eva and Shterionov, Dimitar
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
125--130
Languages can respond to external events in various ways - the creation of new words or named entities, additional senses might develop for already existing words or the valence of words can change. In this work, we explore the semantic shift of the Dutch words {\textquotedblleft}natie{\textquotedblright} ({\textquotedblleft}nation{\textquotedblright}), {\textquotedblleft}volk{\textquotedblright} ({\textquotedblleft}people{\textquotedblright}) and {\textquotedblleft}vaderland{\textquotedblright} ({\textquotedblleft}fatherland{\textquotedblright}) over a period that is known for the rise of nationalism in Europe: 1700-1880. The semantic change is measured by means of Dynamic Bernoulli Word Embeddings which allow for comparison between word embeddings over different time slices. The word embeddings were generated based on Dutch fiction literature divided over different decades. From the analysis of the absolute drifts, it appears that the word {\textquotedblleft}natie{\textquotedblright} underwent a relatively small drift. However, the drifts of {\textquotedblleft}vaderland{\textquotedblright} and {\textquotedblleft}volk{\textquotedblright} show multiple peaks, culminating around the turn of the nineteenth century. To verify whether this semantic change can indeed be attributed to nationalistic movements, a detailed analysis of the nearest neighbours of the target words is provided. From the analysis, it appears that {\textquotedblleft}natie{\textquotedblright}, {\textquotedblleft}volk{\textquotedblright} and {\textquotedblleft}vaderland{\textquotedblright} became more nationalistically-loaded over time.
null
null
10.18653/v1/2022.lchange-1.13
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,299
inproceedings
kellert-mahmud-uz-zaman-2022-using
Using neural topic models to track context shifts of words: a case study of {COVID}-related terms before and after the lockdown in {A}pril 2020
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.14/
Kellert, Olga and Mahmud Uz Zaman, Md
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
131--139
This paper explores lexical meaning changes in a new dataset, which includes tweets from before and after the COVID-related lockdown in April 2020. We use this dataset to evaluate traditional and more recent unsupervised approaches to lexical semantic change that make use of contextualized word representations based on the BERT neural language model to obtain representations of word usages. We argue that previous models that encode local representations of words cannot capture global context shifts such as the context shift of face masks since the pandemic outbreak. We experiment with neural topic models to track context shifts of words. We show that this approach can reveal textual associations of words that go beyond their lexical meaning representation. We discuss future work and how to proceed capturing the pragmatic aspect of meaning change as opposed to lexical semantic change.
null
null
10.18653/v1/2022.lchange-1.14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,300
inproceedings
alshahrani-etal-2022-roadblocks
Roadblocks in Gender Bias Measurement for Diachronic Corpora
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.15/
Alshahrani, Saied and Wali, Esma and R Alshamsan, Abdullah and Chen, Yan and Matthews, Jeanna
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
140--148
The use of word embeddings is an important NLP technique for extracting meaningful conclusions from corpora of human text. One important question that has been raised about word embeddings is the degree of gender bias learned from corpora. Bolukbasi et al. (2016) proposed an important technique for quantifying gender bias in word embeddings that, at its heart, is lexically based and relies on sets of highly gendered word pairs (e.g., mother/father and madam/sir) and a list of professions words (e.g., doctor and nurse). In this paper, we document problems that arise with this method to quantify gender bias in diachronic corpora. Focusing on Arabic and Chinese corpora, in particular, we document clear changes in profession words used over time and, somewhat surprisingly, even changes in the simpler gendered defining set word pairs. We further document complications in languages such as Arabic, where many words are highly polysemous/homonymous, especially female professions words.
null
null
10.18653/v1/2022.lchange-1.15
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,301
inproceedings
d-zamora-reina-etal-2022-black
{LSCD}iscovery: A shared task on semantic change discovery and detection in {S}panish
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.16/
Zamora-Reina, Frank D. and Bravo-Marquez, Felipe and Schlechtweg, Dominik
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
149--164
We present the first shared task on semantic change discovery and detection in Spanish. We create the first dataset of Spanish words manually annotated for semantic change using the DURel framework (Schlechtweg et al., 2018). The task is divided into two phases: 1) graded change discovery, and 2) binary change detection. In addition to introducing a new language for this task, the main novelty with respect to the previous tasks consists in predicting and evaluating changes for all vocabulary words in the corpus. Six teams participated in phase 1 and seven teams in phase 2 of the shared task, and the best system obtained a Spearman rank correlation of 0.735 for phase 1 and an F1 score of 0.735 for phase 2. We describe the systems developed by the competing teams, highlighting the techniques that were particularly useful.
null
null
10.18653/v1/2022.lchange-1.16
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,302
inproceedings
kudisov-arefyev-2022-black
{BOS} at {LSCD}iscovery: Lexical Substitution for Interpretable Lexical Semantic Change Detection
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.17/
Kudisov, Artem and Arefyev, Nikolay
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
165--172
We propose a solution for the LSCDiscovery shared task on Lexical Semantic Change Detection in Spanish. Our approach is based on generating lexical substitutes that describe old and new senses of a given word. This approach achieves the second best result in sense loss and sense gain detection subtasks. By observing those substitutes that are specific for only one time period, one can understand which senses were obtained or lost. This allows providing more detailed information about semantic change to the user and makes our method interpretable.
null
null
10.18653/v1/2022.lchange-1.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,303
inproceedings
homskiy-arefyev-2022-black
{D}eep{M}istake at {LSCD}iscovery: Can a Multilingual Word-in-Context Model Replace Human Annotators?
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.18/
Homskiy, Daniil and Arefyev, Nikolay
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
173--179
In this paper we describe our solution of the LSCDiscovery shared task on Lexical Semantic Change Discovery (LSCD) in Spanish. Our solution employs a Word-in-Context (WiC) model, which is trained to determine if a particular word has the same meaning in two given contexts. We basically try to replicate the annotation of the dataset for the shared task, but replacing human annotators with a neural network. In the graded change discovery subtask, our solution has achieved the 2nd best result according to all metrics. In the main binary change detection subtask, our F1-score is 0.655 compared to 0.716 of the best submission, corresponding to the 5th place. However, in the optional sense gain detection subtask we have outperformed all other participants. During the post-evaluation experiments we compared different ways to prepare WiC data in Spanish for fine-tuning. We have found that it helps leaving only examples annotated as 1 (unrelated senses) and 4 (identical senses) rather than using 2x more examples including intermediate annotations.
null
null
10.18653/v1/2022.lchange-1.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,304
inproceedings
teodorescu-etal-2022-black
{UA}lberta at {LSCD}iscovery: Lexical Semantic Change Detection via Word Sense Disambiguation
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.19/
Teodorescu, Daniela and von der Ohe, Spencer and Kondrak, Grzegorz
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
180--186
We describe our two systems for the shared task on Lexical Semantic Change Discovery in Spanish. For binary change detection, we frame the task as a word sense disambiguation (WSD) problem. We derive sense frequency distributions for target words in both old and modern corpora. We assume that the word semantics have changed if a sense is observed in only one of the two corpora, or the relative change for any sense exceeds a tuned threshold. For graded change discovery, we follow the design of CIRCE (P{\"o}msl and Lyapin, 2020) by combining both static and contextual embeddings. For contextual embeddings, we use XLM-RoBERTa instead of BERT, and train the model to predict a masked token instead of the time period. Our language-independent methods achieve results that are close to the best-performing systems in the shared task.
null
null
10.18653/v1/2022.lchange-1.19
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,305
inproceedings
sabina-uban-etal-2022-black
{C}o{T}o{H}i{L}i at {LSCD}iscovery: the Role of Linguistic Features in Predicting Semantic Change
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.20/
Sabina Uban, Ana and Maria Cristea, Alina and Daniela Dinu, Anca and P Dinu, Liviu and Georgescu, Simona and Zoicas, Laurentiu
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
187--192
This paper presents the contributions of the CoToHiLi team for the LSCDiscovery shared task on semantic change in the Spanish language. We participated in both tasks (graded discovery and binary change, including sense gain and sense loss) and proposed models based on word embedding distances combined with hand-crafted linguistic features, including polysemy, number of neological synonyms, and relation to cognates in English. We find that models that include linguistically informed features combined using weights assigned manually by experts lead to promising results.
null
null
10.18653/v1/2022.lchange-1.20
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,306
inproceedings
kashleva-etal-2022-black
{HSE} at {LSCD}iscovery in {S}panish: Clustering and Profiling for Lexical Semantic Change Discovery
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.21/
Kashleva, Kseniia and Shein, Alexander and Tukhtina, Elizaveta and Vydrina, Svetlana
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
193--197
This paper describes the methods used for lexical semantic change discovery in Spanish. We tried the method based on BERT embeddings with clustering, the method based on grammatical profiles and the grammatical profiles method enhanced with permutation tests. BERT embeddings with clustering turned out to show the best results for both graded and binary semantic change detection outperforming the baseline. Our best submission for graded discovery was the 3rd best result, while for binary detection it was the 2nd place (precision) and the 7th place (both F1-score and recall). Our highest precision for binary detection was 0.75 and it was achieved due to improving grammatical profiling with permutation tests.
null
null
10.18653/v1/2022.lchange-1.21
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,307
inproceedings
rachinskiy-arefyev-2022-black
{G}loss{R}eader at {LSCD}iscovery: Train to Select a Proper Gloss in {E}nglish {--} Discover Lexical Semantic Change in {S}panish
Tahmasebi, Nina and Montariol, Syrielle and Kutuzov, Andrey and Hengchen, Simon and Dubossarsky, Haim and Borin, Lars
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.lchange-1.22/
Rachinskiy, Maxim and Arefyev, Nikolay
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
198--203
The contextualized embeddings obtained from neural networks pre-trained as Language Models (LM) or Masked Language Models (MLM) are not well suited for solving the Lexical Semantic Change Detection (LSCD) task because they are more sensitive to changes in word forms rather than word meaning, a property previously known as the word form bias or orthographic bias. Unlike many other NLP tasks, it is also not obvious how to fine-tune such models for LSCD. In order to conclude if there are any differences between senses of a particular word in two corpora, a human annotator or a system shall analyze many examples containing this word from both corpora. This makes annotation of LSCD datasets very labour-intensive. The existing LSCD datasets contain up to 100 words that are labeled according to their semantic change, which is hardly enough for fine-tuning. To solve these problems we fine-tune the XLM-R MLM as part of a gloss-based WSD system on a large WSD dataset in English. Then we employ zero-shot cross-lingual transferability of XLM-R to build the contextualized embeddings for examples in Spanish. In order to obtain the graded change score for each word, we calculate the average distance between our improved contextualized embeddings of its old and new occurrences. For the binary change detection subtask, we apply thresholding to the same scores. Our solution has shown the best results among all other participants in all subtasks except for the optional sense gain detection subtask.
null
null
10.18653/v1/2022.lchange-1.22
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,308
inproceedings
basuki-tsuchiya-2022-automatic
Automatic Approach for Building Dataset of Citation Functions for {COVID}-19 Academic Papers
Pradhan, Sameer and Kuebler, Sandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.law-1.1/
Basuki, Setio and Tsuchiya, Masatoshi
Proceedings of the 16th Linguistic Annotation Workshop (LAW-XVI) within LREC2022
1--7
This paper develops a new dataset of citation functions of COVID-19-related academic papers. Because the preparation of new labels of citation functions and building a new dataset requires much human effort and is time-consuming, this paper uses our previous citation functions that were built for the Computer Science (CS) domain, which consists of five coarse-grained labels and 21 fine-grained labels. This paper uses the COVID-19 Open Research Dataset (CORD-19) and extracts 99.6k random citing sentences from 10.1k papers. These citing sentences are categorized using the classification models built from the CS domain. A manual check on 475 random samples resulted in accuracies of 76.6{\%} and 70.2{\%} on coarse-grained labels and fine-grained labels, respectively. The evaluation reveals three findings. First, two fine-grained labels experienced meaning shift while retaining the same idea. Second, the COVID-19 domain is dominated by statements highlighting the importance, cruciality, usefulness, benefit, consideration, etc. of certain topics for making sensible argumentation. Third, discussing State of The Arts (SOTA) in terms of their outperforming previous works in the COVID-19 domain is less popular compared to the CS domain. Our results will be used for further dataset development by classifying citing sentences in all papers from CORD-19.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,310
inproceedings
gonzalez-2022-development
The Development of a Comprehensive {S}panish Dictionary for Phonetic and Lexical Tagging in Socio-phonetic Research ({ESPADA})
Pradhan, Sameer and Kuebler, Sandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.law-1.2/
Gonzalez, Simon
Proceedings of the 16th Linguistic Annotation Workshop (LAW-XVI) within LREC2022
8--14
Pronunciation dictionaries are an important component in the process of speech forced alignment. The accuracy of these dictionaries has a strong effect on the aligned speech data since they help the mapping between orthographic transcriptions and acoustic signals. In this paper, I present the creation of a comprehensive pronunciation dictionary in Spanish (ESPADA) that can be used in most of the dialect variants of Spanish data. Current dictionaries focus on specific regional variants, but with the flexible nature of our tool, it can be readily applied to capture the most common phonetic differences across major dialectal variants. We propose improvements to current pronunciation dictionaries as well as mapping other relevant annotations such as morphological and lexical information. In terms of size, it is currently the most complete dictionary with more than 628,000 entries, representing words from 16 countries. All entries come with their corresponding pronunciations, morphological and lexical tagging, and other relevant information for phonetic analysis: stress patterns, phonotactics, IPA transcriptions, and more. This aims to equip socio-phonetic researchers with a complete open-source tool that enhances dialectal research within socio-phonetic frameworks in the Spanish language.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,311
inproceedings
dobrovoljc-ljubesic-2022-extending
Extending the {SSJ} {U}niversal {D}ependencies Treebank for {S}lovenian: Was It Worth It?
Pradhan, Sameer and Kuebler, Sandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.law-1.3/
Dobrovoljc, Kaja and Ljube{\v{s}}i{\'c}, Nikola
Proceedings of the 16th Linguistic Annotation Workshop (LAW-XVI) within LREC2022
15--22
This paper presents the creation and evaluation of a new version of the reference SSJ Universal Dependencies Treebank for Slovenian, which has been substantially improved and extended to almost double the original size. The process was based on the initial revision and documentation of the language-specific UD annotation guidelines for Slovenian and the corresponding modification of the original SSJ annotations, followed by a two-stage annotation campaign, in which two new subsets have been added, the previously unreleased sentences from the ssj500k corpus and the Slovenian subset of the ELEXIS parallel corpus. The annotation campaign resulted in an extended version of the SSJ UD treebank with 5,435 newly added sentences comprising of 126,427 tokens. To evaluate the potential benefits of this data increase for Slovenian dependency parsing, we compared the performance of the classla-stanza dependency parser trained on the old and the new SSJ data when evaluated on the new SSJ test set and its subsets. Our results show an increase of LAS performance in general, especially for previously under-represented syntactic phenomena, such as lists, elliptical constructions and appositions, but also confirm the distinct nature of the two newly added subsets and the diversification of the SSJ treebank as a whole.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,312
inproceedings
hsieh-etal-2022-converting
Converting the {S}inica {T}reebank of {M}andarin {C}hinese to {U}niversal {D}ependencies
Pradhan, Sameer and Kuebler, Sandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.law-1.4/
Hsieh, Yu-Ming and Shih, Yueh-Yin and Ma, Wei-Yun
Proceedings of the 16th Linguistic Annotation Workshop (LAW-XVI) within LREC2022
23--30
This paper describes the conversion of the Sinica Treebank, one of the major Mandarin Chinese treebanks, to Universal Dependencies. The conversion is rule-based and the process involves POS tag mapping, head adjusting in line with the UD scheme and the dependency conversion. Linguistic insights into Mandarin Chinese alongwith the conversion are also discussed. The resulting corpus is the UD Chinese Sinica Treebank which contains more than fifty thousand tree structures according to the UD scheme. The dataset can be downloaded at \url{https://github.com/ckiplab/ud}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,313
inproceedings
booth-2022-desiderata
Desiderata for the Annotation of Information Structure in Complex Sentences
Pradhan, Sameer and Kuebler, Sandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.law-1.5/
Booth, Hannah
Proceedings of the 16th Linguistic Annotation Workshop (LAW-XVI) within LREC2022
31--43
Many annotation schemes for information structure have been developed in recent years (Calhoun et al., 2005; Paggio, 2006; Goetze et al., 2007; Bohnet et al., 2013; Riester et al., 2018), in line with increased attention on the interaction between discourse and other linguistic dimensions (e.g. syntax, semantics, prosody). However, a crucial issue which existing schemes either gloss over, or propose only crude guidelines for, is how to annotate information structure in complex sentences. This unsatisfactory treatment is unsurprising given that theoretical work on information structure has traditionally neglected its status in dependent clauses. In this paper, I evaluate the status of pre-existing annotation schemes in relation to this vexed issue, and outline certain desiderata as a foundation for novel, more nuanced approaches, informed by state-of-the art theoretical insights (Erteschik-Shir, 2007; Bianchi and Frascarelli, 2010; Lahousse, 2010; Ebert et al., 2014; Matic et al., 2014; Lahousse, 2022). These desiderata relate both to annotation formats and the annotation process. The practical implications of these desiderata are illustrated via a test case using the Corpus of Historical Low German (Booth et al., 2020). The paper overall showcases the benefits which result from a free exchange between linguistic annotation models and theoretical research.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,314
inproceedings
thorn-jakobsen-etal-2022-sensitivity
The Sensitivity of Annotator Bias to Task Definitions in Argument Mining
Pradhan, Sameer and Kuebler, Sandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.law-1.6/
Thorn Jakobsen, Terne Sasha and Barrett, Maria and S{\o}gaard, Anders and Lassen, David
Proceedings of the 16th Linguistic Annotation Workshop (LAW-XVI) within LREC2022
44--61
NLP models are dependent on the data they are trained on, including how this data is annotated. NLP research increasingly examines the social biases of models, but often in the light of their training data and specific social biases that can be identified in the text itself. In this paper, we present an annotation experiment that is the first to examine the extent to which social bias is sensitive to how data is annotated. We do so by collecting annotations of arguments in the same documents following four different guidelines and from four different demographic annotator backgrounds. We show that annotations exhibit widely different levels of group disparity depending on which guidelines annotators follow. The differences are not explained by task complexity, but rather by characteristics of these demographic groups, as previously identified by sociological studies. We release a dataset that is small in the number of instances but large in the number of annotations with demographic information, and our results encourage an increased awareness of annotator bias.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,315
inproceedings
bauer-etal-2022-nlp
{NLP} in Human Rights Research: Extracting Knowledge Graphs about Police and Army Units and Their Commanders
Pradhan, Sameer and Kuebler, Sandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.law-1.7/
Bauer, Daniel and Longley, Tom and Ma, Yueen and Wilson, Tony
Proceedings of the 16th Linguistic Annotation Workshop (LAW-XVI) within LREC2022
62--69
In this paper we explore the use of an NLP system to assist the work of Security Force Monitor (SFM). SFM creates data about the organizational structure, command personnel and operations of police, army and other security forces, which assists human rights researchers, journalists and litigators in their work to help identify and bring to account specific units and personnel alleged to have committed abuses of human rights and international criminal law. This paper presents an NLP system that extracts from English language news reports the names of security force units and the biographical details of their personnel, and infers the formal relationship between them. Published alongside this paper are the system`s code and training dataset. We find that the experimental NLP system performs the task at a fair to good level. Its performance is sufficient to justify further development into a live workflow that will give insight into whether its performance translates into savings in time and resource that would make it an effective technical intervention.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,316
inproceedings
hajicova-etal-2022-advantages
Advantages of a Complex Multilayer Annotation Scheme: The Case of the {P}rague Dependency Treebank
Pradhan, Sameer and Kuebler, Sandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.law-1.8/
Hajicova, Eva and Mikulov{\'a}, Marie and {\v{S}}t{\v{e}}p{\'a}nkov{\'a}, Barbora and M{\'i}rovsk{\'y}, Ji{\v{r}}{\'i}
Proceedings of the 16th Linguistic Annotation Workshop (LAW-XVI) within LREC2022
70--78
Recently, many corpora have been developed that contain multiple annotations of various linguistic phenomena, from morphological categories of words through the syntactic structure of sentences to discourse and coreference relations in texts. Discussions are ongoing on an appropriate annotation scheme for a large amount of diverse information. In our contribution we express our conviction that a multilayer annotation scheme offers a view of the language system in its complexity and of the interaction of individual phenomena, and that there are at least two aspects that support such a scheme: (i) A multilayer annotation scheme makes it possible to use the annotation of one layer to design the annotation of another layer(s) both conceptually and in a form of a pre-annotation procedure or annotation checking rules. (ii) A multilayer annotation scheme presents a reliable ground for corpus studies based on features across the layers. These aspects are demonstrated on the case of the Prague Dependency Treebank. Its multilayer annotation scheme withstood the test of time and serves well also for complex textual annotations, in which earlier morpho-syntactic annotations are advantageously used. In addition to a reference to the previous projects that utilise its annotation scheme, we present several current investigations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,317
inproceedings
yenice-etal-2022-introducing
Introducing {S}tar{D}ust: A {UD}-based Dependency Annotation Tool
Pradhan, Sameer and Kuebler, Sandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.law-1.9/
Yenice, Arife B. and Cesur, Neslihan and Kuzgun, Asl{\i} and Y{\i}ld{\i}z, Olcay Taner
Proceedings of the 16th Linguistic Annotation Workshop (LAW-XVI) within LREC2022
79--84
This paper aims to introduce StarDust, a new, open-source annotation tool designed for NLP studies. StarDust is designed specifically to be intuitive and simple for the annotators while also supporting the annotation of multiple languages with different morphological typologies, e.g. Turkish and English. This demonstration will mainly focus on our UD-based annotation tool for dependency syntax. Linked to a morphological analyzer, the tool can detect certain annotator mistakes and limit undesired dependency relations as well as offering annotators a quick and effective annotation process thanks to its new simple interface. Our tool can be downloaded from GitHub.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,318
inproceedings
deturck-etal-2022-annotation
Annotation of Messages from Social Media for Influencer Detection
Pradhan, Sameer and Kuebler, Sandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.law-1.10/
Deturck, Kevin and Nouvel, Damien and Patel, Namrata and Segond, Fr{\'e}d{\'e}rique
Proceedings of the 16th Linguistic Annotation Workshop (LAW-XVI) within LREC2022
85--90
To develop an influencer detection system, we designed an influence model based on the analysis of conversations in the {\textquotedblleft}Change My View{\textquotedblright} debate forum. This led us to identify enunciative features (argumentation, emotion expression, view change, ...) related to influence between participants. In this paper, we present the annotation campaign we conducted to build up a reference corpus on these enunciative features. The annotation task was to identify in social media posts the text segments that corresponded to each enunciative feature. The posts to be annotated were extracted from two social media: the {\textquotedblleft}Change My View{\textquotedblright} debate forum, with discussions on various topics, and Twitter, with posts from users identified as supporters of ISIS (Islamic State of Iraq and Syria). Over a thousand posts have been double or triple annotated throughout five annotation sessions gathering a total of 27 annotators. Some of the sessions involved the same annotators, which allowed us to analyse the evolution of their annotation work. Most of the sessions resulted in a reconciliation phase between the annotators, allowing for discussion and iterative improvement of the guidelines. We measured and analysed inter-annotator agreements over the course of the sessions, which allowed us to validate our iterative approach.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,319
inproceedings
wein-etal-2022-effect
Effect of Source Language on {AMR} Structure
Pradhan, Sameer and Kuebler, Sandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.law-1.12/
Wein, Shira and Leung, Wai Ching and Mu, Yifu and Schneider, Nathan
Proceedings of the 16th Linguistic Annotation Workshop (LAW-XVI) within LREC2022
97--102
The Abstract Meaning Representation (AMR) annotation schema was originally designed for English. But the formalism has since been adapted for annotation in a variety of languages. Meanwhile, cross-lingual parsers have been developed to derive English AMR representations for sentences from other languages{---}implicitly assuming that English AMR can approximate an interlingua. In this work, we investigate the similarity of AMR annotations in parallel data and how much the language matters in terms of the graph structure. We set out to quantify the effect of sentence language on the structure of the parsed AMR. As a case study, we take parallel AMR annotations from Mandarin Chinese and English AMRs, and replace all Chinese concepts with equivalent English tokens. We then compare the two graphs via the Smatch metric as a measure of structural similarity. We find that source language has a dramatic impact on AMR structure, with Smatch scores below 50{\%} between English and Chinese graphs in our sample{---}an important reference point for interpreting Smatch scores in cross-lingual AMR parsing.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,321
inproceedings
gessler-etal-2022-midas
{M}idas Loop: A Prioritized Human-in-the-Loop Annotation for Large Scale Multilayer Data
Pradhan, Sameer and Kuebler, Sandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.law-1.13/
Gessler, Luke and Levine, Lauren and Zeldes, Amir
Proceedings of the 16th Linguistic Annotation Workshop (LAW-XVI) within LREC2022
103--110
Large scale annotation of rich multilayer corpus data is expensive and time consuming, motivating approaches that integrate high quality automatic tools with active learning in order to prioritize human labeling of hard cases. A related challenge in such scenarios is the concurrent management of automatically annotated data and human annotated data, particularly where different subsets of the data have been corrected for different types of annotation and with different levels of confidence. In this paper we present Midas Loop, a collaborative, version-controlled online annotation environment for multilayer corpus data which includes integrated provenance and confidence metadata for each piece of information at the document, sentence, token and annotation level. We present a case study on improving annotation quality in an existing multilayer parse bank of English called AMALGUM, focusing on active learning in corpus preprocessing, at the surprisingly challenging level of sentence segmentation. Our results show improvements to state-of-the-art sentence segmentation and a promising workflow for getting {\textquotedblleft}silver{\textquotedblright} data to approach gold standard quality.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,322
inproceedings
mompelat-etal-2022-loco
How {\textquotedblleft}Loco{\textquotedblright} Is the {LOCO} Corpus? Annotating the Language of Conspiracy Theories
Pradhan, Sameer and Kuebler, Sandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.law-1.14/
Mompelat, Ludovic and Tian, Zuoyu and Kessler, Amanda and Luettgen, Matthew and Rajanala, Aaryana and K{\"u}bler, Sandra and Seelig, Michelle
Proceedings of the 16th Linguistic Annotation Workshop (LAW-XVI) within LREC2022
111--119
Conspiracy theories have found a new channel on the internet and spread by bringing together like-minded people, thus functioning as an echo chamber. The new 88-million word corpus \textit{Language of Conspiracy} (LOCO) was created with the intention to provide a text collection to study how the language of conspiracy differs from mainstream language. We use this corpus to develop a robust annotation scheme that will allow us to distinguish between documents containing conspiracy language and documents that do not contain any conspiracy content or that propagate conspiracy theories via misinformation (which we explicitly disregard in our work). We find that focusing on indicators of a belief in a conspiracy combined with textual cues of conspiracy language allows us to reach a substantial agreement (based on Fleiss' kappa and Krippendorff`s alpha). We also find that the automatic retrieval methods used to collect the corpus work well in finding mainstream documents, but include some documents in the conspiracy category that would not belong there based on our definition.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,323
inproceedings
liu-etal-2022-putting
Putting Context in {SNACS}: A 5-Way Classification of Adpositional Pragmatic Markers
Pradhan, Sameer and Kuebler, Sandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.law-1.15/
Liu, Yang Janet and Hwang, Jena D. and Schneider, Nathan and Srikumar, Vivek
Proceedings of the 16th Linguistic Annotation Workshop (LAW-XVI) within LREC2022
120--128
The SNACS framework provides a network of semantic labels called supersenses for annotating adpositional semantics in corpora. In this work, we consider English prepositions (and prepositional phrases) that are chiefly pragmatic, contributing extra-propositional contextual information such as speaker attitudes and discourse structure. We introduce a preliminary taxonomy of pragmatic meanings to supplement the semantic SNACS supersenses, with guidelines for the annotation of coherence connectives, commentary markers, and topic and focus markers. We also examine annotation disagreements, delve into the trickiest boundary cases, and offer a discussion of future improvements.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,324
inproceedings
elder-etal-2022-building
Building a Biomedical Full-Text Part-of-Speech Corpus Semi-Automatically
Pradhan, Sameer and Kuebler, Sandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.law-1.16/
Elder, Nicholas and Mercer, Robert E. and Singha Roy, Sudipta
Proceedings of the 16th Linguistic Annotation Workshop (LAW-XVI) within LREC2022
129--138
This paper presents a method for semi-automatically building a corpus of full-text English-language biomedical articles annotated with part-of-speech tags. The outcomes are a semi-automatic procedure to create a large silver standard corpus of 5 million sentences drawn from a large corpus of full-text biomedical articles annotated for part-of-speech, and a robust, easy-to-use software tool that assists the investigation of differences in two tagged datasets. The method to build the corpus uses two part-of-speech taggers designed to tag biomedical abstracts followed by a human dispute settlement when the two taggers differ on the tagging of a token. The dispute resolution aspect is facilitated by the software tool which organizes and presents the disputed tags. The corpus and all of the software that has been implemented for this study are made publicly available.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,325
inproceedings
weber-etal-2022-human
Human Schema Curation via Causal Association Rule Mining
Pradhan, Sameer and Kuebler, Sandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.law-1.17/
Weber, Noah and Belyy, Anton and Holzenberger, Nils and Rudinger, Rachel and Van Durme, Benjamin
Proceedings of the 16th Linguistic Annotation Workshop (LAW-XVI) within LREC2022
139--150
Event schemas are structured knowledge sources defining typical real-world scenarios (e.g., going to an airport). We present a framework for efficient human-in-the-loop construction of a schema library, based on a novel script induction system and a well-crafted interface that allows non-experts to {\textquotedblleft}program{\textquotedblright} complex event structures. Associated with this work we release a schema library: a machine readable resource of 232 detailed event schemas, each of which describe a distinct typical scenario in terms of its relevant sub-event structure (what happens in the scenario), participants (who plays a role in the scenario), fine-grained typing of each participant, and the implied relational constraints between them. We make our schema library and the SchemaBlocks interface available online.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,326
inproceedings
cao-etal-2022-cognitive
A Cognitive Approach to Annotating Causal Constructions in a Cross-Genre Corpus
Pradhan, Sameer and Kuebler, Sandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.law-1.18/
Cao, Angela and Williamson, Gregor and Choi, Jinho D.
Proceedings of the 16th Linguistic Annotation Workshop (LAW-XVI) within LREC2022
151--159
We present a scheme for annotating causal language in various genres of text. Our annotation scheme is built on the popular categories of cause, enable, and prevent. These vague categories have many edge cases in natural language, and as such can prove difficult for annotators to consistently identify in practice. We introduce a decision based annotation method for handling these edge cases. We demonstrate that, by utilizing this method, annotators are able to achieve inter-annotator agreement which is comparable to that of previous studies. Furthermore, our method performs equally well across genres, highlighting the robustness of our annotation scheme. Finally, we observe notable variation in usage and frequency of causal language across different genres.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,327
inproceedings
ji-etal-2022-automatic
Automatic Enrichment of {A}bstract {M}eaning {R}epresentations
Pradhan, Sameer and Kuebler, Sandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.law-1.19/
Ji, Yuxin and Williamson, Gregor and Choi, Jinho D.
Proceedings of the 16th Linguistic Annotation Workshop (LAW-XVI) within LREC2022
160--169
Abstract Meaning Representation (AMR) is a semantic graph framework which inadequately represents a number of important semantic features including number, (in)definiteness, quantifiers, and intensional contexts. Several proposals have been made to improve the representational adequacy of AMR by enriching its graph structure. However, these modifications are rarely added to existing AMR corpora due to the labor costs associated with manual annotation. In this paper, we develop an automated annotation tool which algorithmically enriches AMR graphs to better represent number, (in)definite articles, quantificational determiners, and intensional arguments. We compare our automatically produced annotations to gold-standard manual annotations and show that our automatic annotator achieves impressive results. All code for this paper, including our automatic annotation tool, is made publicly available.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,328
inproceedings
pradhan-liberman-2022-grail
{GRAIL}{---}{G}eneralized Representation and Aggregation of Information Layers
Pradhan, Sameer and Kuebler, Sandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.law-1.20/
Pradhan, Sameer and Liberman, Mark
Proceedings of the 16th Linguistic Annotation Workshop (LAW-XVI) within LREC2022
170--181
This paper identifies novel characteristics necessary to successfully represent multiple streams of natural language information from speech and text simultaneously, and proposes a multi-tiered system that implements these characteristics centered around a declarative configuration. The system facilitates easy incremental extension by allowing the creation of composable workflows of loosely coupled extensions, or plugins, allowing simple initial systems to be extended to accommodate rich representations while maintaining high data integrity. Key to this is leveraging established tools and technologies. We demonstrate using a small example.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,329
inproceedings
b-etal-2022-casteism
Casteism in {I}ndia, but Not Racism - a Study of Bias in Word Embeddings of {I}ndian Languages
Adebayo, Kolawole and Nanda, Rohan and Verma, Kanishk and Davis, Brian
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lateraisse-1.1/
B, Senthil Kumar and Tiwari, Pranav and Kumar, Aman Chandra and Chandrabose, Aravindan
Proceedings of the First Workshop on Language Technology and Resources for a Fair, Inclusive, and Safe Society within the 13th Language Resources and Evaluation Conference
1--7
In this paper, we studied the gender bias in monolingual word embeddings of two Indian languages, Hindi and Tamil. Tamil is one of the classical languages of India from the Dravidian language family. In Indian society and culture, instead of racism, a similar type of discrimination called casteism is directed against the subgroup of people representing the lower class, or Dalits. Measuring bias in the word embeddings with the WEAT score reveals that the embeddings are biased with respect to gender and casteism, which is in line with common stereotypical human biases.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,331
inproceedings
da-cunha-abeille-2022-objectifying
Objectifying Women? A Syntactic Bias in {F}rench and {E}nglish Corpora.
Adebayo, Kolawole and Nanda, Rohan and Verma, Kanishk and Davis, Brian
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lateraisse-1.2/
da Cunha, Yanis and Abeill{\'e}, Anne
Proceedings of the First Workshop on Language Technology and Resources for a Fair, Inclusive, and Safe Society within the 13th Language Resources and Evaluation Conference
8--16
Gender biases in syntax have been documented for languages with grammatical gender for cases where mixed-gender coordination structures take masculine agreement, or with male-first preference in the ordering of pairs (Adam and Eve). On the basis of various annotated corpora spanning different genres (fiction, newspapers, speech and web), we show another syntactic gender bias: masculine pronouns are more often subjects than feminine pronouns, in both English and French. We find the same bias towards masculine subjects for French human nouns, which then refer to males and females. Comparing the subject of passive verbs and the object of active verbs, we show that this syntactic function bias is not reducible to a bias in semantic role assignment since it is also found with non-agentive subjects. For French fiction, we also found that the masculine syntactic function bias is larger in text written by male authors {--} female authors seem to be unbiased. We finally discuss two principles as possible explanations, {\textquoteleft}Like Me' and {\textquoteleft}Easy first', and examine the effect of the discourse tendency for men being agents and topics. We conclude by addressing the impact of such biases in language technologies.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,332
inproceedings
erker-etal-2022-cancel
A Cancel Culture Corpus through the Lens of Natural Language Processing
Adebayo, Kolawole and Nanda, Rohan and Verma, Kanishk and Davis, Brian
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lateraisse-1.3/
Erker, Justus-Jonas and Goanta, Catalina and Spanakis, Gerasimos
Proceedings of the First Workshop on Language Technology and Resources for a Fair, Inclusive, and Safe Society within the 13th Language Resources and Evaluation Conference
17--25
Cancel Culture as an Internet phenomenon has been previously explored from a social and legal science perspective. This paper demonstrates how Natural Language Processing tasks can be derived from this previous work, underlying techniques on how cancel culture can be measured, identified and evaluated. As part of this paper, we introduce a first cancel culture data set of over 2.3 million tweets and a framework to enlarge it further. We provide a detailed analysis of this data set and propose a set of features, based on various models including sentiment analysis and emotion detection, that can help characterize cancel culture.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,333
inproceedings
verma-etal-2022-benchmarking
Benchmarking Language Models for Cyberbullying Identification and Classification from Social-media Texts
Adebayo, Kolawole and Nanda, Rohan and Verma, Kanishk and Davis, Brian
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lateraisse-1.4/
Verma, Kanishk and Milosevic, Tijana and Cortis, Keith and Davis, Brian
Proceedings of the First Workshop on Language Technology and Resources for a Fair, Inclusive, and Safe Society within the 13th Language Resources and Evaluation Conference
26--31
Cyberbullying is bullying perpetrated via the medium of modern communication technologies like social media networks and gaming platforms. Unfortunately, most existing datasets focusing on cyberbullying detection or classification are i) limited in number, ii) usually targeted to one specific online social networking (OSN) platform, or iii) often contain low-quality annotations. In this study, we fine-tune and benchmark state-of-the-art neural transformers for the binary classification of cyberbullying in social media texts, which is of high value to Natural Language Processing (NLP) researchers and computational social scientists. Furthermore, this work represents the first step toward building neural language models for cross-OSN platform cyberbullying classification to make them as OSN platform agnostic as possible.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,334
inproceedings
husunbeyi-etal-2022-identifying
Identifying Hate Speech Using Neural Networks and Discourse Analysis Techniques
Adebayo, Kolawole and Nanda, Rohan and Verma, Kanishk and Davis, Brian
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lateraisse-1.5/
H{\"us{\"unbeyi, Zehra Melce and Akar, Didar and {\"Ozg{\"ur, Arzucan
Proceedings of the First Workshop on Language Technology and Resources for a Fair, Inclusive, and Safe Society within the 13th Language Resources and Evaluation Conference
32--41
Discriminatory language, in particular hate speech, is a global problem posing a grave threat to democracy and human rights. Yet, it is not always easy to identify, as it is rarely explicit. In order to detect hate speech, we developed Hierarchical Attention Network (HAN) based and Bidirectional Encoder Representations from Transformers (BERT) based deep learning models to capture the changing discursive cues and understand the context around the discourse. In addition, we designed linguistic features using critical discourse analysis techniques and integrated them into these neural network models. We studied the compatibility of our model with the hate speech detection problem by comparing it with traditional machine learning models, as well as a Convolutional Neural Network (CNN) based model and a Convolutional Neural Network-Gated Recurrent Unit (CNN-GRU) based model, which have reached significant performance results for hate speech detection. Our results on a manually annotated corpus of print media in Turkish show that the proposed approach is effective for hate speech detection. We believe that the feature sets created for the Turkish language will encourage new studies in the quantitative analysis of hate speech.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,335
inproceedings
nawar-etal-2022-open
An Open Source Contractual Language Understanding Application Using Machine Learning
Adebayo, Kolawole and Nanda, Rohan and Verma, Kanishk and Davis, Brian
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.lateraisse-1.6/
Nawar, Afra and Rakib, Mohammed and Hai, Salma Abdul and Haq, Sanaulla
Proceedings of the First Workshop on Language Technology and Resources for a Fair, Inclusive, and Safe Society within the 13th Language Resources and Evaluation Conference
42--50
The legal field is characterized by its exclusivity and non-transparency. Despite the frequency and relevance of legal dealings, legal documents like contracts remain elusive to non-legal professionals due to the copious usage of legal jargon. There has been little advancement in making legal contracts more comprehensible. This paper presents how Machine Learning and NLP can be applied to solve this problem, further considering the challenges of applying ML to the high length of contract documents and training in a low-resource environment. The largest open-source contract dataset so far, the Contract Understanding Atticus Dataset (CUAD), is utilized. Various pre-processing experiments and hyperparameter tuning have been carried out, and we successfully managed to eclipse SOTA results presented for models in the CUAD dataset trained on RoBERTa-base. Our model, A-type-RoBERTa-base, achieved an AUPR score of 46.6{\%} compared to 42.6{\%} for the original RoBERTa-base. This model is utilized in our end-to-end contract understanding application, which is able to take a contract and highlight the clauses a user is looking to find, along with their descriptions, to aid due diligence before signing. Alongside digital, i.e. searchable, contracts, the system is capable of processing scanned, i.e. non-searchable, contracts using Tesseract OCR. This application is aimed not only at making contract review a comprehensible process for non-legal professionals, but also at helping lawyers and attorneys review contracts more efficiently.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,336
inproceedings
schiffers-etal-2022-evaluation
Evaluation of Word Embeddings for the Social Sciences
Degaetano, Stefania and Kazantseva, Anna and Reiter, Nils and Szpakowicz, Stan
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.latechclfl-1.1/
Schiffers, Ricardo and Kern, Dagmar and Hienert, Daniel
Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
1--6
Word embeddings are an essential instrument in many NLP tasks. Most available resources are trained on general language from Web corpora or Wikipedia dumps. However, word embeddings for domain-specific language are rare, in particular for the social science domain. Therefore, in this work, we describe the creation and evaluation of word embedding models based on 37,604 open-access social science research papers. In the evaluation, we compare domain-specific and general language models for (i) language coverage, (ii) diversity, and (iii) semantic relationships. We found that the created domain-specific model, even with a relatively small vocabulary size, covers a large part of social science concepts and that their neighborhoods are diverse in comparison to those of more general models. Across all relation types, we found a more extensive coverage of semantic relationships.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,338
inproceedings
hiippala-etal-2022-developing
Developing a tool for fair and reproducible use of paid crowdsourcing in the digital humanities
Degaetano, Stefania and Kazantseva, Anna and Reiter, Nils and Szpakowicz, Stan
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.latechclfl-1.2/
Hiippala, Tuomo and Hotti, Helmiina and Suviranta, Rosa
Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
7--12
This system demonstration paper describes ongoing work on a tool for fair and reproducible use of paid crowdsourcing in the digital humanities. Paid crowdsourcing is widely used in natural language processing and computer vision, but has been rarely applied in the digital humanities due to ethical concerns. We discuss concerns associated with paid crowdsourcing and describe how we seek to mitigate them in designing the tool and crowdsourcing pipelines. We demonstrate how the tool may be used to create annotations for diagrams, a complex mode of expression whose description requires human input.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,339
inproceedings
gutehrle-etal-2022-archive
Archive {T}ime{L}ine Summarization ({ATLS}): Conceptual Framework for Timeline Generation over Historical Document Collections
Degaetano, Stefania and Kazantseva, Anna and Reiter, Nils and Szpakowicz, Stan
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.latechclfl-1.3/
Gutehrl{\'e}, Nicolas and Doucet, Antoine and Jatowt, Adam
Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
13--23
Archive collections are nowadays mostly available through search engines interfaces, which allow a user to retrieve documents by issuing queries. The study of these collections may be, however, impaired by some aspects of search engines, such as the overwhelming number of documents returned or the lack of contextual knowledge provided. New methods that could work independently or in combination with search engines are then required to access these collections. In this position paper, we propose to extend TimeLine Summarization (TLS) methods on archive collections to assist in their studies. We provide an overview of existing TLS methods and we describe a conceptual framework for an Archive TimeLine Summarization (ATLS) system, which aims to generate informative, readable and interpretable timelines.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,340
inproceedings
sandhan-etal-2022-prabhupadavani
Prabhupadavani: A Code-mixed Speech Translation Data for 25 Languages
Degaetano, Stefania and Kazantseva, Anna and Reiter, Nils and Szpakowicz, Stan
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.latechclfl-1.4/
Sandhan, Jivnesh and Daksh, Ayush and Paranjay, Om Adideva and Behera, Laxmidhar and Goyal, Pawan
Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
24--29
Nowadays, the interest in code-mixing has become ubiquitous in Natural Language Processing (NLP); however, not much attention has been given to addressing this phenomenon for the Speech Translation (ST) task. This can be solely attributed to the lack of labelled data for the code-mixed ST task. Thus, we introduce Prabhupadavani, a multilingual code-mixed ST dataset for 25 languages. It is multi-domain, covers ten language families, and contains 94 hours of speech by 130+ speakers, manually aligned with the corresponding text in the target language. Prabhupadavani is about Vedic culture and heritage from Indic literature, where code-switching in the case of quotation from literature is important in the context of humanities teaching. To the best of our knowledge, Prabhupadavani is the first multilingual code-mixed ST dataset available in the ST literature. This data can also be used for a code-mixed machine translation task. The dataset can be accessed at: \url{https://github.com/frozentoad9/CMST}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,341
inproceedings
bonnell-ogihara-2022-using
Using Language Models to Improve Rule-based Linguistic Annotation of Modern Historical {J}apanese Corpora
Degaetano, Stefania and Kazantseva, Anna and Reiter, Nils and Szpakowicz, Stan
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.latechclfl-1.5/
Bonnell, Jerry and Ogihara, Mitsunori
Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
30--39
Annotation of unlabeled textual corpora with linguistic metadata is a fundamental technology in many scholarly workflows in the digital humanities (DH). Pretrained natural language processing pipelines offer tokenization, tagging, and dependency parsing of raw text simultaneously using an annotation scheme like Universal Dependencies (UD). However, the accuracy of these UD tools remains unknown for historical texts, and current methods lack mechanisms that enable helpful evaluations by domain experts. To address both points for the case of Modern Historical Japanese text, this paper proposes the use of unsupervised domain adaptation methods to develop a domain-adapted language model (LM) that can flag instances of inaccurate UD output from a pretrained LM, and the use of these instances to form rules that, when applied, improve pretrained annotation accuracy. To test the efficacy of the proposed approach, the paper evaluates the domain-adapted LM against three baselines that are not adapted to the historical domain. The experiments conducted demonstrate that the domain-adapted LM improves UD annotation in the Modern Historical Japanese domain and that rules produced using this LM are best indicative of characteristics of the domain in terms of out-of-vocabulary rate and candidate normalized form discovery for {\textquotedblleft}difficult{\textquotedblright} bigram terms.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,342
inproceedings
lovenia-etal-2022-every
Every picture tells a story: Image-grounded controllable stylistic story generation
Degaetano, Stefania and Kazantseva, Anna and Reiter, Nils and Szpakowicz, Stan
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.latechclfl-1.6/
Lovenia, Holy and Wilie, Bryan and Barraud, Romain and Cahyawijaya, Samuel and Chung, Willy and Fung, Pascale
Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
40--52
Generating a short story out of an image is arduous. Unlike image captioning, story generation from an image poses multiple challenges: preserving the story coherence, appropriately assessing the quality of the story, steering the generated story into a certain style, and addressing the scarcity of image-story pair reference datasets limiting supervision during training. In this work, we introduce Plug-and-Play Story Teller (PPST) and improve image-to-story generation by: 1) alleviating the data scarcity problem by incorporating large pre-trained models, namely CLIP and GPT-2, to facilitate a fluent image-to-text generation with minimal supervision, and 2) enabling a more style-relevant generation by incorporating stylistic adapters to control the story generation. We conduct image-to-story generation experiments with non-styled, romance-styled, and action-styled PPST approaches and compare our generated stories with those of previous work over three aspects, i.e., story coherence, image-story relevance, and style fitness, using both automatic and human evaluation. The results show that PPST improves story coherence and has better image-story relevance, but has yet to be adequately stylistic.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,343
inproceedings
lindqvist-etal-2022-gracious
To the Most Gracious Highness, from Your Humble Servant: Analysing {S}wedish 18th Century Petitions Using Text Classification
Degaetano, Stefania and Kazantseva, Anna and Reiter, Nils and Szpakowicz, Stan
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.latechclfl-1.7/
Lindqvist, Ellinor and Pettersson, Eva and Nivre, Joakim
Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
53--64
Petitions are a rich historical source, yet they have been relatively little used in historical research. In this paper, we aim to analyse Swedish texts from around the 18th century, and petitions in particular, using automatic means of text classification. We also test how text pre-processing and different feature representations affect the result, and we examine feature importance for our main class of interest - petitions. Our experiments show that the statistical algorithms NB, RF, SVM, and kNN are indeed well able to classify different genres of historical text. Further, we find that normalisation has a positive impact on classification, and that content words are particularly informative for the traditional models. A fine-tuned BERT model, fed with normalised data, outperforms all other classification experiments with a macro average F1 score of 98.8. However, less computationally expensive methods, including feature representation with word2vec, fastText embeddings or even TF-IDF values, combined with an SVM classifier, also show good results for both unnormalised and normalised data. In the feature importance analysis, where we obtain the features most decisive for the classification models, we find highly relevant characteristics of the petitions, namely words expressing signs of someone inferior addressing someone superior.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,344
inproceedings
siskou-etal-2022-automatized
Automatized Detection and Annotation for Calls to Action in {L}atin-{A}merican Social Media Postings
Degaetano, Stefania and Kazantseva, Anna and Reiter, Nils and Szpakowicz, Stan
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.latechclfl-1.8/
Siskou, Wassiliki and Giralt Mir{\'o}n, Clara and Molina-Raith, Sarah and Butt, Miriam
Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
65--69
Voter mobilization via social media has been shown to be an effective tool. While previous research has primarily looked at how calls-to-action (CTAs) were used in Twitter messages from non-profit organizations and protest mobilization, we are interested in identifying the linguistic cues used in CTAs found on Facebook and Twitter to enable their automatic identification. The work is part of an on-going collaboration with researchers from political science, who are investigating CTAs in the period leading up to recent elections in three different Latin American countries. We developed a new NLP pipeline for Spanish to facilitate their work. Our pipeline annotates social media posts with a range of linguistic information and then conducts targeted searches for linguistic cues that allow for an automatic annotation and identification of relevant CTAs. By using carefully crafted and linguistically informed heuristics, our system so far achieves an F1-score of 0.72.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,345
inproceedings
levine-2022-distribution
The Distribution of Deontic Modals in Jane Austen`s Mature Novels
Degaetano, Stefania and Kazantseva, Anna and Reiter, Nils and Szpakowicz, Stan
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.latechclfl-1.9/
Levine, Lauren
Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
70--74
Deontic modals are auxiliary verbs which express some kind of necessity, obligation, or moral recommendation. This paper investigates the collocation and distribution within Jane Austen`s six mature novels of the following deontic modals: must, should, ought, and need. We also examine the co-occurrences of these modals with name mentions of the heroines in the six novels, categorizing each occurrence with a category of obligation if applicable. The paper offers a brief explanation of the categories of obligation chosen for this investigation. In order to examine the types of obligations associated with each heroine, we then investigate the distribution of these categories in relation to mentions of each heroine. The patterns observed show a general concurrence with the thematic characterizations of Austen`s heroines which are found in literary analysis.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,346
inproceedings
konovalova-toral-2022-man
Man vs. Machine: Extracting Character Networks from Human and Machine Translations
Degaetano, Stefania and Kazantseva, Anna and Reiter, Nils and Szpakowicz, Stan
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.latechclfl-1.10/
Konovalova, Aleksandra and Toral, Antonio
Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
75--82
Most of the work on Character Networks to date is limited to monolingual texts. Conversely, in this paper we apply and analyze Character Networks on both source texts (English novels) and their Finnish translations (both human- and machine-translated). We assume that this analysis could provide some insights on changes in translations that could modify the character networks, as well as the narrative. The results show that the character networks of translations differ from those of the originals in the case of long novels, and the differences may also vary depending on the novel and the translator`s strategy.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,347
inproceedings
hamilton-piper-2022-covid
The {COVID} That Wasn`t: Counterfactual Journalism Using {GPT}
Degaetano, Stefania and Kazantseva, Anna and Reiter, Nils and Szpakowicz, Stan
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.latechclfl-1.11/
Hamilton, Sil and Piper, Andrew
Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
83--93
In this paper, we explore the use of large language models to assess human interpretations of real world events. To do so, we use a language model trained prior to 2020 to artificially generate news articles concerning COVID-19 given the headlines of actual articles written during the pandemic. We then compare stylistic qualities of our artificially generated corpus with a news corpus, in this case 5,082 articles produced by CBC News between January 23 and May 5, 2020. We find our artificially generated articles exhibit a considerably more negative attitude towards COVID and a significantly lower reliance on geopolitical framing. Our methods and results hold importance for researchers seeking to simulate large scale cultural processes via recent breakthroughs in text generation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,348
inproceedings
smith-lee-2022-war
War and Pieces: Comparing Perspectives About World War {I} and {II} Across {W}ikipedia Language Communities
Degaetano, Stefania and Kazantseva, Anna and Reiter, Nils and Szpakowicz, Stan
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.latechclfl-1.12/
Smith, Ana and Lee, Lillian
Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
94--104
Wikipedia is widely used to train models for various tasks including semantic association, text generation, and translation. These tasks typically involve aligning and using text from multiple language editions, with the assumption that all versions of the article present the same content. But this assumption may not hold. We introduce a methodology for approximating the extent to which narratives of conflict may diverge in this scenario, focusing on articles about World War I and II battles written by Wikipedia`s communities of editors across four language editions. For simplicity, our unit of analysis representing each language community`s perspectives is based on national entities and their subject-object-relation context, identified using named entity recognition and open-domain information extraction. Using a vector representation of these tuples, we evaluate how similarly different language editions portray how and how often these entities are mentioned in articles. Our results indicate that (1) language editions tend to reference associated countries more and (2) how much one language edition`s depiction overlaps with all others varies.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,349
inproceedings
steg-etal-2022-computational
Computational Detection of Narrativity: A Comparison Using Textual Features and Reader Response
Degaetano, Stefania and Kazantseva, Anna and Reiter, Nils and Szpakowicz, Stan
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.latechclfl-1.13/
Steg, Max and Slot, Karlo and Pianzola, Federico
Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
105--114
The task of computational textual narrative detection focuses on detecting the presence of narrative parts, or the degree of narrativity in texts. In this work, we focus on detecting the local degree of narrativity in texts, using short text passages. We performed a human annotation experiment on 325 English texts ranging across 20 genres to capture readers' perception by means of three cognitive aspects: suspense, curiosity, and surprise. We then employed a linear regression model to predict narrativity scores for 17,372 texts. When comparing our average annotation scores to similar annotation experiments with different cognitive aspects, we found that Pearson`s r ranges from .63 to .75. When looking at the calculated narrative probabilities, Pearson`s r is .91. We found that it is possible to use suspense, curiosity and surprise to detect narrativity. However, there are still differences between methods. This does not imply that there are inherently correct methods, but rather suggests that the underlying definition of narrativity is a determining factor for the results of the computational models employed.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,350
inproceedings
karlinska-etal-2022-towards
Towards a contextualised spatial-diachronic history of literature: mapping emotional representations of the city and the country in {P}olish fiction from 1864 to 1939
Degaetano, Stefania and Kazantseva, Anna and Reiter, Nils and Szpakowicz, Stan
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.latechclfl-1.14/
Karli{\'n}ska, Agnieszka and Rosi{\'n}ski, Cezary and Wieczorek, Jan and Hubar, Patryk and Koco{\'n}, Jan and Kubis, Marek and Wo{\'z}niak, Stanis{\l}aw and Margraf, Arkadiusz and Walentynowicz, Wiktor
Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
115--125
In this article, we discuss the conditions surrounding the building of historical and literary corpora. We describe the assumptions and method of making the original corpus of the Polish novel (1864-1939). Then, we present the research procedure aimed at demonstrating the variability of the emotional value of the concept of {\textquotedblleft}the city{\textquotedblright} and {\textquotedblleft}the country{\textquotedblright} in the texts included in our corpus. The proposed method considers the complex socio-political nature of Central and Eastern Europe, especially the fact that there was no unified Polish state during this period. The method can be easily replicated in studies of the literature of countries with similar specificities.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,351
inproceedings
zulaika-etal-2022-measuring
Measuring Presence of Women and Men as Information Sources in News
Degaetano, Stefania and Kazantseva, Anna and Reiter, Nils and Szpakowicz, Stan
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.latechclfl-1.15/
Zulaika, Muitze and Saralegi, Xabier and San Vicente, I{\~n}aki
Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
126--134
In the news, statements from information sources, i.e. individuals who appear in the news, are often quoted. Detecting those quotes and the gender of their sources is a key task when it comes to media analysis from a gender perspective. It is a challenging task: the structure of the quotes is variable, gender marks are not present in many languages, and quote authors are often omitted due to the frequent use of coreferences. This paper proposes a strategy to measure the presence of women and men as information sources in news. We approach the problem of detecting sentences including quotes and the gender of the speaker as a joint task, by means of a supervised multiclass classifier of sentences. We have created the first datasets for Spanish and Basque by manually annotating quotes and the gender of the associated sources in news items. The results obtained show that BERT-based approaches are significantly better than classical bag-of-words-based ones, achieving accuracies close to 90{\%}. We also analyse a bilingual learning strategy and the synthetic generation of additional training examples; these provide improvements of up to 3.4{\%} and 5.6{\%}, respectively.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,352
inproceedings
wang-etal-2022-investigating
Investigating associative, switchable and negatable {W}inograd items on renewed {F}rench data sets
Est{\`e}ve, Yannick and Jim{\'e}nez, Tania and Parcollet, Titouan and Zanon Boito, Marcely
6
2022
Avignon, France
ATALA
https://aclanthology.org/2022.jeptalnrecital-taln.13/
Wang, Xiaoou and Seminck, Olga and Amsili, Pascal
Actes de la 29e Conf{\'e}rence sur le Traitement Automatique des Langues Naturelles. Volume 1 : conf{\'e}rence principale
136--143
The Winograd Schema Challenge (WSC) consists of a set of anaphora resolution problems resolvable only by reasoning about world knowledge. This article describes the update of the existing French data set and the creation of three subsets allowing for a more robust, fine-grained evaluation protocol of WSC in French (FWSC): an associative subset (items easily resolvable with lexical co-occurrence), a switchable subset (items where the inversion of two keywords reverses the answer) and a negatable subset (items where applying negation to the verb reverses the answer). Experiments on these data sets with CamemBERT reach SOTA performance. Our evaluation protocol showed, in addition, that the higher performance could be explained by the existence of associative items in FWSC. Besides, increasing the size of the training corpus improves the model`s performance on switchable items, while the impact of a larger training corpus remains small on negatable items.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,388
inproceedings
ailem-etal-2022-encouraging
Encouraging Neural Machine Translation to Satisfy Terminology Constraints.
Est{\`e}ve, Yannick and Jim{\'e}nez, Tania and Parcollet, Titouan and Zanon Boito, Marcely
6
2022
Avignon, France
ATALA
https://aclanthology.org/2022.jeptalnrecital-taln.44/
Ailem, Melissa and Liu, Jingshu and Qader, Raheel
Actes de la 29e Conf{\'e}rence sur le Traitement Automatique des Langues Naturelles. Volume 1 : conf{\'e}rence principale
446--446
We present a new approach to encourage neural machine translation to satisfy lexical constraints. Our method acts at the training step, thereby avoiding the introduction of any extra computational overhead at the inference step. The proposed method combines three main ingredients. The first one consists in augmenting the training data to specify the constraints. Intuitively, this encourages the model to learn a copy behavior when it encounters constraint terms. Compared to previous work, we use a simplified augmentation strategy without source factors. The second ingredient is constraint token masking, which makes it even easier for the model to learn the copy behavior and generalize better. The third one is a modification of the standard cross-entropy loss to bias the model towards assigning high probabilities to constraint words. Empirical results show that our method improves upon related baselines in terms of both BLEU score and the percentage of generated constraint terms.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,419
inproceedings
wilken-etal-2022-suber
{S}ub{ER} - A Metric for Automatic Evaluation of Subtitle Quality
Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta
may
2022
Dublin, Ireland (in-person and online)
Association for Computational Linguistics
https://aclanthology.org/2022.iwslt-1.1/
Wilken, Patrick and Georgakopoulou, Panayota and Matusov, Evgeny
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
1--10
This paper addresses the problem of evaluating the quality of automatically generated subtitles, which includes not only the quality of the machine-transcribed or translated speech, but also the quality of line segmentation and subtitle timing. We propose SubER - a single novel metric based on edit distance with shifts that takes all of these subtitle properties into account. We compare it to existing metrics for evaluating transcription, translation, and subtitle quality. A careful human evaluation in a post-editing scenario shows that the new metric has a high correlation with the post-editing effort and direct human assessment scores, outperforming baseline metrics considering only the subtitle text, such as WER and BLEU, and existing methods to integrate segmentation and timing features.
null
null
10.18653/v1/2022.iwslt-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,457
inproceedings
thompson-alshehri-2022-improving
Improving {A}rabic Diacritization by Learning to Diacritize and Translate
Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta
may
2022
Dublin, Ireland (in-person and online)
Association for Computational Linguistics
https://aclanthology.org/2022.iwslt-1.2/
Thompson, Brian and Alshehri, Ali
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
11--21
We propose a novel multitask learning method for diacritization which trains a model to both diacritize and translate. Our method addresses data sparsity by exploiting large, readily available bitext corpora. Furthermore, translation requires implicit linguistic and semantic knowledge, which is helpful for resolving ambiguities in diacritization. We apply our method to the Penn Arabic Treebank and report a new state-of-the-art word error rate of 4.79{\%}. We also conduct manual and automatic analysis to better understand our method and highlight some of the remaining challenges in diacritization. Our method has applications in text-to-speech, speech-to-speech translation, and other NLP tasks.
null
null
10.18653/v1/2022.iwslt-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,458
inproceedings
kano-etal-2022-simultaneous
Simultaneous Neural Machine Translation with Prefix Alignment
Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta
may
2022
Dublin, Ireland (in-person and online)
Association for Computational Linguistics
https://aclanthology.org/2022.iwslt-1.3/
Kano, Yasumasa and Sudoh, Katsuhito and Nakamura, Satoshi
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
22--31
Simultaneous translation is a task that requires starting translation before the speaker has finished speaking, so we face a trade-off between latency and accuracy. In this work, we focus on prefix-to-prefix translation and propose a method to extract alignment between bilingual prefix pairs. We use the alignment to segment a streaming input and fine-tune a translation model. The proposed method demonstrated higher BLEU scores than the baselines in low-latency ranges in our experiments on the IWSLT simultaneous translation benchmark.
null
null
10.18653/v1/2022.iwslt-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,459
inproceedings
petrick-etal-2022-locality
Locality-Sensitive Hashing for Long Context Neural Machine Translation
Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta
may
2022
Dublin, Ireland (in-person and online)
Association for Computational Linguistics
https://aclanthology.org/2022.iwslt-1.4/
Petrick, Frithjof and Rosendahl, Jan and Herold, Christian and Ney, Hermann
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
32--42
After its introduction, the Transformer architecture quickly became the gold standard for the task of neural machine translation. A major advantage of the Transformer compared to previous architectures is the faster training speed achieved by complete parallelization across timesteps, due to the use of attention over recurrent layers. However, this also leads to one of the biggest problems of the Transformer, namely the quadratic time and memory complexity with respect to the input length. In this work, we adapt the locality-sensitive hashing approach of Kitaev et al. (2020) to self-attention in the Transformer, extend it to cross-attention, and apply this memory-efficient framework to sentence- and document-level machine translation. Our experiments show that the LSH attention scheme for sentence-level translation comes at the cost of slightly reduced translation quality. For document-level NMT, we are able to include much bigger context sizes than what is possible with the baseline Transformer. However, more context neither improves translation quality nor improves scores on targeted test suites.
null
null
10.18653/v1/2022.iwslt-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,460
inproceedings
chang-etal-2022-anticipation
Anticipation-Free Training for Simultaneous Machine Translation
Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta
may
2022
Dublin, Ireland (in-person and online)
Association for Computational Linguistics
https://aclanthology.org/2022.iwslt-1.5/
Chang, Chih-Chiang and Chuang, Shun-Po and Lee, Hung-yi
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
43--61
Simultaneous machine translation (SimulMT) speeds up the translation process by starting to translate before the source sentence is completely available. It is difficult due to limited context and word order differences between languages. Existing methods increase latency or introduce adaptive read-write policies for SimulMT models to handle local reordering and improve translation quality. However, long-distance reordering can make SimulMT models learn translations incorrectly. Specifically, the model may be forced to predict target tokens when the corresponding source tokens have not been read. This leads to aggressive anticipation during inference, resulting in the hallucination phenomenon. To mitigate this problem, we propose a new framework that decomposes the translation process into a monotonic translation step and a reordering step, and we model the latter with an auxiliary sorting network (ASN). The ASN rearranges the hidden states to match the order in the target language, so that the SimulMT model can learn to translate more reasonably. The entire model is optimized end-to-end and does not rely on external aligners or data. During inference, the ASN is removed to achieve streaming. Experiments show the proposed framework could outperform previous methods with less latency.
null
null
10.18653/v1/2022.iwslt-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,461
inproceedings
gaido-etal-2022-talking
Who Are We Talking About? Handling Person Names in Speech Translation
Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta
may
2022
Dublin, Ireland (in-person and online)
Association for Computational Linguistics
https://aclanthology.org/2022.iwslt-1.6/
Gaido, Marco and Negri, Matteo and Turchi, Marco
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
62--73
Recent work has shown that systems for speech translation (ST) {--} similarly to automatic speech recognition (ASR) {--} poorly handle person names. This shortcoming not only leads to errors that can seriously distort the meaning of the input, but also hinders the adoption of such systems in application scenarios (like computer-assisted interpreting) where the translation of named entities, like person names, is crucial. In this paper, we first analyse the outputs of ASR/ST systems to identify the reasons for failures in person name transcription/translation. Besides the frequency in the training data, we pinpoint the nationality of the referred person as a key factor. We then mitigate the problem by creating multilingual models, and further improve our ST systems by forcing them to jointly generate transcripts and translations, prioritising the former over the latter. Overall, our solutions result in a relative improvement in token-level person name accuracy of 47.8{\%} on average for three language pairs (en-{\ensuremath{>}}es,fr,it).
null
null
10.18653/v1/2022.iwslt-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,462
inproceedings
xu-etal-2022-joint
Joint Generation of Captions and Subtitles with Dual Decoding
Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta
may
2022
Dublin, Ireland (in-person and online)
Association for Computational Linguistics
https://aclanthology.org/2022.iwslt-1.7/
Xu, Jitao and Buet, Fran{\c{c}}ois and Crego, Josep and Bertin-Lem{\'e}e, Elise and Yvon, Fran{\c{c}}ois
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
74--82
As the amount of audio-visual content increases, the need to develop automatic captioning and subtitling solutions to match the expectations of a growing international audience appears as the only viable way to boost throughput and lower the related post-production costs. Automatic captioning and subtitling often need to be tightly intertwined to achieve an appropriate level of consistency and synchronization with each other and with the video signal. In this work, we assess a dual decoding scheme to achieve a strong coupling between these two tasks and show how adequacy and consistency are increased, with virtually no additional cost in terms of model size and training complexity.
null
null
10.18653/v1/2022.iwslt-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,463
inproceedings
wu-etal-2022-mirroralign
{M}irror{A}lign: A Super Lightweight Unsupervised Word Alignment Model via Cross-Lingual Contrastive Learning
Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta
may
2022
Dublin, Ireland (in-person and online)
Association for Computational Linguistics
https://aclanthology.org/2022.iwslt-1.8/
Wu, Di and Ding, Liang and Yang, Shuo and Li, Mingyang
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
83--91
Word alignment is essential for the downstream cross-lingual language understanding and generation tasks. Recently, the performance of the neural word alignment models has exceeded that of statistical models. However, they heavily rely on sophisticated translation models. In this study, we propose a super lightweight unsupervised word alignment model named MirrorAlign, in which bidirectional symmetric attention trained with a contrastive learning objective is introduced, and an agreement loss is employed to bind the attention maps, such that the alignments follow a mirror-like symmetry hypothesis. Experimental results on several public benchmarks demonstrate that our model achieves competitive, if not better, performance compared to the state of the art in word alignment while significantly reducing the training and decoding time on average. Further ablation analysis and case studies show the superiority of our proposed MirrorAlign. Notably, we recognize our model as a pioneering attempt to unify bilingual word embedding and word alignments. Encouragingly, our approach achieves a 16.4X speedup against GIZA++ and 50X parameter compression compared with the Transformer-based alignment methods. We release our code to facilitate the community: \url{https://github.com/moore3930/MirrorAlign}.
null
null
10.18653/v1/2022.iwslt-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,464