Dataset schema:

  entry_type          stringclasses   4 values
  citation_key        stringlengths   10–110
  title               stringlengths   6–276
  editor              stringclasses   723 values
  month               stringclasses   69 values
  year                stringdate      1963-01-01 – 2022-01-01
  address             stringclasses   202 values
  publisher           stringclasses   41 values
  url                 stringlengths   34–62
  author              stringlengths   6–2.07k
  booktitle           stringclasses   861 values
  pages               stringlengths   1–12
  abstract            stringlengths   302–2.4k
  journal             stringclasses   5 values
  volume              stringclasses   24 values
  doi                 stringlengths   20–39
  n                   stringclasses   3 values
  wer                 stringclasses   1 value
  uas                 null
  language            stringclasses   3 values
  isbn                stringclasses   34 values
  recall              null
  number              stringclasses   8 values
  a                   null
  b                   null
  c                   null
  k                   null
  f1                  stringclasses   4 values
  r                   stringclasses   2 values
  mci                 stringclasses   1 value
  p                   stringclasses   2 values
  sd                  stringclasses   1 value
  female              stringclasses   0 values
  m                   stringclasses   0 values
  food                stringclasses   1 value
  f                   stringclasses   1 value
  note                stringclasses   20 values
  __index_level_0__   int64           22k–106k
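Rows with this schema can be rendered back into BibTeX entries like the ones below. A minimal sketch, assuming each row arrives as a plain Python dict keyed by the column names above, with None standing in for the nulls; the helper name `row_to_bibtex` and the field ordering are my own choices, not part of the dataset:

```python
def row_to_bibtex(row):
    """Render one dataset row (dict of column name -> value) as a BibTeX entry.

    Null columns (None) are skipped; non-bibliographic columns are ignored.
    """
    entry_type = row["entry_type"]
    key = row["citation_key"]
    # Columns that map directly onto standard BibTeX fields, in output order.
    bib_fields = [
        "title", "author", "editor", "booktitle", "journal", "volume",
        "number", "month", "year", "address", "publisher", "pages",
        "url", "doi", "isbn", "note", "abstract",
    ]
    lines = [f"@{entry_type}{{{key},"]
    for field in bib_fields:
        value = row.get(field)
        if value is not None:
            lines.append(f'    {field} = "{value}",')
    lines.append("}")
    return "\n".join(lines)

row = {
    "entry_type": "inproceedings",
    "citation_key": "ward-2022-using",
    "title": "Using Interaction Style Dimensions to Characterize Spoken Dialog Corpora",
    "year": "2022",
    "pages": "225--230",
    "journal": None,
}
print(row_to_bibtex(row))
```

A real exporter would also brace-protect capitalized words in titles and emit `month` values such as `sep` unquoted as BibTeX month macros; this sketch quotes everything except the entry header for simplicity.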
% __index_level_0__: 22,852
@inproceedings{qiu-etal-2022-towards,
    title = "Towards Socially Intelligent Agents with Mental State Transition and Human Value",
    author = "Qiu, Liang and Zhao, Yizhou and Liang, Yuan and Lu, Pan and Shi, Weiyan and Yu, Zhou and Zhu, Song-Chun",
    editor = "Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej",
    booktitle = "Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue",
    month = sep,
    year = "2022",
    address = "Edinburgh, UK",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sigdial-1.16/",
    doi = "10.18653/v1/2022.sigdial-1.16",
    pages = "146--158",
    abstract = "Building a socially intelligent agent involves many challenges. One of which is to track the agent's mental state transition and teach the agent to make decisions guided by its value like a human. Towards this end, we propose to incorporate mental state simulation and value modeling into dialogue agents. First, we build a hybrid mental state parser that extracts information from both the dialogue and event observations and maintains a graphical representation of the agent's mind; Meanwhile, the transformer-based value model learns human preferences from the human value dataset, ValueNet. Empirical results show that the proposed model attains state-of-the-art performance on the dialogue/action/emotion prediction task in the fantasy text-adventure game dataset, LIGHT. We also show example cases to demonstrate: (i) how the proposed mental state parser can assist the agent's decision by grounding on the context like locations and objects, and (ii) how the value model can help the agent make decisions based on its personal priorities.",
}
% __index_level_0__: 22,853
@inproceedings{younes-etal-2022-automatic,
    title = "Automatic Verbal Depiction of a Brick Assembly for a Robot Instructing Humans",
    author = "Younes, Rami and Bailly, G{\'e}rard and Elisei, Frederic and Pellier, Damien",
    editor = "Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej",
    booktitle = "Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue",
    month = sep,
    year = "2022",
    address = "Edinburgh, UK",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sigdial-1.17/",
    doi = "10.18653/v1/2022.sigdial-1.17",
    pages = "159--171",
    abstract = "Verbal and nonverbal communication skills are essential for human-robot interaction, in particular when the agents are involved in a shared task. We address the specific situation when the robot is the only agent knowing about the plan and the goal of the task and has to instruct the human partner. The case study is a brick assembly. We here describe a multi-layered verbal depictor whose semantic, syntactic and lexical settings have been collected and evaluated via crowdsourcing. One crowdsourced experiment involves a robot-instructed pick-and-place task. We show that implicitly referring to achieved subgoals (stairs, pillows, etc.) increases performance of human partners.",
}
% __index_level_0__: 22,854
@inproceedings{farzana-parde-2022-interaction,
    title = "Are Interaction Patterns Helpful for Task-Agnostic Dementia Detection? An Empirical Exploration",
    author = "Farzana, Shahla and Parde, Natalie",
    editor = "Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej",
    booktitle = "Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue",
    month = sep,
    year = "2022",
    address = "Edinburgh, UK",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sigdial-1.18/",
    doi = "10.18653/v1/2022.sigdial-1.18",
    pages = "172--182",
    abstract = "Dementia often manifests in dialog through specific behaviors such as requesting clarification, communicating repetitive ideas, and stalling, prompting conversational partners to probe or otherwise attempt to elicit information. Dialog act (DA) sequences can have predictive power for dementia detection through their potential to capture these meaningful interaction patterns. However, most existing work in this space relies on content-dependent features, raising questions about their generalizability beyond small reference sets or across different cognitive tasks. In this paper, we adapt an existing DA annotation scheme for two different cognitive tasks present in a popular dementia detection dataset. We show that a DA tagging model leveraging neural sentence embeddings and other information from previous utterances and speaker tags achieves strong performance for both tasks. We also propose content-free interaction features and show that they yield high utility in distinguishing dementia and control subjects across different tasks. Our study provides a step toward better understanding how interaction patterns in spontaneous dialog affect cognitive modeling across different tasks, which carries implications for the design of non-invasive and low-cost cognitive health monitoring tools for use at scale.",
}
% __index_level_0__: 22,855
@inproceedings{saha-etal-2022-edu,
    title = "{EDU}-{AP}: Elementary Discourse Unit based Argument Parser",
    author = "Saha, Sougata and Das, Souvik and Srihari, Rohini",
    editor = "Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej",
    booktitle = "Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue",
    month = sep,
    year = "2022",
    address = "Edinburgh, UK",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sigdial-1.19/",
    doi = "10.18653/v1/2022.sigdial-1.19",
    pages = "183--192",
    abstract = "Neural approaches to end-to-end argument mining (AM) are often formulated as dependency parsing (DP), which relies on token-level sequence labeling and intricate post-processing for extracting argumentative structures from text. Although such methods yield reasonable results, operating solely with tokens increases the possibility of discontinuous and overly segmented structures due to minor inconsistencies in token level predictions. In this paper, we propose EDU-AP, an end-to-end argument parser that alleviates such problems in dependency-based methods by exploiting the intrinsic relationship between elementary discourse units (EDUs) and argumentative discourse units (ADUs) and operates at both token and EDU level granularity. Further, appropriately using contextual information, along with optimizing a novel objective function during training, EDU-AP achieves significant improvements across all four tasks of AM compared to existing dependency-based methods.",
}
% __index_level_0__: 22,856
@inproceedings{threlkeld-etal-2022-using,
    title = "Using Transition Duration to Improve Turn-taking in Conversational Agents",
    author = "Threlkeld, Charles and Umair, Muhammad and de Ruiter, Jp",
    editor = "Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej",
    booktitle = "Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue",
    month = sep,
    year = "2022",
    address = "Edinburgh, UK",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sigdial-1.20/",
    doi = "10.18653/v1/2022.sigdial-1.20",
    pages = "193--203",
    abstract = "Smooth turn-taking is an important aspect of natural conversation that allows interlocutors to maintain adequate mutual comprehensibility. In human communication, the timing between utterances is normatively constrained, and deviations convey socially relevant paralinguistic information. However, for spoken dialogue systems, smooth turn-taking continues to be a challenge. This motivates the need for spoken dialogue systems to employ a robust model of turn-taking to ensure that messages are exchanged smoothly and without transmitting unintended paralinguistic information. In this paper, we examine dialogue data from natural human interaction to develop an evidence-based model for turn-timing in spoken dialogue systems. First, we use timing between turns to develop two models of turn-taking: a speaker-agnostic model and a speaker-sensitive model. From the latter model, we derive the propensity of listeners to take the next turn given TRP duration. Finally, we outline how this measure may be incorporated into a spoken dialogue system to improve the naturalness of conversation.",
}
% __index_level_0__: 22,857
@inproceedings{wu-etal-2022-dg2,
    title = "{DG}2: Data Augmentation Through Document Grounded Dialogue Generation",
    author = "Wu, Qingyang and Feng, Song and Chen, Derek and Joshi, Sachindra and Lastras, Luis and Yu, Zhou",
    editor = "Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej",
    booktitle = "Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue",
    month = sep,
    year = "2022",
    address = "Edinburgh, UK",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sigdial-1.21/",
    doi = "10.18653/v1/2022.sigdial-1.21",
    pages = "204--216",
    abstract = "Collecting data for training dialog systems can be extremely expensive due to the involvement of human participants and the need for extensive annotation. Especially in document-grounded dialog systems, human experts need to carefully read the unstructured documents to answer the users' questions. As a result, existing document-grounded dialog datasets are relatively small-scale and obstruct the effective training of dialogue systems. In this paper, we propose an automatic data augmentation technique grounded on documents through a generative dialogue model. The dialogue model consists of a user bot and agent bot that can synthesize diverse dialogues given an input document, which is then used to train a downstream model. When supplementing the original dataset, our method achieves significant improvement over traditional data augmentation methods. We also achieve great performance in the low-resource setting.",
}
% __index_level_0__: 22,858
@inproceedings{li-etal-2022-speak,
    title = "When can {I} Speak? Predicting initiation points for spoken dialogue agents",
    author = "Li, Siyan and Paranjape, Ashwin and Manning, Christopher",
    editor = "Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej",
    booktitle = "Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue",
    month = sep,
    year = "2022",
    address = "Edinburgh, UK",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sigdial-1.22/",
    doi = "10.18653/v1/2022.sigdial-1.22",
    pages = "217--224",
    abstract = "Current spoken dialogue systems initiate their turns after a long period of silence (700-1000ms), which leads to little real-time feedback, sluggish responses, and an overall stilted conversational flow. Humans typically respond within 200ms and successfully predicting initiation points in advance would allow spoken dialogue agents to do the same. In this work, we predict the lead-time to initiation using prosodic features from a pre-trained speech representation model (wav2vec 1.0) operating on user audio and word features from a pre-trained language model (GPT-2) operating on incremental transcriptions. To evaluate errors, we propose two metrics w.r.t. predicted and true lead times. We train and evaluate the models on the Switchboard Corpus and find that our method outperforms features from prior work on both metrics and vastly outperforms the common approach of waiting for 700ms of silence.",
}
% __index_level_0__: 22,859
@inproceedings{ward-2022-using,
    title = "Using Interaction Style Dimensions to Characterize Spoken Dialog Corpora",
    author = "Ward, Nigel",
    editor = "Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej",
    booktitle = "Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue",
    month = sep,
    year = "2022",
    address = "Edinburgh, UK",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sigdial-1.23/",
    doi = "10.18653/v1/2022.sigdial-1.23",
    pages = "225--230",
    abstract = "The construction of spoken dialog systems today relies heavily on appropriate corpora, but corpus selection is more an art than a science. As interaction style properties govern many aspects of dialog, they have the potential to be useful for relating and comparing corpora. This paper overviews a recently-developed model of interaction styles and shows how it can be used to identify relevant corpus differences, estimate corpus similarity, and flag likely outlier dialogs.",
}
% __index_level_0__: 22,860
@inproceedings{yang-etal-2022-multi,
    title = "Multi-Domain Dialogue State Tracking with Top-K Slot Self Attention",
    author = "Yang, Longfei and Li, Jiyi and Li, Sheng and Shinozaki, Takahiro",
    editor = "Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej",
    booktitle = "Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue",
    month = sep,
    year = "2022",
    address = "Edinburgh, UK",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sigdial-1.24/",
    doi = "10.18653/v1/2022.sigdial-1.24",
    pages = "231--236",
    abstract = "As an important component of task-oriented dialogue systems, dialogue state tracking is designed to track the dialogue state through the conversations between users and systems. Multi-domain dialogue state tracking is a challenging task, in which the correlation among different domains and slots needs to be considered. Recently, slot self-attention is proposed to provide a data-driven manner to handle it. However, a full-support slot self-attention may involve redundant information interchange. In this paper, we propose a top-k attention-based slot self-attention for multi-domain dialogue state tracking. In the slot self-attention layers, we force each slot to involve information from the other k prominent slots and mask the rest out. The experimental results on two mainstream multi-domain task-oriented dialogue datasets, MultiWOZ 2.0 and MultiWOZ 2.4, present that our proposed approach is effective to improve the performance of multi-domain dialogue state tracking. We also find that the best result is obtained when each slot interchanges information with only a few slots.",
}
% __index_level_0__: 22,861
@inproceedings{xue-etal-2022-building,
    title = "Building a Knowledge-Based Dialogue System with Text Infilling",
    author = "Xue, Qiang and Takiguchi, Tetsuya and Ariki, Yasuo",
    editor = "Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej",
    booktitle = "Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue",
    month = sep,
    year = "2022",
    address = "Edinburgh, UK",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sigdial-1.25/",
    doi = "10.18653/v1/2022.sigdial-1.25",
    pages = "237--243",
    abstract = "In recent years, generation-based dialogue systems using state-of-the-art (SoTA) transformer-based models have demonstrated impressive performance in simulating human-like conversations. To improve the coherence and knowledge utilization capabilities of dialogue systems, knowledge-based dialogue systems integrate retrieved graph knowledge into transformer-based models. However, knowledge-based dialog systems sometimes generate responses without using the retrieved knowledge. In this work, we propose a method in which the knowledge-based dialogue system can constantly utilize the retrieved knowledge using text infilling. Text infilling is the task of predicting missing spans of a sentence or paragraph. We utilize this text infilling to enable dialog systems to fill incomplete responses with the retrieved knowledge. Our proposed dialogue system has been proven to generate significantly more correct responses than baseline dialogue systems.",
}
% __index_level_0__: 22,862
@inproceedings{sastre-martinez-etal-2022-generating,
    title = "Generating Meaningful Topic Descriptions with Sentence Embeddings and {LDA}",
    author = "Sastre Martinez, Javier Miguel and Gorman, Sean and Nugent, Aisling and Pal, Anandita",
    editor = "Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej",
    booktitle = "Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue",
    month = sep,
    year = "2022",
    address = "Edinburgh, UK",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sigdial-1.26/",
    doi = "10.18653/v1/2022.sigdial-1.26",
    pages = "244--254",
    abstract = "A major part of business operations is interacting with customers. Traditionally this was done by human agents, face to face or over telephone calls within customer support centers. There is now a move towards automation in this field using chatbots and virtual assistants, as well as an increased focus on analyzing recorded conversations to gather insights. Determining the different services that a human agent provides and estimating the incurred call handling costs per service are key to prioritizing service automation. We propose a new technique, ELDA (Embedding based LDA), based on a combination of LDA topic modeling and sentence embeddings, that can take a dataset of customer-agent dialogs and extract key utterances instead of key words. The aim is to provide more meaningful and contextual topic descriptions required for interpreting and labeling the topics, reducing the need for manually reviewing dialog transcripts.",
}
% __index_level_0__: 22,863
@inproceedings{stewart-mihalcea-2022-well,
    title = "How Well Do You Know Your Audience? Toward Socially-aware Question Generation",
    author = "Stewart, Ian and Mihalcea, Rada",
    editor = "Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej",
    booktitle = "Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue",
    month = sep,
    year = "2022",
    address = "Edinburgh, UK",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sigdial-1.27/",
    doi = "10.18653/v1/2022.sigdial-1.27",
    pages = "255--269",
    abstract = "When writing, a person may need to anticipate questions from their audience, but different social groups may ask very different types of questions. If someone is writing about a problem they want to resolve, what kind of follow-up question will a domain expert ask, and could the writer better address the expert's information needs by rewriting their original post? In this paper, we explore the task of socially-aware question generation. We collect a data set of questions and posts from social media, including background information about the question-askers' social groups. We find that different social groups, such as experts and novices, consistently ask different types of questions. We train several text-generation models that incorporate social information, and we find that a discrete social-representation model outperforms the text-only model when different social groups ask highly different questions from one another. Our work provides a framework for developing text generation models that can help writers anticipate the information expectations of highly different social groups.",
}
% __index_level_0__: 22,864
@inproceedings{lin-etal-2022-gentus,
    title = "{G}en{TUS}: Simulating User Behaviour and Language in Task-oriented Dialogues with Generative Transformers",
    author = "Lin, Hsien-chin and Geishauser, Christian and Feng, Shutong and Lubis, Nurul and van Niekerk, Carel and Heck, Michael and Gasic, Milica",
    editor = "Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej",
    booktitle = "Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue",
    month = sep,
    year = "2022",
    address = "Edinburgh, UK",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sigdial-1.28/",
    doi = "10.18653/v1/2022.sigdial-1.28",
    pages = "270--282",
    abstract = "User simulators (USs) are commonly used to train task-oriented dialogue systems via reinforcement learning. The interactions often take place on semantic level for efficiency, but there is still a gap from semantic actions to natural language, which causes a mismatch between training and deployment environment. Incorporating a natural language generation (NLG) module with USs during training can partly deal with this problem. However, since the policy and NLG of USs are optimised separately, these simulated user utterances may not be natural enough in a given context. In this work, we propose a generative transformer-based user simulator (GenTUS). GenTUS consists of an encoder-decoder structure, which means it can optimise both the user policy and natural language generation jointly. GenTUS generates both semantic actions and natural language utterances, preserving interpretability and enhancing language variation. In addition, by representing the inputs and outputs as word sequences and by using a large pre-trained language model we can achieve generalisability in feature representation. We evaluate GenTUS with automatic metrics and human evaluation. Our results show that GenTUS generates more natural language and is able to transfer to an unseen ontology in a zero-shot fashion. In addition, its behaviour can be further shaped with reinforcement learning opening the door to training specialised user simulators.",
}
% __index_level_0__: 22,865
@inproceedings{nekvinda-dusek-2022-aargh,
    title = "{AARGH}! End-to-end Retrieval-Generation for Task-Oriented Dialog",
    author = "Nekvinda, Tom{\'a}{\v{s}} and Du{\v{s}}ek, Ond{\v{r}}ej",
    editor = "Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej",
    booktitle = "Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue",
    month = sep,
    year = "2022",
    address = "Edinburgh, UK",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sigdial-1.29/",
    doi = "10.18653/v1/2022.sigdial-1.29",
    pages = "283--297",
    abstract = "We introduce AARGH, an end-to-end task-oriented dialog system combining retrieval and generative approaches in a single model, aiming at improving dialog management and lexical diversity of outputs. The model features a new response selection method based on an action-aware training objective and a simplified single-encoder retrieval architecture which allow us to build an end-to-end retrieval-enhanced generation model where retrieval and generation share most of the parameters. On the MultiWOZ dataset, we show that our approach produces more diverse outputs while maintaining or improving state tracking and context-to-response generation performance, compared to state-of-the-art baselines.",
}
% __index_level_0__: 22,866
@inproceedings{hedayatnia-etal-2022-systematic,
    title = "A Systematic Evaluation of Response Selection for Open Domain Dialogue",
    author = "Hedayatnia, Behnam and Jin, Di and Liu, Yang and Hakkani-Tur, Dilek",
    editor = "Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej",
    booktitle = "Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue",
    month = sep,
    year = "2022",
    address = "Edinburgh, UK",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sigdial-1.30/",
    doi = "10.18653/v1/2022.sigdial-1.30",
    pages = "298--311",
    abstract = "Recent progress on neural approaches for language processing has triggered a resurgence of interest on building intelligent open-domain chatbots. However, even the state-of-the-art neural chatbots cannot produce satisfying responses for every turn in a dialog. A practical solution is to generate multiple response candidates for the same context, and then perform response ranking/selection to determine which candidate is the best. Previous work in response selection typically trains response rankers using synthetic data that is formed from existing dialogs by using a ground truth response as the single appropriate response and constructing inappropriate responses via random selection or using adversarial methods. In this work, we curated a dataset where responses from multiple response generators produced for the same dialog context are manually annotated as appropriate (positive) and inappropriate (negative). We argue that such training data better matches the actual use case examples, enabling the models to learn to rank responses effectively. With this new dataset, we conduct a systematic evaluation of state-of-the-art methods for response selection, and demonstrate that both strategies of using multiple positive candidates and using manually verified hard negative candidates can bring in significant performance improvement in comparison to using the adversarial training data, e.g., increase of 3{\%} and 13{\%} in Recall@1 score, respectively.",
}
% __index_level_0__: 22,867
@inproceedings{sastre-martinez-nugent-2022-inferring,
    title = "Inferring Ranked Dialog Flows from Human-to-Human Conversations",
    author = "Sastre Martinez, Javier Miguel and Nugent, Aisling",
    editor = "Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej",
    booktitle = "Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue",
    month = sep,
    year = "2022",
    address = "Edinburgh, UK",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sigdial-1.31/",
    doi = "10.18653/v1/2022.sigdial-1.31",
    pages = "312--324",
    abstract = "We present a novel technique to infer ranked dialog flows from human-to-human conversations that can be used as an initial conversation design or to analyze the complexities of the conversations in a call center. This technique aims to identify, for a given service, the most common sequences of questions and responses from the human agent. Multiple dialog flows for different ranges of top paths can be produced so they can be reviewed in rank order and be refined in successive iterations until additional flows have the desired level of detail. The system ingests historical conversations and efficiently condenses them into a weighted deterministic finite-state automaton, which is then used to export dialog flow designs that can be readily used by conversational agents. A proof-of-concept experiment was conducted with the MultiWoz data set, a sample output is presented and future directions are outlined.",
}
% __index_level_0__: 22,868
@inproceedings{chi-rudnicky-2022-structured,
    title = "Structured Dialogue Discourse Parsing",
    author = "Chi, Ta-Chung and Rudnicky, Alexander",
    editor = "Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej",
    booktitle = "Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue",
    month = sep,
    year = "2022",
    address = "Edinburgh, UK",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sigdial-1.32/",
    doi = "10.18653/v1/2022.sigdial-1.32",
    pages = "325--335",
    abstract = "Dialogue discourse parsing aims to uncover the internal structure of a multi-participant conversation by finding all the discourse \textit{links} and corresponding \textit{relations}. Previous work either treats this task as a series of independent multiple-choice problems, in which the link existence and relations are decoded separately, or the encoding is restricted to only local interaction, ignoring the holistic structural information. In contrast, we propose a principled method that improves upon previous work from two perspectives: encoding and decoding. From the encoding side, we perform structured encoding on the adjacency matrix followed by the matrix-tree learning algorithm, where all discourse links and relations in the dialogue are jointly optimized based on latent tree-level distribution. From the decoding side, we perform structured inference using the modified Chu-Liu-Edmonds algorithm, which explicitly generates the labeled multi-root non-projective spanning tree that best captures the discourse structure. In addition, unlike in previous work, we do not rely on hand-crafted features; this improves the model's robustness. Experiments show that our method achieves new state-of-the-art, surpassing the previous model by 2.3 on STAC and 1.5 on Molweni (F1 scores).",
}
% __index_level_0__: 22,869
@inproceedings{jacqmin-etal-2022-follow,
    title = "{\textquotedblleft}Do you follow me?{\textquotedblright}: A Survey of Recent Approaches in Dialogue State Tracking",
    author = "Jacqmin, L{\'e}o and Rojas Barahona, Lina M. and Favre, Benoit",
    editor = "Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej",
    booktitle = "Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue",
    month = sep,
    year = "2022",
    address = "Edinburgh, UK",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sigdial-1.33/",
    doi = "10.18653/v1/2022.sigdial-1.33",
    pages = "336--350",
    abstract = "While communicating with a user, a task-oriented dialogue system has to track the user's needs at each turn according to the conversation history. This process called dialogue state tracking (DST) is crucial because it directly informs the downstream dialogue policy. DST has received a lot of interest in recent years with the text-to-text paradigm emerging as the favored approach. In this review paper, we first present the task and its associated datasets. Then, considering a large number of recent publications, we identify highlights and advances of research in 2021-2022. Although neural approaches have enabled significant progress, we argue that some critical aspects of dialogue systems such as generalizability are still underexplored. To motivate future studies, we propose several research avenues.",
}
inproceedings
ye-etal-2022-multiwoz
{M}ulti{WOZ} 2.4: A Multi-Domain Task-Oriented Dialogue Dataset with Essential Annotation Corrections to Improve State Tracking Evaluation
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.34/
Ye, Fanghua and Manotumruksa, Jarana and Yilmaz, Emine
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
351--360
The MultiWOZ 2.0 dataset has greatly stimulated the research of task-oriented dialogue systems. However, its state annotations contain substantial noise, which hinders a proper evaluation of model performance. To address this issue, massive efforts were devoted to correcting the annotations. Three improved versions (i.e., MultiWOZ 2.1-2.3) have then been released. Nonetheless, there are still plenty of incorrect and inconsistent annotations. This work introduces MultiWOZ 2.4, which refines the annotations in the validation set and test set of MultiWOZ 2.1. The annotations in the training set remain unchanged (same as MultiWOZ 2.1) to elicit robust and noise-resilient model training. We benchmark eight state-of-the-art dialogue state tracking models on MultiWOZ 2.4. All of them demonstrate much higher performance than on MultiWOZ 2.1.
null
null
10.18653/v1/2022.sigdial-1.34
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,870
inproceedings
threlkeld-de-ruiter-2022-duration
The Duration of a Turn Cannot be Used to Predict When It Ends
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.35/
Threlkeld, Charles and de Ruiter, Jp
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
361--367
Turn taking in conversation is a complex process. We still don't know how listeners are able to anticipate the end of a speaker's turn. Previous work focuses on prosodic, semantic, and non-verbal cues that a turn is coming to an end. In this paper, we look at simple measures of duration {---} time, word count, and syllable count {---} to see if we can exploit the duration of turns as a cue. We find strong evidence that these metrics are useless.
null
null
10.18653/v1/2022.sigdial-1.35
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,871
inproceedings
tran-litman-2022-getting
Getting Better Dialogue Context for Knowledge Identification by Leveraging Document-level Topic Shift
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.36/
Tran, Nhat and Litman, Diane
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
368--375
To build a goal-oriented dialogue system that can generate responses given a knowledge base, identifying the relevant pieces of information to be grounded in is vital. When the number of documents in the knowledge base is large, retrieval approaches are typically used to identify the top relevant documents. However, most prior work simply uses an entire dialogue history to guide retrieval, rather than exploiting a dialogue's topical structure. In this work, we examine the importance of building the proper contextualized dialogue history when document-level topic shifts are present. Our results suggest that excluding irrelevant turns from the dialogue history (e.g., excluding turns not grounded in the same document as the current turn) leads to better retrieval results. We also propose a cascading approach utilizing the topical nature of a knowledge-grounded conversation to further manipulate the dialogue history used as input to the retrieval models.
null
null
10.18653/v1/2022.sigdial-1.36
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,872
inproceedings
chi-etal-2022-neural
Neural Generation Meets Real People: Building a Social, Informative Open-Domain Dialogue Agent
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.37/
Chi, Ethan A. and Paranjape, Ashwin and See, Abigail and Chiam, Caleb and Chang, Trenton and Kenealy, Kathleen and Lim, Swee Kiat and Hardy, Amelia and Rastogi, Chetanya and Li, Haojun and Iyabor, Alexander and He, Yutong and Sowrirajan, Hari and Qi, Peng and Sadagopan, Kaushik Ram and Minh Phu, Nguyet and Soylu, Dilara and Tang, Jillian and Narayan, Avanika and Campagna, Giovanni and Manning, Christopher
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
376--395
We present Chirpy Cardinal, an open-domain social chatbot. Aiming to be both informative and conversational, our bot chats with users in an authentic, emotionally intelligent way. By integrating controlled neural generation with scaffolded, hand-written dialogue, we let both the user and bot take turns driving the conversation, producing an engaging and socially fluent experience. Deployed in the fourth iteration of the Alexa Prize Socialbot Grand Challenge, Chirpy Cardinal handled thousands of conversations per day, placing second out of nine bots with an average user rating of 3.58/5.
null
null
10.18653/v1/2022.sigdial-1.37
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,873
inproceedings
bhatnagar-etal-2022-deepcon
{D}eep{C}on: An End-to-End Multilingual Toolkit for Automatic Minuting of Multi-Party Dialogues
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.38/
Bhatnagar, Aakash and Bhavsar, Nidhir and Singh, Muskaan
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
396--402
In this paper, we present our minuting tool DeepCon, an end-to-end toolkit for minuting the multiparty dialogues of meetings. It provides technological support for (multilingual) communication and collaboration, with a specific focus on Natural Language Processing (NLP) technologies: Automatic Speech Recognition (ASR), Machine Translation (MT), Automatic Minuting (AM), Topic Modelling (TM) and Named Entity Recognition (NER). To the best of our knowledge, there is no such tool available. Further, this tool follows a microservice architecture, and we release the tool as open-source, deployed on Amazon Web Services (AWS). We release our tool open-source here \url{http://www.deepcon.in}.
null
null
10.18653/v1/2022.sigdial-1.38
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,874
inproceedings
mitra-etal-2022-icm
{ICM} : Intent and Conversational Mining from Conversation Logs
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.39/
Mitra, Sayantan and Ramnani, Roshni and Ranjan, Sumit and Sengupta, Shubhashis
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
403--406
Building conversation agents requires a large amount of manual effort in creating training data for intents / entities as well as mapping out extensive conversation flows. In this demonstration, we present ICM (Intent and conversation Mining), a tool which can be used to analyze existing conversation logs and help a bot designer analyze customer intents, train a custom intent model as well as map and optimize conversation flows. The tool can be used for first time deployment or subsequent deployments of chatbots.
null
null
10.18653/v1/2022.sigdial-1.39
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,875
inproceedings
liu-chen-2022-entity
Entity-based De-noising Modeling for Controllable Dialogue Summarization
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.40/
Liu, Zhengyuan and Chen, Nancy
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
407--418
Although fine-tuning pre-trained backbones produces fluent and grammatically-correct text in various language generation tasks, factual consistency in abstractive summarization remains challenging. This challenge is especially thorny for dialogue summarization, where neural models often make inaccurate associations between personal named entities and their respective actions. To tackle this type of hallucination, we present an entity-based de-noising model via text perturbation on reference summaries. We then apply this proposed approach in beam search validation, conditional training augmentation, and inference post-editing. Experimental results on the SAMSum corpus show that state-of-the-art models equipped with our proposed method achieve generation quality improvement in both automatic evaluation and human assessment.
null
null
10.18653/v1/2022.sigdial-1.40
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,876
inproceedings
svikhnushina-etal-2022-ieval
i{E}val: Interactive Evaluation Framework for Open-Domain Empathetic Chatbots
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.41/
Svikhnushina, Ekaterina and Filippova, Anastasiia and Pu, Pearl
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
419--431
Building an empathetic chatbot is an important objective in dialog generation research, with evaluation being one of the most challenging parts. By empathy, we mean the ability to understand and relate to the speakers' emotions, and respond to them appropriately. Human evaluation has been considered as the current standard for measuring the performance of open-domain empathetic chatbots. However, existing evaluation procedures suffer from a number of limitations we try to address in our current work. In this paper, we describe iEval, a novel interactive evaluation framework where the person chatting with the bots also rates them on different conversational aspects, as well as ranking them, resulting in greater consistency of the scores. We use iEval to benchmark several state-of-the-art empathetic chatbots, allowing us to discover some intricate details in their performance in different emotional contexts. Based on these results, we present key implications for further improvement of such chatbots. To facilitate other researchers using the iEval framework, we will release our dataset consisting of collected chat logs and human scores.
null
null
10.18653/v1/2022.sigdial-1.41
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,877
inproceedings
adiba-etal-2022-unsupervised
Unsupervised Domain Adaptation on Question-Answering System with Conversation Data
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.42/
Adiba, Amalia and Homma, Takeshi and Sogawa, Yasuhiro
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
432--441
Machine reading comprehension (MRC) is a task for question answering that finds answers to questions from documents of knowledge. Most studies on the domain adaptation of MRC require documents describing knowledge of the target domain. However, it is sometimes difficult to prepare such documents. The goal of this study was to transfer an MRC model to another domain without documents in an unsupervised manner. Therefore, unlike previous studies, we propose a domain-adaptation framework of MRC under the assumption that the only available data in the target domain are human conversations between a user asking questions and an expert answering the questions. The framework consists of three processes: (1) training an MRC model on the source domain, (2) converting conversations into documents using document generation (DG), a task we developed for retrieving important information from several human conversations and converting it to an abstractive document text, and (3) transferring the MRC model to the target domain with unsupervised domain adaptation. To the best of our knowledge, our research is the first to use conversation data to train MRC models in an unsupervised manner. We show that the MRC model successfully obtains question-answering ability from conversations in the target domain.
null
null
10.18653/v1/2022.sigdial-1.42
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,878
inproceedings
chen-etal-2022-unidu
{U}ni{DU}: Towards A Unified Generative Dialogue Understanding Framework
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.43/
Chen, Zhi and Chen, Lu and Chen, Bei and Qin, Libo and Liu, Yuncong and Zhu, Su and Lou, Jian-Guang and Yu, Kai
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
442--455
With the development of pre-trained language models, remarkable success has been witnessed in dialogue understanding (DU). However, current DU approaches usually employ independent models for each distinct DU task, without considering shared knowledge across different DU tasks. In this paper, we propose a unified generative dialogue understanding framework, named UniDU, to achieve effective information exchange across diverse DU tasks. Here, we reformulate all DU tasks into a unified prompt-based generative model paradigm. More importantly, a novel model-agnostic multi-task training strategy (MATS) is introduced to dynamically adapt the weights of diverse tasks for best knowledge sharing during training, based on the nature and available data of each task. Experiments on ten DU datasets covering five fundamental DU tasks show that the proposed UniDU framework largely outperforms task-specific well-designed methods on all tasks. MATS also reveals the knowledge sharing structure of these tasks. Finally, UniDU obtains promising performance on unseen dialogue domains, showing great potential for generalization.
null
null
10.18653/v1/2022.sigdial-1.43
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,879
inproceedings
cai-etal-2022-advancing
Advancing Semi-Supervised Task Oriented Dialog Systems by {JSA} Learning of Discrete Latent Variable Models
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.44/
Cai, Yucheng and Liu, Hong and Ou, Zhijian and Huang, Yi and Feng, Junlan
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
456--467
Developing semi-supervised task-oriented dialog (TOD) systems by leveraging unlabeled dialog data has attracted increasing interests. For semi-supervised learning of latent state TOD models, variational learning is often used, but suffers from the annoying high-variance of the gradients propagated through discrete latent variables and the drawback of indirectly optimizing the target log-likelihood. Recently, an alternative algorithm, called joint stochastic approximation (JSA), has emerged for learning discrete latent variable models with impressive performances. In this paper, we propose to apply JSA to semi-supervised learning of the latent state TOD models, which is referred to as JSA-TOD. To our knowledge, JSA-TOD represents the first work in developing JSA based semi-supervised learning of discrete latent variable conditional models for such long sequential generation problems like in TOD systems. Extensive experiments show that JSA-TOD significantly outperforms its variational learning counterpart. Remarkably, semi-supervised JSA-TOD using 20{\%} labels performs close to the full-supervised baseline on MultiWOZ2.1.
null
null
10.18653/v1/2022.sigdial-1.44
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,880
inproceedings
larson-leach-2022-redwood
Redwood: Using Collision Detection to Grow a Large-Scale Intent Classification Dataset
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.45/
Larson, Stefan and Leach, Kevin
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
468--477
Dialog systems must be capable of incorporating new skills via updates over time in order to reflect new use cases or deployment scenarios. Similarly, developers of such ML-driven systems need to be able to add new training data to an already-existing dataset to support these new skills. In intent classification systems, problems can arise if training data for a new skill's intent overlaps semantically with an already-existing intent. We call such cases collisions. This paper introduces the task of intent collision detection between multiple datasets for the purposes of growing a system's skillset. We introduce several methods for detecting collisions, and evaluate our methods on real datasets that exhibit collisions. To highlight the need for intent collision detection, we show that model performance suffers if new data is added in such a way that does not arbitrate colliding intents. Finally, we use collision detection to construct and benchmark a new dataset, Redwood, which is composed of 451 categories from 13 original intent classification datasets, making it the largest publicly available intent classification benchmark.
null
null
10.18653/v1/2022.sigdial-1.45
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,881
inproceedings
lubis-etal-2022-dialogue
Dialogue Evaluation with Offline Reinforcement Learning
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.46/
Lubis, Nurul and Geishauser, Christian and Lin, Hsien-chin and van Niekerk, Carel and Heck, Michael and Feng, Shutong and Gasic, Milica
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
478--489
Task-oriented dialogue systems aim to fulfill user goals through natural language interactions. They are ideally evaluated with human users, which however is unattainable to do at every iteration of the development phase. Simulated users could be an alternative, however their development is nontrivial. Therefore, researchers resort to offline metrics on existing human-human corpora, which are more practical and easily reproducible. They are unfortunately limited in reflecting real performance of dialogue systems. BLEU for instance is poorly correlated with human judgment, and existing corpus-based metrics such as success rate overlook dialogue context mismatches. There is still a need for a reliable metric for task-oriented systems with good generalization and strong correlation with human judgements. In this paper, we propose the use of offline reinforcement learning for dialogue evaluation based on static data. Such an evaluator is typically called a critic and utilized for policy optimization. We go one step further and show that offline RL critics can be trained for any dialogue system as external evaluators, allowing dialogue performance comparisons across various types of systems. This approach has the benefit of being corpus- and model-independent, while attaining strong correlation with human judgements, which we confirm via an interactive user trial.
null
null
10.18653/v1/2022.sigdial-1.46
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,882
inproceedings
park-etal-2022-disruptive
Disruptive Talk Detection in Multi-Party Dialogue within Collaborative Learning Environments with a Regularized User-Aware Network
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.47/
Park, Kyungjin and Sohn, Hyunwoo and Min, Wookhee and Mott, Bradford and Glazewski, Krista and Hmelo-Silver, Cindy E. and Lester, James
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
490--499
Accurate detection and appropriate handling of disruptive talk in multi-party dialogue is essential for users to achieve shared goals. In collaborative game-based learning environments, detecting and attending to disruptive talk holds significant potential since it can cause distraction and produce negative learning experiences for students. We present a novel attention-based user-aware neural architecture for disruptive talk detection that uses a sequence dropout-based regularization mechanism. The disruptive talk detection models are evaluated with multi-party dialogue collected from 72 middle school students who interacted with a collaborative game-based learning environment. Our proposed disruptive talk detection model significantly outperforms competitive baseline approaches and shows significant potential for helping to support effective collaborative learning experiences.
null
null
10.18653/v1/2022.sigdial-1.47
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,883
inproceedings
stevens-guille-etal-2022-generating
Generating Discourse Connectives with Pre-trained Language Models: Conditioning on Discourse Relations Helps Reconstruct the {PDTB}
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.48/
Stevens-Guille, Symon and Maskharashvili, Aleksandre and Li, Xintong and White, Michael
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
500--515
We report results of experiments using BART (Lewis et al., 2019) and the Penn Discourse Tree Bank (Webber et al., 2019) (PDTB) to generate texts with correctly realized discourse relations. We address a question left open by previous research (Yung et al., 2021; Ko and Li, 2020) concerning whether conditioning the model on the intended discourse relation{---}which corresponds to adding explicit discourse relation information into the input to the model{---}improves its performance. Our results suggest that including discourse relation information in the input of the model significantly improves the consistency with which it produces a correctly realized discourse relation in the output. We compare our models' performance to known results concerning the discourse structures found in written text and their possible explanations in terms of discourse interpretation strategies hypothesized in the psycholinguistics literature. Our findings suggest that natural language generation models based on current pre-trained Transformers will benefit from infusion with discourse level information if they aim to construct discourses with the intended relations.
null
null
10.18653/v1/2022.sigdial-1.48
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,884
inproceedings
zhang-etal-2022-toward-self
Toward Self-Learning End-to-End Task-oriented Dialog Systems
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.49/
Zhang, Xiaoying and Peng, Baolin and Gao, Jianfeng and Meng, Helen
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
516--530
End-to-end task bots are typically learned over a static and usually limited-size corpus. However, when deployed in dynamic, changing, and open environments to interact with users, task bots tend to fail when confronted with data that deviate from the training corpus, i.e., out-of-distribution samples. In this paper, we study the problem of automatically adapting task bots to changing environments by learning from human-bot interactions with minimum or zero human annotations. We propose SL-Agent, a novel self-learning framework for building end-to-end task bots. SL-Agent consists of a dialog model and a pre-trained reward model to predict the quality of an agent response. It enables task bots to automatically adapt to changing environments by learning from the unlabeled human-bot dialog logs accumulated after deployment via reinforcement learning with the incorporated reward model. Experimental results on four well-studied dialog tasks show the effectiveness of SL-Agent to automatically adapt to changing environments, using both automatic and human evaluations. We will release code and data for further research.
null
null
10.18653/v1/2022.sigdial-1.49
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,885
inproceedings
stoyanchev-etal-2022-combining
Combining Structured and Unstructured Knowledge in an Interactive Search Dialogue System
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.50/
Stoyanchev, Svetlana and Pandey, Suraj and Keizer, Simon and Braunschweiler, Norbert and Doddipatla, Rama Sanand
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
531--540
Users of interactive search dialogue systems specify their preferences with natural language utterances. However, a schema-driven system is limited to handling the preferences that correspond to the predefined database content. In this work, we present a methodology for extending a schema-driven interactive search dialogue system with the ability to handle unconstrained user preferences. Using unsupervised semantic similarity metrics and the text snippets associated with the search items, the system identifies suitable items for the user's unconstrained natural language query. In crowd-sourced evaluation, the users chat with our extended restaurant search system. Based on objective metrics and subjective user ratings, we demonstrate the feasibility of using an unsupervised low latency approach to extend a schema-driven search dialogue system to handle unconstrained user preferences.
null
null
10.18653/v1/2022.sigdial-1.50
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,886
inproceedings
ekstedt-skantze-2022-much
How Much Does Prosody Help Turn-taking? Investigations using Voice Activity Projection Models
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.51/
Ekstedt, Erik and Skantze, Gabriel
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
541--551
Turn-taking is a fundamental aspect of human communication and can be described as the ability to take turns, project upcoming turn shifts, and supply backchannels at appropriate locations throughout a conversation. In this work, we investigate the role of prosody in turn-taking using the recently proposed Voice Activity Projection model, which incrementally models the upcoming speech activity of the interlocutors in a self-supervised manner, without relying on explicit annotation of turn-taking events, or the explicit modeling of prosodic features. Through manipulation of the speech signal, we investigate how these models implicitly utilize prosodic information. We show that these systems learn to utilize various prosodic aspects of speech both on aggregate quantitative metrics of long-form conversations and on single utterances specifically designed to depend on prosody.
null
null
10.18653/v1/2022.sigdial-1.51
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,887
inproceedings
karadzhov-etal-2022-makes
What makes you change your mind? An empirical investigation in online group decision-making conversations
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.52/
Karadzhov, Georgi and Stafford, Tom and Vlachos, Andreas
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
552--563
People leverage group discussions to collaborate in order to solve complex tasks, e.g. in project meetings or hiring panels. By doing so, they engage in a variety of conversational strategies where they try to convince each other of the best approach and ultimately reach a decision. In this work, we investigate methods for detecting what makes someone change their mind. To this end, we leverage a recently introduced dataset containing group discussions of people collaborating to solve a task. To find out what makes someone change their mind, we incorporate various techniques such as neural text classification and language-agnostic change point detection. Evaluation of these methods shows that while the task is not trivial, the best way to approach it is using a language-aware model with learning-to-rank training. Finally, we examine the cues that the models develop as indicative of the cause of a change of mind.
null
null
10.18653/v1/2022.sigdial-1.52
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,888
inproceedings
vukovic-etal-2022-dialogue
Dialogue Term Extraction using Transfer Learning and Topological Data Analysis
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.53/
Vukovic, Renato and Heck, Michael and Ruppik, Benjamin and van Niekerk, Carel and Zibrowius, Marcus and Gasic, Milica
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
564--581
Goal oriented dialogue systems were originally designed as a natural language interface to a fixed data-set of entities that users might inquire about, further described by domain, slots and values. As we move towards adaptable dialogue systems where knowledge about domains, slots and values may change, there is an increasing need to automatically extract these terms from raw dialogues or related non-dialogue data on a large scale. In this paper, we take an important step in this direction by exploring different features that can enable systems to discover realisations of domains, slots and values in dialogues in a purely data-driven fashion. The features that we examine stem from word embeddings, language modelling features, as well as topological features of the word embedding space. To examine the utility of each feature set, we train a seed model based on the widely used MultiWOZ data-set. Then, we apply this model to a different corpus, the Schema-guided dialogue data-set. Our method outperforms the previously proposed approach that relies solely on word embeddings. We also demonstrate that each of the features is responsible for discovering different kinds of content. We believe our results warrant further research towards ontology induction, and continued harnessing of topological data analysis for dialogue and natural language processing research.
null
null
10.18653/v1/2022.sigdial-1.53
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,889
inproceedings
khojah-etal-2022-evaluating
Evaluating N-best Calibration of Natural Language Understanding for Dialogue Systems
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.54/
Khojah, Ranim and Berman, Alexander and Larsson, Staffan
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
582--594
A Natural Language Understanding (NLU) component can be used in a dialogue system to perform intent classification, returning an N-best list of hypotheses with corresponding confidence estimates. We perform an in-depth evaluation of 5 NLUs, focusing on confidence estimation. We measure and visualize calibration for the 10 best hypotheses on model level and rank level, and also measure classification performance. The results indicate a trade-off between calibration and performance. In particular, Rasa (with Sklearn classifier) had the best calibration but the lowest performance scores, while Watson Assistant had the best performance but a poor calibration.
null
null
10.18653/v1/2022.sigdial-1.54
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,890
inproceedings
mehri-etal-2022-lad
{LAD}: Language Models as Data for Zero-Shot Dialog
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.55/
Mehri, Shikib and Altun, Yasemin and Eskenazi, Maxine
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
595--604
To facilitate zero-shot generalization in task-oriented dialog, this paper proposes Language Models as Data (LAD). LAD is a paradigm for creating diverse and accurate synthetic data which conveys the necessary structural constraints and can be used to train a downstream neural dialog model. LAD leverages GPT-3 to induce linguistic diversity. LAD achieves significant performance gains in zero-shot settings on intent prediction (+15{\%}), slot filling (+31.4 F-1) and next action prediction (+10 F-1). Furthermore, an interactive human evaluation shows that training with LAD is competitive with training on human dialogs.
null
null
10.18653/v1/2022.sigdial-1.55
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,891
inproceedings
jin-etal-2022-improving
Improving Bot Response Contradiction Detection via Utterance Rewriting
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.56/
Jin, Di and Liu, Sijia and Liu, Yang and Hakkani-Tur, Dilek
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
605--614
Though chatbots based on large neural models can often produce fluent responses in open domain conversations, one salient error type is contradiction or inconsistency with the preceding conversation turns. Previous work has treated contradiction detection in bot responses as a task similar to natural language inference, e.g., detect the contradiction between a pair of bot utterances. However, utterances in conversations may contain co-references or ellipsis, and using these utterances as is may not always be sufficient for identifying contradictions. This work aims to improve the contradiction detection via rewriting all bot utterances to restore co-references and ellipsis. We curated a new dataset for utterance rewriting and built a rewriting model on it. We empirically demonstrate that this model can produce satisfactory rewrites to make bot utterances more complete. Furthermore, using rewritten utterances improves contradiction detection performance significantly, e.g., the AUPR and joint accuracy scores (detecting contradiction along with evidence) increase by 6.5{\%} and 4.5{\%} (absolute increase), respectively.
null
null
10.18653/v1/2022.sigdial-1.56
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,892
inproceedings
asano-etal-2022-comparison
Comparison of Lexical Alignment with a Teachable Robot in Human-Robot and Human-Human-Robot Interactions
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.57/
Asano, Yuya and Litman, Diane and Yu, Mingzhi and Lobczowski, Nikki and Nokes-Malach, Timothy and Kovashka, Adriana and Walker, Erin
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
615--622
Speakers build rapport in the process of aligning conversational behaviors with each other. Rapport engendered with a teachable agent while instructing domain material has been shown to promote learning. Past work on lexical alignment in the field of education suffers from limitations in both the measures used to quantify alignment and the types of interactions in which alignment with agents has been studied. In this paper, we apply alignment measures based on a data-driven notion of shared expressions (possibly composed of multiple words) and compare alignment in one-on-one human-robot (H-R) interactions with the H-R portions of collaborative human-human-robot (H-H-R) interactions. We find that students in the H-R setting align with a teachable robot more than in the H-H-R setting and that the relationship between lexical alignment and rapport is more complex than what is predicted by previous theoretical and empirical work.
null
null
10.18653/v1/2022.sigdial-1.57
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,893
inproceedings
lin-etal-2022-trend
{TREND}: Trigger-Enhanced Relation-Extraction Network for Dialogues
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.58/
Lin, Po-Wei and Su, Shang-Yu and Chen, Yun-Nung
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
623--629
The goal of dialogue relation extraction (DRE) is to identify the relation between two entities in a given dialogue. During conversations, speakers may expose their relations to certain entities through explicit or implicit clues; such evidence is called {\textquotedblleft}triggers{\textquotedblright}. However, trigger annotations may not always be available for the target data, so it is challenging to leverage such information for enhancing the performance. Therefore, this paper proposes to learn how to identify triggers from the data with trigger annotations and then transfers the trigger-finding capability to other datasets for better performance. The experiments show that the proposed approach is capable of improving the relation extraction performance on unseen relations and also demonstrate the transferability of our proposed trigger-finding model across different domains and datasets.
null
null
10.18653/v1/2022.sigdial-1.58
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,894
inproceedings
pan-etal-2022-user
User Satisfaction Modeling with Domain Adaptation in Task-oriented Dialogue Systems
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.59/
Pan, Yan and Ma, Mingyang and Pflugfelder, Bernhard and Groh, Georg
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
630--636
User Satisfaction Estimation (USE) is crucial in helping measure the quality of a task-oriented dialogue system. However, the complex nature of implicit responses poses challenges in detecting user satisfaction, and most datasets are limited in size or not available to the public due to user privacy policies. Unlike task-oriented dialogue, large-scale annotated chitchat with emotion labels is publicly available. Therefore, we present a novel user satisfaction model with domain adaptation (USMDA) to utilize this chitchat. We adopt a dialogue Transformer encoder to capture contextual features from the dialogue, and we reduce domain discrepancy to learn dialogue-related invariant features. Moreover, USMDA jointly learns satisfaction signals in the chitchat context with user satisfaction estimation, and user actions in task-oriented dialogue with dialogue action recognition. Experimental results on two benchmarks show that our proposed framework for the USE task outperforms existing unsupervised domain adaptation methods. To the best of our knowledge, this is the first work to study user satisfaction estimation with unsupervised domain adaptation from chitchat to task-oriented dialogue.
null
null
10.18653/v1/2022.sigdial-1.59
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,895
inproceedings
sato-etal-2022-n
N-best Response-based Analysis of Contradiction-awareness in Neural Response Generation Models
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.60/
Sato, Shiki and Akama, Reina and Ouchi, Hiroki and Tokuhisa, Ryoko and Suzuki, Jun and Inui, Kentaro
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
637--644
Avoiding the generation of responses that contradict the preceding context is a significant challenge in dialogue response generation. One feasible method is post-processing, such as filtering out contradicting responses from a resulting n-best response list. In this scenario, the quality of the n-best list considerably affects the occurrence of contradictions because the final response is chosen from this n-best list. This study quantitatively analyzes the contextual contradiction-awareness of neural response generation models using the consistency of the n-best lists. Particularly, we used polar questions as stimulus inputs for concise and quantitative analyses. Our tests illustrate the contradiction-awareness of recent neural response generation models and methodologies, followed by a discussion of their properties and limitations.
null
null
10.18653/v1/2022.sigdial-1.60
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,896
inproceedings
gunson-etal-2022-visually
A Visually-Aware Conversational Robot Receptionist
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.61/
Gunson, Nancie and Hernandez Garcia, Daniel and Siei{\'n}ska, Weronika and Addlesee, Angus and Dondrup, Christian and Lemon, Oliver and Part, Jose L. and Yu, Yanchao
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
645--648
Socially Assistive Robots (SARs) have the potential to play an increasingly important role in a variety of contexts including healthcare, but most existing systems have very limited interactive capabilities. We will demonstrate a robot receptionist that not only supports task-based and social dialogue via natural spoken conversation but is also capable of visually grounded dialogue; able to perceive and discuss the shared physical environment (e.g. helping users to locate personal belongings or objects of interest). Task-based dialogues include check-in, navigation and FAQs about facilities, alongside social features such as chit-chat, access to the latest news and a quiz game to play while waiting. We also show how visual context (objects and their spatial relations) can be combined with linguistic representations of dialogue context, to support visual dialogue and question answering. We will demonstrate the system on a humanoid ARI robot, which is being deployed in a hospital reception area.
null
null
10.18653/v1/2022.sigdial-1.61
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,897
inproceedings
suglia-etal-2022-demonstrating
Demonstrating {EMMA}: Embodied {M}ulti{M}odal Agent for Language-guided Action Execution in 3{D} Simulated Environments
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.62/
Suglia, Alessandro and Hemanthage, Bhathiya and Nikandrou, Malvina and Pantazopoulos, Georgios and Parekh, Amit and Eshghi, Arash and Greco, Claudio and Konstas, Ioannis and Lemon, Oliver and Rieser, Verena
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
649--653
We demonstrate EMMA, an embodied multimodal agent which has been developed for the Alexa Prize SimBot challenge. The agent acts within a 3D simulated environment for household tasks. EMMA is a unified and multimodal generative model aimed at solving embodied tasks. In contrast to previous work, our approach treats multiple multimodal tasks as a single multimodal conditional text generation problem, where a model learns to output text given both language and visual input. Furthermore, we showcase that a single generative agent can solve tasks with visual inputs of varying length, such as answering questions about static images, or executing actions given a sequence of previous frames and dialogue utterances. The demo system will allow users to interact conversationally with EMMA in embodied dialogues in different 3D environments from the TEACh dataset.
null
null
10.18653/v1/2022.sigdial-1.62
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,898
inproceedings
gemmell-etal-2022-grillbot
{GRILLB}ot: A multi-modal conversational agent for complex real-world tasks
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.63/
Gemmell, Carlos and Rossetto, Federico and Mackie, Iain and Owoicho, Paul and Fischer, Sophie and Dalton, Jeff
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
654--658
We present GRILLBot, an open-source multi-modal task-oriented voice assistant to help users perform complex tasks, focusing on the domains of cooking and home improvement. GRILLBot curates and leverages web information extraction to build coverage over a broad range of tasks for which a user can receive guidance. To represent each task, we propose TaskGraphs as a dynamic graph unifying steps, requirements, and curated domain knowledge enabling contextual question answering, and detailed explanations. Multi-modal elements play a key role in GRILLBot both helping the user navigate through the task and enriching the experience with helpful videos and images that are automatically linked throughout the task. We leverage a contextual neural semantic parser to enable flexible navigation when interacting with the system by jointly encoding stateful information with the conversation history. GRILLBot enables dynamic and adaptable task planning and assistance for complex tasks by combining elements of task representations that incorporate text and structure, combined with neural models for search, question answering, and dialogue state management. GRILLBot competed in the Alexa prize TaskBot Challenge as one of the finalists.
null
null
10.18653/v1/2022.sigdial-1.63
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,899
inproceedings
kane-etal-2022-system
A System For Robot Concept Learning Through Situated Dialogue
Lemon, Oliver and Hakkani-Tur, Dilek and Li, Junyi Jessy and Ashrafzadeh, Arash and Garcia, Daniel Hern{\'a}ndez and Alikhani, Malihe and Vandyke, David and Du{\v{s}}ek, Ond{\v{r}}ej
sep
2022
Edinburgh, UK
Association for Computational Linguistics
https://aclanthology.org/2022.sigdial-1.64/
Kane, Benjamin and Gervits, Felix and Scheutz, Matthias and Marge, Matthew
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
659--662
Robots operating in unexplored environments with human teammates will need to learn unknown concepts on the fly. To this end, we demonstrate a novel system that combines a computational model of question generation with a cognitive robotic architecture. The model supports dynamic production of back-and-forth dialogue for concept learning given observations of an environment, while the architecture supports symbolic reasoning, action representation, one-shot learning and other capabilities for situated interaction. The system is able to learn about new concepts including objects, locations, and actions, using an underlying approach that is generalizable and scalable. We evaluate the system by comparing learning efficiency to a human baseline in a collaborative reference resolution task and show that the system is effective and efficient in learning new concepts, and that it can informatively generate explanations about its behavior.
null
null
10.18653/v1/2022.sigdial-1.64
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,900
inproceedings
kim-etal-2022-oh
Oh My Mistake!: Toward Realistic Dialogue State Tracking including Turnback Utterances
Ou, Zhijian and Feng, Junlan and Li, Juanzi
dec
2022
Abu Dhabi, Beijing (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.seretod-1.1/
Kim, Takyoung and Lee, Yukyung and Yoon, Hoonsang and Kang, Pilsung and Bang, Junseong and Kim, Misuk
Proceedings of the Towards Semi-Supervised and Reinforced Task-Oriented Dialog Systems (SereTOD)
1--12
The primary purpose of dialogue state tracking (DST), a critical component of an end-to-end conversational system, is to build a model that responds well to real-world situations. Although we often change our minds from time to time during ordinary conversations, current benchmark datasets do not adequately reflect such occurrences and instead consist of over-simplified conversations, in which no one changes their mind during a conversation. As the main question inspiring the present study, {\textquotedblleft}Are current benchmark datasets sufficiently diverse to handle casual conversations in which one changes their mind after a certain topic is over?{\textquotedblright} We found that the answer is {\textquotedblleft}No{\textquotedblright} because DST models cannot refer to previous user preferences when template-based turnback utterances are injected into the dataset. Even in the simplest mind-changing (turnback) scenario, the performance of DST models significantly degenerated. However, we found that this performance degeneration can be recovered when the turnback scenarios are explicitly designed in the training set, implying that the problem is not with the DST models but rather with the construction of the benchmark dataset.
null
null
10.18653/v1/2022.seretod-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,902
inproceedings
j-wang-etal-2022-globalpointer
A {G}lobal{P}ointer based Robust Approach for Information Extraction from Dialog Transcripts
Ou, Zhijian and Feng, Junlan and Li, Juanzi
dec
2022
Abu Dhabi, Beijing (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.seretod-1.2/
Wang, Yanbo J. and Chen, Sheng and Cai, Hengxing and Wei, Wei and Yan, Kuo and Sun, Zhe and Qin, Hui and Li, Yuming and Cai, Xiaochen
Proceedings of the Towards Semi-Supervised and Reinforced Task-Oriented Dialog Systems (SereTOD)
13--18
With the widespread popularisation of intelligent technology, task-based dialogue systems (TOD) are increasingly being applied to a wide variety of practical scenarios. As the key tasks in dialogue systems, named entity recognition and slot filling play a crucial role in the completeness and accuracy of information extraction. This paper is an evaluation paper for the SereTOD 2022 Workshop challenge (Track 1: Information extraction from dialog transcripts). We proposed a multi-model fusion approach based on GlobalPointer, combined with some optimisation tricks, and finally achieved an entity F1 of 60.73, an entity-slot-value triple F1 of 56, and an average F1 of 58.37, the highest score in the SereTOD 2022 Workshop challenge.
null
null
10.18653/v1/2022.seretod-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,903
inproceedings
wang-etal-2022-token
A Token-pair Framework for Information Extraction from Dialog Transcripts in {S}ere{TOD} Challenge
Ou, Zhijian and Feng, Junlan and Li, Juanzi
dec
2022
Abu Dhabi, Beijing (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.seretod-1.3/
Wang, Chenyue and Kong, Xiangxing and Huang, Mengzuo and Li, Feng and Xing, Jian and Zhang, Weidong and Zou, Wuhe
Proceedings of the Towards Semi-Supervised and Reinforced Task-Oriented Dialog Systems (SereTOD)
19--23
This paper describes our solution for SereTOD Challenge Track 1: Information extraction from dialog transcripts. We propose a token-pair framework to simultaneously identify entity and value mentions and link them into corresponding triples. As entity mentions are usually coreferent, we adopt a baseline model for coreference resolution. We exploit both annotated transcripts and unsupervised dialogs for training. With model ensemble and post-processing strategies, our system significantly outperforms the baseline solution and ranks first in triple F1 and third in entity F1.
null
null
10.18653/v1/2022.seretod-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,904
inproceedings
sreedhar-parisien-2022-prompt
Prompt Learning for Domain Adaptation in Task-Oriented Dialogue
Ou, Zhijian and Feng, Junlan and Li, Juanzi
dec
2022
Abu Dhabi, Beijing (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.seretod-1.4/
Sreedhar, Makesh Narsimhan and Parisien, Christopher
Proceedings of the Towards Semi-Supervised and Reinforced Task-Oriented Dialog Systems (SereTOD)
24--30
Conversation designers continue to face significant obstacles when creating production-quality task-oriented dialogue systems. The complexity and cost involved in schema development and data collection is often a major barrier for such designers, limiting their ability to create natural, user-friendly experiences. We frame the classification of user intent as the generation of a canonical form, a lightweight semantic representation using natural language. We show that canonical forms offer a promising alternative to traditional methods for intent classification. By tuning soft prompts for a frozen large language model, we show that canonical forms generalize very well to new, unseen domains in a zero- or few-shot setting. The method is also sample-efficient, reducing the complexity and effort of developing new task-oriented dialogue domains.
null
null
10.18653/v1/2022.seretod-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,905
inproceedings
wu-etal-2022-disentangling
Disentangling Confidence Score Distribution for Out-of-Domain Intent Detection with Energy-Based Learning
Ou, Zhijian and Feng, Junlan and Li, Juanzi
dec
2022
Abu Dhabi, Beijing (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.seretod-1.5/
Wu, Yanan and Zeng, Zhiyuan and He, Keqing and Mou, Yutao and Wang, Pei and Yan, Yuanmeng and Xu, Weiran
Proceedings of the Towards Semi-Supervised and Reinforced Task-Oriented Dialog Systems (SereTOD)
31--38
Detecting Out-of-Domain (OOD) or unknown intents from user queries is essential in a task-oriented dialog system. Traditional softmax-based confidence scores are susceptible to the overconfidence issue. In this paper, we propose a simple but strong energy-based score function to detect OOD where the energy scores of OOD samples are higher than IND samples. Further, given a small set of labeled OOD samples, we introduce an energy-based margin objective for supervised OOD detection to explicitly distinguish OOD samples from INDs. Comprehensive experiments and analysis prove our method helps disentangle confidence score distributions of IND and OOD data.
null
null
10.18653/v1/2022.seretod-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,906
inproceedings
zeng-etal-2022-semi
Semi-Supervised Knowledge-Grounded Pre-training for Task-Oriented Dialog Systems
Ou, Zhijian and Feng, Junlan and Li, Juanzi
dec
2022
Abu Dhabi, Beijing (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.seretod-1.6/
Zeng, Weihao and He, Keqing and Wang, Zechen and Fu, Dayuan and Dong, Guanting and Geng, Ruotong and Wang, Pei and Wang, Jingang and Sun, Chaobo and Wu, Wei and Xu, Weiran
Proceedings of the Towards Semi-Supervised and Reinforced Task-Oriented Dialog Systems (SereTOD)
39--47
Recent advances in neural approaches greatly improve task-oriented dialogue (TOD) systems which assist users to accomplish their goals. However, such systems rely on costly manually labeled dialogs which are not available in practical scenarios. In this paper, we present our models for Track 2 of the SereTOD 2022 challenge, which is the first challenge of building semi-supervised and reinforced TOD systems on a large-scale real-world Chinese TOD dataset, MobileCS. We build a knowledge-grounded dialog model to formulate dialog history and local KB as input and predict the system response. And we perform semi-supervised pre-training both on the labeled and unlabeled data. Our system achieves the first place both in the automatic evaluation and human interaction, especially with higher BLEU (+7.64) and Success (+13.6{\%}) than the second place.
null
null
10.18653/v1/2022.seretod-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,907
inproceedings
huang-etal-2022-cmcc
{CMCC}: A Comprehensive and Large-Scale Human-Human Dataset for Dialogue Systems
Ou, Zhijian and Feng, Junlan and Li, Juanzi
dec
2022
Abu Dhabi, Beijing (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.seretod-1.7/
Huang, Yi and Wu, Xiaoting and Chen, Si and Hu, Wei and Zhu, Qing and Feng, Junlan and Deng, Chao and Ou, Zhijian and Zhao, Jiangjiang
Proceedings of the Towards Semi-Supervised and Reinforced Task-Oriented Dialog Systems (SereTOD)
48--61
Dialogue modeling problems severely limit the real-world deployment of neural conversational models, and building a human-like dialogue agent is an extremely challenging task. Recently, data-driven models have become more and more prevalent and need a huge amount of conversation data. In this paper, we release around 100,000 dialogues, which come from real-world dialogue transcripts between real users and customer-service staff. We call this dataset CMCC (China Mobile Customer Care), which differs from existing dialogue datasets significantly in both size and nature. The dataset reflects several characteristics of human-human conversations, e.g., task-driven, care-oriented, and long-term dependency among the context. It also covers various dialogue types including task-oriented, chitchat and conversational recommendation in real-world scenarios. To our knowledge, CMCC is the largest real human-human spoken dialogue dataset and has dozens of times the data scale of others, which shall significantly promote the training and evaluation of dialogue modeling methods. The results of extensive experiments indicate that CMCC is challenging and needs further effort. We hope that this resource will allow for more effective models across various dialogue sub-problems to be built in the future.
null
null
10.18653/v1/2022.seretod-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,908
inproceedings
huang-etal-2022-state
State-Aware Adversarial Training for Utterance-Level Dialogue Generation
Ou, Zhijian and Feng, Junlan and Li, Juanzi
dec
2022
Abu Dhabi, Beijing (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.seretod-1.8/
Huang, Yi and Wu, Xiaoting and Hu, Wei and Feng, Junlan and Deng, Chao
Proceedings of the Towards Semi-Supervised and Reinforced Task-Oriented Dialog Systems (SereTOD)
62--74
Dialogue generation is a challenging problem because it not only requires us to model the context in a conversation but also to exploit it to generate a coherent and fluent utterance. This paper, aiming for a specific topic of this field, proposes an adversarial training based framework for utterance-level dialogue generation. Technically, we train an encoder-decoder generator simultaneously with a discriminative classifier that make the utterance approximate to the state-aware inputs. Experiments on MultiWoZ 2.0 and MultiWoZ 2.1 datasets show that our method achieves advanced improvements on both automatic and human evaluations, and on the effectiveness of our framework facing low-resource. We further explore the effect of fine-grained augmentations for downstream dialogue state tracking (DST) tasks. Experimental results demonstrate the high-quality data generated by our proposed framework improves the performance over state-of-the-art models.
null
null
10.18653/v1/2022.seretod-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,909
inproceedings
liu-etal-2022-information
Information Extraction and Human-Robot Dialogue towards Real-life Tasks: A Baseline Study with the {M}obile{CS} Dataset
Ou, Zhijian and Feng, Junlan and Li, Juanzi
dec
2022
Abu Dhabi, Beijing (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.seretod-1.9/
Liu, Hong and Peng, Hao and Ou, Zhijian and Li, Juanzi and Huang, Yi and Feng, Junlan
Proceedings of the Towards Semi-Supervised and Reinforced Task-Oriented Dialog Systems (SereTOD)
75--84
Recently, there has emerged a class of task-oriented dialogue (TOD) datasets collected through Wizard-of-Oz simulated games. However, Wizard-of-Oz data are in fact simulated data and thus are fundamentally different from real-life conversations, which are noisier and more casual. Recently, the SereTOD challenge was organized and released the MobileCS dataset, which consists of real-world dialog transcripts between real users and customer-service staff from China Mobile. Based on the MobileCS dataset, the SereTOD challenge has two tasks, not only evaluating the construction of the dialogue system itself, but also examining information extraction from dialog transcripts, which is crucial for building the knowledge base for TOD. This paper mainly presents a baseline study of the two tasks with the MobileCS dataset. We introduce how the two baselines are constructed, the problems encountered, and the results. We anticipate that the baselines can facilitate exciting future research to build human-robot dialogue systems for real-life tasks.
null
null
10.18653/v1/2022.seretod-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,910
inproceedings
liu-etal-2022-generative
A Generative User Simulator with {GPT}-based Architecture and Goal State Tracking for Reinforced Multi-Domain Dialog Systems
Ou, Zhijian and Feng, Junlan and Li, Juanzi
dec
2022
Abu Dhabi, Beijing (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.seretod-1.10/
Liu, Hong and Cai, Yucheng and Ou, Zhijian and Huang, Yi and Feng, Junlan
Proceedings of the Towards Semi-Supervised and Reinforced Task-Oriented Dialog Systems (SereTOD)
85--97
Building user simulators (USs) for reinforcement learning (RL) of task-oriented dialog systems (DSs) has gained more and more attention, but still faces several fundamental challenges. First, it is unclear whether we can leverage pretrained language models to design, for example, GPT-2 based USs, to catch up with and interact with the recently advanced GPT-2 based DSs. Second, an important ingredient in a US is that the user goal can be effectively incorporated and tracked; but how to flexibly integrate goal state tracking and develop an end-to-end trainable US for multiple domains has remained a challenge. In this work, we propose a generative user simulator (GUS) with a GPT-2 based architecture and goal state tracking towards addressing the above two challenges. Extensive experiments are conducted on MultiWOZ2.1. Different DSs are trained via RL with GUS, the classic agenda-based user simulator (ABUS) and other ablation simulators respectively, and are compared for cross-model evaluation, corpus-based evaluation and human evaluation. The GUS achieves superior results in all three evaluation tasks.
null
null
10.18653/v1/2022.seretod-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,911
inproceedings
chi-etal-2022-offline
Offline-to-Online Co-Evolutional User Simulator and Dialogue System
Ou, Zhijian and Feng, Junlan and Li, Juanzi
dec
2022
Abu Dhabi, Beijing (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.seretod-1.11/
Chi, Dafeng and Zhuang, Yuzheng and Mu, Yao and Wang, Bin and Bao, Jianzhu and Wang, Yasheng and Dong, Yuhan and Jiang, Xin and Liu, Qun and Hao, Jianye
Proceedings of the Towards Semi-Supervised and Reinforced Task-Oriented Dialog Systems (SereTOD)
98--113
Reinforcement learning (RL) has emerged as a promising approach to fine-tune offline pretrained GPT-2 models in task-oriented dialogue (TOD) systems. In order to obtain human-like online interactions while extending the usage of RL, building pretrained user simulators (US) along with dialogue systems (DS) and facilitating joint fine-tuning via RL has become prevalent. However, joint training brings a distributional shift problem caused by compounding exposure bias. Existing methods usually iteratively update the US and DS to ameliorate the ensuing non-stationarity problem, which could lead to sub-optimal policies and less sample efficiency. To take a step further in tackling the problem, we introduce an Offline-to-oNline Co-Evolutional (ONCE) framework, which enables bias-aware concurrent joint updates for RL-based fine-tuning while taking advantage of GPT-2 based end-to-end modeling of the US and DS. Extensive experiments demonstrate that ONCE builds high-quality loops of policy learning and dialogue data collection, and achieves state-of-the-art online and offline evaluation results on the MultiWOZ2.1 dataset. Open-sourced code will be implemented with MindSpore (MS, 2022) and released on our homepage.
null
null
10.18653/v1/2022.seretod-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,912
inproceedings
mickus-etal-2022-semeval
{S}emeval-2022 Task 1: {CODWOE} {--} Comparing Dictionaries and Word Embeddings
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.1/
Mickus, Timothee and Van Deemter, Kees and Constant, Mathieu and Paperno, Denis
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
1--14
Word embeddings have advanced the state of the art in NLP across numerous tasks. Understanding the contents of dense neural representations is of utmost interest to the computational semantics community. We propose to focus on relating these opaque word vectors with human-readable definitions, as found in dictionaries. This problem naturally divides into two subtasks: converting definitions into embeddings, and converting embeddings into definitions. This task was conducted in a multilingual setting, using comparable sets of embeddings trained homogeneously.
null
null
10.18653/v1/2022.semeval-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,914
inproceedings
wang-etal-2022-1cademy
1{C}ademy at {S}emeval-2022 Task 1: Investigating the Effectiveness of Multilingual, Multitask, and Language-Agnostic Tricks for the Reverse Dictionary Task
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.2/
Wang, Zhiyong and Zhang, Ge and Lashkarashvili, Nineli
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
15--22
This paper describes our system for the SemEval-2022 task of matching dictionary glosses to word embeddings. We focus on the Reverse Dictionary Track of the competition, which maps multilingual glosses to reconstructed vector representations. More specifically, models convert the input sentences to three types of embeddings: SGNS, Char, and Electra. We propose several experiments applying neural network cells, general multilingual and multi-task structures, and language-agnostic tricks to the task. We also provide comparisons over different types of word embeddings and ablation studies to suggest helpful strategies. Our initial transformer-based model achieves relatively low performance. However, trials on different retokenization methodologies indicate improved performance. Our proposed ELMo-based monolingual model achieves the highest outcome, and its multitask and multilingual varieties show competitive results as well.
null
null
10.18653/v1/2022.semeval-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,915
inproceedings
kong-etal-2022-blcu
{BLCU}-{ICALL} at {S}em{E}val-2022 Task 1: Cross-Attention Multitasking Framework for Definition Modeling
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.3/
Kong, Cunliang and Wang, Yujie and Chong, Ruining and Yang, Liner and Zhang, Hengyuan and Yang, Erhong and Huang, Yaping
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
23--28
This paper describes the BLCU-ICALL system used in the SemEval-2022 Task 1 Comparing Dictionaries and Word Embeddings, the Definition Modeling subtrack, achieving 1st on Italian, 2nd on Spanish and Russian, and 3rd on English and French. We propose a transformer-based multitasking framework to explore the task. The framework integrates multiple embedding architectures through the cross-attention mechanism, and captures the structure of glosses through a masking language model objective. Additionally, we also investigate a simple but effective model ensembling strategy to further improve the robustness. The evaluation results show the effectiveness of our solution. We release our code at: \url{https://github.com/blcuicall/SemEval2022-Task1-DM}.
null
null
10.18653/v1/2022.semeval-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,916
inproceedings
li-etal-2022-lingjing
{L}ing{J}ing at {S}em{E}val-2022 Task 1: Multi-task Self-supervised Pre-training for Multilingual Reverse Dictionary
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.4/
Li, Bin and Weng, Yixuan and Xia, Fei and He, Shizhu and Sun, Bin and Li, Shutao
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
29--35
This paper introduces the approach of Team LingJing`s experiments on SemEval-2022 Task 1 Comparing Dictionaries and Word Embeddings (CODWOE). This task aims at comparing two types of semantic descriptions and includes two sub-tasks: the definition modeling and reverse dictionary tracks. Our team focuses on the reverse dictionary track and adopts multi-task self-supervised pre-training for multilingual reverse dictionaries. Specifically, a randomly initialized mDeBERTa-base model is used to perform multi-task pre-training on the multilingual training datasets. The pre-training step is divided into two stages, namely the MLM pre-training stage and the contrastive pre-training stage. The experimental results show that the proposed method achieves good performance on the reverse dictionary track, where we rank 1st on the SGNS targets of the EN and RU languages. All the experimental code is open-sourced at \url{https://github.com/WENGSYX/Semeval}.
null
null
10.18653/v1/2022.semeval-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,917
inproceedings
korencic-grubisic-2022-irb
{IRB}-{NLP} at {S}em{E}val-2022 Task 1: Exploring the Relationship Between Words and Their Semantic Representations
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.5/
Koren{\v{c}}i{\'c}, Damir and Grubisic, Ivan
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
36--59
What is the relation between a word and its description, or a word and its embedding? Both descriptions and embeddings are semantic representations of words. But, what information from the original word remains in these representations? Or more importantly, which information about a word do these two representations share? Definition Modeling and Reverse Dictionary are two opposite learning tasks that address these questions. The goal of the Definition Modeling task is to investigate the power of information laying inside a word embedding to express the meaning of the word in a humanly understandable way {--} as a dictionary definition. Conversely, the Reverse Dictionary task explores the ability to predict word embeddings directly from its definition. In this paper, by tackling these two tasks, we are exploring the relationship between words and their semantic representations. We present our findings based on the descriptive, exploratory, and predictive data analysis conducted on the CODWOE dataset. We give a detailed overview of the systems that we designed for Definition Modeling and Reverse Dictionary tasks, and that achieved top scores on SemEval-2022 CODWOE challenge in several subtasks. We hope that our experimental results concerning the predictive models and the data analyses we provide will prove useful in future explorations of word representations and their relationships.
null
null
10.18653/v1/2022.semeval-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,918
inproceedings
srivastava-vemulapati-2022-tldr
{TLDR} at {S}em{E}val-2022 Task 1: Using Transformers to Learn Dictionaries and Representations
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.6/
Srivastava, Aditya and Vemulapati, Harsha Vardhan
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
60--67
We propose a pair of deep learning models, which employ unsupervised pretraining, attention mechanisms and contrastive learning for representation learning from dictionary definitions, and definition modeling from such representations. Our systems, the Transformers for Learning Dictionaries and Representations (TLDR), were submitted to the SemEval 2022 Task 1: Comparing Dictionaries and Word Embeddings (CODWOE), where they officially ranked first on the definition modeling subtask, and achieved competitive performance on the reverse dictionary subtask. In this paper we describe our methodology and analyse our system design hypotheses.
null
null
10.18653/v1/2022.semeval-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,919
inproceedings
ardoiz-etal-2022-mmg
{MMG} at {S}em{E}val-2022 Task 1: A Reverse Dictionary approach based on a review of the dataset from a lexicographic perspective
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.7/
Ardoiz, Alfonso and Ortega-Mart{\'i}n, Miguel and Garc{\'i}a-Sierra, {\'O}scar and {\'A}lvarez, Jorge and Arranz, Ignacio and Alonso, Adri{\'a}n
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
68--74
This paper presents a novel, linguistically driven system for the Spanish Reverse Dictionary task of SemEval-2022 Task 1. The aim of this task is the automatic generation of a word from its gloss. The conclusion is that the task results could improve if the quality of the dataset did as well, by incorporating high-quality lexicographic data. Therefore, in this paper we analyze the main gaps in the proposed dataset and describe how these limitations could be tackled.
null
null
10.18653/v1/2022.semeval-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,920
inproceedings
chen-zhao-2022-edinburgh
{E}dinburgh at {S}em{E}val-2022 Task 1: Jointly Fishing for Word Embeddings and Definitions
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.8/
Chen, Pinzhen and Zhao, Zheng
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
75--81
This paper presents a winning submission to the SemEval 2022 Task 1 on two sub-tasks: reverse dictionary and definition modelling. We leverage a recently proposed unified model with multi-task training. It utilizes data symmetrically and learns to tackle both tracks concurrently. Analysis shows that our system performs consistently on diverse languages, and works the best with sgns embeddings. Yet, char and electra carry intriguing properties. The two tracks' best results are always in differing subsets grouped by linguistic annotations. In this task, the quality of definition generation lags behind, and BLEU scores might be misleading.
null
null
10.18653/v1/2022.semeval-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,921
inproceedings
mukans-etal-2022-riga
{RIGA} at {S}em{E}val-2022 Task 1: Scaling Recurrent Neural Networks for {CODWOE} Dictionary Modeling
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.9/
Mukans, Eduards and Strazds, Gus and Barzdins, Guntis
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
82--87
Described are our two entries {\textquotedblleft}emukans{\textquotedblright} and {\textquotedblleft}guntis{\textquotedblright} for the definition modeling track of CODWOE SemEval-2022 Task 1. Our approach is based on careful scaling of a GRU recurrent neural network, which exhibits double descent of errors, corresponding to significant improvements also per human judgement. Our results are in the middle of the ranking table per official automatic metrics.
null
null
10.18653/v1/2022.semeval-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,922
inproceedings
cerniavski-stymne-2022-uppsala
{U}ppsala {U}niversity at {S}em{E}val-2022 Task 1: Can Foreign Entries Enhance an {E}nglish Reverse Dictionary?
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.10/
Cerniavski, Rafal and Stymne, Sara
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
88--93
We present the Uppsala University system for SemEval-2022 Task 1: Comparing Dictionaries and Word Embeddings (CODWOE). We explore the performance of multilingual reverse dictionaries as well as the possibility of utilizing annotated data in other languages to improve the quality of a reverse dictionary in the target language. We mainly focus on character-based embeddings. In our main experiment, we train multilingual models by combining the training data from multiple languages. In an additional experiment, using resources beyond the shared task, we use the training data in Russian and French to improve the English reverse dictionary using unsupervised embeddings alignment and machine translation. The results show that multilingual models occasionally but not consistently can outperform the monolingual baselines. In addition, we demonstrate an improvement of an English reverse dictionary using translated entries from the Russian training data set.
null
null
10.18653/v1/2022.semeval-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,923
inproceedings
bendahman-etal-2022-bl
{BL}.{R}esearch at {S}em{E}val-2022 Task 1: Deep networks for Reverse Dictionary using embeddings and {LSTM} autoencoders
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.11/
Bendahman, Nihed and Breton, Julien and Nicolaieff, Lina and Billami, Mokhtar Boumedyen and Bortolaso, Christophe and Miloudi, Youssef
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
94--100
This paper describes our two deep learning systems that competed at SemEval-2022 Task 1 {\textquotedblleft}CODWOE: Comparing Dictionaries and WOrd Embeddings{\textquotedblright}. We participated in the subtask for the reverse dictionary, which consists in generating vectors from glosses. We use sequential models that integrate several neural networks, ranging from embedding networks to Dense networks, Bidirectional Long Short-Term Memory (BiLSTM) networks and LSTM networks. All glosses have been preprocessed in order to consider the best representation form of the meanings of all words that appear. We achieved very competitive results in the reverse dictionary task, with second position in the English and French languages when using contextualized embeddings, and the same position for the English, French and Spanish languages when using char embeddings.
null
null
10.18653/v1/2022.semeval-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,924
inproceedings
tran-etal-2022-jsi
{JSI} at {S}em{E}val-2022 Task 1: {CODWOE} - Reverse Dictionary: Monolingual and cross-lingual approaches
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.12/
Tran, Thi Hong Hanh and Martinc, Matej and Purver, Matthew and Pollak, Senja
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
101--106
The reverse dictionary task is a sequence-to-vector task in which a gloss is provided as input, and the output must be a semantically matching word vector. The reverse dictionary is useful in practical applications such as solving the tip-of-the-tongue problem, helping new language learners, etc. In this paper, we evaluate the effect of a Transformer-based model with cross-lingual zero-shot learning to improve reverse dictionary performance. Our experiments are conducted on five languages in the CODWOE dataset, including English, French, Italian, Spanish, and Russian. Even though we did not achieve a good ranking in the CODWOE competition, we show that our work partially improves the current baseline from the organizers with a hypothesis on the impact of LSTM in monolingual, multilingual, and zero-shot learning. All the code is available at \url{https://github.com/honghanhh/codwoe2021}.
null
null
10.18653/v1/2022.semeval-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,925
inproceedings
tayyar-madabushi-etal-2022-semeval
{S}em{E}val-2022 Task 2: Multilingual Idiomaticity Detection and Sentence Embedding
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.13/
Tayyar Madabushi, Harish and Gow-Smith, Edward and Garcia, Marcos and Scarton, Carolina and Idiart, Marco and Villavicencio, Aline
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
107--121
This paper presents the shared task on Multilingual Idiomaticity Detection and Sentence Embedding, which consists of two subtasks: (a) a binary classification task aimed at identifying whether a sentence contains an idiomatic expression, and (b) a task based on semantic text similarity which requires the model to adequately represent potentially idiomatic expressions in context. Each subtask includes different settings regarding the amount of training data. Besides the task description, this paper introduces the datasets in English, Portuguese, and Galician and their annotation procedure, the evaluation metrics, and a summary of the participant systems and their results. The task had close to 100 registered participants organised into twenty-five teams making over 650 and 150 submissions in the practice and evaluation phases respectively.
null
null
10.18653/v1/2022.semeval-1.13
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,926
inproceedings
itkonen-etal-2022-helsinki
{H}elsinki-{NLP} at {S}em{E}val-2022 Task 2: A Feature-Based Approach to Multilingual Idiomaticity Detection
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.14/
Itkonen, Sami and Tiedemann, J{\"o}rg and Creutz, Mathias
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
122--134
This paper describes the University of Helsinki submission to the SemEval 2022 task on multilingual idiomaticity detection. Our system utilizes several models made available by HuggingFace, along with the baseline BERT model for the task. We focus on feature engineering based on properties that typically characterize idiomatic expressions. The additional features lead to improvements over the baseline and the final submission achieves 15th place out of 20 submissions. The paper provides error analysis of our model including visualisations of the contributions of individual features.
null
null
10.18653/v1/2022.semeval-1.14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,927
inproceedings
yamaguchi-etal-2022-hitachi
Hitachi at {S}em{E}val-2022 Task 2: On the Effectiveness of Span-based Classification Approaches for Multilingual Idiomaticity Detection
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.15/
Yamaguchi, Atsuki and Morio, Gaku and Ozaki, Hiroaki and Sogawa, Yasuhiro
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
135--144
In this paper, we describe our system for SemEval-2022 Task 2: Multilingual Idiomaticity Detection and Sentence Embedding. The task aims at detecting idiomaticity in an input sequence (Subtask A) and modeling representation of sentences that contain potential idiomatic multiword expressions (MWEs) (Subtask B) in three languages. We focus on the zero-shot setting of Subtask A and propose two span-based idiomaticity classification methods: MWE span-based classification and idiomatic MWE span prediction-based classification. We use several cross-lingual pre-trained language models (InfoXLM, XLM-R, and others) as our backbone network. Our best-performing system, fine-tuned with the span-based idiomaticity classification, ranked fifth in the zero-shot setting of Subtask A and exhibited a macro F1 score of 0.7466.
null
null
10.18653/v1/2022.semeval-1.15
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,928
inproceedings
hauer-etal-2022-ualberta
{UA}lberta at {S}em{E}val 2022 Task 2: Leveraging Glosses and Translations for Multilingual Idiomaticity Detection
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.16/
Hauer, Bradley and Jaura, Seeratpal and Omarov, Talgat and Kondrak, Grzegorz
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
145--150
We describe the University of Alberta systems for the SemEval-2022 Task 2 on multilingual idiomaticity detection. Working under the assumption that idiomatic expressions are noncompositional, our first method integrates information on the meanings of the individual words of an expression into a binary classifier. Further hypothesizing that literal and idiomatic expressions translate differently, our second method translates an expression in context, and uses a lexical knowledge base to determine if the translation is literal. Our approaches are grounded in linguistic phenomena, and leverage existing sources of lexical knowledge. Our results offer support for both approaches, particularly the former.
null
null
10.18653/v1/2022.semeval-1.16
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,929
inproceedings
joung-kim-2022-hyu
{HYU} at {S}em{E}val-2022 Task 2: Effective Idiomaticity Detection with Consideration at Different Levels of Contextualization
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.17/
Joung, Youngju and Kim, Taeuk
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
151--157
We propose a unified framework that enables us to consider various aspects of contextualization at different levels to better identify the idiomaticity of multi-word expressions. Through extensive experiments, we demonstrate that our approach, based on the inter- and inner-sentence context of a target MWE, is effective in improving the performance of related models. We also share our experience on SemEval-2022 Task 2 in detail so that future work on the same task can benefit from it.
null
null
10.18653/v1/2022.semeval-1.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,930
inproceedings
phelps-2022-drsphelps
drsphelps at {S}em{E}val-2022 Task 2: Learning idiom representations using {BERTRAM}
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.18/
Phelps, Dylan
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
158--164
This paper describes our system for SemEval-2022 Task 2 Multilingual Idiomaticity Detection and Sentence Embedding sub-task B. We modify a standard BERT sentence transformer by adding embeddings for each idiom, which are created using BERTRAM and a small number of contexts. We show that this technique increases the quality of idiom representations and leads to better performance on the task. We also perform analysis on our final results and show that the quality of the produced idiom embeddings is highly sensitive to the quality of the input contexts.
null
null
10.18653/v1/2022.semeval-1.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,931
inproceedings
jakhotiya-etal-2022-jarvix
{JARV}ix at {S}em{E}val-2022 Task 2: It Takes One to Know One? Idiomaticity Detection using Zero and One-Shot Learning
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.19/
Jakhotiya, Yash and Kumar, Vaibhav and Pathak, Ashwin and Shah, Raj
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
165--168
Large Language Models have been successful in a wide variety of Natural Language Processing tasks by capturing the compositionality of text representations. In spite of their great success, these vector representations fail to capture the meaning of idiomatic multi-word expressions (MWEs). In this paper, we focus on the detection of idiomatic expressions by using binary classification. We use a dataset consisting of the literal and idiomatic usage of MWEs in English and Portuguese. Thereafter, we perform the classification in two different settings, zero shot and one shot, to determine whether a given sentence contains an idiom. N-shot classification for this task is defined by the number N of idioms shared between the training and testing sets. In this paper, we train multiple Large Language Models in both settings and achieve an F1 score (macro) of 0.73 for the zero-shot setting and an F1 score (macro) of 0.85 for the one-shot setting. An implementation of our work can be found at \url{https://github.com/ashwinpathak20/Idiomaticity_Detection_Using_Few_Shot_Learning}.
null
null
10.18653/v1/2022.semeval-1.19
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,932
inproceedings
boisson-etal-2022-cardiffnlp
{C}ardiff{NLP}-Metaphor at {S}em{E}val-2022 Task 2: Targeted Fine-tuning of Transformer-based Language Models for Idiomaticity Detection
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.20/
Boisson, Joanne and Camacho-Collados, Jose and Espinosa-Anke, Luis
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
169--177
This paper describes the experiments run for SemEval-2022 Task 2, subtask A, zero-shot and one-shot settings for idiomaticity detection. Our main approach is based on fine-tuning transformer-based language models as a baseline to perform binary classification. Our system, CardiffNLP-Metaphor, ranked 8th and 7th (respectively) on the zero- and one-shot settings of this task. Our main contribution lies in the extensive evaluation of transformer-based language models and various configurations, showing, among other things, the potential of large multilingual models over base monolingual models. Moreover, we analyse the impact of various input parameters, which offers interesting insights into how language models work in practice.
null
null
10.18653/v1/2022.semeval-1.20
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,933
inproceedings
oh-2022-kpfriends
kpfriends at {S}em{E}val-2022 Task 2: {NEAMER} - Named Entity Augmented Multi-word Expression Recognizer
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.21/
Oh, Minsik
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
178--185
We present NEAMER - Named Entity Augmented Multi-word Expression Recognizer. This system is inspired by the non-compositionality characteristics shared between named entities and idiomatic expressions. We utilize transfer learning and locality features to enhance the idiom classification task. This system is our submission for the SemEval-2022 Task 2 (Multilingual Idiomaticity Detection and Sentence Embedding) Subtask A one-shot shared task. We achieve SOTA with an F1 of 0.9395 during the post-evaluation phase. We also observe an improvement in training stability. Lastly, we experiment with non-compositionality knowledge transfer, cross-lingual fine-tuning and locality features, which we also introduce in this paper.
null
null
10.18653/v1/2022.semeval-1.21
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,934
inproceedings
lu-2022-daminglu123
daminglu123 at {S}em{E}val-2022 Task 2: Using {BERT} and {LSTM} to Do Text Classification
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.22/
Lu, Daming
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
186--189
Multiword expressions (MWEs), or idiomaticity, are a common phenomenon in natural languages. Current pre-trained language models cannot effectively capture the meaning of these MWEs. The reason is that two ordinary words, once combined, can have a meaning entirely different from the composition of the meanings of each word, whereas pre-trained language models rely on word compositionality. We propose an improved method of adding an LSTM layer to the BERT model in order to get better results on a text classification task (Subtask A). Our result is slightly better than the baseline. We also tried adding TextCNN to BERT and adding both LSTM and TextCNN to BERT. We find that adding only LSTM gives the best performance.
null
null
10.18653/v1/2022.semeval-1.22
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,935
inproceedings
tan-2022-hijonlp
{H}i{J}o{NLP} at {S}em{E}val-2022 Task 2: Detecting Idiomaticity of Multiword Expressions using Multilingual Pretrained Language Models
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.23/
Tan, Minghuan
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
190--196
This paper describes an approach to detect idiomaticity only from the contextualized representation of an MWE over multilingual pretrained language models. Our experiments find that larger models are usually more effective in idiomaticity detection. However, using a higher layer of the model may not guarantee better performance. In multilingual scenarios, the convergence of different languages is not consistent, and rich-resource languages have a big advantage over other languages.
null
null
10.18653/v1/2022.semeval-1.23
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,936
inproceedings
cui-etal-2022-zhichunroad
{Z}hichun{R}oad at {S}em{E}val-2022 Task 2: Adversarial Training and Contrastive Learning for Multiword Representations
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.24/
Cui, Xuange and Xiong, Wei and Wang, Songlin
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
197--203
This paper presents our contribution to the SemEval-2022 Task 2: Multilingual Idiomaticity Detection and Sentence Embedding. We explore the impact of three different pre-trained multilingual language models in SubTask A. To enhance model generalization and robustness, we use the exponential moving average (EMA) method and an adversarial attack strategy. In SubTask B, we add an effective cross-attention module for modeling the relationships between two sentences. We jointly train the model with a contrastive learning objective and employ a momentum contrast to enlarge the number of negative pairs. Additionally, we use the alignment and uniformity properties to measure the quality of sentence embeddings. Our approach obtained competitive results in both subtasks.
null
null
10.18653/v1/2022.semeval-1.24
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,937
inproceedings
tedeschi-navigli-2022-ner4id
{NER}4{ID} at {S}em{E}val-2022 Task 2: Named Entity Recognition for Idiomaticity Detection
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.25/
Tedeschi, Simone and Navigli, Roberto
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
204--210
Idioms are lexically-complex phrases whose meaning cannot be derived by compositionally interpreting their components. Although the automatic identification and understanding of idioms is essential for a wide range of Natural Language Understanding tasks, they are still largely under-investigated. This motivated the organization of the SemEval-2022 Task 2, which is divided into two multilingual subtasks: one about idiomaticity detection, and the other about sentence embeddings. In this work, we focus on the first subtask and propose a Transformer-based dual-encoder architecture to compute the semantic similarity between a potentially-idiomatic expression and its context and, based on this, predict idiomaticity. Then, we show how and to what extent Named Entity Recognition can be exploited to reduce the degree of confusion of idiom identification systems and, therefore, improve performance. Our model achieves 92.1 F1 in the one-shot setting and shows strong robustness towards unseen idioms achieving 77.4 F1 in the zero-shot setting. We release our code at \url{https://github.com/Babelscape/ner4id}.
null
null
10.18653/v1/2022.semeval-1.25
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,938
inproceedings
liu-etal-2022-ynu
{YNU}-{HPCC} at {S}em{E}val-2022 Task 2: Representing Multilingual Idiomaticity based on Contrastive Learning
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.26/
Liu, Kuanghong and Wang, Jin and Zhang, Xuejie
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
211--216
This paper presents the methods we used as the YNU-HPCC team in SemEval-2022 Task 2, Multilingual Idiomaticity Detection and Sentence Embedding. We participated in two subtasks, covering four settings. In Subtask B, on sentence representation, we used novel approaches based on ideas from contrastive learning to optimize the model: the CoSENT method was used in the pre-train setting, and triplet loss and multiple negatives ranking loss functions in the fine-tune setting. We achieved very competitive results on the final released test datasets. For Subtask A, on idiomaticity detection, however, we carried out only a few explorations and experiments based on the xlm-RoBERTa model. Sentences concatenated with the additional MWE as inputs did well in the one-shot setting. Sentences containing context performed poorly on the final released test data in the zero-shot setting, even though we attempted to extract effective information from the CLS tokens of hidden layers.
null
null
10.18653/v1/2022.semeval-1.26
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,939
inproceedings
pereira-kobayashi-2022-ochadai
{OCHADAI} at {S}em{E}val-2022 Task 2: Adversarial Training for Multilingual Idiomaticity Detection
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.27/
Pereira, Lis and Kobayashi, Ichiro
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
217--220
We propose a multilingual adversarial training model for determining whether a sentence contains an idiomatic expression. Given that a key challenge with this task is the limited size of annotated data, our model relies on pre-trained contextual representations from different multilingual state-of-the-art transformer-based language models (i.e., multilingual BERT and XLM-RoBERTa), and on adversarial training, a training method for further enhancing model generalization and robustness. Without relying on any human-crafted features, knowledge base, or additional datasets other than the target datasets, our model achieved competitive results and ranked 6th place in the SubTask A (zero-shot) setting and 15th place in the SubTask A (one-shot) setting.
null
null
10.18653/v1/2022.semeval-1.27
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,940
inproceedings
chu-etal-2022-hit
{HIT} at {S}em{E}val-2022 Task 2: Pre-trained Language Model for Idioms Detection
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.28/
Chu, Zheng and Yang, Ziqing and Cui, Yiming and Chen, Zhigang and Liu, Ming
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
221--227
The same multi-word expression may have different meanings in different sentences. These meanings can be mainly divided into two categories: literal and idiomatic. Non-contextual methods perform poorly on this problem, and we need contextual embeddings to understand the idiomatic meaning of multi-word expressions correctly. We use a pre-trained language model, which can provide a context-aware sentence embedding, to detect whether a multi-word expression in a sentence is used idiomatically.
null
null
10.18653/v1/2022.semeval-1.28
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,941
inproceedings
zamparelli-etal-2022-semeval
{S}em{E}val-2022 Task 3: {P}re{TENS}-Evaluating Neural Networks on Presuppositional Semantic Knowledge
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.29/
Zamparelli, Roberto and Chowdhury, Shammur and Brunato, Dominique and Chesi, Cristiano and Dell{'}Orletta, Felice and Hasan, Md. Arid and Venturi, Giulia
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
228--238
We report the results of SemEval-2022 Task 3, PreTENS, on evaluating the acceptability of simple sentences containing constructions whose two arguments are presupposed to be, or not to be, in an ordered taxonomic relation. The task featured two sub-tasks: \textit{(i)} a binary prediction task and \textit{(ii)} a regression task, predicting acceptability on a continuous scale. The sentences were artificially generated in three languages (English, Italian and French). 21 systems, with 8 system papers, were submitted for the task, all based on various types of fine-tuned transformer systems, often with ensemble methods and various data augmentation techniques. The best systems reached an F1-macro score of 94.49 (sub-task 1) and a Spearman correlation coefficient of 0.80 (sub-task 2), with interesting variations across specific constructions and/or languages.
null
null
10.18653/v1/2022.semeval-1.29
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,942
inproceedings
xia-etal-2022-lingjing
{L}ing{J}ing at {S}em{E}val-2022 Task 3: Applying {D}e{BERT}a to Lexical-level Presupposed Relation Taxonomy with Knowledge Transfer
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.30/
Xia, Fei and Li, Bin and Weng, Yixuan and He, Shizhu and Sun, Bin and Li, Shutao and Liu, Kang and Zhao, Jun
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
239--246
This paper presents the results and main findings of our system for SemEval-2022 Task 3, Presupposed Taxonomies: Evaluating Neural Network Semantics (PreTENS). This task targets semantic competence, with specific attention to evaluating language models on the recognition of appropriate taxonomic relations between two nominal arguments. Two sub-tasks, binary classification and regression, are designed for the evaluation. For the classification sub-task, we adopt the DeBERTa-v3 pre-trained model for fine-tuning on datasets of different languages. Due to the small size of the training datasets for the regression sub-task, we transfer the knowledge of the classification model (i.e., its parameters) to the regression task. The experimental results show that the proposed method achieves the best results on both sub-tasks. Meanwhile, we also report negative results of multiple training strategies for further discussion. All the experimental codes are open-sourced at \url{https://github.com/WENGSYX/Semeval}.
null
null
10.18653/v1/2022.semeval-1.30
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,943
inproceedings
poelman-etal-2022-rug
{RUG}-1-Pegasussers at {S}em{E}val-2022 Task 3: Data Generation Methods to Improve Recognizing Appropriate Taxonomic Word Relations
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.31/
van den Berg, Frank and Danoe, Gijs and Ploeger, Esther and Poelman, Wessel and Edman, Lukas and Caselli, Tommaso
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
247--254
This paper describes our system created for SemEval-2022 Task 3: Presupposed Taxonomies - Evaluating Neural-network Semantics. This task is focused on correctly recognizing taxonomic word relations in English, French and Italian. We developed various data-generation techniques that expand the originally provided train set and show that all methods increase the performance of models trained on these expanded datasets. Our final system outperformed the baseline system from the task organizers, achieving an average macro F1 score of 79.6 across all languages, compared to the baseline's 67.4.
null
null
10.18653/v1/2022.semeval-1.31
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,944
inproceedings
aziz-etal-2022-csecu
{CSECU}-{DSG} at {S}em{E}val-2022 Task 3: Investigating the Taxonomic Relationship Between Two Arguments using Fusion of Multilingual Transformer Models
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.32/
Aziz, Abdul and Hossain, Md. Akram and Chy, Abu Nowshed
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
255--259
Recognizing lexical relationships between words is one of the formidable tasks in computational linguistics. It plays a vital role in the improvement of various NLP tasks. However, the diversity of word semantics, sentence structure, and word order information makes it challenging to distill these relationships effectively. To address these challenges, SemEval-2022 Task 3 introduced the shared task PreTENS, focusing on the semantic competence to determine the taxonomic relations between two nominal arguments. This paper presents our participation in this task, where we propose an approach exploiting an ensemble of multilingual transformer methods. We employed two fine-tuned multilingual transformer models, XLM-RoBERTa and mBERT, to train our model. To enhance the performance of the individual models, we fuse their predicted probability scores using a weighted arithmetic mean to generate a unified probability score. The experimental results showed that our proposed method achieved competitive performance among the participants' methods.
null
null
10.18653/v1/2022.semeval-1.32
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,945
inproceedings
markchom-etal-2022-uor
{U}o{R}-{NCL} at {S}em{E}val-2022 Task 3: Fine-Tuning the {BERT}-Based Models for Validating Taxonomic Relations
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.33/
Markchom, Thanet and Liang, Huizhi and Chen, Jiaoyan
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
260--265
In human languages, there are many presuppositional constructions that impose a constraint on the taxonomic relations between two nouns depending on their order. These constructions create a challenge in validating taxonomic relations in real-world contexts. In SemEval-2022 Task 3, Presupposed Taxonomies: Evaluating Neural Network Semantics (PreTENS), the organizers introduced a task regarding validating the taxonomic relations within a variety of presuppositional constructions. This task is divided into two subtasks: classification and regression. Each subtask contains three datasets in multiple languages, i.e., English, Italian and French. To tackle this task, this work proposes to fine-tune different BERT-based models pre-trained on different languages. According to the experimental results, the fine-tuned BERT-based models are effective compared to the baselines in classification. For regression, the fine-tuned models show promising performance with the possibility of improvement.
null
null
10.18653/v1/2022.semeval-1.33
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,946
inproceedings
zhou-etal-2022-spdb
{SPDB} Innovation Lab at {S}em{E}val-2022 Task 3: Recognize Appropriate Taxonomic Relations Between Two Nominal Arguments with {ERNIE}-{M} Model
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.34/
Zhou, Yue and Wei, Bowei and Liu, Jianyu and Yang, Yang
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
266--270
Synonym and antonym practice are among the most common exercises in early childhood; they anchor the words we know more deeply in our intuition. At the beginning of a machine's life, we would similarly like to treat the machine as a baby and build comparable training for it to achieve qualified performance. In this paper, we present an ensemble model for sentence classification, which outperforms state-of-the-art methods. Our approach essentially builds on two models, ERNIE-M and DeBERTaV3. With cross-validation and random-seed tuning, we select the top-performing models for a final soft ensemble and let them vote for the final answer, achieving a top-6 performance.
null
null
10.18653/v1/2022.semeval-1.34
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,947
inproceedings
sarhan-etal-2022-uu
{UU}-Tax at {S}em{E}val-2022 Task 3: Improving the generalizability of language models for taxonomy classification through data augmentation
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.35/
Sarhan, Injy and Mosteiro, Pablo and Spruit, Marco
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
271--281
This paper presents our strategy to address SemEval-2022 Task 3 PreTENS: Presupposed Taxonomies Evaluating Neural Network Semantics. The goal of the task is to identify whether a sentence is deemed acceptable or not, depending on the taxonomic relationship that holds between a noun pair contained in the sentence. For sub-task 1{---}binary classification{---}we propose an effective way to enhance the robustness and generalizability of language models for better classification on this downstream task. We design a two-stage fine-tuning procedure on the ELECTRA language model using data augmentation techniques. Rigorous experiments are carried out using multi-task learning and data-enriched fine-tuning. Experimental results demonstrate that our proposed model, UU-Tax, is indeed able to generalize well for our downstream task. For sub-task 2{---}regression{---}we propose a simple classifier that trains on features obtained from the Universal Sentence Encoder (USE). In addition to describing the submitted systems, we discuss other experiments that employ pre-trained language models and data augmentation techniques. For both sub-tasks, we perform error analysis to further understand the behaviour of the proposed models. We achieved a global $F1_{Binary}$ score of 91.25{\%} in sub-task 1 and a rho score of 0.221 in sub-task 2.
null
null
10.18653/v1/2022.semeval-1.35
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,948
inproceedings
vetter-etal-2022-kamikla
{K}a{M}i{K}la at {S}em{E}val-2022 Task 3: {A}l{BERT}o, {BERT}, and {C}amem{BERT}{---}{B}e(r)tween Taxonomy Detection and Prediction
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.36/
Vetter, Karl and Segiet, Miriam and Lennermann, Klara
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
282--290
This paper describes our system submitted for SemEval Task 3: Presupposed Taxonomies: Evaluating Neural Network Semantics (Zamparelli et al., 2022). We participated in both the binary classification and the regression subtask. Target sentences are classified according to their taxonomical relation in subtask 1 and according to their acceptability judgment in subtask 2. Our approach in both subtasks is based on a neural network BERT model. We used separate models for the three languages covered by the task, English, French, and Italian. For the second subtask, we used median averaging to construct an ensemble model. We ranked 15th out of 21 groups for subtask 1 (F1-score: 77.38{\%}) and 11th out of 17 groups for subtask 2 (RHO: 0.078).
null
null
10.18653/v1/2022.semeval-1.36
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,949
inproceedings
li-etal-2022-hw-tsc
{HW}-{TSC} at {S}em{E}val-2022 Task 3: A Unified Approach Fine-tuned on Multilingual Pretrained Model for {P}re{TENS}
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.37/
Li, Yinglu and Zhang, Min and Qiao, Xiaosong and Wang, Minghan
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
291--297
In this paper, we describe a unified system for Task 3 of SemEval-2022. The task aims to recognize the semantic structures of sentences given two nominal arguments and to evaluate the degree of their taxonomic relation. We utilise the strategy of adding a language prefix tag to the training set, which is effective for the model. We split the training set to prevent translation information from being learnt by the model. For the task, we propose a unified model fine-tuned on the multilingual pretrained model XLM-RoBERTa. The model performs well in subtask 1 (the binary classification subtask). To verify whether our model could also perform better in subtask 2 (the regression subtask), the ranking score is transformed into classification labels by an up-sampling strategy. With the ensemble strategy, the performance of our model can be further improved. As a result, the model obtained second place for both subtask 1 and subtask 2 in the competition evaluation.
null
null
10.18653/v1/2022.semeval-1.37
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,950
inproceedings
perez-almendros-etal-2022-semeval
{S}em{E}val-2022 Task 4: Patronizing and Condescending Language Detection
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.38/
Perez-Almendros, Carla and Espinosa-Anke, Luis and Schockaert, Steven
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
298--307
This paper presents an overview of Task 4 at SemEval-2022, which was focused on detecting Patronizing and Condescending Language (PCL) towards vulnerable communities. Two sub-tasks were considered: a binary classification task, where participants needed to classify a given paragraph as containing PCL or not, and a multi-label classification task, where participants needed to identify which types of PCL are present (if any). The task attracted more than 300 participants, 77 teams and 229 valid submissions. We provide an overview of how the task was organized, discuss the techniques that were employed by the different participants, and summarize the main resulting insights about PCL detection and categorization.
null
null
10.18653/v1/2022.semeval-1.38
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,951
inproceedings
makahleh-etal-2022-just
{JUST}-{DEEP} at {S}em{E}val-2022 Task 4: Using Deep Learning Techniques to Reveal Patronizing and Condescending Language
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.39/
Makahleh, Mohammad and Bani Yaseen, Naba and Abdullah, Malak
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
308--312
Classification of language that favors or condones vulnerable communities (e.g., refugees, the homeless, widows) has been considered a challenging task and a critical step in NLP applications. Moreover, the spread of this language among people and on social media harms both society and the people concerned. Therefore, the classification of this language is considered a significant challenge for researchers worldwide. In this paper, we propose the JUST-DEEP architecture to classify a text and determine whether it contains any form of patronizing and condescending language (Task 4, Subtask 1). The architecture uses state-of-the-art pre-trained models and employs ensembling techniques that outperform the baseline (RoBERTa) in SemEval-2022 Task 4 with a 0.502 F1 score.
null
null
10.18653/v1/2022.semeval-1.39
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,952
inproceedings
wang-etal-2022-pingan
{PINGAN} Omini-Sinitic at {S}em{E}val-2022 Task 4: Multi-prompt Training for Patronizing and Condescending Language Detection
Emerson, Guy and Schluter, Natalie and Stanovsky, Gabriel and Kumar, Ritesh and Palmer, Alexis and Schneider, Nathan and Singh, Siddharth and Ratan, Shyam
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.semeval-1.40/
Wang, Ye and Wang, Yanmeng and Ling, Baishun and Liao, Zexiang and Wang, Shaojun and Xiao, Jing
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
313--318
This paper describes the second-placed system for subtask 2 and the ninth-placed system for subtask 1 in SemEval-2022 Task 4: Patronizing and Condescending Language Detection. We propose an ensemble of prompt training and a label attention mechanism for multi-label classification tasks. Transfer learning is introduced to transfer knowledge from binary classification to multi-label classification. The experimental results prove the effectiveness of our proposed method. An ablation study is also conducted to show the validity of each technique.
null
null
10.18653/v1/2022.semeval-1.40
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,953