entry_type (stringclasses; 4 values) | citation_key (stringlengths; 10–110) | title (stringlengths; 6–276; ⌀ nullable) | editor (stringclasses; 723 values) | month (stringclasses; 69 values) | year (stringdate; 1963-01-01 to 2022-01-01) | address (stringclasses; 202 values) | publisher (stringclasses; 41 values) | url (stringlengths; 34–62) | author (stringlengths; 6–2.07k; ⌀ nullable) | booktitle (stringclasses; 861 values) | pages (stringlengths; 1–12; ⌀ nullable) | abstract (stringlengths; 302–2.4k) | journal (stringclasses; 5 values) | volume (stringclasses; 24 values) | doi (stringlengths; 20–39; ⌀ nullable) | n (stringclasses; 3 values) | wer (stringclasses; 1 value) | uas (null) | language (stringclasses; 3 values) | isbn (stringclasses; 34 values) | recall (null) | number (stringclasses; 8 values) | a (null) | b (null) | c (null) | k (null) | f1 (stringclasses; 4 values) | r (stringclasses; 2 values) | mci (stringclasses; 1 value) | p (stringclasses; 2 values) | sd (stringclasses; 1 value) | female (stringclasses; 0 values) | m (stringclasses; 0 values) | food (stringclasses; 1 value) | f (stringclasses; 1 value) | note (stringclasses; 20 values) | __index_level_0__ (int64; 22k–106k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | patankar-etal-2022-optimize | {O}ptimize{\_}{P}rime@{D}ravidian{L}ang{T}ech-{ACL}2022: Abusive Comment Detection in {T}amil | Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dravidianlangtech-1.36/ | Patankar, Shantanu and Gokhale, Omkar and Litake, Onkar and Mandke, Aditya and Kadam, Dipali | Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages | 235--239 | This paper tries to address the problem of abusive comment detection in low-resource indic languages. Abusive comments are statements that are offensive to a person or a group of people. These comments are targeted toward individuals belonging to specific ethnicities, genders, caste, race, sexuality, etc. Abusive Comment Detection is a significant problem, especially with the recent rise in social media users. This paper presents the approach used by our team {---} Optimize{\_}Prime, in the ACL 2022 shared task {\textquotedblleft}Abusive Comment Detection in Tamil.{\textquotedblright} This task detects and classifies YouTube comments in Tamil and Tamil-English Codemixed format into multiple categories. We have used three methods to optimize our results: Ensemble models, Recurrent Neural Networks, and Transformers. In the Tamil data, MuRIL and XLM-RoBERTA were our best performing models with a macro-averaged f1 score of 0.43. Furthermore, for the Code-mixed data, MuRIL and M-BERT provided sublime results, with a macro-averaged f1 score of 0.45. | null | null | 10.18653/v1/2022.dravidianlangtech-1.36 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,213 |
inproceedings | ravikiran-chakravarthi-2022-zero | Zero-shot Code-Mixed Offensive Span Identification through Rationale Extraction | Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dravidianlangtech-1.37/ | Ravikiran, Manikandan and Chakravarthi, Bharathi Raja | Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages | 240--247 | This paper investigates the effectiveness of sentence-level transformers for zero-shot offensive span identification on a code-mixed Tamil dataset. More specifically, we evaluate rationale extraction methods of Local Interpretable Model Agnostic Explanations (LIME) (CITATION) and Integrated Gradients (IG) (CITATION) for adapting transformer based offensive language classification models for zero-shot offensive span identification. To this end, we find that LIME and IG show baseline $F_{1}$ of 26.35{\%} and 44.83{\%}, respectively. Besides, we study the effect of data set size and training process on the overall accuracy of span identification. As a result, we find both LIME and IG to show significant improvement with Masked Data Augmentation and Multilabel Training, with $F_{1}$ of 50.23{\%} and 47.38{\%} respectively. \textit{Disclaimer : This paper contains examples that may be considered profane, vulgar, or offensive. The examples do not represent the views of the authors or their employers/graduate schools towards any person(s), group(s), practice(s), or entity/entities. Instead they are used to emphasize only the linguistic research challenges.} | null | null | 10.18653/v1/2022.dravidianlangtech-1.37 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,214 |
inproceedings | rajalakshmi-etal-2022-dlrg-tamilnlp | {DLRG}@{T}amil{NLP}-{ACL}2022: Offensive Span Identification in {T}amil using {B}i{LSTM}-{CRF} approach | Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dravidianlangtech-1.38/ | Rajalakshmi, Ratnavel and More, Mohit and Shrikriti, Bhamatipati and Saharan, Gitansh and Samyuktha, Hanchate and Nandy, Sayantan | Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages | 248--253 | Identifying offensive speech is an exciting and essential area of research, with ample traction in recent times. This paper presents our system submission to subtask 1, focusing on using supervised approaches for extracting offensive spans from code-mixed Tamil-English comments. To identify offensive spans, we developed a Bidirectional Long Short-Term Memory (BiLSTM) model with GloVe embeddings. To this end, the developed system achieved an overall F1 of 0.1728. Additionally, for comments with less than 30 characters, the developed system shows an F1 of 0.3890, competitive with other submissions. | null | null | 10.18653/v1/2022.dravidianlangtech-1.38 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,215 |
inproceedings | b-etal-2022-findings | Findings of the Shared Task on Multimodal Sentiment Analysis and Troll Meme Classification in {D}ravidian Languages | Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dravidianlangtech-1.39/ | B, Premjith and Chakravarthi, Bharathi Raja and Subramanian, Malliga and B, Bharathi and Kp, Soman and V, Dhanalakshmi and K, Sreelakshmi and Pandian, Arunaggiri and Kumaresan, Prasanna | Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages | 254--260 | This paper presents the findings of the shared task on Multimodal Sentiment Analysis and Troll meme classification in Dravidian languages held at ACL 2022. Multimodal sentiment analysis deals with the identification of sentiment from video. In addition to video data, the task requires the analysis of corresponding text and audio features for the classification of movie reviews into five classes. We created a dataset for this task in Malayalam and Tamil. The Troll meme classification task aims to classify multimodal Troll memes into two categories. This task assumes the analysis of both text and image features for making better predictions. The performance of the participating teams was analysed using the F1-score. Only one team submitted their results in the Multimodal Sentiment Analysis task, whereas we received six submissions in the Troll meme classification task. The only team that participated in the Multimodal Sentiment Analysis shared task obtained an F1-score of 0.24. In the Troll meme classification task, the winning team achieved an F1-score of 0.596. | null | null | 10.18653/v1/2022.dravidianlangtech-1.39 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,216 |
inproceedings | ravikiran-etal-2022-findings | Findings of the Shared Task on Offensive Span Identification from {C}ode-Mixed {T}amil-{E}nglish Comments | Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dravidianlangtech-1.40/ | Ravikiran, Manikandan and Chakravarthi, Bharathi Raja and Madasamy, Anand Kumar and S, Sangeetha and Rajalakshmi, Ratnavel and Thavareesan, Sajeetha and Ponnusamy, Rahul and Mahadevan, Shankar | Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages | 261--270 | Offensive content moderation is vital in social media platforms to support healthy online discussions. However, their prevalence in code-mixed Dravidian languages is limited to classifying whole comments without identifying part of it contributing to offensiveness. Such limitation is primarily due to the lack of annotated data for offensive spans. Accordingly, in this shared task, we provide Tamil-English code-mixed social comments with offensive spans. This paper outlines the dataset so released, methods, and results of the submitted systems. | null | null | 10.18653/v1/2022.dravidianlangtech-1.40 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,217 |
inproceedings | madasamy-etal-2022-overview | Overview of the Shared Task on Machine Translation in {D}ravidian Languages | Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dravidianlangtech-1.41/ | Madasamy, Anand Kumar and Hegde, Asha and Banerjee, Shubhanker and Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Shashirekha, Hosahalli and McCrae, John | Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages | 271--278 | This paper presents an outline of the shared task on translation of under-resourced Dravidian languages at DravidianLangTech-2022 workshop to be held jointly with ACL 2022. A description of the datasets used, approach taken for analysis of submissions and the results have been illustrated in this paper. Five sub-tasks organized as a part of the shared task include the following translation pairs: Kannada to Tamil, Kannada to Telugu, Kannada to Sanskrit, Kannada to Malayalam and Kannada to Tulu. Training, development and test datasets were provided to all participants and results were evaluated on the gold standard datasets. A total of 16 research groups participated in the shared task and a total of 12 submission runs were made for evaluation. Bilingual Evaluation Understudy (BLEU) score was used for evaluation of the translations. | null | null | 10.18653/v1/2022.dravidianlangtech-1.41 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,218 |
inproceedings | sampath-etal-2022-findings | Findings of the Shared Task on Emotion Analysis in {T}amil | Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dravidianlangtech-1.42/ | Sampath, Anbukkarasi and Durairaj, Thenmozhi and Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Cn, Subalalitha and Shanmugavadivel, Kogilavani and Thavareesan, Sajeetha and Thangasamy, Sathiyaraj and Krishnamurthy, Parameswari and Hande, Adeep and Benhur, Sean and Ponnusamy, Kishore and Pandiyan, Santhiya | Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages | 279--285 | This paper presents the overview of the shared task on emotional analysis in Tamil. The result of the shared task is presented at the workshop. This paper presents the dataset used in the shared task, task description, and the methodology used by the participants and the evaluation results of the submission. This task is organized as two Tasks. Task A is carried with 11 emotions annotated data for social media comments in Tamil and Task B is organized with 31 fine-grained emotion annotated data for social media comments in Tamil. For conducting experiments, training and development datasets were provided to the participants and results are evaluated for the unseen data. Totally we have received around 24 submissions from 13 teams. For evaluating the models, Precision, Recall, micro average metrics are used. | null | null | 10.18653/v1/2022.dravidianlangtech-1.42 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,219 |
inproceedings | chakravarthi-etal-2022-findings | Findings of the Shared Task on Multi-task Learning in {D}ravidian Languages | Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dravidianlangtech-1.43/ | Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Cn, Subalalitha and S, Sangeetha and Subramanian, Malliga and Shanmugavadivel, Kogilavani and Krishnamurthy, Parameswari and Hande, Adeep and U Hegde, Siddhanth and Nayak, Roshan and Valli, Swetha | Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages | 286--291 | We present our findings from the first shared task on Multi-task Learning in Dravidian Languages at the second Workshop on Speech and Language Technologies for Dravidian Languages. In this task, a sentence in any of three Dravidian Languages is required to be classified into two closely related tasks namely \textit{Sentiment Analyis} (\textbf{SA}) and \textit{Offensive Language Identification} (\textbf{OLI}). The task spans over three Dravidian Languages, namely, Kannada, Malayalam, and Tamil. It is one of the first shared tasks that focuses on Multi-task Learning for closely related tasks, especially for a very low-resourced language family such as the Dravidian language family. In total, 55 people signed up to participate in the task, and due to the intricate nature of the task, especially in its first iteration, 3 submissions have been received. | null | null | 10.18653/v1/2022.dravidianlangtech-1.43 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,220 |
inproceedings | priyadharshini-etal-2022-overview | Overview of Abusive Comment Detection in {T}amil-{ACL} 2022 | Chakravarthi, Bharathi Raja and Priyadharshini, Ruba and Madasamy, Anand Kumar and Krishnamurthy, Parameswari and Sherly, Elizabeth and Mahesan, Sinnathamby | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dravidianlangtech-1.44/ | Priyadharshini, Ruba and Chakravarthi, Bharathi Raja and Cn, Subalalitha and Durairaj, Thenmozhi and Subramanian, Malliga and Shanmugavadivel, Kogilavani and U Hegde, Siddhanth and Kumaresan, Prasanna | Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages | 292--298 | The social media is one of the significant digital platforms that create a huge impact in peoples of all levels. The comments posted on social media is powerful enough to even change the political and business scenarios in very few hours. They also tend to attack a particular individual or a group of individuals. This shared task aims at detecting the abusive comments involving Homophobia, Misandry, Counter-speech, Misogyny, Xenophobia, Transphobic. The hope speech is also identified. A dataset collected from social media tagged with the above said categories in Tamil and Tamil-English code-mixed languages are given to the participants. The participants used different machine learning and deep learning algorithms. This paper presents the overview of this task comprising the dataset details and results of the participants. | null | null | 10.18653/v1/2022.dravidianlangtech-1.44 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,221 |
inproceedings | li-flanigan-2022-improving | Improving Neural Machine Translation with the {A}bstract {M}eaning {R}epresentation by Combining Graph and Sequence Transformers | Wu, Lingfei and Liu, Bang and Mihalcea, Rada and Pei, Jian and Zhang, Yue and Li, Yunyao | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.dlg4nlp-1.2/ | Li, Changmao and Flanigan, Jeffrey | Proceedings of the 2nd Workshop on Deep Learning on Graphs for Natural Language Processing (DLG4NLP 2022) | 12--21 | Previous studies have shown that the Abstract Meaning Representation (AMR) can improve Neural Machine Translation (NMT). However, there has been little work investigating incorporating AMR graphs into Transformer models. In this work, we propose a novel encoder-decoder architecture which augments the Transformer model with a Heterogeneous Graph Transformer (Yao et al., 2020) which encodes source sentence AMR graphs. Experimental results demonstrate the proposed model outperforms the Transformer model and previous non-Transformer based models on two different language pairs in both the high resource setting and low resource setting. Our source code, training corpus and released models are available at \url{https://github.com/jlab-nlp/amr-nmt}. | null | null | 10.18653/v1/2022.dlg4nlp-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,224 |
inproceedings | guo-etal-2022-continuous | Continuous Temporal Graph Networks for Event-Based Graph Data | Wu, Lingfei and Liu, Bang and Mihalcea, Rada and Pei, Jian and Zhang, Yue and Li, Yunyao | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.dlg4nlp-1.3/ | Guo, Jin and Han, Zhen and Zhou, Su and Li, Jiliang and Tresp, Volker and Wang, Yuyi | Proceedings of the 2nd Workshop on Deep Learning on Graphs for Natural Language Processing (DLG4NLP 2022) | 22--29 | There has been an increasing interest in modeling continuous-time dynamics of temporal graph data. Previous methods encode time-evolving relational information into a low-dimensional representation by specifying discrete layers of neural networks, while real-world dynamic graphs often vary continuously over time. Hence, we propose Continuous Temporal Graph Networks (CTGNs) to capture continuous dynamics of temporal graph data. We use both the link starting timestamps and link duration as evolving information to model continuous dynamics of nodes. The key idea is to use neural ordinary differential equations (ODE) to characterize the continuous dynamics of node representations over dynamic graphs. We parameterize ordinary differential equations using a novel graph neural network. The existing dynamic graph networks can be considered as a specific discretization of CTGNs. Experiment results on both transductive and inductive tasks demonstrate the effectiveness of our proposed approach over competitive baselines. | null | null | 10.18653/v1/2022.dlg4nlp-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,225 |
inproceedings | choi-etal-2022-scene | Scene Graph Parsing via {A}bstract {M}eaning {R}epresentation in Pre-trained Language Models | Wu, Lingfei and Liu, Bang and Mihalcea, Rada and Pei, Jian and Zhang, Yue and Li, Yunyao | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.dlg4nlp-1.4/ | Choi, Woo Suk and Heo, Yu-Jung and Punithan, Dharani and Zhang, Byoung-Tak | Proceedings of the 2nd Workshop on Deep Learning on Graphs for Natural Language Processing (DLG4NLP 2022) | 30--35 | In this work, we propose the application of abstract meaning representation (AMR) based semantic parsing models to parse textual descriptions of a visual scene into scene graphs, which is the first work to the best of our knowledge. Previous works examined scene graph parsing from textual descriptions using dependency parsing and left the AMR parsing approach as future work since sophisticated methods are required to apply AMR. Hence, we use pre-trained AMR parsing models to parse the region descriptions of visual scenes (i.e. images) into AMR graphs and pre-trained language models (PLM), BART and T5, to parse AMR graphs into scene graphs. The experimental results show that our approach explicitly captures high-level semantics from textual descriptions of visual scenes, such as objects, attributes of objects, and relationships between objects. Our textual scene graph parsing approach outperforms the previous state-of-the-art results by 9.3{\%} in the SPICE metric score. | null | null | 10.18653/v1/2022.dlg4nlp-1.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,226 |
inproceedings | bouhandi-etal-2022-graph | Graph Neural Networks for Adapting Off-the-shelf General Domain Language Models to Low-Resource Specialised Domains | Wu, Lingfei and Liu, Bang and Mihalcea, Rada and Pei, Jian and Zhang, Yue and Li, Yunyao | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.dlg4nlp-1.5/ | Bouhandi, Merieme and Morin, Emmanuel and Hamon, Thierry | Proceedings of the 2nd Workshop on Deep Learning on Graphs for Natural Language Processing (DLG4NLP 2022) | 36--42 | Language models encode linguistic proprieties and are used as input for more specific models. Using their word representations as-is for specialised and low-resource domains might be less efficient. Methods of adapting them exist, but these models often overlook global information about how words, terms, and concepts relate to each other in a corpus due to their strong reliance on attention. We consider that global information can influence the results of the downstream tasks, and combination with contextual information is performed using graph convolution networks or GCN built on vocabulary graphs. By outperforming baselines, we show that this architecture is profitable for domain-specific tasks. | null | null | 10.18653/v1/2022.dlg4nlp-1.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,227 |
inproceedings | maharana-bansal-2022-grada | {G}ra{DA}: Graph Generative Data Augmentation for Commonsense Reasoning | Wu, Lingfei and Liu, Bang and Mihalcea, Rada and Pei, Jian and Zhang, Yue and Li, Yunyao | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.dlg4nlp-1.6/ | Maharana, Adyasha and Bansal, Mohit | Proceedings of the 2nd Workshop on Deep Learning on Graphs for Natural Language Processing (DLG4NLP 2022) | 43--59 | Recent advances in commonsense reasoning have been fueled by the availability of large-scale human annotated datasets. Manual annotation of such datasets, many of which are based on existing knowledge bases, is expensive and not scalable. Moreover, it is challenging to build augmentation data for commonsense reasoning because the synthetic questions need to adhere to real-world scenarios. Hence, we present GraDA, a graph-generative data augmentation framework to synthesize factual data samples from knowledge graphs for commonsense reasoning datasets. First, we train a graph-to-text model for conditional generation of questions from graph entities and relations. Then, we train a generator with GAN loss to generate distractors for synthetic questions. Our approach improves performance for SocialIQA, CODAH, HellaSwag and CommonsenseQA, and works well for generative tasks like ProtoQA. We show improvement in robustness to semantic adversaries after training with GraDA and provide human evaluation of the quality of synthetic datasets in terms of factuality and answerability. Our work provides evidence and encourages future research into graph-based generative data augmentation. | null | null | 10.18653/v1/2022.dlg4nlp-1.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,228 |
inproceedings | li-etal-2022-ligcn | {L}i{GCN}: Label-interpretable Graph Convolutional Networks for Multi-label Text Classification | Wu, Lingfei and Liu, Bang and Mihalcea, Rada and Pei, Jian and Zhang, Yue and Li, Yunyao | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.dlg4nlp-1.7/ | Li, Irene and Feng, Aosong and Wu, Hao and Li, Tianxiao and Suzumura, Toyotaro and Dong, Ruihai | Proceedings of the 2nd Workshop on Deep Learning on Graphs for Natural Language Processing (DLG4NLP 2022) | 60--70 | Multi-label text classification (MLTC) is an attractive and challenging task in natural language processing (NLP). Compared with single-label text classification, MLTC has a wider range of applications in practice. In this paper, we propose a label-interpretable graph convolutional network model to solve the MLTC problem by modeling tokens and labels as nodes in a heterogeneous graph. In this way, we are able to take into account multiple relationships including token-level relationships. Besides, the model allows better interpretability for predicted labels as the token-label edges are exposed. We evaluate our method on four real-world datasets and it achieves competitive scores against selected baseline methods. Specifically, this model achieves a gain of 0.14 on the F1 score in the small label set MLTC, and 0.07 in the large label set scenario. | null | null | 10.18653/v1/2022.dlg4nlp-1.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,229 |
inproceedings | deng-etal-2022-explicit | Explicit Graph Reasoning Fusing Knowledge and Contextual Information for Multi-hop Question Answering | Wu, Lingfei and Liu, Bang and Mihalcea, Rada and Pei, Jian and Zhang, Yue and Li, Yunyao | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.dlg4nlp-1.8/ | Deng, Zhenyun and Zhu, Yonghua and Qi, Qianqian and Witbrock, Michael and Riddle, Patricia | Proceedings of the 2nd Workshop on Deep Learning on Graphs for Natural Language Processing (DLG4NLP 2022) | 71--80 | Current graph-neural-network-based (GNN-based) approaches to multi-hop questions integrate clues from scattered paragraphs in an entity graph, achieving implicit reasoning by synchronous update of graph node representations using information from neighbours; this is poorly suited for explaining how clues are passed through the graph in hops. In this paper, we describe a structured Knowledge and contextual Information Fusion GNN (KIFGraph) whose explicit multi-hop graph reasoning mimics human step by step reasoning. Specifically, we first integrate clues at multiple levels of granularity (question, paragraph, sentence, entity) as nodes in the graph, connected by edges derived using structured semantic knowledge, then use a contextual encoder to obtain the initial node representations, followed by step-by-step two-stage graph reasoning that asynchronously updates node representations. Each node can be related to its neighbour nodes through fused structured knowledge and contextual information, reliably integrating their answer clues. Moreover, a masked attention mechanism (MAM) filters out noisy or redundant nodes and edges, to avoid ineffective clue propagation in graph reasoning. Experimental results show performance competitive with published models on the HotpotQA dataset. | null | null | 10.18653/v1/2022.dlg4nlp-1.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,230 |
inproceedings | gamonal-2022-descriptive | A Descriptive Study of Metaphors and Frames in the Multilingual Shared Annotation Task | Baker, Collin F. | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.distcurate-1.1/ | Gamonal, Maucha | Proceedings of the Workshop on Dimensions of Meaning: Distributional and Curated Semantics (DistCurate 2022) | 1--7 | This work assumes that languages are structured by semantic frames, which are schematic representations of concepts. Metaphors, on the other hand, are cognitive projections between domains, which are the result of our interaction in the world, through experiences, expectations and human biology itself. In this work, we use both semantic frames and metaphors in multilingual contrast (Brazilian Portuguese, English and German). The aim is to present a descriptive study of metaphors and frames in the multilingual shared annotation task of Multilingual FrameNet, a task which consisted of using frames from Berkeley FrameNet to annotate a parallel corpora. The result shows parameters for cross-linguistic annotation considering frames and metaphors. | null | null | 10.18653/v1/2022.distcurate-1.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,232 |
inproceedings | lekkas-etal-2022-multi | Multi-sense Language Modelling | Baker, Collin F. | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.distcurate-1.2/ | Lekkas, Andrea and Schneider-Kamp, Peter and Augenstein, Isabelle | Proceedings of the Workshop on Dimensions of Meaning: Distributional and Curated Semantics (DistCurate 2022) | 8--18 | The effectiveness of a language model is influenced by its token representations, which must encode contextual information and handle the same word form having a plurality of meanings (polysemy). Currently, none of the common language modelling architectures explicitly model polysemy. We propose a language model which not only predicts the next word, but also its sense in context. We argue that this higher prediction granularity may be useful for end tasks such as assistive writing, and allow for more a precise linking of language models with knowledge bases. We find that multi-sense language modelling requires architectures that go beyond standard language models, and here propose a localized prediction framework that decomposes the task into a word followed by a sense prediction task. To aid sense prediction, we utilise a Graph Attention Network, which encodes definitions and example uses of word senses. Overall, we find that multi-sense language modelling is a highly challenging task, and suggest that future work focus on the creation of more annotated training datasets. | null | null | 10.18653/v1/2022.distcurate-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,233 |
inproceedings | lawley-schubert-2022-logical | Logical Story Representations via {F}rame{N}et + Semantic Parsing | Baker, Collin F. | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.distcurate-1.3/ | Lawley, Lane and Schubert, Lenhart | Proceedings of the Workshop on Dimensions of Meaning: Distributional and Curated Semantics (DistCurate 2022) | 19--23 | We propose a means of augmenting FrameNet parsers with a formal logic parser to obtain rich semantic representations of events. These schematic representations of the frame events, which we call Episodic Logic (EL) schemas, abstract constants to variables, preserving their types and relationships to other individuals in the same text. Due to the temporal semantics of the chosen logical formalism, all identified schemas in a text are also assigned temporally bound {\textquotedblleft}episodes{\textquotedblright} and related to one another in time. The semantic role information from the FrameNet frames is also incorporated into the schema's type constraints. We describe an implementation of this method using a neural FrameNet parser, and discuss the approach's possible applications to question answering and open-domain event schema learning. | null | null | 10.18653/v1/2022.distcurate-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,234 |
inproceedings | baker-etal-2022-comparing | Comparing Distributional and Curated Approaches for Cross-lingual Frame Alignment | Baker, Collin F. | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.distcurate-1.4/ | Baker, Collin F. and Ellsworth, Michael and Petruck, Miriam R. L. and Lorenzi, Arthur | Proceedings of the Workshop on Dimensions of Meaning: Distributional and Curated Semantics (DistCurate 2022) | 24--30 | Despite advances in statistical approaches to the modeling of meaning, many questions about the ideal way of exploiting both knowledge-based (e.g., FrameNet, WordNet) and data-based methods (e.g., BERT) remain unresolved. This workshop focuses on these questions with three session papers that run the gamut from highly distributional methods (Lekkas et al., 2022), to highly curated methods (Gamonal, 2022), and techniques with statistical methods producing structured semantics (Lawley and Schubert, 2022). In addition, we begin the workshop with a small comparison of cross-lingual techniques for frame semantic alignment for one language pair (Spanish and English). None of the distributional techniques consistently aligns the 1-best frame match from English to Spanish, all failing in at least one case. Predicting which techniques will align which frames cross-linguistically is not possible from any known characteristic of the alignment technique or the frames. Although distributional techniques are a rich source of semantic information for many tasks, at present curated, knowledge-based semantics remains the only technique that can consistently align frames across languages. | null | null | 10.18653/v1/2022.distcurate-1.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,235 |
inproceedings | dolatian-etal-2022-free | A Free/Open-Source Morphological Transducer for {W}estern {A}rmenian | Khurshudyan, Victoria and Tomeh, Nadi and Nouvel, Damien and Donabedian, Anaid and Vidal-Gorene, Chahan | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.digitam-1.1/ | Dolatian, Hossep and Swanson, Daniel and Washington, Jonathan | Proceedings of the Workshop on Processing Language Variation: Digital Armenian (DigitAm) within the 13th Language Resources and Evaluation Conference | 1--7 | We present a free/open-source morphological transducer for Western Armenian, an endangered and low-resource Indo-European language. The transducer has virtually complete coverage of the language's inflectional morphology. We built the lexicon by scraping online dictionaries. As of submission, the transducer has a lexicon of 75K words. It has over 90{\%} naive coverage on different Western Armenian corpora, and high precision. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,237 |
inproceedings | avetisyan-2022-dialects | Dialects Identification of {A}rmenian Language | Khurshudyan, Victoria and Tomeh, Nadi and Nouvel, Damien and Donabedian, Anaid and Vidal-Gorene, Chahan | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.digitam-1.2/ | Avetisyan, Karen | Proceedings of the Workshop on Processing Language Variation: Digital Armenian (DigitAm) within the 13th Language Resources and Evaluation Conference | 8--12 | The Armenian language has many dialects that differ from each other syntactically, morphologically, and phonetically. In this work, we implement and evaluate models that determine the dialect of a given passage of text. The proposed models are evaluated for the three major variations of the Armenian language: Eastern, Western, and Classical. Previously, there were no instruments of dialect identification in the Armenian language. The paper presents three approaches: a statistical which relies on a stop words dictionary, a modified statistical one with a dictionary of most frequently encountered words, and the third one that is based on Facebook's fastText language identification neural network model. Two types of neural network models were trained, one with the usage of pre-trained word embeddings and the other without. Approaches were tested on sentence-level and document-level data. The results show that the neural network-based method works sufficiently better than the statistical ones, achieving almost 98{\%} accuracy at the sentence level and nearly 100{\%} at the document level. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,238 |
inproceedings | kindt-kepeklian-2022-analyse | Analyse Automatique de l'Ancien Arm{\'e}nien. {\'E}valuation d'une m{\'e}thode hybride {\guillemotleft} dictionnaire {\guillemotright} et {\guillemotleft} r{\'e}seau de neurones {\guillemotright} sur un Extrait de l'Adversus Haereses d'Ir{\'e}n{\'e}e de Lyon | Khurshudyan, Victoria and Tomeh, Nadi and Nouvel, Damien and Donabedian, Anaid and Vidal-Gorene, Chahan | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.digitam-1.3/ | Kindt, Bastien and Kepeklian, Gabriel | Proceedings of the Workshop on Processing Language Variation: Digital Armenian (DigitAm) within the 13th Language Resources and Evaluation Conference | 13--20 | The aim of this paper is to evaluate a lexical analysis (mainly lemmatization and POS-tagging) of a sample of the Ancient Armenian version of the Adversus Haereses by Irenaeus of Lyons (2nd c.) by using hybrid approach based on digital dictionaries on the one hand, and on Recurrent Neural Network (RNN) on the other hand. The quality of the results is checked by comparing data obtained by implementing these two methods with data manually checked. In the present case, 98,37{\%} of the results are correct by using the first (lexical) approach, and 74,64{\%} by using the second (RNN). But, in fact, both methods present advantages and disadvantages and argue for the hybrid method. The linguistic resources implemented here are jointly developed and tested by GREgORI and Calfa. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,239 |
inproceedings | kindt-van-elverdinghe-2022-describing | Describing Language Variation in the Colophons of {A}rmenian Manuscripts | Khurshudyan, Victoria and Tomeh, Nadi and Nouvel, Damien and Donabedian, Anaid and Vidal-Gorene, Chahan | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.digitam-1.4/ | Kindt, Bastien and Van Elverdinghe, Emmanuel | Proceedings of the Workshop on Processing Language Variation: Digital Armenian (DigitAm) within the 13th Language Resources and Evaluation Conference | 20--27 | The colophons of Armenian manuscripts constitute a large textual corpus spanning a millennium of written culture. These texts are highly diverse and rich in terms of linguistic variation. This poses a challenge to NLP tools, especially considering the fact that linguistic resources designed or suited for Armenian are still scarce. In this paper, we deal with a sub-corpus of colophons written to commemorate the rescue of a manuscript and dating from 1286 to ca. 1450, a thematic group distinguished by a particularly high concentration of words exhibiting linguistic variation. The text is processed (lemmatization, POS-tagging, and inflectional tagging) using the tools of the GREgORI Project and evaluated. Through a selection of examples, we show how variation is dealt with at each linguistic level (phonology, orthography, flexion, vocabulary, syntax). Complex variation, at the level of tokens or lemmata, is considered as well. The results of this work are used to enrich and refine the linguistic resources of the GREgORI project, which in turn benefits the processing of other texts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,240 |
inproceedings | khurshudyan-etal-2022-eastern | {E}astern {A}rmenian National Corpus: State of the Art and Perspectives | Khurshudyan, Victoria and Tomeh, Nadi and Nouvel, Damien and Donabedian, Anaid and Vidal-Gorene, Chahan | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.digitam-1.5/ | Khurshudyan, Victoria and Arkhangelskiy, Timofey and Daniel, Misha and Plungian, Vladimir and Levonian, Dmitri and Polyakov, Alex and Rubakov, Sergei | Proceedings of the Workshop on Processing Language Variation: Digital Armenian (DigitAm) within the 13th Language Resources and Evaluation Conference | 28--37 | Eastern Armenian National Corpus (EANC) is a comprehensive corpus of Modern Eastern Armenian with about 110 million tokens, covering written and oral discourses from the mid-19th century to the present. The corpus is provided with morphological, semantic and metatext annotation, as well as English translations. EANC is open access and available at www.eanc.net. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,241 |
inproceedings | chakmakjian-wang-2022-towards | Towards a Unified {ASR} System for the {A}rmenian Standards | Khurshudyan, Victoria and Tomeh, Nadi and Nouvel, Damien and Donabedian, Anaid and Vidal-Gorene, Chahan | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.digitam-1.6/ | Chakmakjian, Samuel and Wang, Ilaine | Proceedings of the Workshop on Processing Language Variation: Digital Armenian (DigitAm) within the 13th Language Resources and Evaluation Conference | 38--42 | Armenian is a traditionally under-resourced language, which has seen a recent uptick in interest in the development of its tools and presence in the digital domain. Some of this recent interest has centred around the development of Automatic Speech Recognition (ASR) technologies. However, the language boasts two standard variants which diverge on multiple typological and structural levels. In this work, we examine some of the available bodies of data for ASR construction, present the challenges in the processing of these data and propose a methodology going forward. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,242 |
inproceedings | feng-etal-2022-msamsum | {MSAMS}um: Towards Benchmarking Multi-lingual Dialogue Summarization | Feng, Song and Wan, Hui and Yuan, Caixia and Yu, Han | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dialdoc-1.1/ | Feng, Xiachong and Feng, Xiaocheng and Qin, Bing | Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering | 1--12 | Dialogue summarization, which helps users capture salient information from various types of dialogues, has received much attention recently. However, current works mainly focus on English dialogue summarization, leaving other languages less well explored. Therefore, we present a multi-lingual dialogue summarization dataset, namely MSAMSum, which covers dialogue-summary pairs in six languages. Specifically, we derive MSAMSum from the standard SAMSum using sophisticated translation techniques and further employ two methods to ensure the integral translation quality and summary factual consistency. Given the proposed MSAMSum, we systematically set up five multi-lingual settings for this task, including a novel mix-lingual dialogue summarization setting. To illustrate the utility of our dataset, we benchmark various experiments with pre-trained models under different settings and report results in both supervised and zero-shot manners. We also discuss some future works towards this task to motivate future research. | null | null | 10.18653/v1/2022.dialdoc-1.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,244 |
inproceedings | zhao-etal-2022-unids | {U}ni{DS}: A Unified Dialogue System for Chit-Chat and Task-oriented Dialogues | Feng, Song and Wan, Hui and Yuan, Caixia and Yu, Han | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dialdoc-1.2/ | Zhao, Xinyan and He, Bin and Wang, Yasheng and Li, Yitong and Mi, Fei and Liu, Yajiao and Jiang, Xin and Liu, Qun and Chen, Huanhuan | Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering | 13--22 | With the advances in deep learning, tremendous progress has been made with chit-chat dialogue systems and task-oriented dialogue systems. However, these two systems are often tackled separately in current methods. To achieve more natural interaction with humans, dialogue systems need to be capable of both chatting and accomplishing tasks. To this end, we propose a unified dialogue system (UniDS) with the two aforementioned skills. In particular, we design a unified dialogue data schema, compatible for both chit-chat and task-oriented dialogues. Besides, we propose a two-stage training method to train UniDS based on the unified dialogue data schema. UniDS does not need to adding extra parameters to existing chit-chat dialogue systems. Experimental results demonstrate that the proposed UniDS works comparably well as the state-of-the-art chit-chat dialogue systems and task-oriented dialogue systems. More importantly, UniDS achieves better robustness than pure dialogue systems and satisfactory switch ability between two types of dialogues. | null | null | 10.18653/v1/2022.dialdoc-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,245 |
inproceedings | gerhard-young-etal-2022-low | Low-Resource Adaptation of Open-Domain Generative Chatbots | Feng, Song and Wan, Hui and Yuan, Caixia and Yu, Han | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dialdoc-1.3/ | Gerhard-Young, Greyson and Anantha, Raviteja and Chappidi, Srinivas and Hoffmeister, Bjorn | Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering | 23--30 | Recent work building open-domain chatbots has demonstrated that increasing model size improves performance (Adiwardana et al., 2020; Roller et al., 2020). On the other hand, latency and connectivity considerations dictate the move of digital assistants on the device (Verge, 2021). Giving a digital assistant like Siri, Alexa, or Google Assistant the ability to discuss just about anything leads to the need for reducing the chatbot model size such that it fits on the user's device. We demonstrate that low parameter models can simultaneously retain their general knowledge conversational abilities while improving in a specific domain. Additionally, we propose a generic framework that accounts for variety in question types, tracks reference throughout multi-turn conversations, and removes inconsistent and potentially toxic responses. Our framework seamlessly transitions between chatting and performing transactional tasks, which will ultimately make interactions with digital assistants more human-like. We evaluate our framework on 1 internal and 4 public benchmark datasets using both automatic (Perplexity) and human (SSA {--} Sensibleness and Specificity Average) evaluation metrics and establish comparable performance while reducing model parameters by 90{\%}. | null | null | 10.18653/v1/2022.dialdoc-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,246 |
inproceedings | nakano-etal-2022-pseudo | Pseudo Ambiguous and Clarifying Questions Based on Sentence Structures Toward Clarifying Question Answering System | Feng, Song and Wan, Hui and Yuan, Caixia and Yu, Han | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dialdoc-1.4/ | Nakano, Yuya and Kawano, Seiya and Yoshino, Koichiro and Sudoh, Katsuhito and Nakamura, Satoshi | Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering | 31--40 | Question answering (QA) with disambiguation questions is essential for practical QA systems because user questions often do not contain information enough to find their answers. We call this task clarifying question answering, a task to find answers to ambiguous user questions by disambiguating their intents through interactions. There are two major problems in building a clarifying question answering system: data preparation of possible ambiguous questions and the generation of clarifying questions. In this paper, we tackle these problems by sentence generation methods using sentence structures. Ambiguous questions are generated by eliminating a part of a sentence considering the sentence structure. Clarifying the question generation method based on case frame dictionary and sentence structure is also proposed. Our experimental results verify that our pseudo ambiguous question generation successfully adds ambiguity to questions. Moreover, the proposed clarifying question generation recovers the performance drop by asking the user for missing information. | null | null | 10.18653/v1/2022.dialdoc-1.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,247 |
inproceedings | pal-etal-2022-parameter | Parameter-Efficient Abstractive Question Answering over Tables or Text | Feng, Song and Wan, Hui and Yuan, Caixia and Yu, Han | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dialdoc-1.5/ | Pal, Vaishali and Kanoulas, Evangelos and de Rijke, Maarten | Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering | 41--53 | A long-term ambition of information seeking QA systems is to reason over multi-modal contexts and generate natural answers to user queries. Today, memory intensive pre-trained language models are adapted to downstream tasks such as QA by fine-tuning the model on QA data in a specific modality like unstructured text or structured tables. To avoid training such memory-hungry models while utilizing a uniform architecture for each modality, parameter-efficient adapters add and train small task-specific bottle-neck layers between transformer layers. In this work, we study parameter-efficient abstractive QA in encoder-decoder models over structured tabular data and unstructured textual data using only 1.5{\%} additional parameters for each modality. We also ablate over adapter layers in both encoder and decoder modules to study the efficiency-performance trade-off and demonstrate that reducing additional trainable parameters down to 0.7{\%}-1.0{\%} leads to comparable results. Our models out-perform current state-of-the-art models on tabular QA datasets such as Tablesum and FeTaQA, and achieve comparable performance on a textual QA dataset such as NarrativeQA using significantly less trainable parameters than fine-tuning. | null | null | 10.18653/v1/2022.dialdoc-1.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,248 |
inproceedings | li-etal-2022-conversation | Conversation- and Tree-Structure Losses for Dialogue Disentanglement | Feng, Song and Wan, Hui and Yuan, Caixia and Yu, Han | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dialdoc-1.6/ | Li, Tianda and Gu, Jia-Chen and Ling, Zhen-Hua and Liu, Quan | Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering | 54--64 | When multiple conversations occur simultaneously, a listener must decide which conversation each utterance is part of in order to interpret and respond to it appropriately. This task is referred as dialogue disentanglement. A significant drawback of previous studies on disentanglement lies in that they only focus on pair-wise relationships between utterances while neglecting the conversation structure which is important for conversation structure modeling. In this paper, we propose a hierarchical model, named Dialogue BERT (DIALBERT), which integrates the local and global semantics in the context range by using BERT to encode each message-pair and using BiLSTM to aggregate the chronological context information into the output of BERT. In order to integrate the conversation structure information into the model, two types of loss of conversation-structure loss and tree-structure loss are designed. In this way, our model can implicitly learn and leverage the conversation structures without being restricted to the lack of explicit access to such structures during the inference stage. Experimental results on two large datasets show that our method outperforms previous methods by substantial margins, achieving great performance on dialogue disentanglement. | null | null | 10.18653/v1/2022.dialdoc-1.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,249 |
inproceedings | mass-etal-2022-conversational | Conversational Search with Mixed-Initiative - Asking Good Clarification Questions backed-up by Passage Retrieval | Feng, Song and Wan, Hui and Yuan, Caixia and Yu, Han | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dialdoc-1.7/ | Mass, Yosi and Cohen, Doron and Yehudai, Asaf and Konopnicki, David | Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering | 65--71 | We deal with the scenario of conversational search, where user queries are under-specified or ambiguous. This calls for a mixed-initiative setup. User-asks (queries) and system-answers, as well as system-asks (clarification questions) and user response, in order to clarify her information needs. We focus on the task of selecting the next clarification question, given conversation context. Our method leverages passage retrieval from background content to fine-tune two deep-learning models for ranking candidate clarification questions. We evaluated our method on two different use-cases. The first is an open domain conversational search in a large web collection. The second is a task-oriented customer-support setup. We show that our method performs well on both use-cases. | null | null | 10.18653/v1/2022.dialdoc-1.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,250 |
inproceedings | wang-komatani-2022-graph | Graph-combined Coreference Resolution Methods on Conversational Machine Reading Comprehension with Pre-trained Language Model | Feng, Song and Wan, Hui and Yuan, Caixia and Yu, Han | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dialdoc-1.8/ | Wang, Zhaodong and Komatani, Kazunori | Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering | 72--82 | Coreference resolution, such as for anaphora, is an essential challenge commonly found in conversational machine reading comprehension (CMRC). This task aims to determine the referential entity to which a pronoun refers on the basis of contextual information. Existing approaches based on pre-trained language models (PLMs) mainly rely on an end-to-end method, which still has limitations in clarifying referential dependency. In this study, a novel graph-based approach is proposed to integrate the coreference of a given text into graph structures (called coreference graphs), which can pinpoint a pronoun`s referential entity. We propose two graph-combined methods, evidence-enhanced and the fusion model, for CMRC to integrate coreference graphs from different levels of the PLM architecture. Evidence-enhanced refers to textual-level methods that include an evidence generator (for generating new text to elaborate a pronoun) and an enhanced question (for rewriting a pronoun in a question) as PLM input. The fusion model is a structural-level method that combines the PLM with a graph neural network. We evaluated these approaches on a CoQA pronoun-containing dataset and the whole CoQA dataset. The results showed that our methods can outperform baseline PLM methods with BERT and RoBERTa. | null | null | 10.18653/v1/2022.dialdoc-1.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,251
inproceedings | kodama-etal-2022-construction | Construction of Hierarchical Structured Knowledge-based Recommendation Dialogue Dataset and Dialogue System | Feng, Song and Wan, Hui and Yuan, Caixia and Yu, Han | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dialdoc-1.9/ | Kodama, Takashi and Tanaka, Ribeka and Kurohashi, Sadao | Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering | 83--92 | We work on a recommendation dialogue system to help a user understand the appealing points of some target (e.g., a movie). In such dialogues, the recommendation system needs to utilize structured external knowledge to make informative and detailed recommendations. However, there is no dialogue dataset with structured external knowledge designed to make detailed recommendations for the target. Therefore, we construct a dialogue dataset, Japanese Movie Recommendation Dialogue (JMRD), in which the recommender recommends one movie in a long dialogue (23 turns on average). The external knowledge used in this dataset is hierarchically structured, including title, casts, reviews, and plots. Every recommender`s utterance is associated with the external knowledge related to the utterance. We then create a movie recommendation dialogue system that considers the structure of the external knowledge and the history of the knowledge used. Experimental results show that the proposed model is superior in knowledge selection to the baseline models. | null | null | 10.18653/v1/2022.dialdoc-1.9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,252 |
inproceedings | xu-etal-2022-retrieval | Retrieval-Free Knowledge-Grounded Dialogue Response Generation with Adapters | Feng, Song and Wan, Hui and Yuan, Caixia and Yu, Han | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dialdoc-1.10/ | Xu, Yan and Ishii, Etsuko and Cahyawijaya, Samuel and Liu, Zihan and Winata, Genta Indra and Madotto, Andrea and Su, Dan and Fung, Pascale | Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering | 93--107 | To diversify and enrich generated dialogue responses, knowledge-grounded dialogue has been investigated in recent years. The existing methods tackle the knowledge grounding challenge by retrieving the relevant sentences over a large corpus and augmenting the dialogues with explicit extra information. Despite their success, however, the existing works have drawbacks in inference efficiency. This paper proposes KnowExpert, an end-to-end framework that bypasses the explicit retrieval process and injects knowledge into the pre-trained language models with lightweight adapters, adapting them to the knowledge-grounded dialogue task. To the best of our knowledge, this is the first attempt to tackle this challenge without retrieval in this task under an open-domain chit-chat scenario. The experimental results show that KnowExpert performs comparably with some retrieval-based baselines while being time-efficient in inference, demonstrating the effectiveness of our proposed method. | null | null | 10.18653/v1/2022.dialdoc-1.10 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,253
inproceedings | zhang-etal-2022-g4 | G4: Grounding-guided Goal-oriented Dialogues Generation with Multiple Documents | Feng, Song and Wan, Hui and Yuan, Caixia and Yu, Han | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dialdoc-1.11/ | Zhang, Shiwei and Du, Yiyang and Liu, Guanzhong and Yan, Zhao and Cao, Yunbo | Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering | 108--114 | Goal-oriented dialogue generation grounded in multiple documents (MultiDoc2Dial) is a challenging and realistic task. Unlike previous works, which treat document-grounded dialogue modeling as a machine reading comprehension task from a single document, the MultiDoc2Dial task faces the challenges of both seeking information from multiple documents and generating conversation responses simultaneously. This paper summarizes our entries to the agent response generation subtask of the MultiDoc2Dial dataset. We propose a three-stage solution, Grounding-guided goal-oriented dialogues generation (G4), which predicts groundings from retrieved passages to guide the generation of the final response. Our experiments show that G4 achieves a SacreBLEU score of 31.24 and an F1 score of 44.6, which is 60.7{\%} higher than the baseline model. | null | null | 10.18653/v1/2022.dialdoc-1.11 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,254
inproceedings | jiang-etal-2022-ugent | {U}{G}ent-{T2K} at the 2nd {D}ial{D}oc Shared Task: A Retrieval-Focused Dialog System Grounded in Multiple Documents | Feng, Song and Wan, Hui and Yuan, Caixia and Yu, Han | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dialdoc-1.12/ | Jiang, Yiwei and Hadifar, Amir and Deleu, Johannes and Demeester, Thomas and Develder, Chris | Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering | 115--122 | This work presents the contribution from the Text-to-Knowledge team of Ghent University (UGent-T2K) to the MultiDoc2Dial shared task on modeling dialogs grounded in multiple documents. We propose a pipeline system comprising (1) document retrieval, (2) passage retrieval, and (3) response generation. We engineered the individual components as follows: for (1)-(2), by combining multiple ranking models and adding a final LambdaMART reranker; for (3), by adopting a Fusion-in-Decoder (FiD) model. We thus significantly boost the baseline system`s performance (over +10 points for both F1 and SacreBLEU). Further, error analysis reveals two major failure cases, to be addressed in future work: (i) in case of a topic shift within the dialog, retrieval often fails to select the correct grounding document(s), and (ii) generation sometimes fails to use the correctly retrieved grounding passage. Our code is released at this link. | null | null | 10.18653/v1/2022.dialdoc-1.12 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,255
inproceedings | li-etal-2022-grounded | Grounded Dialogue Generation with Cross-encoding Re-ranker, Grounding Span Prediction, and Passage Dropout | Feng, Song and Wan, Hui and Yuan, Caixia and Yu, Han | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dialdoc-1.13/ | Li, Kun and Zhang, Tianhua and Tang, Liping and Li, Junan and Lu, Hongyuan and Wu, Xixin and Meng, Helen | Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering | 123--129 | MultiDoc2Dial presents an important challenge on modeling dialogues grounded with multiple documents. This paper proposes a pipeline system of {\textquotedblleft}retrieve, re-rank, and generate{\textquotedblright}, where each component is individually optimized. This enables the passage re-ranker and response generator to fully exploit training with ground-truth data. Furthermore, we use a deep cross-encoder trained with localized hard negative passages from the retriever. For the response generator, we use grounding span prediction as an auxiliary task to be jointly trained with the main task of response generation. We also adopt a passage dropout and regularization technique to improve response generation performance. Experimental results indicate that the system clearly surpasses the competitive baseline and our team CPII-NLP ranked 1st among the public submissions on ALL four leaderboards based on the sum of F1, SacreBLEU, METEOR and RougeL scores. | null | null | 10.18653/v1/2022.dialdoc-1.13 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,256 |
inproceedings | zhu-etal-2022-knowledge-storage | A Knowledge storage and semantic space alignment Method for Multi-documents dialogue generation | Feng, Song and Wan, Hui and Yuan, Caixia and Yu, Han | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dialdoc-1.14/ | Zhu, Minjun and Li, Bin and Weng, Yixuan and Xia, Fei | Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering | 130--135 | Question Answering (QA) is a Natural Language Processing (NLP) task that can measure language and semantics understanding ability; it requires a system not only to retrieve relevant documents from a large number of articles but also to answer the corresponding questions according to those documents. However, the various language styles and sources of human questions and evidence documents form different semantic embedding spaces, which may introduce errors into the downstream QA task. To alleviate these problems, we propose a framework for enhancing downstream evidence retrieval by generating evidence, aiming at improving the performance of response generation. Specifically, we take the pre-trained language model as a knowledge base, storing documents' information and knowledge in the model parameters. With the Child-Tuning approach we design, the knowledge storage and evidence generation avoid catastrophic forgetting for response generation. Extensive experiments carried out on the multi-document dataset show that the proposed method can improve the final performance, which demonstrates the effectiveness of the proposed framework. | null | null | 10.18653/v1/2022.dialdoc-1.14 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,257
inproceedings | jang-etal-2022-improving | Improving Multiple Documents Grounded Goal-Oriented Dialog Systems via Diverse Knowledge Enhanced Pretrained Language Model | Feng, Song and Wan, Hui and Yuan, Caixia and Yu, Han | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dialdoc-1.15/ | Jang, Yunah and Lee, Dongryeol and Park, Hyung Joo and Kang, Taegwan and Lee, Hwanhee and Bae, Hyunkyung and Jung, Kyomin | Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering | 136--141 | In this paper, we discuss our submission to the MultiDoc2Dial task, which aims to model goal-oriented dialogues grounded in multiple documents. The proposed task is split into grounding span prediction and agent response generation. The baseline for the task is the retrieval-augmented generation model, which consists of a dense passage retrieval model for the retrieval part and the BART model for the generation part. The main challenge of this task is that the system requires a great amount of pre-trained knowledge to generate answers grounded in multiple documents. To overcome this challenge, we adopt model pretraining, fine-tuning, and multi-task learning to enhance our model`s coverage of pretrained knowledge. We experimented with various settings of our method to show the effectiveness of our approaches. | null | null | 10.18653/v1/2022.dialdoc-1.15 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,258
inproceedings | alavian-etal-2022-docalog | Docalog: Multi-document Dialogue System using Transformer-based Span Retrieval | Feng, Song and Wan, Hui and Yuan, Caixia and Yu, Han | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dialdoc-1.16/ | Alavian, Sayed Hesam and Satvaty, Ali and Sabouri, Sadra and Asgari, Ehsaneddin and Sameti, Hossein | Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering | 142--147 | Information-seeking dialogue systems, including knowledge identification and response generation, aim to respond to users with fluent, coherent, and informative answers based on users' needs. This paper discusses our proposed approach, $Docalog$, for the DialDoc-22 (MultiDoc2Dial) shared task. $Docalog$ identifies the most relevant knowledge in the associated document, in a multi-document setting. $Docalog$ is a three-stage pipeline consisting of \textit{(1) a document retriever model (DR. TEIT)}, \textit{(2) an answer span prediction model}, and \textit{(3) an ultimate span picker} deciding on the most likely answer span, out of all predicted spans. In the test phase of MultiDoc2Dial 2022, $Docalog$ achieved f1-scores of 36.07{\%} and 28.44{\%} and SacreBLEU scores of 23.70{\%} and 20.52{\%}, respectively, on the \textit{MDD-SEEN} and \textit{MDD-UNSEEN} folds. | null | null | 10.18653/v1/2022.dialdoc-1.16 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,259
inproceedings | bansal-etal-2022-r3 | R3 : Refined Retriever-Reader pipeline for Multidoc2dial | Feng, Song and Wan, Hui and Yuan, Caixia and Yu, Han | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dialdoc-1.17/ | Bansal, Srijan and Tripathi, Suraj and Agarwal, Sumit and Gururaja, Sireesh and Veerubhotla, Aditya Srikanth and Dutt, Ritam and Mitamura, Teruko and Nyberg, Eric | Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering | 148--154 | In this paper, we present our submission to the DialDoc shared task based on the MultiDoc2Dial dataset. MultiDoc2Dial is a conversational question answering dataset that grounds dialogues in multiple documents. The task involves grounding a user`s query in a document followed by generating an appropriate response. We propose several improvements over the baseline`s retriever-reader architecture to aid in modeling goal-oriented dialogues grounded in multiple documents. Our proposed approach employs sparse representations for passage retrieval, a passage re-ranker, the fusion-in-decoder architecture for generation, and a curriculum learning training paradigm. Our approach shows a 12 point improvement in BLEU score compared to the baseline RAG model. | null | null | 10.18653/v1/2022.dialdoc-1.17 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,260 |
inproceedings | feng-etal-2022-dialdoc | {D}ial{D}oc 2022 Shared Task: Open-Book Document-grounded Dialogue Modeling | Feng, Song and Wan, Hui and Yuan, Caixia and Yu, Han | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dialdoc-1.18/ | Feng, Song and Patel, Siva and Wan, Hui | Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering | 155--160 | The paper presents the results of the Shared Task hosted by the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering co-located at ACL 2022. The primary goal of this Shared Task is to build goal-oriented information-seeking conversation systems that are grounded in the domain documents, where each dialogue could correspond to multiple subtasks that are based on different documents. The task is to generate agent responses in natural language given the dialogue and document contexts. There are two task settings and leaderboards based on (1) the same sets of domains (SEEN) and (2) one unseen domain (UNSEEN). There were over 20 teams participating in the Dev Phase and 8 teams participating in both the Dev and Test Phases. Multiple submissions significantly outperform the baseline. The best-performing system achieves 52.06 F1 and a total score of 191.30 on the SEEN task, and 34.65 F1 and a total score of 130.79 on the UNSEEN task. | null | null | 10.18653/v1/2022.dialdoc-1.18 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,261
inproceedings | honovich-etal-2022-true | {TRUE}: Re-evaluating Factual Consistency Evaluation | Feng, Song and Wan, Hui and Yuan, Caixia and Yu, Han | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dialdoc-1.19/ | Honovich, Or and Aharoni, Roee and Herzig, Jonathan and Taitelbaum, Hagai and Kukliansy, Doron and Cohen, Vered and Scialom, Thomas and Szpektor, Idan and Hassidim, Avinatan and Matias, Yossi | Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering | 161--175 | Grounded text generation systems often generate text that contains factual inconsistencies, hindering their real-world applicability. Automatic factual consistency evaluation may help alleviate this limitation by accelerating evaluation cycles, filtering inconsistent outputs and augmenting training data. While attracting increasing attention, such evaluation metrics are usually developed and evaluated in silo for a single task or dataset, slowing their adoption. Moreover, previous meta-evaluation protocols focused on system-level correlations with human annotations, which leave the example-level accuracy of such metrics unclear. In this work, we introduce TRUE: a comprehensive study of factual consistency metrics on a standardized collection of existing texts from diverse tasks, manually annotated for factual consistency. Our standardization enables an example-level meta-evaluation protocol that is more actionable and interpretable than previously reported correlations, yielding clearer quality measures. Across diverse state-of-the-art metrics and 11 datasets we find that large-scale NLI and question generation-and-answering-based approaches achieve strong and complementary results. We recommend those methods as a starting point for model and metric developers, and hope TRUE will foster progress towards even better methods. | null | null | 10.18653/v1/2022.dialdoc-1.19 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,262 |
inproceedings | nouri-toxtli-2022-handling | Handling Comments in Documents through Interactions | Feng, Song and Wan, Hui and Yuan, Caixia and Yu, Han | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dialdoc-1.20/ | Nouri, Elnaz and Toxtli, Carlos | Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering | 176--186 | Comments are widely used by users in collaborative documents every day. The documents' comments enable collaborative editing and review dynamics, transforming each document into a context-sensitive communication channel. Understanding the role of comments in communication dynamics within documents is the first step towards automating their management. In this paper we propose the first ever taxonomy for different types of in-document comments, based on an analysis of a large-scale dataset of public documents from the web. We envision that the next generation of intelligent collaborative document experiences will allow interactive creation and consumption of content. We also introduce the components necessary for developing novel tools that automate the handling of comments through natural language interaction with the documents. We identify the commands that users would use to respond to various types of comments. We train machine learning algorithms to recognize the different types of comments and assess their feasibility. We conclude by discussing some of the implications for the design of automatic document management tools. | null | null | 10.18653/v1/2022.dialdoc-1.20 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,263
inproceedings | strathearn-gkatzia-2022-task2dial | {T}ask2{D}ial: A Novel Task and Dataset for Commonsense-enhanced Task-based Dialogue Grounded in Documents | Feng, Song and Wan, Hui and Yuan, Caixia and Yu, Han | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.dialdoc-1.21/ | Strathearn, Carl and Gkatzia, Dimitra | Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering | 187--196 | This paper proposes a novel task on commonsense-enhanced task-based dialogue grounded in documents and describes the Task2Dial dataset, a novel dataset of document-grounded task-based dialogues, where an Information Giver (IG) provides instructions (by consulting a document) to an Information Follower (IF), so that the latter can successfully complete the task. In this unique setting, the IF can ask clarification questions which may not be grounded in the underlying document and require commonsense knowledge to be answered. The Task2Dial dataset poses new challenges: (1) its human reference texts show more lexical richness and variation than other document-grounded dialogue datasets; (2) generating from this set requires paraphrasing, as instructional responses might have been modified from the underlying document; (3) it requires commonsense knowledge, since questions might not necessarily be grounded in the document; (4) generation requires planning based on context, as task steps need to be provided in order. The Task2Dial dataset contains dialogues with an average of 18.15 turns and 19.79 tokens per turn, as compared to 12.94 and 12, respectively, in existing datasets. As such, learning from this dataset promises more natural, varied and less template-like system utterances. | null | null | 10.18653/v1/2022.dialdoc-1.21 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,264
inproceedings | zevallos-etal-2022-introducing | Introducing {Q}u{BERT}: A Large Monolingual Corpus and {BERT} Model for {S}outhern {Q}uechua | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.1/ | Zevallos, Rodolfo and Ortega, John and Chen, William and Castro, Richard and Bel, N{\'u}ria and Yoshikawa, Cesar and Venturas, Renzo and Aradiel, Hilario and Melgarejo, Nelsi | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 1--13 | The lack of resources for languages in the Americas has proven to be a problem for the creation of digital systems such as machine translation, search engines, chat bots, and more. The scarceness of digital resources for a language causes a higher impact on populations where the language is spoken by millions of people. We introduce the first official large combined corpus for deep learning of an indigenous South American low-resource language spoken by millions called Quechua. Specifically, our curated corpus is created from text gathered from the southern region of Peru where a dialect of Quechua is spoken that has not traditionally been used for digital systems as a target dialect in the past. In order to make our work repeatable by others, we also offer a public, pre-trained BERT model called QuBERT, which is the largest linguistic model ever trained for any Quechua type, not just the southern region dialect. We furthermore test our corpus and its corresponding BERT model on two major tasks: (1) named-entity recognition (NER) and (2) part-of-speech (POS) tagging by using state-of-the-art techniques where we achieve results comparable to other work on higher-resource languages. In this article, we describe the methodology, challenges, and results from the creation of QuBERT, which is on par with other state-of-the-art multilingual models for natural language processing, achieving between 71 and 74{\%} F1 score on NER and 84{--}87{\%} on POS tasks. | null | null | 10.18653/v1/2022.deeplo-1.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,266
inproceedings | vania-lee-and-andrea-pierleoni-2022-improving | Improving Distantly Supervised Document-Level Relation Extraction Through Natural Language Inference | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.2/ | Vania, Clara and Lee, Grace and Pierleoni, Andrea | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 14--20 | The distant supervision (DS) paradigm has been widely used for relation extraction (RE) to alleviate the need for expensive annotations. However, it suffers from noisy labels, which leads to worse performance than models trained on human-annotated data, even when trained using hundreds of times more data. We present a systematic study on the use of natural language inference (NLI) to improve distantly supervised document-level RE. We apply NLI in three scenarios: (i) as a filter for denoising DS labels, (ii) as a filter for model prediction, and (iii) as a standalone RE model. Our results show that NLI filtering consistently improves performance, reducing the performance gap with a model trained on human-annotated data by 2.3 F1. | null | null | 10.18653/v1/2022.deeplo-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,267 |
inproceedings | antverg-ben-david-and-yonatan-belinkov-2022-idani | {IDANI}: Inference-time Domain Adaptation via Neuron-level Interventions | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.3/ | Antverg, Omer and Ben-David, Eyal and Belinkov, Yonatan | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 21--29 | Large pre-trained models are usually fine-tuned on downstream task data, and tested on unseen data. When the train and test data come from different domains, the model is likely to struggle, as it is not adapted to the test domain. We propose a new approach for domain adaptation (DA), using neuron-level interventions: We modify the representation of each test example in specific neurons, resulting in a counterfactual example from the source domain, which the model is more familiar with. The modified example is then fed back into the model. While most other DA methods are applied during training time, ours is applied during inference only, making it more efficient and applicable. Our experiments show that our method improves performance on unseen domains. | null | null | 10.18653/v1/2022.deeplo-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,268 |
inproceedings | boulanger-lavergne-and-sophie-rosset-2022-generating | Generating unlabelled data for a tri-training approach in a low resourced {NER} task | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.4/ | Boulanger, Hugo and Lavergne, Thomas and Rosset, Sophie | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 30--37 | Training a tagger for Named Entity Recognition (NER) requires a substantial amount of labeled data in the task domain. Manual labeling is a tedious and complicated task. Semi-supervised learning methods can reduce the quantity of labeled data necessary to train a model. However, these methods require large quantities of unlabeled data, which remains an issue in many cases. We address this problem by generating unlabeled data. Large language models have proven to be powerful tools for text generation. We use their generative capacity to produce new sentences and variations of the sentences of our available data. This generation method, combined with a semi-supervised method, is evaluated on CoNLL and I2B2. We prepare both of these corpora to simulate a low-resource setting. We obtain significant improvements for semi-supervised learning with synthetic data against supervised learning on natural data. | null | null | 10.18653/v1/2022.deeplo-1.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,269
inproceedings | chivers-etal-2022-ants | {ANTS}: A Framework for Retrieval of Text Segments in Unstructured Documents | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.5/ | Chivers, Brian and Jiang, Mason P. and Lee, Wonhee and Ng, Amy and Rapstine, Natalya I. and Storer, Alex | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 38--47 | Text segmentation and extraction from unstructured documents can provide business researchers with a wealth of new information on firms and their behaviors. However, the most valuable text is often difficult to extract consistently due to substantial variations in how content can appear from document to document. Thus, the most successful way to extract this content has been through costly crowdsourcing and training of manual workers. We propose the Assisted Neural Text Segmentation (ANTS) framework to identify pertinent text in unstructured documents from a small set of labeled examples. ANTS leverages deep learning and transfer learning architectures to empower researchers to identify relevant text with minimal manual coding. Using a real world sample of accounting documents, we identify targeted sections 96{\%} of the time using only 5 training examples. | null | null | 10.18653/v1/2022.deeplo-1.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,270 |
inproceedings | a-rubino-etal-2022-cross | Cross-{TOP}: Zero-Shot Cross-Schema Task-Oriented Parsing | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.6/ | Rubino, Melanie and Guenon des Mesnards, Nicolas and Shah, Uday and Jiang, Nanjiang and Sun, Weiqi and Arkoudas, Konstantine | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 48--60 | Deep learning methods have enabled task-oriented semantic parsing of increasingly complex utterances. However, a single model is still typically trained and deployed for each task separately, requiring labeled training data for each, which makes it challenging to support new tasks, even within a single business vertical (e.g., food-ordering or travel booking). In this paper we describe Cross-TOP (Cross-Schema Task-Oriented Parsing), a zero-shot method for complex semantic parsing in a given vertical. By leveraging the fact that user requests from the same vertical share lexical and semantic similarities, a single cross-schema parser is trained to service an arbitrary number of tasks, seen or unseen, within a vertical. We show that Cross-TOP can achieve high accuracy on a previously unseen task without requiring any additional training data, thereby providing a scalable way to bootstrap semantic parsers for new tasks. As part of this work we release the FoodOrdering dataset, a task-oriented parsing dataset in the food-ordering vertical, with utterances and annotations derived from five schemas, each from a different restaurant menu. | null | null | 10.18653/v1/2022.deeplo-1.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,271
inproceedings | hamalainen-alnajjar-and-tuuli-tuisk-2022-help | Help from the Neighbors: {E}stonian Dialect Normalization Using a {F}innish Dialect Generator | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.7/ | H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Tuisk, Tuuli | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 61--66 | While standard Estonian is not a low-resourced language, the different dialects of the language are under-resourced from the point of view of NLP, given that there are no vast hand-normalized resources available for training a machine learning model to normalize dialectal Estonian to standard Estonian. In this paper, we crawl a small corpus of parallel dialectal Estonian - standard Estonian sentences. In addition, we take a savvy approach of generating more synthetic training data for the normalization task by using an existing dialect generator model built for Finnish to {\textquotedblleft}dialectalize{\textquotedblright} standard Estonian sentences from the Universal Dependencies tree banks. Our BERT-based normalization model achieves a word error rate that is 26.49 points lower when using both the synthetic data and Estonian data in comparison to training the model with only the available Estonian data. Our results suggest that synthetic data generated by a model trained on a more resourced related language can indeed boost the results for a less resourced language. | null | null | 10.18653/v1/2022.deeplo-1.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,272
inproceedings | burchell-birch-and-kenneth-heafield-2022-exploring | Exploring diversity in back translation for low-resource machine translation | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.8/ | Burchell, Laurie and Birch, Alexandra and Heafield, Kenneth | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 67--79 | Back translation is one of the most widely used methods for improving the performance of neural machine translation systems. Recent research has sought to enhance the effectiveness of this method by increasing the {\textquoteleft}diversity' of the generated translations. We argue that the definitions and metrics used to quantify {\textquoteleft}diversity' in previous work have been insufficient. This work puts forward a more nuanced framework for understanding diversity in training data, splitting it into lexical diversity and syntactic diversity. We present novel metrics for measuring these different aspects of diversity and carry out empirical analysis into the effect of these types of diversity on final neural machine translation model performance for low-resource English{\ensuremath{\leftrightarrow}}Turkish and mid-resource English{\ensuremath{\leftrightarrow}}Icelandic. Our findings show that generating back translation using nucleus sampling results in higher final model performance, and that this method of generation has high levels of both lexical and syntactic diversity. We also find evidence that lexical diversity is more important than syntactic for back translation performance. | null | null | 10.18653/v1/2022.deeplo-1.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,273 |
inproceedings | zhu-etal-2022-punctuation | Punctuation Restoration in {S}panish Customer Support Transcripts using Transfer Learning | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.9/ | Zhu, Xiliang and Gardiner, Shayna and Rossouw, David and Rold{\'a}n, Tere and Corston-Oliver, Simon | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 80--89 | Automatic Speech Recognition (ASR) systems typically produce unpunctuated transcripts that have poor readability. In addition, building a punctuation restoration system is challenging for low-resource languages, especially for domain-specific applications. In this paper, we propose a Spanish punctuation restoration system designed for a real-time customer support transcription service. To address the data sparsity of Spanish transcripts in the customer support domain, we introduce two transfer-learning-based strategies: 1) domain adaptation using out-of-domain Spanish text data; 2) cross-lingual transfer learning leveraging in-domain English transcript data. Our experiment results show that these strategies improve the accuracy of the Spanish punctuation restoration system. | null | null | 10.18653/v1/2022.deeplo-1.9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,274
inproceedings | micallef-etal-2022-pre | Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and {BERT} Models for {M}altese | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.10/ | Micallef, Kurt and Gatt, Albert and Tanti, Marc and van der Plas, Lonneke and Borg, Claudia | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 90--101 | Multilingual language models such as mBERT have seen impressive cross-lingual transfer to a variety of languages, but many languages remain excluded from these models. In this paper, we analyse the effect of pre-training with monolingual data for a low-resource language that is not included in mBERT {--} Maltese {--} with a range of pre-training setups. We conduct evaluations with the newly pre-trained models on three morphosyntactic tasks {--} dependency parsing, part-of-speech tagging, and named-entity recognition {--} and one semantic classification task {--} sentiment analysis. We also present a newly created corpus for Maltese, and determine the effect that the pre-training data size and domain have on the downstream performance. Our results show that using a mixture of pre-training domains is often superior to using Wikipedia text only. We also find that a fraction of this corpus is enough to make significant leaps in performance over Wikipedia-trained models. We pre-train and compare two models on the new corpus: a monolingual BERT model trained from scratch (BERTu), and a further pretrained multilingual BERT (mBERTu). The models achieve state-of-the-art performance on these tasks, despite the new corpus being considerably smaller than typically used corpora for high-resourced languages. On average, BERTu outperforms or performs competitively with mBERTu, and the largest gains are observed for higher-level tasks. | null | null | 10.18653/v1/2022.deeplo-1.10 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,275
inproceedings | yu-etal-2022-building | Building an Event Extractor with Only a Few Examples | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.11/ | Yu, Pengfei and Zhang, Zixuan and Voss, Clare and May, Jonathan and Ji, Heng | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 102--109 | Supervised event extraction models require a substantial amount of training data to perform well. However, event annotation requires a lot of human effort and time, which limits the application of existing supervised approaches to new event types. In order to reduce manual labor and shorten the time to build an event extraction system for an arbitrary event ontology, we present a new framework to train such systems much more efficiently without large annotations. Our event trigger labeling model uses a weak supervision approach, which only requires a set of keywords, a small number of examples and an unlabeled corpus, on which our approach automatically collects weakly supervised annotations. Our argument role labeling component performs zero-shot learning, which only requires the names of the argument roles of new event types. The source code of our event trigger detection and event argument extraction models is publicly available for research purposes. We also release a dockerized system connecting the two models into a unified event extraction pipeline. | null | null | 10.18653/v1/2022.deeplo-1.11 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,276
inproceedings | pan-etal-2022-task | Task Transfer and Domain Adaptation for Zero-Shot Question Answering | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.12/ | Pan, Xiang and Sheng, Alex and Shimshoni, David and Singhal, Aditya and Rosenthal, Sara and Sil, Avirup | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 110--116 | Pretrained language models have shown success in various areas of natural language processing, including reading comprehension tasks. However, when applying machine learning methods to new domains, labeled data may not always be available. To address this, we use supervised pretraining on source-domain data to reduce sample complexity on domain-specific downstream tasks. We evaluate zero-shot performance on domain-specific reading comprehension tasks by combining task transfer with domain adaptation to fine-tune a pretrained model with no labelled data from the target task. Our approach outperforms Domain-Adaptive Pretraining on downstream domain-specific reading comprehension tasks in 3 out of 4 domains. | null | null | 10.18653/v1/2022.deeplo-1.12 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,277
inproceedings | varshney-mishra-and-chitta-baral-2022-model | Let the Model Decide its Curriculum for Multitask Learning | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.13/ | Varshney, Neeraj and Mishra, Swaroop and Baral, Chitta | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 117--125 | Curriculum learning strategies in prior multitask learning approaches arrange datasets in a difficulty hierarchy either based on human perception or by exhaustively searching the optimal arrangement. However, human perception of difficulty may not always correlate well with machine interpretation, leading to poor performance, and exhaustive search is computationally expensive. Addressing these concerns, we propose two classes of techniques to arrange training instances into a learning curriculum based on difficulty scores computed via model-based approaches. The two classes, i.e., Dataset-level and Instance-level, differ in the granularity of arrangement. Through comprehensive experiments with 12 datasets, we show that instance-level and dataset-level techniques result in strong representations as they lead to an average performance improvement of 4.17{\%} and 3.15{\%} over their respective baselines. Furthermore, we find that most of this improvement comes from correctly answering the difficult instances, implying a greater efficacy of our techniques on difficult tasks. | null | null | 10.18653/v1/2022.deeplo-1.13 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,278
inproceedings | jude-ogundepo-etal-2022-afriteva | {A}fri{T}e{VA}: Extending {\textquotedblleft}Small Data{\textquotedblright} Pretraining Approaches to Sequence-to-Sequence Models | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.14/ | Jude Ogundepo, Odunayo and Oladipo, Akintunde and Adeyemi, Mofetoluwa and Ogueji, Kelechi and Lin, Jimmy | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 126--135 | Pretrained language models represent the state of the art in NLP, but the successful construction of such models often requires large amounts of data and computational resources. Thus, the paucity of data for low-resource languages impedes the development of robust NLP capabilities for these languages. There has been some recent success in pretraining encoder-only models solely on a combination of low-resource African languages, exemplified by AfriBERTa. In this work, we extend the approach of {\textquotedblleft}small data{\textquotedblright} pretraining to encoder{--}decoder models. We introduce AfriTeVa, a family of sequence-to-sequence models derived from T5 that are pretrained on 10 African languages from scratch. With a pretraining corpus of only around 1GB, we show that it is possible to achieve competitive downstream effectiveness for machine translation and text classification, compared to larger models trained on much more data. All the code and model checkpoints described in this work are publicly available at \url{https://github.com/castorini/afriteva}. | null | null | 10.18653/v1/2022.deeplo-1.14 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,279
inproceedings | wang-liu-and-james-hearne-2022-shot | Few-shot Learning for {S}umerian Named Entity Recognition | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.15/ | Wang, Guanghai and Liu, Yudong and Hearne, James | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 136--145 | This paper presents our study exploring the task of named entity recognition (NER) in a low-resource setting, focusing on few-shot learning on the Sumerian NER task. The Sumerian language is deemed an extremely low-resource language because (1) it is a long-dead language, and (2) highly skilled language experts are extremely scarce. NER on Sumerian text is important in that it helps identify the actors and entities active in a given period of time from the collections of tens of thousands of texts, in building socio-economic networks of the archives of interest. As a text classification task, NER tends to become challenging when the amount of annotated data is limited or the model is required to handle new classes. The Sumerian NER is no exception. In this work, we propose to apply two few-shot learning systems, ProtoBERT and NNShot, to the Sumerian NER task. Our experiments show that the ProtoBERT NER generally outperforms both the NNShot NER and the fully supervised BERT NER in low-resource settings on the predictions of rare classes. In particular, the F1-score of ProtoBERT on unseen entity types on our test set reached 89.6{\%}, which is significantly better than the F1-score of 84.3{\%} of the BERT NER. | null | null | 10.18653/v1/2022.deeplo-1.15 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,280
inproceedings | tan-le-etal-2022-deep | Deep Learning-Based Morphological Segmentation for Indigenous Languages: A Study Case on Innu-Aimun | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.16/ | Tan Le, Ngoc and Cadotte, Antoine and Boivin, Mathieu and Sadat, Fatiha and Terraza, Jimena | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 146--151 | Recent advances in the field of deep learning have led to a growing interest in the development of NLP approaches for low-resource and endangered languages. Nevertheless, relatively little research, related to NLP, has been conducted on indigenous languages. These languages are considered to be filled with complexities and challenges that make their study incredibly difficult in the NLP and AI fields. This paper focuses on the morphological segmentation of indigenous languages, an extremely challenging task because of polysynthesis, dialectal variations with rich morpho-phonemics, misspellings and resource-limited scenario issues. The proposed approach, towards a morphological segmentation of Innu-Aimun, an extremely low-resource indigenous language of Canada, is based on deep learning. Experiments and evaluations have shown promising results, compared to state-of-the-art rule-based and unsupervised approaches. | null | null | 10.18653/v1/2022.deeplo-1.16 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,281 |
inproceedings | chen-yu-and-samuel-r-bowman-2022-clean | Clean or Annotate: How to Spend a Limited Data Collection Budget | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.17/ | Chen, Derek and Yu, Zhou and Bowman, Samuel R. | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 152--168 | Crowdsourcing platforms are often used to collect datasets for training machine learning models, despite higher levels of inaccurate labeling compared to expert labeling. There are two common strategies to manage the impact of such noise: The first involves aggregating redundant annotations, but comes at the expense of labeling substantially fewer examples. Secondly, prior works have also considered using the entire annotation budget to label as many examples as possible and subsequently apply denoising algorithms to implicitly clean the dataset. We find a middle ground and propose an approach which reserves a fraction of annotations to explicitly clean up highly probable error samples to optimize the annotation process. In particular, we allocate a large portion of the labeling budget to form an initial dataset used to train a model. This model is then used to identify specific examples that appear most likely to be incorrect, which we spend the remaining budget to relabel. Experiments across three model variations and four natural language processing tasks show our approach outperforms or matches both label aggregation and advanced denoising methods designed to handle noisy labels when allocated the same finite annotation budget. | null | null | 10.18653/v1/2022.deeplo-1.17 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,282 |
inproceedings | liu-etal-2022-unsupervised | Unsupervised Knowledge Graph Generation Using Semantic Similarity Matching | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.18/ | Liu, Lixian and Omidvar, Amin and Ma, Zongyang and Agrawal, Ameeta and An, Aijun | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 169--179 | Knowledge Graphs (KGs) are directed labeled graphs representing entities and the relationships between them. Most prior work focuses on supervised or semi-supervised approaches which require large amounts of annotated data. While unsupervised approaches do not need labeled training data, most existing methods either generate too many redundant relations or require manual mapping of the extracted relations to a known schema. To address these limitations, we propose an unsupervised method for KG generation that requires neither labeled data nor manual mapping to the predefined relation schema. Instead, our method leverages sentence-level semantic similarity for automatically generating relations between pairs of entities. Our proposed method outperforms two baseline systems when evaluated over four datasets. | null | null | 10.18653/v1/2022.deeplo-1.18 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,283 |
inproceedings | papadopoulos-etal-2022-farfetched | {F}ar{F}etched: Entity-centric Reasoning and Claim Validation for the {G}reek Language based on Textually Represented Environments | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.19/ | Papadopoulos, Dimitris and Metropoulou, Katerina and Papadakis, Nikolaos and Matsatsinis, Nikolaos | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 180--191 | Our collective attention span is shortened by the flood of online information. With FarFetched, we address the need for automated claim validation based on the aggregated evidence derived from multiple online news sources. We introduce an entity-centric reasoning framework in which latent connections between events, actions, or statements are revealed via entity mentions and represented in a graph database. Using entity linking and semantic similarity, we offer a way for collecting and combining information from diverse sources in order to generate evidence relevant to the user's claim. Then, we leverage textual entailment recognition to quantitatively determine whether this assertion is credible, based on the created evidence. Our approach tries to fill the gap in automated claim validation for less-resourced languages and is showcased on the Greek language, complemented by the training of relevant semantic textual similarity (STS) and natural language inference (NLI) models that are evaluated on translated versions of common benchmarks. | null | null | 10.18653/v1/2022.deeplo-1.19 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,284
inproceedings | mustavi-maheen-etal-2022-alternative | Alternative non-{BERT} model choices for the textual classification in low-resource languages and environments | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.20/ | Mustavi Maheen, Syed and Rahman Faisal, Moshiur and Rafakat Rahman, Md. and Karim, Md. Shahriar | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 192--202 | Natural Language Processing (NLP) tasks in non-dominant and low-resource languages have not experienced significant progress. Although pre-trained BERT models are available, GPU-dependency, large memory requirement, and data scarcity often limit their applicability. As a solution, this paper proposes a fusion chain architecture comprised of one or more layers of CNN, LSTM, and BiLSTM and identifies a precise configuration and chain length. The study shows that a simpler, CPU-trainable non-BERT fusion CNN + BiLSTM + CNN is sufficient to surpass the textual classification performance of the BERT-related models in resource-limited languages and environments. The fusion architecture competitively approaches the state-of-the-art accuracy in several Bengali NLP tasks and a six-class emotion detection task for a newly developed Bengali dataset. Interestingly, the performance of the identified fusion model, for instance, CNN + BiLSTM + CNN, also holds for other low-resource languages and environments. An efficacy study shows that the CNN + BiLSTM + CNN model outperforms the BERT implementation for the Vietnamese language and performs almost equally in English NLP tasks experiencing artificial data scarcity. For the GLUE benchmark and other datasets such as Emotion, IMDB, and Intent classification, the CNN + BiLSTM + CNN model often surpasses or competes with BERT-base, TinyBERT, DistilBERT, and mBERT. Besides, a position-sensitive self-attention layer further improves the fusion models' performance in the Bengali emotion classification. The models are also compressible to as low as {\ensuremath{\approx}} 5{\texttimes} smaller through pruning and retraining, making them more viable for resource-constrained environments. Together, this study may help NLP practitioners and serve as a blueprint for NLP model choices in textual classification for low-resource languages and environments. | null | null | 10.18653/v1/2022.deeplo-1.20 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,285
inproceedings | pouran-ben-veyseh-etal-2022-generating | Generating Complement Data for Aspect Term Extraction with {GPT}-2 | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.21/ | Pouran Ben Veyseh, Amir and Dernoncourt, Franck and Min, Bonan and Nguyen, Thien Huu | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 203--213 | Aspect Term Extraction (ATE) is the task of identifying the word(s) in a review text toward which the author expresses an opinion. A major challenge for ATE is data scarcity, which hinders the training of deep sequence taggers to identify rare targets. To overcome this issue, we propose a novel method to better exploit the available labeled data for ATE by computing effective complement sentences to augment the input data and facilitate the aspect term prediction. In particular, we introduce a multistep training procedure that first obtains optimal complement representations and sentences for training data with respect to a deep ATE model. Afterward, we fine-tune the generative language model GPT-2 to allow complement sentence generation for test data. The REINFORCE algorithm is employed to incorporate different expected properties into the reward function to perform the fine-tuning. We perform extensive experiments on the benchmark datasets to demonstrate the benefits of the proposed method, which achieves state-of-the-art performance on different datasets. | null | null | 10.18653/v1/2022.deeplo-1.21 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,286
inproceedings | jundi-lapesa-2022-translate | How to Translate Your Samples and Choose Your Shots? Analyzing Translate-train {\&} Few-shot Cross-lingual Transfer | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.22/ | Jundi, Iman and Lapesa, Gabriella | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 214--226 | The lack of resources for languages in the Americas has proven to be a problem for the creation of digital systems such as machine translation, search engines, chat bots, and more. The scarceness of digital resources for a language causes a higher impact on populations where the language is spoken by millions of people. We introduce the first official large combined corpus for deep learning of an indigenous South American low-resource language spoken by millions called Quechua. Specifically, our curated corpus is created from text gathered from the southern region of Peru where a dialect of Quechua is spoken that has not traditionally been used for digital systems as a target dialect in the past. In order to make our work repeatable by others, we also offer a public, pre-trained BERT model called QuBERT which is the largest linguistic model ever trained for any Quechua type, not just the southern region dialect. We furthermore test our corpus and its corresponding BERT model on two major tasks: (1) named-entity recognition (NER) and (2) part-of-speech (POS) tagging by using state-of-the-art techniques where we achieve results comparable to other work on higher-resource languages. In this article, we describe the methodology, challenges, and results from the creation of QuBERT, which is on par with other state-of-the-art multilingual models for natural language processing, achieving between 71 and 74{\%} F1 score on NER and 84{--}87{\%} on POS tasks. | null | null | 10.18653/v1/2022.deeplo-1.22 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,287
inproceedings | n-c-2022-unified | Unified {NMT} models for the {I}ndian subcontinent, transcending script-barriers | Cherry, Colin and Fan, Angela and Foster, George and Haffari, Gholamreza (Reza) and Khadivi, Shahram and Peng, Nanyun (Violet) and Ren, Xiang and Shareghi, Ehsan and Swayamdipta, Swabha | jul | 2022 | Hybrid | Association for Computational Linguistics | https://aclanthology.org/2022.deeplo-1.23/ | N.c., Gokul | Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing | 227--236 | Highly accurate machine translation systems are very important in societies and countries where multilinguality is very common, and where English often does not suffice. The Indian subcontinent (or South Asia) is such a region, with all the Indic languages currently being under-represented in the NLP ecosystem. It is essential to thoroughly explore various techniques to improve the performance of such low-resource languages, at least using the data available in open source, which itself is something not much explored in the Indic ecosystem. In our work, we perform a study with a focus on improving the performance of very-low-resource South Asian languages, especially in countries beyond India. Specifically, we propose how unified models can be built that can exploit the data from comparatively resource-rich languages of the same region. We propose strategies to unify different types of unexplored scripts, especially Perso{--}Arabic scripts and Indic scripts, to build multilingual models for all the South Asian languages despite the script barrier. We also study how augmentation techniques like back-translation can be made use of to build unified models just using openly available raw data, to understand what level of improvement can be expected for these Indic languages. | null | null | 10.18653/v1/2022.deeplo-1.23 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,288
inproceedings | choudhary-oriordan-2022-cross | Cross-lingual Semantic Role Labelling with the {V}al{P}a{L} Database Knowledge | Agirre, Eneko and Apidianaki, Marianna and Vuli{\'c}, Ivan | may | 2022 | Dublin, Ireland and Online | Association for Computational Linguistics | https://aclanthology.org/2022.deelio-1.1/ | Choudhary, Chinmay and O{'}Riordan, Colm | Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures | 1--10 | Cross-lingual Transfer Learning typically involves training a model on a high-resource source language and applying it to a low-resource target language. In this work we introduce a lexical database called Valency Patterns Leipzig (ValPal) which provides the argument pattern information about various verb-forms in multiple languages, including low-resource languages. We also provide a framework to integrate the ValPal database knowledge into the state-of-the-art LSTM-based model for cross-lingual semantic role labelling. Experimental results show that integrating such knowledge resulted in an improvement in the performance of the model on all the target languages on which it is evaluated. | null | null | 10.18653/v1/2022.deelio-1.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,290
inproceedings | mun-desagulier-2022-transformer | How Do Transformer-Architecture Models Address Polysemy of {K}orean Adverbial Postpositions? | Agirre, Eneko and Apidianaki, Marianna and Vuli{\'c}, Ivan | may | 2022 | Dublin, Ireland and Online | Association for Computational Linguistics | https://aclanthology.org/2022.deelio-1.2/ | Mun, Seongmin and Desagulier, Guillaume | Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures | 11--21 | Postpositions, which are characterized by multiple form-function associations and are thus polysemous, pose a challenge to automatic identification of their usage. Several studies have used contextualized word-embedding models to reveal the functions of Korean postpositions. Despite the superior classification performance of previous studies, it is not sufficiently clear how these models resolve the polysemy of Korean postpositions. For this reason, to add more interpretability, we devise a classification model employing two transformer-architecture models{---}BERT and GPT-2{---}and introduce a computational simulation that interactively demonstrates how these transformer-architecture models simulate human interpretation of word-level polysemy involving the Korean adverbial postpositions -ey, -eyse, and -(u)lo. Results reveal that (i) the BERT model performs better than the GPT-2 model at classifying the intended function of postpositions, (ii) there is an inverse relationship between the classification accuracy and the number of functions that each postposition manifests, (iii) model performance is affected by the corpus size of each function, (iv) the models' performance gradually improves as the epochs proceed, and (v) the models are affected by the scarcity of input and/or semantic closeness between the items. | null | null | 10.18653/v1/2022.deelio-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,291
inproceedings | cho-etal-2022-query | Query Generation with External Knowledge for Dense Retrieval | Agirre, Eneko and Apidianaki, Marianna and Vuli{\'c}, Ivan | may | 2022 | Dublin, Ireland and Online | Association for Computational Linguistics | https://aclanthology.org/2022.deelio-1.3/ | Cho, Sukmin and Jeong, Soyeong and Yang, Wonsuk and Park, Jong | Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures | 22--32 | Dense retrieval aims at searching for the most relevant documents to the given query by encoding texts in the embedding space, requiring a large number of query-document pairs for training. Since manually constructing such training data is challenging, recent work has proposed to generate synthetic queries from documents and use them to train a dense retriever. However, compared to the manually composed queries, synthetic queries do not generally ask for implicit information, therefore leading to a degraded retrieval performance. In this work, we propose Query Generation with External Knowledge (QGEK), a novel method for generating queries with external information related to the corresponding document. Specifically, we convert a query into a triplet-based template form to accommodate external information and transmit it to a pre-trained language model (PLM). We validate QGEK on both in-domain and out-domain dense retrieval settings. The dense retriever trained with queries that require implicit information is found to achieve a good performance improvement. Also, such queries are similar to manually composed queries, as confirmed by both human evaluation and the distribution of unique {\&} non-unique words. | null | null | 10.18653/v1/2022.deelio-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,292
inproceedings | asprino-etal-2022-uncovering | Uncovering Values: Detecting Latent Moral Content from Natural Language with Explainable and Non-Trained Methods | Agirre, Eneko and Apidianaki, Marianna and Vuli{\'c}, Ivan | may | 2022 | Dublin, Ireland and Online | Association for Computational Linguistics | https://aclanthology.org/2022.deelio-1.4/ | Asprino, Luigi and Bulla, Luana and De Giorgis, Stefano and Gangemi, Aldo and Marinucci, Ludovica and Mongiovi, Misael | Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures | 33--41 | Moral values as commonsense norms shape our everyday individual and community behavior. The possibility of rapidly extracting moral attitudes from natural language is an appealing prospect that would enable a deeper understanding of social interaction dynamics and the individual cognitive and behavioral dimension. In this work we focus on detecting moral content from natural language and we test our methods on a corpus of tweets previously labeled as containing moral values or violations, according to Moral Foundations Theory. We develop and compare two different approaches: (i) a frame-based symbolic value detector based on knowledge graphs and (ii) a zero-shot machine learning model fine-tuned on a task of Natural Language Inference (NLI) and a task of emotion detection. The final outcome of our work consists of two approaches meant to perform without the need for a prior training process on a moral value detection task. | null | null | 10.18653/v1/2022.deelio-1.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,293
inproceedings | nararatwong-etal-2022-kiqa | {KIQA}: Knowledge-Infused Question Answering Model for Financial Table-Text Data | Agirre, Eneko and Apidianaki, Marianna and Vuli{\'c}, Ivan | may | 2022 | Dublin, Ireland and Online | Association for Computational Linguistics | https://aclanthology.org/2022.deelio-1.6/ | Nararatwong, Rungsiman and Kertkeidkachorn, Natthawut and Ichise, Ryutaro | Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures | 53--61 | While entity retrieval models continue to advance their capabilities, our understanding of their wide-ranging applications is limited, especially in domain-specific settings. We highlighted this issue by using recent general-domain entity-linking models, LUKE and GENRE, to inject external knowledge into a question-answering (QA) model for a financial QA task with a hybrid tabular-textual dataset. We found that both models improved the baseline model by 1.57{\%} overall and 8.86{\%} on textual data. Nonetheless, the challenge remains as they still struggle to handle tabular inputs. We subsequently conducted a comprehensive attention-weight analysis, revealing how LUKE utilizes external knowledge supplied by GENRE. The analysis also elaborates on how the injection of symbolic knowledge can be helpful and on what needs further improvement, paving the way for future research on this challenging QA task and advancing our understanding of how a language model incorporates external knowledge. | null | null | 10.18653/v1/2022.deelio-1.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,295
inproceedings | varun-etal-2022-trans | Trans-{KBLSTM}: An External Knowledge Enhanced Transformer {B}i{LSTM} Model for Tabular Reasoning | Agirre, Eneko and Apidianaki, Marianna and Vuli{\'c}, Ivan | may | 2022 | Dublin, Ireland and Online | Association for Computational Linguistics | https://aclanthology.org/2022.deelio-1.7/ | Varun, Yerram and Sharma, Aayush and Gupta, Vivek | Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures | 62--78 | Natural language inference on tabular data is a challenging task. Existing approaches lack the world and common sense knowledge required to perform at a human level. While massive amounts of KG data exist, approaches to integrate them with deep learning models to enhance tabular reasoning are uncommon. In this paper, we investigate a new approach using BiLSTMs to incorporate knowledge effectively into language models. Through extensive analysis, we show that our proposed architecture, Trans-KBLSTM, improves the benchmark performance on InfoTabS, a tabular NLI dataset. | null | null | 10.18653/v1/2022.deelio-1.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,296
inproceedings | malon-etal-2022-fast | Fast Few-shot Debugging for {NLU} Test Suites | Agirre, Eneko and Apidianaki, Marianna and Vuli{\'c}, Ivan | may | 2022 | Dublin, Ireland and Online | Association for Computational Linguistics | https://aclanthology.org/2022.deelio-1.8/ | Malon, Christopher and Li, Kai and Kruus, Erik | Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures | 79--86 | We study few-shot debugging of transformer-based natural language understanding models, using recently popularized test suites to not just diagnose but correct a problem. Given a few debugging examples of a certain phenomenon, and a held-out test set of the same phenomenon, we aim to maximize accuracy on the phenomenon at a minimal cost of accuracy on the original test set. We examine several methods that are faster than full epoch retraining. We introduce a new fast method, which samples a few in-danger examples from the original training set. Compared to fast methods using parameter distance constraints or Kullback-Leibler divergence, we achieve superior original accuracy for comparable debugging accuracy. | null | null | 10.18653/v1/2022.deelio-1.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,297
inproceedings | brayne-etal-2022-masked | On Masked Language Models for Contextual Link Prediction | Agirre, Eneko and Apidianaki, Marianna and Vuli{\'c}, Ivan | may | 2022 | Dublin, Ireland and Online | Association for Computational Linguistics | https://aclanthology.org/2022.deelio-1.9/ | Brayne, Angus and Wiatrak, Maciej and Corneil, Dane | Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures | 87--99 | In the real world, many relational facts require context; for instance, a politician holds a given elected position only for a particular timespan. This context (the timespan) is typically ignored in knowledge graph link prediction tasks, or is leveraged by models designed specifically to make use of it (i.e. n-ary link prediction models). Here, we show that the task of n-ary link prediction is easily performed using language models, applied with a basic method for constructing cloze-style query sentences. We introduce a pre-training methodology based around an auxiliary entity-linked corpus that outperforms other popular pre-trained models like BERT, even with a smaller model. This methodology also enables n-ary link prediction without access to any n-ary training set, which can be invaluable in circumstances where expensive and time-consuming curation of n-ary knowledge graphs is not feasible. We achieve state-of-the-art performance on the primary n-ary link prediction dataset WD50K and on WikiPeople facts that include literals - typically ignored by knowledge graph embedding methods. | null | null | 10.18653/v1/2022.deelio-1.9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,298 |
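As a rough illustration of the cloze-style querying described in the brayne-etal-2022-masked entry above, a pre-trained masked language model can be asked to fill the missing entity slot of a contextualized fact. The sketch below uses an off-the-shelf fill-mask pipeline and a made-up template; the paper's own entity-linked pre-training and query construction are not reproduced here.

from transformers import pipeline

# Off-the-shelf masked LM; the paper pre-trains its own model instead.
fill = pipeline("fill-mask", model="bert-base-uncased")

def predict_tail(subject, relation, context):
    # Hypothetical cloze template for an n-ary fact with extra context.
    query = f"{subject} {relation} [MASK] {context}."
    # Returns the top candidate fillers with their scores.
    return [(cand["token_str"], cand["score"]) for cand in fill(query)]

# Example: predict_tail("Barack Obama", "held the position of", "from 2009 to 2017")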
inproceedings | liu-etal-2022-makes | What Makes Good In-Context Examples for {GPT}-3? | Agirre, Eneko and Apidianaki, Marianna and Vuli{\'c}, Ivan | may | 2022 | Dublin, Ireland and Online | Association for Computational Linguistics | https://aclanthology.org/2022.deelio-1.10/ | Liu, Jiachang and Shen, Dinghan and Zhang, Yizhe and Dolan, Bill and Carin, Lawrence and Chen, Weizhu | Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures | 100--114 | GPT-3 has attracted lots of attention due to its superior performance across a wide range of NLP tasks, especially with its in-context learning abilities. Despite its success, we found that the empirical results of GPT-3 depend heavily on the choice of in-context examples. In this work, we investigate whether there are more effective strategies for judiciously selecting in-context examples (relative to random sampling) that better leverage GPT-3's in-context learning capabilities. Inspired by the recent success of leveraging a retrieval module to augment neural networks, we propose to retrieve examples that are semantically similar to a test query sample to formulate its corresponding prompt. Intuitively, the examples selected with such a strategy may serve as more informative inputs to unleash GPT-3's power of text generation. We evaluate the proposed approach on several natural language understanding and generation benchmarks, where the retrieval-based prompt selection approach consistently outperforms the random selection baseline. Moreover, it is observed that the sentence encoders fine-tuned on task-related datasets yield even more helpful retrieval results. Notably, significant gains are observed on tasks such as table-to-text generation (44.3{\%} on the ToTTo dataset) and open-domain question answering (45.5{\%} on the NQ dataset). | null | null | 10.18653/v1/2022.deelio-1.10 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,299
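The retrieval-based prompt construction in the entry above lends itself to a compact sketch: embed the training pool, fetch the nearest neighbours of the test query, and lay them out as demonstrations. This assumes the sentence-transformers package; the encoder name and prompt template are stand-ins, not the paper's exact setup.

from sentence_transformers import SentenceTransformer, util

# Stand-in encoder; the paper also studies task-specific fine-tuned encoders.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def retrieve_in_context_examples(query, train_inputs, train_outputs, k=4):
    q_emb = encoder.encode(query, convert_to_tensor=True)
    t_emb = encoder.encode(train_inputs, convert_to_tensor=True)
    # Nearest neighbours of the query in embedding space.
    hits = util.semantic_search(q_emb, t_emb, top_k=k)[0]
    demos = [(train_inputs[h["corpus_id"]], train_outputs[h["corpus_id"]]) for h in hits]
    # Assemble a simple few-shot prompt; the exact formatting is an assumption.
    prompt = "".join(f"Input: {x}\nOutput: {y}\n\n" for x, y in demos)
    return prompt + f"Input: {query}\nOutput:"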
inproceedings | burkhardt-etal-2022-syntact | {S}ynt{A}ct: A Synthesized Database of Basic Emotions | S{\"a}lev{\"a}, Jonne and Lignos, Constantine | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.dclrl-1.1/ | Burkhardt, Felix and Eyben, Florian and Schuller, Bj{\"o}rn | Proceedings of the Workshop on Dataset Creation for Lower-Resourced Languages within the 13th Language Resources and Evaluation Conference | 1--9 | Speech emotion recognition has been a focus of research for several decades and has many applications. One problem is sparse data for supervised learning. One way to tackle this problem is the synthesis of data with emotion-simulating speech synthesis approaches. We present a synthesized database of five basic emotions and neutral expression based on rule-based manipulation for a diphone synthesizer, which we release to the public. The database has been validated in several machine learning experiments as a training set to detect emotional expression from natural speech data. The scripts to generate such a database have been made open source and could be used to aid speech emotion recognition for a low-resourced language, as MBROLA supports 35 languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,301
inproceedings | baskal-etal-2022-data | Data Sets of Eating Disorders by Categorizing {R}eddit and Tumblr Posts: A Multilingual Comparative Study Based on Empirical Findings of Texts and Images | S{\"a}lev{\"a}, Jonne and Lignos, Constantine | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.dclrl-1.2/ | Baskal, Christina and Beutel, Amelie Elisabeth and Keberlein, Jessika and Ollmann, Malte and {\"U}resin, Esra and Vischinski, Jana and Weihe, Janina and Achilles, Linda and Womser-Hacker, Christa | Proceedings of the Workshop on Dataset Creation for Lower-Resourced Languages within the 13th Language Resources and Evaluation Conference | 10--18 | Research has shown the potential negative impact of social media usage on body image. Various platforms present numerous media formats of possibly harmful content related to eating disorders. Different cultural backgrounds, represented, for example, by different languages, are participating in the discussion online. Therefore, this research aims to investigate eating disorder specific content in a multilingual and multimedia environment. We want to contribute to establishing a common ground for further automated approaches. Our first objective is to combine the two media formats, text and image, by classifying the posts from one social media platform (Reddit) and continuing the categorization in the second (Tumblr). Our second objective is the analysis of multilingualism. We worked qualitatively in an iterative valid categorization process, followed by a comparison of the portrayal of eating disorders on both platforms. Our final data sets contained 960 Reddit and 2 081 Tumblr posts. Our analysis revealed that Reddit users predominantly exchange content regarding disease and eating behaviour, while on Tumblr, the focus is on the portrayal of oneself and one's body. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,302
inproceedings | liu-kobayashi-2022-construction | Construction and Validation of a {J}apanese Honorific Corpus Based on Systemic Functional Linguistics | S{\"a}lev{\"a}, Jonne and Lignos, Constantine | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.dclrl-1.3/ | Liu, Muxuan and Kobayashi, Ichiro | Proceedings of the Workshop on Dataset Creation for Lower-Resourced Languages within the 13th Language Resources and Evaluation Conference | 19--26 | In Japanese, there are different expressions used in speech depending on the speaker's and listener's social status, called honorifics. Unlike other languages, Japanese has many types of honorific expressions, and it is vital for machine translation and dialogue systems to handle the differences in meaning correctly. However, there is still no corpus that deals with honorific expressions based on social status. In this study, we developed an honorific corpus (KeiCO corpus) that includes social status information based on Systemic Functional Linguistics, which expresses language use in situations from the social group's values and common understanding. As a general-purpose language resource, it filled in the Japanese honorific blanks. We expect the KeiCO corpus could be helpful for various tasks, such as improving the accuracy of machine translation, automatic evaluation, correction of Japanese composition and style transformation. We also verified the accuracy of our corpus by a BERT-based classification task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,303
inproceedings | fridriksdottir-etal-2022-building | Building an {I}celandic Entity Linking Corpus | S{\"a}lev{\"a}, Jonne and Lignos, Constantine | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.dclrl-1.4/ | Fri{\dh}riksd{\'o}ttir, Steinunn Rut and Eggertsson, Valdimar {\'A}g{\'u}st and J{\'o}hannesson, Benedikt Geir and Dan{\'i}elsson, Hjalti and Loftsson, Hrafn and Einarsson, Hafsteinn | Proceedings of the Workshop on Dataset Creation for Lower-Resourced Languages within the 13th Language Resources and Evaluation Conference | 27--35 | In this paper, we present the first Entity Linking corpus for Icelandic. We describe our approach of using a multilingual entity linking model (mGENRE) in combination with Wikipedia API Search (WAPIS) to label our data and compare it to an approach using WAPIS only. We find that our combined method reaches 53.9{\%} coverage on our corpus, compared to 30.9{\%} using only WAPIS. We analyze our results and explain the value of using a multilingual system when working with Icelandic. Additionally, we analyze the data that remain unlabeled, identify patterns and discuss why they may be more difficult to annotate. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,304
inproceedings | korner-etal-2022-crawling | Crawling Under-Resourced Languages - a Portal for Community-Contributed Corpus Collection | S{\"a}lev{\"a}, Jonne and Lignos, Constantine | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.dclrl-1.5/ | K{\"o}rner, Erik and Helfer, Felix and Schr{\"o}der, Christopher and Eckart, Thomas and Goldhahn, Dirk | Proceedings of the Workshop on Dataset Creation for Lower-Resourced Languages within the 13th Language Resources and Evaluation Conference | 36--43 | The {\textquotedblleft}Web as corpus{\textquotedblright} paradigm opens opportunities for enhancing the current state of language resources for endangered and under-resourced languages. However, standard crawling strategies tend to overlook available resources of these languages in favor of already well-documented ones. Since 2016, the {\textquotedblleft}Crawling Under-Resourced Languages{\textquotedblright} portal (CURL) has been contributing to bridging the gap between established crawling techniques and knowledge about relevant Web resources that is only available in the specific language communities. The aim of the CURL portal is to enlarge the amount of available text material for under-resourced languages thereby developing available datasets further and to use them as a basis for statistical evaluation and enrichment of already available resources. The application is currently provided and further developed as part of the thematic cluster {\textquotedblleft}Non-Latin scripts and Under-resourced languages{\textquotedblright} in the German national research consortium Text+. In this context, its focus lies on the extraction of text material and statistical information for the data domain {\textquotedblleft}Lexical resources{\textquotedblright}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,305
inproceedings | amanaki-etal-2022-fine | Fine-grained Entailment: Resources for {G}reek {NLI} and Precise Entailment | S{\"a}lev{\"a}, Jonne and Lignos, Constantine | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.dclrl-1.6/ | Amanaki, Eirini and Bernardy, Jean-Philippe and Chatzikyriakidis, Stergios and Cooper, Robin and Dobnik, Simon and Karimi, Aram and Ek, Adam and Giannikouri, Eirini Chrysovalantou and Katsouli, Vasiliki and Kolokousis, Ilias and Mamatzaki, Eirini Chrysovalantou and Papadakis, Dimitrios and Petrova, Olga and Psaltaki, Erofili and Soupiona, Charikleia and Skoulataki, Effrosyni and Stefanidou, Christina | Proceedings of the Workshop on Dataset Creation for Lower-Resourced Languages within the 13th Language Resources and Evaluation Conference | 44--52 | In this paper, we present a number of fine-grained resources for Natural Language Inference (NLI). In particular, we present a number of resources and validation methods for Greek NLI and a resource for precise NLI. First, we extend the Greek version of the FraCaS test suite to include examples where the inference is directly linked to the syntactic/morphological properties of Greek. The new resource contains an additional 428 examples, making it in total a dataset of 774 examples. Expert annotators have been used in order to create the additional resource, while extensive validation of the original Greek version of the FraCaS by non-expert and expert subjects is performed. Next, we continue the work initiated by (CITATION), according to which a subset of the RTE problems has been labeled for missing hypotheses, and we present a dataset an order of magnitude larger, annotating the whole SuperGLUE/RTE dataset with missing hypotheses. Lastly, we provide a de-dropped version of the Greek XNLI dataset, where the pronouns that are missing due to the pro-drop nature of the language are inserted. We then run some models to see the effect of that insertion and report the results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,306
inproceedings | lau-etal-2022-words | Words.hk: A Comprehensive {C}antonese Dictionary Dataset with Definitions, Translations and Transliterated Examples | S{\"a}lev{\"a}, Jonne and Lignos, Constantine | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.dclrl-1.7/ | Lau, Chaak-ming and Chan, Grace Wing-yan and Tse, Raymond Ka-wai and Chan, Lilian Suet-ying | Proceedings of the Workshop on Dataset Creation for Lower-Resourced Languages within the 13th Language Resources and Evaluation Conference | 53--62 | This paper discusses the compilation of the words.hk Cantonese dictionary dataset, which was built through manual annotation over a period of 7 years. Cantonese is a low-resource language with limited tagged or manually checked resources, especially at the sentential level, and this dataset is an attempt to fill the gap. The dataset contains over 53,000 entries of Cantonese words, which come with basic lexical information (Jyutping phonemic transcription, part-of-speech tags, usage tags), manually crafted definitions in Written Cantonese, English translations, and Cantonese examples with English translation and Jyutping transliterations. Special attention has been paid to handling character variants, so that unintended {\textquotedblleft}character errors{\textquotedblright} (equivalent to typos in phonemic writing systems) are filtered out, and intra-speaker variants are handled. Fine details on word segmentation, character variant handling, and definition crafting will be discussed. The dataset can be used in a wide range of natural language processing tasks, such as word segmentation, construction of semantic web and training of models for Cantonese transliteration. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,307
inproceedings | kabongo-kabenamualu-etal-2022-listra | {L}i{ST}ra Automatic Speech Translation: {E}nglish to {L}ingala Case Study | S{\"a}lev{\"a}, Jonne and Lignos, Constantine | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.dclrl-1.8/ | Kabongo Kabenamualu, Salomon and Marivate, Vukosi and Kamper, Herman | Proceedings of the Workshop on Dataset Creation for Lower-Resourced Languages within the 13th Language Resources and Evaluation Conference | 63--67 | In recent years there has been great interest in addressing the data scarcity of African languages and providing baseline models for different Natural Language Processing tasks (Orife et al., 2020). Several initiatives (Nekoto et al., 2020) on the continent use the Bible as a data source to provide proof of concept for some NLP tasks. In this work, we present the Lingala Speech Translation (LiSTra) dataset, release a full pipeline for the construction of such a dataset in other languages, and report baselines using both the traditional cascade approach (Automatic Speech Recognition - Machine Translation), and a revolutionary transformer-based End-2-End architecture (Liu et al., 2020) with a custom interactive attention that allows information sharing between the recognition decoder and the translation decoder. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,308
inproceedings | guellil-etal-2022-ara | Ara-Women-Hate: An Annotated Corpus Dedicated to Hate Speech Detection against Women in the {A}rabic Community | S{\"a}lev{\"a}, Jonne and Lignos, Constantine | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.dclrl-1.9/ | Guellil, Imane and Adeel, Ahsan and Azouaou, Faical and Boubred, Mohamed and Houichi, Yousra and Moumna, Akram Abdelhaq | Proceedings of the Workshop on Dataset Creation for Lower-Resourced Languages within the 13th Language Resources and Evaluation Conference | 68--75 | In this paper, an approach for hate speech detection against women in the Arabic community on social media (e.g. Youtube) is proposed. In the literature, similar works have been presented for other languages such as English. However, to the best of our knowledge, not much work has been conducted in the Arabic language. A new hate speech corpus (Arabic{\_}fr{\_}en) is developed using three different annotators. For corpus validation, three different machine learning algorithms are used, including a deep Convolutional Neural Network (CNN), a long short-term memory (LSTM) network and a Bi-directional LSTM (Bi-LSTM) network. Simulation results demonstrate the best performance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,309
inproceedings | dutta-2022-word | Word-level Language Identification Using Subword Embeddings for Code-mixed {B}angla-{E}nglish Social Media Data | S{\"a}lev{\"a}, Jonne and Lignos, Constantine | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.dclrl-1.10/ | Dutta, Aparna | Proceedings of the Workshop on Dataset Creation for Lower-Resourced Languages within the 13th Language Resources and Evaluation Conference | 76--82 | This paper reports work on building a word-level language identification (LID) model for code-mixed Bangla-English social media data using subword embeddings, with an ultimate goal of using this LID module as the first step in a modular part-of-speech (POS) tagger in future research. This work reports preliminary results of a word-level LID model that uses a single bidirectional LSTM with subword embeddings trained on very limited code-mixed resources. At the time of writing, there are no previous reported results available in which subword embeddings are used for language identification with the Bangla-English code-mixed language pair. As part of the current work, a labeled resource for word-level language identification is also presented, by correcting 85.7{\%} of labels from the 2016 ICON Whatsapp Bangla-English dataset. The trained model was evaluated on a test set of 4,015 tokens compiled from the 2015 and 2016 ICON datasets, and achieved a test accuracy of 93.61{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,310
inproceedings | zhang-etal-2022-meganno | {MEGA}nno: Exploratory Labeling for {NLP} in Computational Notebooks | Dragut, Eduard and Li, Yunyao and Popa, Lucian and Vucetic, Slobodan and Srivastava, Shashank | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.dash-1.1/ | Zhang, Dan and Kim, Hannah and Li Chen, Rafael and Kandogan, Eser and Hruschka, Estevam | Proceedings of the Fourth Workshop on Data Science with Human-in-the-Loop (Language Advances) | 1--7 | We present MEGAnno, a novel exploratory annotation framework designed for NLP researchers and practitioners. Unlike existing labeling tools that focus on data labeling only, our framework aims to support a broader, iterative ML workflow including data exploration and model development. With MEGAnno's API, users can programmatically explore the data through sophisticated search and automated suggestion functions and incrementally update task schema as their projects evolve. Combined with our widget, the users can interactively sort, filter, and assign labels to multiple items simultaneously in the same notebook where the rest of the NLP project resides. We demonstrate MEGAnno's flexible, exploratory, efficient, and seamless labeling experience through a sentiment analysis use case. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,312
inproceedings | lu-etal-2022-cross | Cross-lingual Short-text Entity Linking: Generating Features for Neuro-Symbolic Methods | Dragut, Eduard and Li, Yunyao and Popa, Lucian and Vucetic, Slobodan and Srivastava, Shashank | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.dash-1.2/ | Lu, Qiuhao and Gurajada, Sairam and Sen, Prithviraj and Popa, Lucian and Dou, Dejing and Nguyen, Thien | Proceedings of the Fourth Workshop on Data Science with Human-in-the-Loop (Language Advances) | 8--14 | Entity linking (EL) on short text is crucial for a variety of industrial applications. Compared with general long-text EL, short-text EL poses particular challenges as the limited context restricts the clues one can leverage to disambiguate textual mentions. On the other hand, existing studies mostly focus on black-box neural methods and thus lack interpretability, which is critical to industrial applications in certain areas. In this study, we extend upon LNN-EL, a monolingual short-text EL method based on interpretable first-order logic, by incorporating three sets of multilingual features to enable disambiguating mentions written in languages other than English. More specifically, we use multilingual autoencoding language models (i.e., mBERT) to capture the similarities between the mention with its context and the candidate entity; we use multilingual sequence-to-sequence language models (i.e., mBART and mT5) to represent the likelihood of the text given the candidate entity. We also propose a word-level context feature to capture the semantic evidence of the co-occurring mentions. We evaluate the proposed xLNN-EL approach on the QALD-9-multilingual dataset and demonstrate the cross-linguality of the model and the effectiveness of the features. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,313 |
inproceedings | wein-schneider-2022-crowdsourcing | Crowdsourcing Preposition Sense Disambiguation with High Precision via a Priming Task | Dragut, Eduard and Li, Yunyao and Popa, Lucian and Vucetic, Slobodan and Srivastava, Shashank | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.dash-1.3/ | Wein, Shira and Schneider, Nathan | Proceedings of the Fourth Workshop on Data Science with Human-in-the-Loop (Language Advances) | 15--22 | The careful design of a crowdsourcing protocol is critical to eliciting highly accurate annotations from untrained workers. In this work, we explore the development of crowdsourcing protocols for a challenging word sense disambiguation task. We find that (a) selecting a similar example usage can serve as a proxy for selecting an explicit definition of the sense, and (b) priming workers with an additional, related task within the HIT improves performance on the main proxy task. Ultimately, we demonstrate the usefulness of our crowdsourcing elicitation technique as an effective alternative to previously investigated training strategies, which can be used if agreement on a challenging task is low. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,314 |
inproceedings | shukla-etal-2022-dosa | {D}o{SA} : A System to Accelerate Annotations on Business Documents with Human-in-the-Loop | Dragut, Eduard and Li, Yunyao and Popa, Lucian and Vucetic, Slobodan and Srivastava, Shashank | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.dash-1.4/ | Shukla, Neelesh and Raja, Msp and Katikeri, Raghu and Vaid, Amit | Proceedings of the Fourth Workshop on Data Science with Human-in-the-Loop (Language Advances) | 23--27 | Business documents come in a variety of structures, formats and information needs, which makes information extraction a challenging task. Due to these variations, having a document-generic model which can work well across all types of documents for all the use cases seems far-fetched. For document-specific models, we would need customized document-specific labels. We introduce DoSA (Document Specific Automated Annotations), which helps annotators in generating initial annotations automatically using our novel bootstrap approach by leveraging document-generic datasets and models. These initial annotations can further be reviewed by a human for correctness. An initial document-specific model can be trained and its inference can be used as feedback for generating more automated annotations. These automated annotations can be reviewed by a human-in-the-loop for correctness and a new improved model can be trained using the current model as a pre-trained model before going for the next iteration. In this paper, our scope is limited to form-like documents due to limited availability of generic annotated datasets, but this idea can be extended to a variety of other documents as more datasets are built. An open-source, ready-to-use implementation is made available on GitHub. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,315
inproceedings | huang-etal-2022-execution | Execution-based Evaluation for Data Science Code Generation Models | Dragut, Eduard and Li, Yunyao and Popa, Lucian and Vucetic, Slobodan and Srivastava, Shashank | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.dash-1.5/ | Huang, Junjie and Wang, Chenglong and Zhang, Jipeng and Yan, Cong and Cui, Haotian and Inala, Jeevana Priya and Clement, Colin and Duan, Nan | Proceedings of the Fourth Workshop on Data Science with Human-in-the-Loop (Language Advances) | 28--36 | Code generation models can benefit data scientists' productivity by automatically generating code from context and text descriptions. An important measure of the modeling progress is whether a model can generate code that can correctly execute to solve the task. However, due to the lack of an evaluation dataset that directly supports execution-based model evaluation, existing work relies on code surface form similarity metrics (e.g., BLEU, CodeBLEU) for model selection, which can be inaccurate. To remedy this, we introduce ExeDS, an evaluation dataset for execution evaluation for data science code generation tasks. ExeDS contains a set of 534 problems from Jupyter Notebooks, each consisting of code context, task description, reference program, and the desired execution output. With ExeDS, we evaluate the execution performance of five state-of-the-art code generation models that have achieved high surface-form evaluation scores. Our experiments show that models with high surface-form scores do not necessarily perform well on execution metrics, and execution-based metrics can better capture model code generation errors. All the code and data will be released upon acceptance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,316 |
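Execution-based evaluation of the kind the huang-etal-2022-execution entry argues for can be approximated in a few lines: run the generated program and compare its stdout against the reference output. The sketch below omits sandboxing and the ExeDS-specific notebook context; the function name and the plain-Python execution model are assumptions for illustration only.

import subprocess
import sys

def execution_match(generated_code: str, expected_output: str, timeout: int = 10) -> bool:
    """Run a generated snippet and check whether its output matches the reference."""
    try:
        result = subprocess.run(
            [sys.executable, "-c", generated_code],
            capture_output=True, text=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        # Non-terminating programs count as failures.
        return False
    return result.returncode == 0 and result.stdout.strip() == expected_output.strip()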
inproceedings | amspoker-petruck-2022-gamified | A Gamified Approach to Frame Semantic Role Labeling | Dragut, Eduard and Li, Yunyao and Popa, Lucian and Vucetic, Slobodan and Srivastava, Shashank | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.dash-1.6/ | Amspoker, Emily and Petruck, Miriam R L | Proceedings of the Fourth Workshop on Data Science with Human-in-the-Loop (Language Advances) | 37--42 | Much research has investigated the possibility of creating games with a purpose (GWAPs), i.e., online games whose purpose is gathering information to address the insufficient amount of data for training and testing of large language models (Von Ahn and Dabbish, 2008). Based on such work, this paper reports on the development of a game for frame semantic role labeling, where players have fun while using semantic frames as prompts for short story writing. This game will generate more annotations for FrameNet and original content for annotation, supporting FrameNet's goal of characterizing the English language in terms of Frame Semantics. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,317
inproceedings | hanafi-etal-2022-comparative | A Comparative Analysis between Human-in-the-loop Systems and Large Language Models for Pattern Extraction Tasks | Dragut, Eduard and Li, Yunyao and Popa, Lucian and Vucetic, Slobodan and Srivastava, Shashank | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.dash-1.7/ | Hanafi, Maeda and Katsis, Yannis and Jindal, Ishan and Popa, Lucian | Proceedings of the Fourth Workshop on Data Science with Human-in-the-Loop (Language Advances) | 43--50 | Building a natural language processing (NLP) model can be challenging for end-users such as analysts, journalists, investigators, etc., especially given that they will likely apply existing tools out of the box. In this article, we take a closer look at how two complementary approaches, a state-of-the-art human-in-the-loop (HITL) tool and a generative language model (GPT-3), perform out of the box, that is, without fine-tuning. Concretely, we compare these approaches when end-users with little technical background are given pattern extraction tasks from text. We discover that the HITL tool performs with higher precision, while GPT-3 requires some level of engineering in its input prompts as well as post-processing on its output before it can achieve comparable results. Future work in this space should look further into the advantages and disadvantages of the two approaches, HITL and generative language model, as well as into ways to optimally combine them. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,318
inproceedings | edwards-etal-2022-guiding | Guiding Generative Language Models for Data Augmentation in Few-Shot Text Classification | Dragut, Eduard and Li, Yunyao and Popa, Lucian and Vucetic, Slobodan and Srivastava, Shashank | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.dash-1.8/ | Edwards, Aleksandra and Ushio, Asahi and Camacho-collados, Jose and Ribaupierre, Helene and Preece, Alun | Proceedings of the Fourth Workshop on Data Science with Human-in-the-Loop (Language Advances) | 51--63 | Data augmentation techniques are widely used for enhancing the performance of machine learning models by tackling class imbalance issues and data sparsity. State-of-the-art generative language models have been shown to provide significant gains across different NLP tasks. However, their applicability to data augmentation for text classification tasks in few-shot settings has not been fully explored, especially for specialised domains. In this paper, we leverage GPT-2 (Radford et al., 2019) for generating artificial training instances in order to improve classification performance. Our aim is to analyse the impact the selection process of seed training examples has on the quality of GPT-generated samples and consequently the classifier performance. We propose a human-in-the-loop approach for selecting seed samples. Further, we compare the approach to other seed selection strategies that exploit the characteristics of specialised domains such as human-created class hierarchical structure and the presence of noun phrases. Our results show that fine-tuning GPT-2 on a handful of labeled instances leads to consistent classification improvements and outperforms competitive baselines. The seed selection strategies developed in this work lead to significant improvements over random seed selection for specialised domains. We show that guiding text generation through domain expert selection can lead to further improvements, which opens up interesting research avenues for combining generative models and active learning. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,319
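A minimal sketch of seeding GPT-2 with selected examples to generate artificial training instances, in the spirit of the edwards-etal-2022-guiding entry above. The prompt format, generation parameters, and filtering are assumptions; the paper's human-in-the-loop seed selection is not shown here.

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def augment(seed_examples, label, n=5):
    # Hypothetical "label: text" prompt that primes the model with the seeds.
    prompt = "\n".join(f"{label}: {s}" for s in seed_examples) + f"\n{label}:"
    outs = generator(prompt, max_new_tokens=40, num_return_sequences=n,
                     do_sample=True, pad_token_id=50256)
    # Keep only the newly generated continuation, cut at the first newline.
    return [o["generated_text"][len(prompt):].split("\n")[0].strip() for o in outs]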
inproceedings | kumar-etal-2022-partially | Partially Humanizing Weak Supervision: Towards a Better Low Resource Pipeline for Spoken Language Understanding | Dragut, Eduard and Li, Yunyao and Popa, Lucian and Vucetic, Slobodan and Srivastava, Shashank | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.dash-1.9/ | Kumar, Ayush and Tripathi, Rishabh and Vepa, Jithendra | Proceedings of the Fourth Workshop on Data Science with Human-in-the-Loop (Language Advances) | 64--73 | Weak Supervised Learning (WSL) is a popular technique to develop machine learning models in the absence of labeled training data. WSL involves training over noisy labels which are traditionally obtained from hand-engineered semantic rules and task-specific pre-trained models. Such rules offer limited coverage and generalization over tasks. On the other hand, pre-trained models are available only for limited tasks. Thus, obtaining weak labels is a bottleneck in weak supervised learning. In this work, we propose to utilize the prompting paradigm to generate weak labels for the underlying tasks. We show that task-agnostic prompts are generalizable and can be used to obtain noisy labels for different Spoken Language Understanding (SLU) tasks such as sentiment classification, disfluency detection and emotion classification. These prompts can additionally be updated with a human in the loop to add task-specific contexts, thus providing flexibility to design task-specific prompts. Our proposed WSL pipeline outperforms other competitive low-resource benchmarks on zero- and few-shot learning by more than 4{\%} on Macro-F1 and a conventional rule-based WSL baseline by more than 5{\%} across all the benchmark datasets. We demonstrate that prompt-based methods save nearly 75{\%} of the time in a weak-supervised framework and generate more reliable labels for the above SLU tasks and thus can be used as a universal strategy to obtain weak labels. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,320
inproceedings | kamath-etal-2022-improving | Improving Human Annotation Effectiveness for Fact Collection by Identifying the Most Relevant Answers | Dragut, Eduard and Li, Yunyao and Popa, Lucian and Vucetic, Slobodan and Srivastava, Shashank | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.dash-1.10/ | Kamath, Pranav and Sun, Yiwen and Semere, Thomas and Green, Adam and Manley, Scott and Qi, Xiaoguang and Qian, Kun and Li, Yunyao | Proceedings of the Fourth Workshop on Data Science with Human-in-the-Loop (Language Advances) | 74--80 | Identifying and integrating missing facts is a crucial task for knowledge graph completion to ensure robustness towards downstream applications such as question answering. Adding new facts to a knowledge graph in a real-world system often involves human verification effort, where candidate facts are verified for accuracy by human annotators. This process is labor-intensive, time-consuming, and inefficient since only a small number of missing facts can be identified. This paper proposes a simple but effective human-in-the-loop framework for fact collection that searches for a diverse set of highly relevant candidate facts for human annotation. Empirical results presented in this work demonstrate that the proposed solution leads to improvements in both i) the quality of the candidate facts and ii) the ability to discover more facts to grow the knowledge graph without requiring additional human effort. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,321
inproceedings | sean-etal-2022-ava | {AVA}-{TMP}: A Human-in-the-Loop Multi-layer Dynamic Topic Modeling Pipeline | Dragut, Eduard and Li, Yunyao and Popa, Lucian and Vucetic, Slobodan and Srivastava, Shashank | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.dash-1.11/ | Sean, Viseth and Danaee, Padideh and Yang, Yang and Kardes, Hakan | Proceedings of the Fourth Workshop on Data Science with Human-in-the-Loop (Language Advances) | 81--87 | A phone call is still one of the primary preferred channels for seniors to express their needs, ask questions, and report potential problems to their health insurance plans. Alignment Health is a next-generation, consumer-centric organization that provides a variety of Medicare Advantage Products for seniors. We combine our proprietary technology platform, AVA, and our high-touch clinical model to provide seniors with care as it should be: high quality, low cost, and accompanied by a vastly improved consumer experience. Our members have the ability to connect with our member services and concierge teams 24/7 for a wide variety of ever-changing reasons through different channels, such as phone, email, and messages. We strive to provide an excellent member experience and ensure our members are getting the help and information they need at every touch {---}ideally, even before they reach us. This requires ongoing monitoring of reasons for contacting us, ensuring agents are equipped with the right tools and information to serve members, and coming up with proactive strategies to eliminate the need for the call when possible. We developed an NLP-based dynamic call reason tagging and reporting pipeline with an optimized human-in-the-loop approach to enable accurate call reason reporting and monitoring, with the ability to see high-level trends as well as drill down into more granular sub-reasons. Our system produces 96.4{\%} precision and 30{\%}-50{\%} better recall in tagging calls with the proper reasons. We have also consistently achieved a 60+ Net Promoter Score (NPS), which illustrates high consumer satisfaction. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 28,322