Column schema (value types and ranges as reported by the dataset viewer; nullable columns marked):

entry_type: string (4 classes)
citation_key: string (10–110 chars)
title: string (6–276 chars, nullable)
editor: string (723 classes)
month: string (69 classes)
year: date (1963 to 2022)
address: string (202 classes)
publisher: string (41 classes)
url: string (34–62 chars)
author: string (6–2.07k chars, nullable)
booktitle: string (861 classes)
pages: string (1–12 chars, nullable)
abstract: string (302–2.4k chars)
journal: string (5 classes)
volume: string (24 classes)
doi: string (20–39 chars, nullable)
n: string (3 classes)
wer: string (1 class)
language: string (3 classes)
isbn: string (34 classes)
number: string (8 classes)
f1: string (4 classes)
r: string (2 classes)
mci: string (1 class)
p: string (2 classes)
sd: string (1 class)
female: string (0 classes)
m: string (0 classes)
food: string (1 class)
f: string (1 class)
note: string (20 classes)
uas, recall, a, b, c, k: always null
__index_level_0__: int64 (22k–106k)

entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | voskarides-etal-2022-news | News Article Retrieval in Context for Event-centric Narrative Creation | Huang, Ting-Hao 'Kenneth' and Raheja, Vipul and Kang, Dongyeop and Chung, John Joon Young and Gissin, Daniel and Lee, Mina and Gero, Katy Ilonka | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.in2writing-1.10/ | Voskarides, Nikos and Meij, Edgar and Sauer, Sabrina and de Rijke, Maarten | Proceedings of the First Workshop on Intelligent and Interactive Writing Assistants (In2Writing 2022) | 72--73 | Writers such as journalists often use automatic tools to find relevant content to include in their narratives. In this paper, we focus on supporting writers in the news domain to develop event-centric narratives. Given an incomplete narrative that specifies a main event and a context, we aim to retrieve news articles that discuss relevant events that would enable the continuation of the narrative. We formally define this task and propose a retrieval dataset construction procedure that relies on existing news articles to simulate incomplete narratives and relevant articles. Experiments on two datasets derived from this procedure show that state-of-the-art lexical and semantic rankers are not sufficient for this task. We show that combining those with a ranker that ranks articles by reverse chronological order outperforms those rankers alone. We also perform an in-depth quantitative and qualitative analysis of the results that sheds light on the characteristics of this task. | null | null | 10.18653/v1/2022.in2writing-1.10 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,597 |
inproceedings | kreminski-martens-2022-unmet | Unmet Creativity Support Needs in Computationally Supported Creative Writing | Huang, Ting-Hao 'Kenneth' and Raheja, Vipul and Kang, Dongyeop and Chung, John Joon Young and Gissin, Daniel and Lee, Mina and Gero, Katy Ilonka | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.in2writing-1.11/ | Kreminski, Max and Martens, Chris | Proceedings of the First Workshop on Intelligent and Interactive Writing Assistants (In2Writing 2022) | 74--82 | Large language models (LLMs) enabled by the datasets and computing power of the last decade have recently gained popularity for their capacity to generate plausible natural language text from human-provided prompts. This ability makes them appealing to fiction writers as prospective co-creative agents, addressing the common challenge of writer's block, or getting unstuck. However, creative writers face additional challenges, including maintaining narrative consistency, developing plot structure, architecting reader experience, and refining their expressive intent, which are not well-addressed by current LLM-backed tools. In this paper, we define these needs by grounding them in cognitive and theoretical literature, then survey previous computational narrative research that holds promise for supporting each of them in a co-creative setting. | null | null | 10.18653/v1/2022.in2writing-1.11 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,598 |
inproceedings | gero-etal-2022-sparks | Sparks: Inspiration for Science Writing using Language Models | Huang, Ting-Hao 'Kenneth' and Raheja, Vipul and Kang, Dongyeop and Chung, John Joon Young and Gissin, Daniel and Lee, Mina and Gero, Katy Ilonka | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.in2writing-1.12/ | Gero, Katy and Liu, Vivian and Chilton, Lydia | Proceedings of the First Workshop on Intelligent and Interactive Writing Assistants (In2Writing 2022) | 83--84 | Large-scale language models are rapidly improving, performing well on a variety of tasks with little to no customization. In this work we investigate how language models can support science writing, a challenging writing task that is both open-ended and highly constrained. We present a system for generating {\textquotedblleft}sparks{\textquotedblright}, sentences related to a scientific concept intended to inspire writers. We run a user study with 13 STEM graduate students and find three main use cases of sparks{---}inspiration, translation, and perspective{---}each of which correlates with a unique interaction pattern. We also find that while participants were more likely to select higher quality sparks, the overall quality of sparks seen by a given participant did not correlate with their satisfaction with the tool. | null | null | 10.18653/v1/2022.in2writing-1.12 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,599 |
inproceedings | liu-etal-2022-chipsong | {C}hip{S}ong: A Controllable Lyric Generation System for {C}hinese Popular Song | Huang, Ting-Hao 'Kenneth' and Raheja, Vipul and Kang, Dongyeop and Chung, John Joon Young and Gissin, Daniel and Lee, Mina and Gero, Katy Ilonka | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.in2writing-1.13/ | Liu, Nayu and Han, Wenjing and Liu, Guangcan and Peng, Da and Zhang, Ran and Wang, Xiaorui and Ruan, Huabin | Proceedings of the First Workshop on Intelligent and Interactive Writing Assistants (In2Writing 2022) | 85--95 | In this work, we take a further step towards satisfying the practical demands of musical short-video creators in Chinese lyric generation, with respect to the challenges of songs' format constraints, creating specific lyrics from open-ended inspiration inputs, and language rhyme grace. One representative detail in these demands is controlling lyric format at the word level: for Chinese songs, creators even expect fixed-length words at certain positions in a lyric to match a special melody, while previous methods lack such ability. Although the lyric generation community has recently made gratifying progress, most methods are not comprehensive enough to simultaneously meet these demands. As a result, we propose ChipSong, an assisted lyric generation system built on a Transformer-based autoregressive language model architecture, which generates controlled lyric paragraphs fit for musical short-video display, by designing 1) a novel Begin-Internal-End (BIE) word-granularity embedding sequence with its guided attention mechanism for word-level length format control, and an explicit symbol set for sentence-level length format control; 2) an open-ended trigger word mechanism to guide generation of specific lyric contents; 3) a paradigm of reverse order training and shielding decoding for rhyme control. Extensive experiments show that ChipSong generates fluent lyrics while assuring high consistency with pre-determined control conditions. | null | null | 10.18653/v1/2022.in2writing-1.13 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,600 |
inproceedings | du-etal-2022-read | Read, Revise, Repeat: A System Demonstration for Human-in-the-loop Iterative Text Revision | Huang, Ting-Hao 'Kenneth' and Raheja, Vipul and Kang, Dongyeop and Chung, John Joon Young and Gissin, Daniel and Lee, Mina and Gero, Katy Ilonka | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.in2writing-1.14/ | Du, Wanyu and Kim, Zae Myung and Raheja, Vipul and Kumar, Dhruv and Kang, Dongyeop | Proceedings of the First Workshop on Intelligent and Interactive Writing Assistants (In2Writing 2022) | 96--108 | Revision is an essential part of the human writing process. It tends to be strategic, adaptive, and, more importantly, iterative in nature. Despite the success of large language models on text revision tasks, they are limited to non-iterative, one-shot revisions. Examining and evaluating the capability of large language models for making continuous revisions and collaborating with human writers is a critical step towards building effective writing assistants. In this work, we present a human-in-the-loop iterative text revision system, Read, Revise, Repeat (R3), which aims at achieving high-quality text revisions with minimal human effort by reading model-generated revisions and user feedback, revising documents, and repeating human-machine interactions. In R3, a text revision model provides text editing suggestions for human writers, who can accept or reject the suggested edits. The accepted edits are then incorporated into the model for the next iteration of document revision. Writers can therefore revise documents iteratively by interacting with the system and simply accepting/rejecting its suggested edits until the text revision model stops making further revisions or reaches a predefined maximum number of revisions. Empirical experiments show that R3 can generate revisions with an acceptance rate comparable to human writers at early revision depths, and that human-machine interaction can achieve higher-quality revisions with fewer iterations and edits. The collected human-model interaction dataset and system code are available at \url{https://github.com/vipulraheja/IteraTeR}. Our system demonstration is available at \url{https://youtu.be/lK08tIpEoaE}. | null | null | 10.18653/v1/2022.in2writing-1.14 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,601 |
inproceedings | garg-gupta-2022-edgegraph | {E}dge{G}raph: Revisiting Statistical Measures for Language Independent Keyphrase Extraction Leveraging on Bi-grams | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.1/ | Garg, Muskan and Gupta, Amit | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 1--10 | The NLP research community resorts to the conventional Word Co-occurrence Network (WCN) for keyphrase extraction, using a random walk sampling mechanism such as the PageRank algorithm to identify candidate words/phrases. We argue that the WCN is by nature a path-based network and does not follow a core-periphery structure as observed in web-page linking networks. Thus, language networks leveraging bi-grams may represent better semantics for keyphrase extraction using random walks. In this work, we use a bi-gram as a node and link adjacent bi-grams together to generate an EdgeGraph. We validate our method over four publicly available datasets to demonstrate the effectiveness of our simple yet effective language network, and our extensive experiments show that a random walk over the EdgeGraph representation performs better than the conventional WCN. We make our code and supplementary materials available on GitHub. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,617 |
inproceedings | singh-etal-2022-massively | Massively Multilingual Language Models for Cross Lingual Fact Extraction from Low Resource {I}ndian Languages | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.2/ | Singh, Bhavyajeet and Kandru, Siri Venkata Pavan Kumar and Sharma, Anubhav and Varma, Vasudeva | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 11--18 | Massive knowledge graphs like Wikidata attempt to capture world knowledge about multiple entities. Recent approaches concentrate on automatically enriching these KGs from text. However, a lot of information present in the form of natural text in low resource languages is often missed out. Cross Lingual Information Extraction aims at extracting factual information in the form of English triples from low resource Indian language text. Despite its massive potential, progress made on this task lags behind Monolingual Information Extraction. In this paper, we propose the task of Cross Lingual Fact Extraction (CLFE) from text and devise an end-to-end generative approach for it which achieves an overall F1 score of 77.46. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,618 |
inproceedings | bolucu-can-2022-analysing | Analysing Syntactic and Semantic Features in Pre-trained Language Models in a Fully Unsupervised Setting | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.3/ | B{\"o}l{\"u}c{\"u}, Necva and Can, Burcu | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 19--31 | Transformer-based pre-trained language models (PLMs) have been used in all NLP tasks and have resulted in great success. This has led to the question of whether we can transfer this knowledge to syntactic or semantic parsing in a completely unsupervised setting. In this study, we leverage PLMs as a source of external knowledge to build a fully unsupervised parser for semantic, constituency, and dependency parsing. We analyse the results for English, German, French, and Turkish to understand the impact of the PLMs on different languages for syntactic and semantic parsing. We visualize the attention layers and heads of the PLMs to understand the information that can be learned throughout the layers and attention heads for different levels of parsing tasks. The results obtained from dependency, constituency, and semantic parsing are similar to each other, and the middle layers and the ones closer to the final layers carry more syntactic and semantic information. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,619 |
inproceedings | kale-etal-2022-knowledge | Knowledge Enhanced Deep Learning Model for Radiology Text Generation | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.4/ | Kale, Kaveri and Bhattacharya, Pushpak and Shetty, Aditya and Gune, Milind and Shrivastava, Kush and Lawyer, Rustom and Biswas, Spriha | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 32--42 | Manual radiology report generation is a time-consuming task. First, radiologists prepare brief notes while carefully examining the imaging report. Then, radiologists or their secretaries create a full-text report that describes the findings by referring to the notes. Automatic radiology report generation is the primary objective of this research. The central part of automatic radiology report generation is generating the finding section (main body of the report) from the radiologists' notes. In this research, we suggest a knowledge graph (KG) enhanced radiology text generator that can provide additional domain-specific information. Our approach uses a KG-BART model to generate a description of clinical findings (referred to as pathological description) from radiologists' brief notes. We have constructed a parallel dataset of radiologists' notes and corresponding pathological descriptions to train the KG-BART model. Our findings demonstrate that, compared to the BART-large and T5-large models, the BLEU-2 score of the pathological descriptions generated by our approach is raised by 4{\%} and 9{\%}, and the ROUGE-L score by 2{\%} and 2{\%}, respectively. Our analysis shows that the KG-BART model for radiology text generation outperforms the T5-large model. Furthermore, we apply our proposed radiology text generator for whole radiology report generation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,620 |
inproceedings | nandigam-etal-2022-named | Named Entity Recognition for Code-Mixed {K}annada-{E}nglish Social Media Data | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.5/ | Nandigam, Poojitha and Appidi, Abhinav and Shrivastava, Manish | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 43--49 | Named Entity Recognition (NER) is a critical task in the field of Natural Language Processing (NLP) and is also a sub-task of Information Extraction. There has been a significant amount of work done in entity extraction and Named Entity Recognition for resource-rich languages. Entity extraction from code-mixed social media data like tweets from Twitter complicates the problem due to the unstructured, informal, and incomplete information available in tweets. Here, we present work on NER in a Kannada-English code-mixed social media corpus with corresponding named entity tags referring to Organisation (Org), Person (Pers), and Location (Loc). We experimented with machine learning classification models like Conditional Random Fields (CRF), Bi-LSTM, and Bi-LSTM-CRF models on our corpus. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,621 |
inproceedings | nargund-etal-2022-par | {PAR}: Persona Aware Response in Conversational Systems | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.6/ | Nargund, Abhijit and Pandey, Sandeep and Ham, Jina | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 50--54 | To make human-computer interaction more user friendly and persona aligned, detection of the user's persona is of utmost significance. Towards achieving this objective, we describe a novel approach that selects the persona of a user from a pre-determined list of personas and utilizes it to generate personalized responses. This is achieved in two steps. First, the closest matching persona for the user is detected from a set of pre-determined personas. The second step involves the use of a fine-tuned natural language generation (NLG) model to generate persona-compliant responses. Through experiments, we demonstrate that, by using the detected persona, the proposed architecture generates better responses than current approaches. Experimental evaluation on the PersonaChat dataset demonstrates notable performance in terms of perplexity and F1-score. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,622 |
inproceedings | tiwari-etal-2022-iaemp | {IAE}mp: Intent-aware Empathetic Response Generation | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.7/ | Tiwari, Mrigank and Dahiya, Vivek and Mohanty, Om and Saride, Girija | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 55--59 | In the domain of virtual assistants or conversational systems, it is important to empathise with the user. Being empathetic involves understanding the emotion of the ongoing dialogue and responding to the situation with empathy. We propose a novel approach for empathetic response generation, which leverages predicted intents for the future response and prompts the encoder-decoder model to improve empathy in generated responses. Our model exploits the combination of dialogues and their respective emotions to generate empathetic responses. As response intent plays an important part in our generation, we also employ one or more intents to generate responses with relevant empathy. We achieve improvements on human and automated metrics compared to the baselines. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,623 |
inproceedings | choi-etal-2022-kildst | {KILDST}: Effective Knowledge-Integrated Learning for Dialogue State Tracking using Gazetteer and Speaker Information | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.8/ | Choi, Hyungtak and Ko, Hyeonmok and Kaur, Gurpreet and Ravuru, Lohith and Gandikota, Kiranmayi and Jhawar, Manisha and Dharani, Simma and Patil, Pranamya | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 60--66 | Dialogue State Tracking (DST) is core research in dialogue systems and has received much attention. In addition, it is necessary to define a new problem that can deal with dialogue between users as a step toward the conversational AI that extracts and recommends information from the dialogue between users. So, we introduce a new task - DST from dialogue between users about scheduling an event (DST-USERS). The DST-USERS task is much more challenging since it requires the model to understand and track dialogue states in the dialogue between users, as well as to understand who suggested the schedule and who agreed to the proposed schedule. To facilitate DST-USERS research, we develop dialogue datasets between users that plan a schedule. The annotated slot values which need to be extracted in the dialogue are date, time, and location. Previous approaches, such as Machine Reading Comprehension (MRC) and traditional DST techniques, have not achieved good results in our extensive evaluations. By adopting the knowledge-integrated learning method, we achieve exceptional results. The proposed model architecture combines gazetteer features and speaker information efficiently. Our evaluations of the dialogue datasets between users that plan a schedule show that our model outperforms the baseline model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,624 |
inproceedings | patil-etal-2022-efficient | Efficient Dialog State Tracking Using Gated-Intent based Slot Operation Prediction for On-device Dialog Systems | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.9/ | Patil, Pranamya and Choi, Hyungtak and Samal, Ranjan and Kaur, Gurpreet and Jhawar, Manisha and Tammewar, Aniruddha and Mukherjee, Siddhartha | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 67--74 | Conversational agents on smart devices need to be efficient concerning latency in responding, for enhanced user experience and real-time utility. This demands on-device processing (as on-device processing is quicker), which limits the availability of resources such as memory and processing. Most state-of-the-art Dialog State Tracking (DST) systems make use of large pre-trained language models that require high resource computation, typically available on high-end servers. On-device systems, in contrast, are memory efficient, have reduced time/latency, preserve privacy, and don't rely on the network. A recent approach tries to reduce the latency by splitting the task of slot prediction into two subtasks of State Operation Prediction (SOP) to select an action for each slot, and Slot Value Generation (SVG) responsible for producing values for the identified slots. SVG, being computationally expensive, is performed only for a small subset of actions predicted in the SOP. Motivated by this optimization technique, we build a similar system and work on multi-task learning to achieve significant improvements in DST performance, while optimizing the resource consumption. We propose a quadruplet (Domain, Intent, Slot, and Slot Value) based DST, which significantly boosts the performance. We experiment with different techniques to fuse different layers of representations from intent and slot prediction tasks. We obtain the best joint accuracy of 53.3{\%} on the publicly available MultiWOZ 2.2 dataset, using BERT-medium along with a gating mechanism. We also compare the cost efficiency of our system with other large models and find that our system is best suited for an on-device based production environment. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,625 |
inproceedings | choudhry-etal-2022-emotion | Emotion-guided Cross-domain Fake News Detection using Adversarial Domain Adaptation | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.10/ | Choudhry, Arjun and Khatri, Inder and Chakraborty, Arkajyoti and Vishwakarma, Dinesh and Prasad, Mukesh | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 75--79 | Recent works on fake news detection have shown the efficacy of using emotions as a feature or emotions-based features for improved performance. However, the impact of these emotion-guided features for fake news detection in cross-domain settings, where we face the problem of domain shift, is still largely unexplored. In this work, we evaluate the impact of emotion-guided features for cross-domain fake news detection, and further propose an emotion-guided, domain-adaptive approach using adversarial learning. We prove the efficacy of emotion-guided models in cross-domain settings for various combinations of source and target datasets from FakeNewsAMT, Celeb, Politifact and Gossipcop datasets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,626 |
inproceedings | banerjee-etal-2022-generalised | Generalised Spherical Text Embedding | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.11/ | Banerjee, Souvik and Mishra, Bamdev and Jawanpuria, Pratik and Shrivastava, Manish | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 80--85 | This paper aims to provide an unsupervised modelling approach that allows for a more flexible representation of text embeddings. It jointly encodes the words and the paragraphs as individual matrices of arbitrary column dimension with unit Frobenius norm. The representation is also linguistically motivated with the introduction of a metric for the ambient space in which we train the embeddings that calculates the similarity between matrices of unequal number of columns. Thus, the proposed modelling and the novel similarity metric exploits the matrix structure of embeddings. We then go on to show that the same matrices can be reshaped into vectors of unit norm and transform our problem into an optimization problem in a spherical manifold for optimization simplicity. Given the total number of matrices we are dealing with, which is equal to the vocab size plus the total number of documents in the corpus, this makes the training of an otherwise expensive non-linear model extremely efficient. We also quantitatively verify the quality of our text embeddings by showing that they demonstrate improved results in document classification, document clustering and semantic textual similarity benchmark tests. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,627 |
inproceedings | subedi-krishna-bal-2022-cnn | {CNN}-Transformer based Encoder-Decoder Model for {N}epali Image Captioning | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.12/ | Subedi, Bipesh and Krishna Bal, Bal | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 86--91 | Many image captioning tasks have been carried out in recent years, the majority of the work being for the English language. A few research works have also been carried out for Hindi and Bengali languages in the domain. Unfortunately, not much research emphasis seems to be given to the Nepali language in this direction. Furthermore, the datasets are also not publicly available in the Nepali language. The aim of this research is to prepare a dataset with Nepali captions and develop a deep learning model based on the Convolutional Neural Network (CNN) and Transformer combined model to automatically generate image captions in the Nepali language. The dataset for this work is prepared by applying different data preprocessing techniques on the Flickr8k dataset. The preprocessed data is then passed to the CNN-Transformer model to generate image captions. ResNet-101 and EfficientNetB0 are the two pre-trained CNN models employed for this work. We have achieved some promising results which can be further improved in the future. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,628 |
inproceedings | singh-etal-2022-verb | Verb Phrase Anaphora: Do(ing) so with Heuristics | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.13/ | Singh, Sandhya and Shree, Kushagra and Saha, Sriparna and Bhattacharyya, Pushpak and Chinnadurai, Gladvin and Vatsa, Manish | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 92--98 | Verb Phrase Anaphora (VPA) is a universal language phenomenon. It can occur in the form of do so phrases, verb phrase ellipsis, etc. Resolving VPA can improve the performance of dialogue processing systems, Natural Language Generation (NLG), Question Answering (QA), and so on. In this paper, we present a novel computational approach to resolve the specific verb phrase anaphora appearing as the do so construct and its lexical variations for the English language. The approach follows a heuristic technique using a combination of parsing from classical NLP, a state-of-the-art (SOTA) Generative Pre-trained Transformer (GPT) language model, and a RoBERTa grammar correction model. The results indicate that our approach can resolve these specific verb phrase anaphora cases with an F1 score of 73.40. The dataset used for testing the specific verb phrase anaphora cases of do so and doing so is released for research purposes. This module has been used as the last module in a coreference resolution pipeline for a downstream QA task in the electronic home appliances sector. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,629 |
inproceedings | s-hussain-etal-2022-event | Event Oriented Abstractive Summarization | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.14/ | S Hussain, Aafiya and Z Chafekar, Talha and Sharma, Grishma and H Sharma, Deepak | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 99--108 | Abstractive Summarization models are generally conditioned on the source article. This would generate a summary with the central theme of the article. However, it would not be possible to generate a summary focusing on specific key areas of the article. To solve this problem, we introduce a novel method for abstractive summarization. We aim to use a transformer to generate summaries which are more tailored to the events in the text by using event information. We extract events from text, perform generalized pooling to get a representation for these events and add an event attention block in the decoder to aid the transformer model in summarization. We carried out experiments on CNN / Daily Mail dataset and the BBC Extreme Summarization dataset. We achieve comparable results on both these datasets, with less training and better inclusion of event information in the summaries as shown by human evaluation scores. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,630 |
inproceedings | kumar-etal-2022-augmenting | Augmenting e{B}ooks with recommended questions using contrastive fine-tuned T5 | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.15/ | Kumar, Shobhan and Chauhan, Arun and Kumar, Pavan | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 109--115 | The recent advances in AI have made the generation of questions from natural language text possible; the approach completely excludes the human in the loop while generating appropriate questions, which improves students' learning engagement. The ever growing amount of educational content renders it increasingly difficult to manually generate sufficient practice or quiz questions to accompany it. Reading comprehension can be improved by asking the right questions, so this work offers a Transformer based question generation model for autonomously producing quiz questions from educational information, such as eBooks. This work proposes a contrastive training approach for the {\textquoteleft}Text-to-Text Transfer Transformer' (T5) model, where the model (T5-eQG) creates the summarized text for the input document and then automatically generates the questions. Our model shows promising results over earlier neural network-based and rules-based models for the question generating task on benchmark datasets and NCERT eBooks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,631
inproceedings | chaudhry-etal-2022-reducing | Reducing Inference Time of Biomedical {NER} Tasks using Multi-Task Learning | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.16/ | Chaudhry, Mukund Chaudhry and Kazmi, Arman and Jatav, Shashank and Verma, Akhilesh and Samal, Vishal and Paul, Kristopher and Modi, Ashutosh | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 116--122 | Recently, fine-tuned transformer-based models (e.g., PubMedBERT, BioBERT) have shown the state-of-the-art performance of a number of BioNLP tasks, such as Named Entity Recognition (NER). However, transformer-based models are complex and have millions of parameters, and, consequently, are relatively slow during inference. In this paper, we address the time complexity limitations of the BioNLP transformer models. In particular, we propose a Multi-Task Learning based framework for jointly learning three different biomedical NER tasks. Our experiments show a reduction in inference time by a factor of three without any reduction in prediction accuracy. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,632 |
inproceedings | ghosh-mamidi-2022-english | {E}nglish To {I}ndian {S}ign {L}anguage: Rule-Based Translation System Along With Multi-Word Expressions and Synonym Substitution | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.17/ | Ghosh, Abhigyan and Mamidi, Radhika | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 123--127 | The hearing challenged communities all over the world face difficulties communicating with others. Machine translation has been one of the prominent technologies to facilitate communication with the deaf and hard of hearing community worldwide. We have explored and formulated the fundamental rules of Indian Sign Language (ISL) and implemented them as a translation mechanism of English text to Indian Sign Language glosses. According to the formulated rules and sub-rules, the source text structure is identified and transferred to the target ISL gloss. This target language is such that it can be easily converted to videos using the Indian Sign Language dictionary. This research work also describes the intermediate phases of the transfer process and innovations in the process, such as Multi-Word Expression detection and synonym substitution to handle the limited vocabulary size of Indian Sign Language while producing semantically accurate translations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,633
inproceedings | adhya-etal-2022-improving | Improving Contextualized Topic Models with Negative Sampling | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.18/ | Adhya, Suman and Lahiri, Avishek and Kumar Sanyal, Debarshi and Pratim Das, Partha | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 128--138 | Topic modeling has emerged as a dominant method for exploring large document collections. Recent approaches to topic modeling use large contextualized language models and variational autoencoders. In this paper, we propose a negative sampling mechanism for a contextualized topic model to improve the quality of the generated topics. In particular, during model training, we perturb the generated document-topic vector and use a triplet loss to encourage the document reconstructed from the correct document-topic vector to be similar to the input document and dissimilar to the document reconstructed from the perturbed vector. Experiments for different topic counts on three publicly available benchmark datasets show that in most cases, our approach leads to an increase in topic coherence over that of the baselines. Our model also achieves very high topic diversity. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,634 |
inproceedings | kamal-etal-2022-imfine | {IMF}in{E}: An Integrated {BERT}-{CNN}-{B}i{GRU} Model for Mental Health Detection in Financial Context on Textual Data | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.19/ | Kamal, Ashraf and Mohankumar, Padmapriya and K Singh, Vishal | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 139--148 | Nowadays, mental health is a global issue. It is a pervasive phenomenon over online social network platforms. It is observed in varied categories, such as depression, suicide, and stress on the Web. Hence, mental health detection problem is receiving continuous attention among computational linguistics researchers. On the other hand, public emotions and reactions play a significant role in financial domain and the issue of mental health is directly associated. In this paper, we propose a new study to detect mental health in financial context. It starts with two-step data filtration steps to prepare the mental health dataset in financial context. A new model called IMFinE is introduced. It consists of an input layer, followed by two relevant BERT embedding layers, a convolutional neural network, a bidirectional gated recurrent unit, and finally, dense and output layers. The empirical evaluation of the proposed model is performed on Reddit datasets and it shows impressive results in terms of precision, recall, and f-score. It also outperforms relevant state-of-the-art and baseline methods. To the best of our knowledge, this is the first study on mental health detection in financial context. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,635
inproceedings | haswani-mohankumar-2022-methods | Methods to Optimize {W}av2{V}ec with Language Model for Automatic Speech Recognition in Resource Constrained Environment | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.20/ | Haswani, Vaibhav and Mohankumar, Padmapriya | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 149--153 | Automatic Speech Recognition (ASR) in a resource constrained environment is a complex task, since most of the State-Of-The-Art models are combinations of multilayered convolutional neural network (CNN) and Transformer models, which themselves require huge resources such as a GPU or TPU for training as well as inference. The accuracy of an ASR system depends upon the efficiency of the phoneme-to-word translation of the Acoustic Model and the context correction of the Language Model. However, inference is also an important performance metric, which mostly depends upon the resources. Most ASR models use transformers at their core, and one caveat of transformers is that they can usually handle only a finite sequence length, either because they use position encodings or simply because the cost of attention in transformers is O(n{\texttwosuperior}) in sequence length, meaning that very large sequence lengths explode in complexity/memory. As a result, the system cannot run on finite hardware, even a very high-end GPU: inferring even an hour-long audio file with Wav2Vec will crash the system. In this paper, we use some state-of-the-art methods to optimize the Wav2Vec model for better prediction accuracy in resource constrained systems. In addition, we performed tests with other SOTA models such as Citrinet and Quartznet for comparative analysis. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,636
inproceedings | s-etal-2022-knowledge | Knowledge Graph-based Thematic Similarity for {I}ndian Legal Judgement Documents using Rhetorical Roles | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.21/ | S, Sheetal and N, Veda and Prabhu, Ramya and P, Pruthv and R, Mamatha H R | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 154--160 | Automation in the legal domain is promising to be vital to help solve the backlog that currently affects the Indian judiciary. For any system that is developed to aid such a task, it is imperative that it is informed by choices that legal professionals often take in the real world in order to achieve the same task while also ensuring that biases are eliminated. The task of legal case similarity is accomplished in this paper by extracting the thematic similarity of the documents based on their rhetorical roles. The similarity scores between the documents are calculated, keeping in mind the different amount of influence each of these rhetorical roles have in real life practices over determining the similarity between two documents. Knowledge graphs are used to capture this information in order to facilitate the use of this method for applications like information retrieval and recommendation systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,637 |
inproceedings | adibhatla-shrivastava-2022-scone | {SC}on{E}: Contextual Relevance based {S}ignificant {C}ompo{N}ent {E}xtraction from Contracts | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.22/ | Adibhatla, Hiranmai and Shrivastava, Manish | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 161--171 | Automatic extraction of {\textquotedblleft}significant{\textquotedblright} components of a legal contract has the potential to simplify the end user`s comprehension. In essence, {\textquotedblleft}significant{\textquotedblright} pieces of information have 1) information pertaining to material/practical details about a specific contract and 2) information that is novel or comes as a {\textquotedblleft}surprise{\textquotedblright} for a specific type of contract. It indicates that the significance of a component may be defined at an individual contract level and at a contract-type level. A component, sentence, or paragraph, may be considered significant at a contract level if it contains contract-specific information (CSI), like names, dates, or currency terms. At a contract-type level, components that deviate significantly from the norm for the type may be considered significant (type-specific information (TSI)). In this paper, we present approaches to extract {\textquotedblleft}significant{\textquotedblright} components from a contract at both these levels. We attempt to do this by identifying patterns in a pool of documents of the same kind. Consequently, in our approach, the solution is formulated in two parts: identifying CSI using a BERT-based contract-specific information extractor and identifying TSI by scoring sentences in a contract for their likelihood. In this paper, we also describe the annotated corpus of contract documents that we created as a first step toward the development of such a language-processing system. We also release a dataset of contract samples containing sentences belonging to CSI and TSI. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,638
inproceedings | khurana-etal-2022-animojity | {A}ni{MOJ}ity: Detecting Hate Comments in {I}ndic languages and Analysing Bias against Content Creators | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.23/ | Khurana, Rahul and Pandey, Chaitanya and Gupta, Priyanshi and Nagrath, Preeti | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 172--182 | Online platforms have dramatically changed how people communicate with one another, resulting in a 467 million increase in the number of Indians actively exchanging and distributing social data. This caused an unexpected rise in harmful, racially, sexually, and religiously biased Internet content humans cannot control. As a result, there is an urgent need to research automated computational strategies for identifying hostile content in academic forums. This paper presents our learning pipeline and novel model, which classifies a multilingual text with a test f1-Score of 88.6{\%} on the Moj Multilingual Abusive Comment Identification dataset for hate speech detection in thirteen Indian regional languages. Our model, Animojity, incorporates transfer learning and SOTA pre- and post-processing techniques. We manually annotate 300 samples to investigate bias and provide insight into the hate towards creators. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,639
inproceedings | das-etal-2022-revisiting | Revisiting Anwesha: Enhancing Personalised and Natural Search in {B}angla | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.24/ | Das, Arup and Acharya, Joyojyoti and Kundu, Bibekananda and Chakraborti, Sutanu | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 183--193 | Bangla is a low-resource, highly agglutinative language. Thus it is challenging to facilitate an effective search of Bangla documents. We have created a gold standard dataset containing query document relevance pairs for evaluation purposes. We utilise Named Entities to improve the retrieval effectiveness of traditional Bangla search algorithms. We suggest a reasonable starting model for leveraging implicit preference feedback based on the user search behaviour to enhance the results retrieved by the Explicit Semantic Analysis (ESA) approach. We use contextual sentence embeddings obtained via Language-agnostic BERT Sentence Embedding (LaBSE) to rerank the candidate documents retrieved by the traditional search algorithms (tf-idf) based on the top sentences that are most relevant to the query. This paper presents our empirical findings across these directions and critically analyses the results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,640
inproceedings | shukla-etal-2022-knowpaml | {K}now{PAML}: A Knowledge Enhanced Framework for Adaptable Personalized Dialogue Generation Using Meta-Learning | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.25/ | Shukla, Aditya and Ahmad, Zishan and Ekbal, Asif | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 194--203 | In order to provide personalized interactions in a conversational system, responses must be consistent with the user and agent persona while still being relevant to the context of the conversation. Existing personalized conversational systems increase the consistency of the generated response by leveraging persona descriptions, which sometimes tend to generate irrelevant responses to the context. To solve this problem, we propose to extend the persona-agnostic meta-learning (PAML) framework by adding knowledge from the ConceptNet knowledge graph with a multi-hop attention mechanism. Knowledge is a concept in a triple form that helps in conversational flow. The multi-hop attention mechanism helps select the most appropriate triples with respect to the conversational context and persona description, as not all triples are beneficial for generating responses. The Meta-Learning (PAML) framework allows quick adaptation to different personas by utilizing only a few dialogue samples from the same user. Our experiments on the Persona-Chat dataset show that our method outperforms in terms of persona-adaptability, resulting in more persona-consistent responses, as evidenced by the entailment (Entl) score in the automatic evaluation and the consistency (Con) score in human evaluation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,641
inproceedings | agarwal-etal-2022-big | There is No Big Brother or Small Brother: Knowledge Infusion in Language Models for Link Prediction and Question Answering | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.26/ | Agarwal, Ankush and Gawade, Sakharam and Channabasavarajendra, Sachin and Bhattacharya, Pushpak | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 204--211 | The integration of knowledge graphs with deep learning is thriving in improving the performance of various natural language processing (NLP) tasks. In this paper, we focus on knowledge-infused link prediction and question answering using language models, T5, and BLOOM across three domains: Aviation, Movie, and Web. In this context, we infuse knowledge in large and small language models and study their performance, and find the performance to be similar. For the link prediction task on the Aviation Knowledge Graph, we obtain a 0.2 hits@1 score using T5-small, T5-base, T5-large, and BLOOM. Using template-based scripts, we create a set of 1 million synthetic factoid QA pairs in the aviation domain from National Transportation Safety Board (NTSB) reports. On our curated QA pairs, the three models of T5 achieve a 0.7 hits@1 score. We validate our findings with the paired student t test and Cohen`s kappa scores. For link prediction on Aviation Knowledge Graph using T5-small and T5-large, we obtain a Cohen`s kappa score of 0.76, showing substantial agreement between the models. Thus, we infer that small language models perform similar to large language models with the infusion of knowledge. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,642
inproceedings | yazdani-etal-2022-efficient | Efficient Joint Learning for Clinical Named Entity Recognition and Relation Extraction Using {F}ourier Networks: A Use Case in Adverse Drug Events | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.27/ | Yazdani, Anthony and Proios, Dimitrios and Rouhizadeh, Hossein and Teodoro, Douglas | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 212--223 | Current approaches for clinical information extraction are inefficient in terms of computational costs and memory consumption, hindering their application to process large-scale electronic health records (EHRs). We propose an efficient end-to-end model, the Joint-NER-RE-Fourier (JNRF), to jointly learn the tasks of named entity recognition and relation extraction for documents of variable length. The architecture uses positional encoding and unitary batch sizes to process variable length documents and uses a weight-shared Fourier network layer for low-complexity token mixing. Finally, we reach the theoretical computational complexity lower bound for relation extraction using a selective pooling strategy and distance-aware attention weights with trainable polynomial distance functions. We evaluated the JNRF architecture using the 2018 N2C2 ADE benchmark to jointly extract medication-related entities and relations in variable-length EHR summaries. JNRF outperforms rolling window BERT with selective pooling by 0.42{\%}, while being twice as fast to train. Compared to state-of-the-art BiLSTM-CRF architectures on the N2C2 ADE benchmark, results show that the proposed approach trains 22 times faster and reduces GPU memory consumption by 1.75 folds, with a reasonable performance tradeoff of 90{\%}, without the use of external tools, hand-crafted rules or post-processing. Given the significant carbon footprint of deep learning models and the current energy crises, these methods could support efficient and cleaner information extraction in EHRs and other types of large-scale document databases. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,643
inproceedings | kumar-bojar-2022-genre | Genre Transfer in {NMT}: Creating Synthetic Spoken Parallel Sentences using Written Parallel Data | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.28/ | Kumar, Nalin and Bojar, Ondrej | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 224--233 | Text style transfer (TST) aims to control attributes in a given text without changing the content. The matter gets complicated when the boundary separating two styles gets blurred. We can notice similar difficulties in the case of parallel datasets in spoken and written genres. Genuine spoken features like filler words and repetitions in the existing spoken genre parallel datasets are often cleaned during transcription and translation, making the texts closer to written datasets. This poses several problems for spoken genre-specific tasks like simultaneous speech translation. This paper seeks to address the challenge of improving spoken language translations. We start by creating a genre classifier for individual sentences and then try two approaches for data augmentation using written examples: (1) a novel method that involves assembling and disassembling spoken and written neural machine translation (NMT) models, and (2) a rule-based method to inject spoken features. Though the observed results for (1) are not promising, we get some interesting insights into the solution. The model proposed in (1) fine-tuned on the synthesized data from (2) produces naturally looking spoken translations for written-to-spoken genre transfer in En-Hi translation systems. We use this system to produce a second-stage En-Hi synthetic corpus, which however lacks appropriate alignments of explicit spoken features across the languages. For the final evaluation, we fine-tune Hi-En spoken translation systems on the synthesized parallel corpora. We observe that the parallel corpus synthesized using our rule-based method produces the best results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,644
inproceedings | chatterjee-etal-2022-pacman | {PACMAN}: {PA}rallel {C}ode{M}ixed d{A}ta generatio{N} for {POS} tagging | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.29/ | Chatterjee, Arindam and Sharma, Chhavi and Raj, Ayush and Ekbal, Asif | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 234--244 | Code-mixing or Code-switching is the mixing of languages in the same context, predominantly observed in multilingual societies. The existing code-mixed datasets are small and primarily contain social media text that does not adhere to standard spelling and grammar. Computational models built on such data fail to generalise on unseen code-mixed data. To address the unavailability of quality code-mixed annotated datasets, we explore the combined task of generating annotated code mixed data, and building computational models from this generated data, specifically for code-mixed Part-Of-Speech (POS) tagging. We introduce PACMAN (PArallel CodeMixed dAta generatioN) - a synthetically generated code-mixed POS tagged dataset, with above 50K samples, which is the largest annotated code-mixed dataset. We build POS taggers using classical machine learning and deep learning based techniques on the generated data to report an F1-score of 98{\%} (8{\%} above current State-of-the-art (SOTA)). To determine the efficacy of our data, we compare it against the existing benchmark in code-mixed POS tagging. PACMAN outperforms the benchmark, ratifying that our dataset and, subsequently, our POS tagging models are generalised and capable of handling even natural code-mixed and monolingual data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,645
inproceedings | arnardottir-etal-2022-error | Error Corpora for Different Informant Groups: Annotating and Analyzing Texts from {L}2 Speakers, People with Dyslexia and Children | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.30/ | Arnard{\'o}ttir, {\TH}{\'o}runn and Glisic, Isidora and Simonsen, Annika and Stef{\'a}nsd{\'o}ttir, Lilja and Ingason, Anton | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 245--252 | Error corpora are useful for many tasks, in particular for developing spell and grammar checking software and teaching material and tools. We present and compare three specialized Icelandic error corpora; the Icelandic L2 Error Corpus, the Icelandic Dyslexia Error Corpus, and the Icelandic Child Language Error Corpus. Each corpus contains texts written by speakers of a particular group; L2 speakers of Icelandic, people with dyslexia, and children aged 10 to 15. The corpora shed light on errors made by these groups and their frequencies, and all errors are manually labeled according to an annotation scheme. The corpora vary in size, consisting of errors ranging from 7,817 to 24,948, and are published under a CC BY 4.0 license. In this paper, we describe the corpora and their annotation scheme, and draw comparisons between their errors and their frequencies. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,646
inproceedings | saha-etal-2022-similarity | Similarity Based Label Smoothing For Dialogue Generation | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.31/ | Saha, Sougata and Das, Souvik and Srihari, Rohini | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 253--259 | Generative neural conversational systems are typically trained by minimizing the entropy loss between the training {\textquotedblleft}hard{\textquotedblright} targets and the predicted logits. Performance gains and improved generalization are often achieved by employing regularization techniques like label smoothing, which converts the training {\textquotedblleft}hard{\textquotedblright} targets to soft targets. However, label smoothing enforces a data independent uniform distribution on the incorrect training targets, leading to a false assumption of equiprobability. In this paper, we propose and experiment with incorporating data-dependent word similarity-based weighing methods to transform the uniform distribution of the incorrect target probabilities in label smoothing to a more realistic distribution based on semantics. We introduce hyperparameters to control the incorrect target distribution and report significant performance gains over networks trained using standard label smoothing-based loss on two standard open-domain dialogue corpora. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,647 |
inproceedings | roychoudhury-etal-2022-novel | A Novel Approach towards Cross Lingual Sentiment Analysis using Transliteration and Character Embedding | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.32/ | Roychoudhury, Rajarshi and Dey, Subhrajit and Akhtar, Md and Das, Amitava and Naskar, Sudip | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 260--268 | Sentiment analysis with deep learning in resource-constrained languages is a challenging task. In this paper, we introduce a novel approach for sentiment analysis in resource-constrained scenarios using character embedding and cross-lingual sentiment analysis with transliteration. We use this method to introduce the novel task of inducing sentiment polarity of words and sentences and aspect term sentiment analysis in the no-resource scenario. We formulate this task by taking a metalingual approach whereby we transliterate data from closely related languages and transform it into a meta language. We also demonstrated the efficacy of using character-level embedding for sentence representation. We experimented with 4 Indian languages {--} Bengali, Hindi, Tamil, and Telugu, and obtained encouraging results. We also presented new state-of-the-art results on the Hindi sentiment analysis dataset leveraging our metalingual character embeddings. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,648 |
inproceedings | yadav-etal-2022-normalization | Normalization of Spelling Variations in Code-Mixed Data | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.33/ | Yadav, Krishna and Akhtar, Md and Chakraborty, Tanmoy | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 269--279 | Code-mixed text infused with low resource language has always been a challenge for natural language understanding models. A significant problem while understanding such texts is the correlation between the syntactic and semantic arrangement of words. The phonemes of each character in a word dictates the spelling representation of a term in low resource language. However, there is no universal protocol or alphabet mapping for code-mixing. In this paper, we highlight the impact of spelling variations in code-mixed data for training natural language understanding models. We emphasize the impact of using phonetics to neutralize this variation in spelling across different usage of a word with the same semantics. The proposed approach is a computationally inexpensive technique and improves the performances of state-of-the-art models for three dialog system tasks \textit{viz.} intent classification, slot-filling, and response generation. We propose a data pipeline for normalizing spelling variations irrespective of language. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,649 |
inproceedings | bharti-etal-2022-method | A Method for Automatically Estimating the Informativeness of Peer Reviews | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.34/ | Bharti, Prabhat and Ghosal, Tirthankar and Agarwal, Mayank and Ekbal, Asif | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 280--289 | Peer reviews are intended to give authors constructive and informative feedback. It is expected that the reviewers will make constructive suggestions over certain aspects, e.g., novelty, clarity, empirical and theoretical soundness, etc., and sections, e.g., problem definition/idea, datasets, methodology, experiments, results, etc., of the paper in a detailed manner. With this objective, we analyze the reviewer`s attitude towards the work. Aspects of the review are essential to determine how much weight the editor/chair should place on the review in making a decision. In this paper, we used a publicly available Peer Review Analyze dataset of peer review texts manually annotated at the sentence level ({\ensuremath{\sim}}13.22 k sentences) across two layers: Paper Section Correspondence and Paper Aspect Category. We transform these categorical annotations to derive an informativeness score of the review based on the review`s coverage across section correspondence, aspects of the paper, and reviewer-centric uncertainty associated with the review. We hope that our proposed methods, which are motivated towards automatically estimating the quality of peer reviews in the form of informativeness scores, will give editors an additional layer of confidence for the automatic judgment of review quality. We make our codes available at \url{https://github.com/PrabhatkrBharti/informativeness.git}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,650
inproceedings | s-2022-spellchecker | Spellchecker for {S}anskrit: The Road Less Taken | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.35/ | S, Prasanna | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 290--299 | A spellchecker is essential for any language for producing error-free content. While there exist advanced computational tools for Sanskrit, such as word segmenter, morphological analyser, sentential parser, and machine translation, a fully functional spellchecker is not available. This paper presents a Sanskrit spellchecking dictionary for Hunspell, thereby creating a spellchecker that works across the numerous platforms Hunspell supports. The spellchecking rules are created based on the Paninian grammar, and the dictionary design follows the word-and-paradigm model, thus, making it easily extendible for future improvements. The paper also presents an online spellchecking interface for Sanskrit developed mainly for the platforms where Hunspell integration is not available yet. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,651
inproceedings | vemula-etal-2022-tequad | {T}e{Q}u{AD}: {T}elugu Question Answering Dataset | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.36/ | Vemula, Rakesh and Nuthi, Mani and Srivastava, Manish | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 300--307 | Recent state of the art models and new datasets have advanced many Natural Language Processing areas, especially, Machine Reading Comprehension tasks have improved with the help of datasets like SQuAD (Stanford Question Answering Dataset). But, large high quality datasets are still not a reality for low resource languages like Telugu to record progress in MRC. In this paper, we present a Telugu Question Answering Dataset - TeQuAD with the size of 82k parallel triples created by translating triples from the SQuAD. We also introduce a few methods to create similar Question Answering datasets for the low resource languages. Then, we present the performance of our models which outperform baseline models on Monolingual and Cross Lingual Machine Reading Comprehension (CLMRC) setups, the best of them resulting in an F1 score of 83 {\%} and Exact Match (EM) score of 61 {\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,652
inproceedings | s-etal-2022-comprehensive | A Comprehensive Study of Mahabharat using Semantic and Sentiment Analysis | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.37/ | J S, Srijeyarankesh and Kumaran, Aishwarya and Lakshminarasimhan, Nithyasri and M, Shanmuga Priya | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 308--317 | Indian epics have not been analyzed computationally to the extent that Greek epics have. In this paper, we show how interesting insights can be derived from the ancient epic Mahabharata by applying a variety of analytical techniques based on a combination of natural language processing methods like semantic analysis, sentiment analysis and Named Entity Recognition (NER). The key findings include the analysis of events and their importance in shaping the story, character`s life and their actions leading to consequences and change of emotions across the eighteen parvas of the story. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,653 |
inproceedings | sah-abulaish-2022-deepada | {D}eep{ADA}: An Attention-Based Deep Learning Framework for Augmenting Imbalanced Textual Datasets | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.38/ | Sah, Amit and Abulaish, Muhammad | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 318--327 | In this paper, we present an attention-based deep learning framework, DeepADA, which uses data augmentation to address the class imbalance problem in textual datasets. The proposed framework carries out the following functions: (i) using MPNET-based embeddings to extract keywords out of documents from the minority class, (ii) making use of a CNN-BiLSTM architecture with parallel attention to learn the important contextual words associated with the minority class documents' keywords and provide them with word-level characteristics derived from their statistical and semantic features, (iii) using MPNET, replacing the key contextual terms derived from the oversampled documents that match to a keyword with the contextual term that best fits the context, and finally (iv) oversampling the minority class dataset to produce a balanced dataset. Using a 2-layer stacked BiLSTM classifier, we assess the efficacy of the proposed framework using the original and oversampled versions of three Amazon`s reviews datasets. We contrast the proposed data augmentation approach with two state-of-the-art text data augmentation methods. The experimental results reveal that our method produces an oversampled dataset that is more useful and helps the classifier perform better than the other two state-of-the-art methods. Nevertheless, we discover that the oversampled datasets outperformed their original ones by a wide margin. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,654
inproceedings | abulaish-gulia-2022-compact | Compact Residual Learning with Frequency-Based Non-Square Kernels for Small Footprint Keyword Spotting | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.39/ | Abulaish, Muhammad and Gulia, Rahul | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 328--336 | Enabling voice assistants on small embedded devices requires a keyword spotter with a smaller model size and adequate accuracy. It becomes difficult to achieve a reasonable trade-off between a small footprint and high accuracy. Recent studies have demonstrated that convolution neural networks are also effective in the audio domain. In this paper, taking into account the nature of spectrograms, we propose a compact ResNet architecture that uses frequency-based non-square kernels to extract the maximum number of timbral features for keyword spotting. The proposed architecture is approximately three-and-a-half times smaller than a comparable architecture with conventional square kernels. On the Google`s speech command dataset v1, it outperforms both Google`s convolution neural networks and the equivalent ResNet architecture with square kernels. By implementing non-square kernels for spectrogram-related data, we can achieve a significant increase in accuracy with relatively few parameters, as compared to the conventional square kernels that are the default choice for every problem. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,655 |
inproceedings | roychowdhury-etal-2022-unsupervised | Unsupervised {B}engali Text Summarization Using Sentence Embedding and Spectral Clustering | Akhtar, Md. Shad and Chakraborty, Tanmoy | dec | 2022 | New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-main.40/ | Roychowdhury, Sohini and Sarkar, Kamal and Maji, Arka | Proceedings of the 19th International Conference on Natural Language Processing (ICON) | 337--346 | Single document extractive text summarization produces a condensed version of a document by extracting salient sentences from the document. Most significant and diverse information can be obtained from a document by breaking it into topical clusters of sentences. The spectral clustering method is useful in text summarization because it does not assume any fixed shape of the clusters, and the number of clusters can automatically be inferred using the Eigen gap method. In our approach, we have used word embedding-based sentence representation and a spectral clustering algorithm to identify various topics covered in a Bengali document and generate an extractive summary by selecting salient sentences from the identified topics. We have compared our developed Bengali summarization system with several baseline extractive summarization systems. The experimental results show that the proposed approach performs better than some baseline Bengali summarization systems it is compared to. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,656 |
inproceedings | e-ojo-etal-2022-language | Language Identification at the Word Level in Code-Mixed Texts Using Character Sequence and Word Embedding | Chakravarthi, Bharathi Raja and Murugappan, Abirami and Chinnappa, Dhivya and Hane, Adeep and Kumeresan, Prasanna Kumar and Ponnusamy, Rahul | dec | 2022 | IIIT Delhi, New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-wlli.1/ | E. Ojo, O. and Gelbukh, A. and Calvo, H. and Feldman, A. and O. Adebanji, O. and Armenta-Segura, J. | Proceedings of the 19th International Conference on Natural Language Processing (ICON): Shared Task on Word Level Language Identification in Code-mixed Kannada-English Texts | 1--6 | People often switch languages in conversations or written communication in order to communicate thoughts on social media platforms. The languages in texts of this type, also known as code-mixed texts, can be mixed at the sentence, word, or even sub-word level. In this paper, we address the problem of identifying language at the word level in code-mixed texts using a sequence of characters and word embedding. We feed machine learning and deep neural networks with a range of character-based and word-based text features as input. The data for this experiment was created by combining YouTube video comments from code-mixed Kannada and English (Kn-En) texts. The texts were pre-processed, split into words, and categorized as {\textquoteleft}Kannada', {\textquoteleft}English', {\textquoteleft}Mixed-Language', {\textquoteleft}Name', {\textquoteleft}Location', and {\textquoteleft}Other'. The proposed techniques were able to learn from these features and were able to effectively identify the language of the words in the dataset. The proposed CK-Keras model with pre-trained Word2Vec embedding was our best-performing system, as it outperformed other methods when evaluated by the F1 scores. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,658
inproceedings | vajrobol-2022-coli | {C}o{LI}-Kanglish: Word-Level Language Identification in Code-Mixed {K}annada-{E}nglish Texts Shared Task using the Distilka model | Chakravarthi, Bharathi Raja and Murugappan, Abirami and Chinnappa, Dhivya and Hane, Adeep and Kumeresan, Prasanna Kumar and Ponnusamy, Rahul | dec | 2022 | IIIT Delhi, New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-wlli.2/ | Vajrobol, Vajratiya | Proceedings of the 19th International Conference on Natural Language Processing (ICON): Shared Task on Word Level Language Identification in Code-mixed Kannada-English Texts | 7--11 | Due to the intercultural demographic of online users, code-mixed language is often used by them to express themselves on social media. Language support to such users is based on the ability of a system to identify the constituent languages of the code-mixed language. Therefore, the process of language identification that helps in determining the language of individual textual entities from a code-mixed corpus is a current and relevant classification problem. Code-mixed texts are difficult to interpret and analyze from an algorithmic perspective. However, highly complex transformer-based techniques can be used to analyze and identify distinct languages of words in code-mixed texts. Kannada is one of the Dravidian languages which is spoken and written in Karnataka, India. This study aims to identify the language of individual words of texts from a corpus of code-mixed Kannada-English texts using transformer-based techniques. The proposed Distilka model was developed by fine-tuning the DistilBERT model using the code-mixed corpus. This model performed best on the official test dataset with a macro-averaged F1-score of 0.62 and weighted precision score of 0.86. The proposed solution ranked first in the shared task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,659
inproceedings | deka-etal-2022-bert | {BERT}-based Language Identification in Code-Mix {K}annada-{E}nglish Text at the {C}o{LI}-Kanglish Shared Task@{ICON} 2022 | Chakravarthi, Bharathi Raja and Murugappan, Abirami and Chinnappa, Dhivya and Hane, Adeep and Kumeresan, Prasanna Kumar and Ponnusamy, Rahul | dec | 2022 | IIIT Delhi, New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-wlli.3/ | Deka, Pritam and Jyoti Kalita, Nayan and Kumar Sarma, Shikhar | Proceedings of the 19th International Conference on Natural Language Processing (ICON): Shared Task on Word Level Language Identification in Code-mixed Kannada-English Texts | 12--17 | Language identification has recently gained research interest in code-mixed languages due to the extensive use of social media among people. People who speak multiple languages tend to use code-mixed languages when communicating with each other. It has become necessary to identify the languages in such code-mixed environment to detect hate speeches, fake news, misinformation or disinformation and for tasks such as sentiment analysis. In this work, we have proposed a BERT-based approach for language identification in the CoLI-Kanglish shared task at ICON 2022. Our approach achieved 86{\%} weighted average F-1 score and a macro average F-1 score of 57{\%} in the test set. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,660 |
inproceedings | lambebo-tonja-etal-2022-transformer | Transformer-based Model for Word Level Language Identification in Code-mixed {K}annada-{E}nglish Texts | Chakravarthi, Bharathi Raja and Murugappan, Abirami and Chinnappa, Dhivya and Hane, Adeep and Kumeresan, Prasanna Kumar and Ponnusamy, Rahul | dec | 2022 | IIIT Delhi, New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-wlli.4/ | Lambebo Tonja, Atnafu and Gemeda Yigezu, Mesay and Kolesnikova, Olga and Shahiki Tash, Moein and Sidorov, Grigori and Gelbukh, Alexander | Proceedings of the 19th International Conference on Natural Language Processing (ICON): Shared Task on Word Level Language Identification in Code-mixed Kannada-English Texts | 18--24 | Language Identification at the Word Level in Kannada-English Texts. This paper describes the system paper of CoLI-Kanglish 2022 shared task. The goal of this task is to identify the different languages used in CoLI-Kanglish 2022. This dataset is distributed into different categories including Kannada, English, Mixed-Language, Location, Name, and Others. This Code-Mix was compiled by CoLI-Kanglish 2022 organizers from posts on social media. We use two classification techniques, KNN and SVM, and achieve an F1-score of 0.58 and place third out of nine competitors. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,661 |
inproceedings | shahiki-tash-etal-2022-word | Word Level Language Identification in Code-mixed {K}annada-{E}nglish Texts using traditional machine learning algorithms | Chakravarthi, Bharathi Raja and Murugappan, Abirami and Chinnappa, Dhivya and Hane, Adeep and Kumeresan, Prasanna Kumar and Ponnusamy, Rahul | dec | 2022 | IIIT Delhi, New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-wlli.5/ | Shahiki Tash, M. and Ahani, Z. and Tonja, A.l. and Gemeda, M. and Hussain, N. and Kolesnikova, O. | Proceedings of the 19th International Conference on Natural Language Processing (ICON): Shared Task on Word Level Language Identification in Code-mixed Kannada-English Texts | 25--28 | Language Identification at the Word Level in Kannada-English Texts. This paper describes the system paper of CoLI-Kanglish 2022 shared task. The goal of this task is to identify the different languages used in CoLI-Kanglish 2022. This dataset is distributed into different categories including Kannada, English, Mixed-Language, Location, Name, and Others. This Code-Mix was compiled by CoLI-Kanglish 2022 organizers from posts on social media. We use two classification techniques, KNN and SVM, and achieve an F1-score of 0.58 and place third out of nine competitors. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,662
inproceedings | gemeda-yigezu-etal-2022-word | Word Level Language Identification in Code-mixed {K}annada-{E}nglish Texts using Deep Learning Approach | Chakravarthi, Bharathi Raja and Murugappan, Abirami and Chinnappa, Dhivya and Hane, Adeep and Kumeresan, Prasanna Kumar and Ponnusamy, Rahul | dec | 2022 | IIIT Delhi, New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-wlli.6/ | Gemeda Yigezu, Mesay and Lambebo Tonja, Atnafu and Kolesnikova, Olga and Shahiki Tash, Moein and Sidorov, Grigori and Gelbukh, Alexander | Proceedings of the 19th International Conference on Natural Language Processing (ICON): Shared Task on Word Level Language Identification in Code-mixed Kannada-English Texts | 29--33 | The goal of code-mixed language identification (LID) is to determine which language is spoken or written in a given segment of a speech, word, sentence, or document. Our task is to identify English, Kannada, and mixed language from the provided data. To train a model we used the CoLI-Kenglish dataset, which contains English, Kannada, and mixed-language words. In our work, we conducted several experiments in order to obtain the best performing model. Then, we implemented the best model by using Bidirectional Long Short Term Memory (Bi-LSTM), which outperformed the other trained models with an F1-score of 0.61{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,663 |
inproceedings | ismail-etal-2022-bonc | {B}o{NC}: Bag of N-Characters Model for Word Level Language Identification | Chakravarthi, Bharathi Raja and Murugappan, Abirami and Chinnappa, Dhivya and Hane, Adeep and Kumeresan, Prasanna Kumar and Ponnusamy, Rahul | dec | 2022 | IIIT Delhi, New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-wlli.7/ | Ismail, Shimaa and K. Gallab, Mai and Nayel, Hamada | Proceedings of the 19th International Conference on Natural Language Processing (ICON): Shared Task on Word Level Language Identification in Code-mixed Kannada-English Texts | 34--37 | This paper describes the model submitted by NLP{\_}BFCAI team for Kanglish shared task held at ICON 2022. The proposed model used a very simple approach based on the word representation. Simple machine learning classification algorithms, Random Forests, Support Vector Machines, Stochastic Gradient Descent and Multi-Layer Perceptron have been implemented. Our submission, RF, securely ranked fifth among all other submissions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,664
inproceedings | balouchzahi-etal-2022-overview | Overview of {C}o{LI}-Kanglish: Word Level Language Identification in Code-mixed {K}annada-{E}nglish Texts at {ICON} 2022 | Chakravarthi, Bharathi Raja and Murugappan, Abirami and Chinnappa, Dhivya and Hane, Adeep and Kumeresan, Prasanna Kumar and Ponnusamy, Rahul | dec | 2022 | IIIT Delhi, New Delhi, India | Association for Computational Linguistics | https://aclanthology.org/2022.icon-wlli.8/ | Balouchzahi, F. and Butt, S. and Hegde, A. and Ashraf, N. and Shashirekha, H.l. and Sidorov, Grigori and Gelbukh, Alexander | Proceedings of the 19th International Conference on Natural Language Processing (ICON): Shared Task on Word Level Language Identification in Code-mixed Kannada-English Texts | 38--45 | The task of Language Identification (LI) in text processing refers to automatically identifying the languages used in a text document. The LI task has usually been studied at the document level and often in high-resource languages, while giving less importance to low-resource languages. However, with the recent advancement in technologies, in a multilingual country like India, many low-resource language users post their comments using English and one or more language(s) in the form of code-mixed texts. Combination of Kannada and English is one such code-mixed text of mixing Kannada and English languages at various levels. To address the word level LI in code-mixed text, in CoLI-Kanglish shared task, we have focused on open-sourcing a Kannada-English code-mixed dataset for word level LI of Kannada, English and mixed-language words written in Roman script. The task includes classifying each word in the given text into one of six predefined categories, namely: Kannada (kn), English (en), Kannada-English (kn-en), Name (name), Location (location), and Other (other). Among the models submitted by all the participants, the best performing model obtained averaged-weighted and averaged-macro F1 scores of 0.86 and 0.62 respectively. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,665
inproceedings | kim-kim-2022-vacillating | Vacillating Human Correlation of {S}acre{BLEU} in Unprotected Languages | Belz, Anya and Popovi{\'c}, Maja and Reiter, Ehud and Shimorina, Anastasia | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.humeval-1.1/ | Kim, Ahrii and Kim, Jinhyeon | Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval) | 1--15 | SacreBLEU, by incorporating a text normalizing step in the pipeline, has become a rising automatic evaluation metric in recent MT studies. With agglutinative languages such as Korean, however, the lexical-level metric cannot provide a conceivable result without a customized pre-tokenization. This paper endeavors to examine the influence of diversified tokenization schemes {--}word, morpheme, subword, character, and consonants {\&} vowels (CV){--} on the metric after its protective layer is peeled off. By performing meta-evaluation with manually-constructed into-Korean resources, our empirical study demonstrates that the human correlation of the surface-based metric and other homogeneous ones (as an extension) vacillates greatly by the token type. Moreover, the human correlation of the metric often deteriorates due to some tokenization, with CV one of its culprits. Guiding through the proper usage of tokenizers for the given metric, we discover i) the feasibility of the character tokens and ii) the deficit of CV in the Korean MT evaluation. | null | null | 10.18653/v1/2022.humeval-1.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,705
inproceedings | borovikova-etal-2022-methodology | A Methodology for the Comparison of Human Judgments With Metrics for Coreference Resolution | Belz, Anya and Popovi{\'c}, Maja and Reiter, Ehud and Shimorina, Anastasia | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.humeval-1.2/ | Borovikova, Mariya and Grobol, Lo{\"i}c and Halftermeyer, Ana{\"i}s and Billot, Sylvie | Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval) | 16--23 | We propose a method for investigating the interpretability of metrics used for the coreference resolution task through comparisons with human judgments. We provide a corpus with annotations of different error types and human evaluations of their gravity. Our preliminary analysis shows that metrics considerably overlook several error types and overlook errors in general in comparison to humans. This study is conducted on French texts, but the methodology is language-independent. | null | null | 10.18653/v1/2022.humeval-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,706
inproceedings | macketanz-etal-2022-perceptual | Perceptual Quality Dimensions of Machine-Generated Text with a Focus on Machine Translation | Belz, Anya and Popovi{\'c}, Maja and Reiter, Ehud and Shimorina, Anastasia | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.humeval-1.3/ | Macketanz, Vivien and Naderi, Babak and Schmidt, Steven and M{\"o}ller, Sebastian | Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval) | 24--31 | The quality of machine-generated text is a complex construct consisting of various aspects and dimensions. We present a study that aims to uncover relevant perceptual quality dimensions for one type of machine-generated text, that is, Machine Translation. We conducted a crowdsourcing survey in the style of a Semantic Differential to collect attribute ratings for German MT outputs. An Exploratory Factor Analysis revealed the underlying perceptual dimensions. As a result, we extracted four factors that operate as relevant dimensions for the Quality of Experience of MT outputs: precision, complexity, grammaticality, and transparency. | null | null | 10.18653/v1/2022.humeval-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,707
inproceedings | ramirez-sanchez-etal-2022-human | Human evaluation of web-crawled parallel corpora for machine translation | Belz, Anya and Popovi{\'c}, Maja and Reiter, Ehud and Shimorina, Anastasia | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.humeval-1.4/ | Ram{\'i}rez-S{\'a}nchez, Gema and Ba{\~n}{\'o}n, Marta and Zaragoza-Bernabeu, Jaume and Ortiz Rojas, Sergio | Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval) | 32--41 | Quality assessment has been an ongoing activity of the series of ParaCrawl efforts to crawl massive amounts of parallel data from multilingual websites for 29 languages. The goal of ParaCrawl is to get parallel data that is good for machine translation. To prove this, both automatic (extrinsic) and human (intrinsic and extrinsic) evaluation tasks have been included as part of the quality assessment activity of the project. We sum up the various methods followed to address these evaluation tasks for the web-crawled corpora produced and their results. We review their advantages and disadvantages for the final goal of the ParaCrawl project and the related ongoing project MaCoCu. | null | null | 10.18653/v1/2022.humeval-1.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,708
inproceedings | balloccu-reiter-2022-beyond | Beyond calories: evaluating how tailored communication reduces emotional load in diet-coaching | Belz, Anya and Popovi{\'c}, Maja and Reiter, Ehud and Shimorina, Anastasia | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.humeval-1.5/ | Balloccu, Simone and Reiter, Ehud | Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval) | 42--53 | Dieting is a behaviour change task that is difficult for many people to conduct successfully. This is due to many factors, including stress and cost. Mobile applications offer an alternative to traditional coaching. However, previous work on app evaluation focused only on dietary outcomes, ignoring users' emotional state despite its influence on eating habits. In this work, we introduce a novel evaluation of the effects that tailored communication can have on the emotional load of dieting. We implement this by augmenting a traditional diet-app with affective NLG, text-tailoring and persuasive communication techniques. We then run a short 2-week experiment and check dietary outcomes, user feedback on the produced text and, most importantly, its impact on emotional state, through the PANAS questionnaire. Results show that tailored communication significantly improved users' emotional state, compared to an app-only control group. | null | null | 10.18653/v1/2022.humeval-1.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,709
inproceedings | shimorina-belz-2022-human | The Human Evaluation Datasheet: A Template for Recording Details of Human Evaluation Experiments in {NLP} | Belz, Anya and Popovi{\'c}, Maja and Reiter, Ehud and Shimorina, Anastasia | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.humeval-1.6/ | Shimorina, Anastasia and Belz, Anya | Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval) | 54--75 | This paper presents the Human Evaluation Datasheet (HEDS), a template for recording the details of individual human evaluation experiments in Natural Language Processing (NLP), and reports on first experience of researchers using HEDS sheets in practice. Originally taking inspiration from seminal papers by Bender and Friedman (2018), Mitchell et al. (2019), and Gebru et al. (2020), HEDS facilitates the recording of properties of human evaluations in sufficient detail, and with sufficient standardisation, to support comparability, meta-evaluation,and reproducibility assessments for human evaluations. These are crucial for scientifically principled evaluation, but the overhead of completing a detailed datasheet is substantial, and we discuss possible ways of addressing this and other issues observed in practice. | null | null | 10.18653/v1/2022.humeval-1.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,710 |
inproceedings | saldias-fuentes-etal-2022-toward | Toward More Effective Human Evaluation for Machine Translation | Belz, Anya and Popovi{\'c}, Maja and Reiter, Ehud and Shimorina, Anastasia | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.humeval-1.7/ | Sald{\'i}as Fuentes, Bel{\'e}n and Foster, George and Freitag, Markus and Tan, Qijun | Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval) | 76--89 | Improvements in text generation technologies such as machine translation have necessitated more costly and time-consuming human evaluation procedures to ensure an accurate signal. We investigate a simple way to reduce cost by reducing the number of text segments that must be annotated in order to accurately predict a score for a complete test set. Using a sampling approach, we demonstrate that information from document membership and automatic metrics can help improve estimates compared to a pure random sampling baseline. We achieve gains of up to 20{\%} in average absolute error by leveraging stratified sampling and control variates. Our techniques can improve estimates made from a fixed annotation budget, are easy to implement, and can be applied to any problem with structure similar to the one we study. | null | null | 10.18653/v1/2022.humeval-1.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,711 |
inproceedings | logacheva-etal-2022-study | A Study on Manual and Automatic Evaluation for Text Style Transfer: The Case of Detoxification | Belz, Anya and Popovi{\'c}, Maja and Reiter, Ehud and Shimorina, Anastasia | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.humeval-1.8/ | Logacheva, Varvara and Dementieva, Daryna and Krotova, Irina and Fenogenova, Alena and Nikishina, Irina and Shavrina, Tatiana and Panchenko, Alexander | Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval) | 90--101 | It is often difficult to reliably evaluate models which generate text. Among them, text style transfer is particularly difficult to evaluate, because its success depends on a number of parameters. We conduct an evaluation of a large number of models on a detoxification task. We explore the relations between the manual and automatic metrics and find that there is only weak correlation between them, which is dependent on the type of model which generated the text. Automatic metrics tend to be less reliable for better-performing models. However, our findings suggest that the ChrF and BertScore metrics can be used as a proxy for human evaluation of text detoxification to some extent. | null | null | 10.18653/v1/2022.humeval-1.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,712
inproceedings | lai-etal-2022-human | Human Judgement as a Compass to Navigate Automatic Metrics for Formality Transfer | Belz, Anya and Popovi{\'c}, Maja and Reiter, Ehud and Shimorina, Anastasia | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.humeval-1.9/ | Lai, Huiyuan and Mao, Jiali and Toral, Antonio and Nissim, Malvina | Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval) | 102--115 | Although text style transfer has witnessed rapid development in recent years, there is as yet no established standard for evaluation, which is performed using several automatic metrics, lacking the possibility of always resorting to human judgement. We focus on the task of formality transfer, and on the three aspects that are usually evaluated: style strength, content preservation, and fluency. To cast light on how such aspects are assessed by common and new metrics, we run a human-based evaluation and perform a rich correlation analysis. We are then able to offer some recommendations on the use of such metrics in formality transfer, also with an eye to their generalisability (or not) to related tasks. | null | null | 10.18653/v1/2022.humeval-1.9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,713 |
inproceedings | luu-2022-towards | Towards Human Evaluation of Mutual Understanding in Human-Computer Spontaneous Conversation: An Empirical Study of Word Sense Disambiguation for Naturalistic Social Dialogs in {A}merican {E}nglish | Belz, Anya and Popovi{\'c}, Maja and Reiter, Ehud and Shimorina, Anastasia | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.humeval-1.10/ | Lưu, Alex | Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval) | 116--125 | Current evaluation practices for social dialog systems, dedicated to human-computer spontaneous conversation, exclusively focus on the quality of system-generated surface text, but not human-verifiable aspects of mutual understanding between the systems and their interlocutors. This work proposes Word Sense Disambiguation (WSD) as an essential component of a valid and reliable human evaluation framework, whose long-term goal is to radically improve the usability of dialog systems in real-life human-computer collaboration. The practicality of this proposal is demonstrated by experimentally investigating (1) the WordNet 3.0 sense inventory coverage of lexical meanings in spontaneous conversation between humans in American English, assumed as an upper bound of lexical diversity of human-computer communication, and (2) the effectiveness of state-of-the-art WSD models and pretrained transformer-based contextual embeddings on this type of data. | null | null | 10.18653/v1/2022.humeval-1.10 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,714
inproceedings | surdeanu-etal-2022-taxonomy | Taxonomy Builder: a Data-driven and User-centric Tool for Streamlining Taxonomy Construction | Blodgett, Su Lin and Daum{\'e} III, Hal and Madaio, Michael and Nenkova, Ani and O'Connor, Brendan and Wallach, Hanna and Yang, Qian | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.hcinlp-1.1/ | Surdeanu, Mihai and Hungerford, John and Chan, Yee Seng and MacBride, Jessica and Gyori, Benjamin and Zupon, Andrew and Tang, Zheng and Qiu, Haoling and Min, Bonan and Zverev, Yan and Hilverman, Caitlin and Thomas, Max and Andrews, Walter and Alcock, Keith and Zhang, Zeyu and Reynolds, Michael and Bethard, Steven and Sharp, Rebecca and Laparra, Egoitz | Proceedings of the Second Workshop on Bridging Human--Computer Interaction and Natural Language Processing | 1--10 | An existing domain taxonomy for normalizing content is often assumed when discussing approaches to information extraction, yet often in real-world scenarios there is none. When one does exist, as the information needs shift, it must be continually extended. This is a slow and tedious task, and one which does not scale well. Here we propose an interactive tool that allows a taxonomy to be built or extended \textit{rapidly} and with a \textit{human in the loop} to control precision. We apply insights from text summarization and information extraction to reduce the search space dramatically, then leverage modern pretrained language models to perform contextualized clustering of the remaining concepts to yield candidate nodes for the user to review. We show this allows a user to consider as many as 200 taxonomy concept candidates an hour, to quickly build or extend a taxonomy to better fit information needs. | null | null | 10.18653/v1/2022.hcinlp-1.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,716 |
inproceedings | mcmillan-major-etal-2022-interactive | An Interactive Exploratory Tool for the Task of Hate Speech Detection | Blodgett, Su Lin and Daum{\'e} III, Hal and Madaio, Michael and Nenkova, Ani and O'Connor, Brendan and Wallach, Hanna and Yang, Qian | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.hcinlp-1.2/ | McMillan-Major, Angelina and Paullada, Amandalynne and Jernite, Yacine | Proceedings of the Second Workshop on Bridging Human--Computer Interaction and Natural Language Processing | 11--20 | With the growth of Automatic Content Moderation (ACM) on widely used social media platforms, transparency into the design of moderation technology and policy is necessary for online communities to advocate for themselves when harms occur. In this work, we describe a suite of interactive modules to support the exploration of various aspects of this technology, and particularly of those components that rely on English models and datasets for hate speech detection, a subtask within ACM. We intend for this demo to support the various stakeholders of ACM in investigating the definitions and decisions that underpin current technologies such that those with technical knowledge and those with contextual knowledge may both better understand existing systems. | null | null | 10.18653/v1/2022.hcinlp-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,717 |
inproceedings | girju-girju-2022-design | Design Considerations for an {NLP}-Driven Empathy and Emotion Interface for Clinician Training via Telemedicine | Blodgett, Su Lin and Daum{\'e} III, Hal and Madaio, Michael and Nenkova, Ani and O'Connor, Brendan and Wallach, Hanna and Yang, Qian | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.hcinlp-1.3/ | Girju, Roxana and Girju, Marina | Proceedings of the Second Workshop on Bridging Human--Computer Interaction and Natural Language Processing | 21--27 | As digital social platforms and mobile technologies become more prevalent and robust, the use of Artificial Intelligence (AI) in facilitating human communication will grow. This, in turn, will encourage development of intuitive, adaptive, and effective empathic AI interfaces that better address the needs of socially and culturally diverse communities. In this paper, we present several design considerations of an intelligent digital interface intended to guide the clinicians toward more empathetic communication. This approach allows various communities of practice to investigate how AI, on one side, and human communication and healthcare needs, on the other, can contribute to each other's development. | null | null | 10.18653/v1/2022.hcinlp-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,718
inproceedings | barale-2022-human | Human-centered computing in legal {NLP} - An application to refugee status determination | Blodgett, Su Lin and Daum{\'e} III, Hal and Madaio, Michael and Nenkova, Ani and O'Connor, Brendan and Wallach, Hanna and Yang, Qian | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.hcinlp-1.4/ | Barale, Claire | Proceedings of the Second Workshop on Bridging Human--Computer Interaction and Natural Language Processing | 28--33 | This paper proposes an approach to the design of an ethical human-AI reasoning support system for decision makers in refugee law. In the context of refugee status determination, practitioners mostly rely on text data. We therefore investigate human-AI cooperation in legal natural language processing. Specifically, we want to determine which design methods can be transposed to legal text analytics. Although little work has been done so far on human-centered design methods applicable to the legal domain, we assume that introducing iterative cooperation and user engagement in the design process is (1) a method to reduce technical limitations of an NLP system and (2) that it will help design more ethical and effective applications by taking users' preferences and feedback into account. The proposed methodology is based on three main design steps: cognitive process formalization in models understandable by both humans and computers, speculative design of prototypes, and semi-directed interviews with a sample of potential users. | null | null | 10.18653/v1/2022.hcinlp-1.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,719 |
inproceedings | soper-etal-2022-lets | Let's Chat: Understanding User Expectations in Socialbot Interactions | Blodgett, Su Lin and Daum{\'e} III, Hal and Madaio, Michael and Nenkova, Ani and O'Connor, Brendan and Wallach, Hanna and Yang, Qian | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.hcinlp-1.5/ | Soper, Elizabeth and Pacquetet, Erin and Saha, Sougata and Das, Souvik and Srihari, Rohini | Proceedings of the Second Workshop on Bridging Human--Computer Interaction and Natural Language Processing | 34--39 | This paper analyzes data from the 2021 Amazon Alexa Prize Socialbot Grand Challenge 4, in order to better understand the differences between human-computer interactions (HCI) in a socialbot setting and conventional human-to-human interactions. We find that because socialbots are a new genre of HCI, we are still negotiating norms to guide interactions in this setting. We present several notable patterns in user behavior toward socialbots, which have important implications for guiding future work in the development of conversational agents. | null | null | 10.18653/v1/2022.hcinlp-1.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,720
inproceedings | titung-alm-2022-teaching | Teaching Interactively to Learn Emotions in Natural Language | Blodgett, Su Lin and Daum{\'e} III, Hal and Madaio, Michael and Nenkova, Ani and O'Connor, Brendan and Wallach, Hanna and Yang, Qian | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.hcinlp-1.6/ | Titung, Rajesh and Alm, Cecilia | Proceedings of the Second Workshop on Bridging Human--Computer Interaction and Natural Language Processing | 40--46 | Motivated by prior literature, we provide a proof of concept simulation study for an understudied interactive machine learning method, machine teaching (MT), for the text-based emotion prediction task. We compare this method experimentally against a more well-studied technique, active learning (AL). Results show the strengths of both approaches over more resource-intensive offline supervised learning. Additionally, applying AL and MT to fine-tune a pre-trained model offers further efficiency gain. We end by recommending research directions which aim to empower users in the learning process. | null | null | 10.18653/v1/2022.hcinlp-1.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,721 |
inproceedings | sultana-etal-2022-narrative | Narrative Datasets through the Lenses of {NLP} and {HCI} | Blodgett, Su Lin and Daum{\'e} III, Hal and Madaio, Michael and Nenkova, Ani and O'Connor, Brendan and Wallach, Hanna and Yang, Qian | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.hcinlp-1.7/ | Sultana, Sharifa and Zhang, Renwen and Lim, Hajin and Antoniak, Maria | Proceedings of the Second Workshop on Bridging Human--Computer Interaction and Natural Language Processing | 47--54 | In this short paper, we compare existing value systems and approaches in NLP and HCI for collecting narrative data. Building on these parallel discussions, we shed light on the challenges facing some popular NLP dataset types, which we discuss in relation to widely-used narrative-based HCI research methods; and we highlight points where NLP methods can broaden qualitative narrative studies. In particular, we point towards contextuality, positionality, dataset size, and open research design as central points of difference and windows for collaboration when studying narratives. Through the use case of narratives, this work contributes to a larger conversation regarding the possibilities for bridging NLP and HCI through speculative mixed-methods. | null | null | 10.18653/v1/2022.hcinlp-1.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,722
inproceedings | dacon-2022-towards | Towards a Deep Multi-layered Dialectal Language Analysis: A Case Study of {A}frican-{A}merican {E}nglish | Blodgett, Su Lin and Daum{\'e} III, Hal and Madaio, Michael and Nenkova, Ani and O'Connor, Brendan and Wallach, Hanna and Yang, Qian | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.hcinlp-1.8/ | Dacon, Jamell | Proceedings of the Second Workshop on Bridging Human--Computer Interaction and Natural Language Processing | 55--63 | Currently, natural language processing (NLP) models proliferate language discrimination leading to potentially harmful societal impacts as a result of biased outcomes. For example, part-of-speech taggers trained on Mainstream American English (MAE) produce non-interpretable results when applied to African American English (AAE) as a result of language features not seen during training. In this work, we incorporate a human-in-the-loop paradigm to gain a better understanding of AAE speakers' behavior and their language use, and highlight the need for dialectal language inclusivity so that native AAE speakers can extensively interact with NLP systems while reducing feelings of disenfranchisement. | null | null | 10.18653/v1/2022.hcinlp-1.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,723 |
inproceedings | declerck-2022-towards-linking | Towards the Linking of a Sign Language Ontology with Lexical Data | Kernerman, Ilan and Krek, Simon | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.gwll-1.2/ | Declerck, Thierry | Proceedings of Globalex Workshop on Linked Lexicography within the 13th Language Resources and Evaluation Conference | 6--9 | We describe our current work for linking a new ontology for representing constitutive elements of Sign Languages with lexical data encoded within the OntoLex-Lemon framework. We first present very briefly the current state of the ontology, and show how transcriptions of signs can be represented in OntoLex-Lemon, in a minimalist manner, before addressing the challenges of linking the elements of the ontology to full lexical descriptions of the spoken languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,726 |
inproceedings | chiarcos-etal-2022-modelling | Modelling Collocations in {O}nto{L}ex-{F}r{AC} | Kernerman, Ilan and Krek, Simon | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.gwll-1.3/ | Chiarcos, Christian and Gkirtzou, Katerina and Ionov, Maxim and Kabashi, Besim and Khan, Fahad and Truic{\u{a}}, Ciprian-Octavian | Proceedings of Globalex Workshop on Linked Lexicography within the 13th Language Resources and Evaluation Conference | 10--18 | Following presentations of frequency and attestations, and embeddings and distributional similarity, this paper introduces the third cornerstone of the emerging OntoLex module for Frequency, Attestation and Corpus-based Information, OntoLex-FrAC. We provide an RDF vocabulary for collocations, established as a consensus over contributions from five different institutions and numerous data sets, with the goal of eliciting feedback from reviewers, workshop audience and the scientific community in preparation of the final consolidation of the OntoLex-FrAC module, whose publication as a W3C community report is foreseen for the end of this year. The novel collocation component of OntoLex-FrAC is described in application to a lexicographic resource and corpus-based collocation scores available from the web, and finally, we demonstrate the capability and genericity of the model by showing how to retrieve and aggregate collocation information by means of SPARQL, and its export to a tabular format, so that it can be easily processed in downstream applications. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,727 |
inproceedings | gracia-etal-2022-tiad | {TIAD} 2022: The Fifth Translation Inference Across Dictionaries Shared Task | Kernerman, Ilan and Krek, Simon | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.gwll-1.4/ | Gracia, Jorge and Kabashi, Besim and Kernerman, Ilan | Proceedings of Globalex Workshop on Linked Lexicography within the 13th Language Resources and Evaluation Conference | 19--25 | The objective of the Translation Inference Across Dictionaries (TIAD) series of shared tasks is to explore and compare methods and techniques that infer translations indirectly between language pairs, based on other bilingual/multilingual lexicographic resources. In this fifth edition, the participating systems were asked to generate new translations automatically among three languages - English, French, Portuguese - based on known indirect translations contained in the Apertium RDF graph. Such evaluation pairs have been the same during the last four TIAD editions. Since the fourth edition, however, a larger graph is used as a basis to produce the translations, namely Apertium RDF v2. The evaluation of the results was carried out by the organisers against manually compiled language pairs of K Dictionaries. For the second time in the TIAD series, some systems beat the proposed baselines. This paper gives an overall description of the shared task, the evaluation data and methodology, and the systems' results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,728
inproceedings | bestgen-2022-creating | Creating Bilingual Dictionaries from Existing Ones by Means of Pivot-Oriented Translation Inference and Logistic Regression | Kernerman, Ilan and Krek, Simon | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.gwll-1.5/ | Bestgen, Yves | Proceedings of Globalex Workshop on Linked Lexicography within the 13th Language Resources and Evaluation Conference | 26--31 | To produce new bilingual dictionaries from existing ones, an important task in the field of translation, a system based on a very classical supervised learning technique, with no other knowledge than the available bilingual dictionaries, is proposed. It performed very well in the Translation Inference Across Dictionaries (TIAD) shared task on the combined 2021 and 2022 editions. An analysis of the pros and cons suggests a series of avenues to further improve its effectiveness. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,729 |
inproceedings | steingrimsson-etal-2022-compiling | Compiling a Highly Accurate Bilingual Lexicon by Combining Different Approaches | Kernerman, Ilan and Krek, Simon | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.gwll-1.6/ | Steingr{\'i}msson, Stein{\th}{\'o}r and O{'}Brien, Luke and Ingimundarson, Finnur and Loftsson, Hrafn and Way, Andy | Proceedings of Globalex Workshop on Linked Lexicography within the 13th Language Resources and Evaluation Conference | 32--41 | Bilingual lexicons can be generated automatically using a wide variety of approaches. We perform a rigorous manual evaluation of four different methods: word alignments on different types of bilingual data, pivoting, machine translation and cross-lingual word embeddings. We investigate how the different setups perform using publicly available data for the English-Icelandic language pair, doing separate evaluations for each method, dataset and confidence class where it can be calculated. The results are validated by human experts, working with a random sample from all our experiments. By combining the most promising approaches and data sets, using confidence scores calculated from the data and the results of manually evaluating samples from our manual evaluation as indicators, we are able to induce lists of translations with a very high acceptance rate. We show how multiple different combinations generate lists with well over 90{\%} acceptance rate, substantially exceeding the results for each individual approach, while still generating reasonably large candidate lists. All manually evaluated equivalence pairs are published in a new lexicon of over 232,000 pairs under an open license. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,730 |
inproceedings | steiner-2022-converting | Converting a Database of Complex {G}erman Word Formation for Linked Data | Kernerman, Ilan and Krek, Simon | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.gwll-1.8/ | Steiner, Petra | Proceedings of Globalex Workshop on Linked Lexicography within the 13th Language Resources and Evaluation Conference | 52--59 | This work combines two lexical resources with morphological information on German word formation, CELEX for German and the latest release of GermaNet, for extracting and building complex word structures. This yields a database of over 100,000 German wordtrees. A definition for sequential morphological analyses leads to an OntoLex-Lemon-type model. By using GermaNet sense information, the data can be linked to other semantic resources. An alignment to the CIDOC Conceptual Reference Model (CIDOC-CRM) is also provided. The scripts for the data generation are publicly available on GitHub. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,732
inproceedings | zdravkova-2022-resolving | Resolving Inflectional Ambiguity of {M}acedonian Adjectives | Kernerman, Ilan and Krek, Simon | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.gwll-1.9/ | Zdravkova, Katerina | Proceedings of Globalex Workshop on Linked Lexicography within the 13th Language Resources and Evaluation Conference | 60--67 | Macedonian adjectives are inflected for gender, number, definiteness and degree, with on average 47.98 inflections per headword. The inflection paradigm of qualificative adjectives is even richer, embracing 56.27 morphophonemic alterations. Depending on the word they were derived from, more than 600 Macedonian adjectives have an identical headword and two different word forms for each grammatical category. While non-verbal adjectives alter the root before adding the inflectional suffixes, suffixes of verbal adjectives are added directly to the root. In parallel with the morphological differences, both types of adjectives have a different translation, depending on the category of the words they have been derived from. Nouns that collocate with these adjectives are mutually disjunctive, enabling the resolution of inflectional ambiguity. They are organised as a lexical taxonomy, created using hierarchical divisive clustering. If embedded in future spell-checking applications, this taxonomy will significantly reduce the risk of forming incorrect inflections, which frequently occur in daily news and, more often, in advertisements and social media. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,733
inproceedings | arican-etal-2022-morpholex | Morpholex {T}urkish: A Morphological Lexicon for {T}urkish | Kernerman, Ilan and Krek, Simon | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.gwll-1.10/ | Arican, Bilge and Kuzgun, Asl{\i} and Mar{\c{s}}an, B{\"u}{\c{s}}ra and Aslan, Deniz Baran and Saniyar, Ezgi and Cesur, Neslihan and Kara, Neslihan and Kuyrukcu, Oguzhan and Ozcelik, Merve and Yenice, Arife Betul and Dogan, Merve and Oksal, Ceren and Ercan, G{\"o}khan and Y{\i}ld{\i}z, Olcay Taner | Proceedings of Globalex Workshop on Linked Lexicography within the 13th Language Resources and Evaluation Conference | 68--74 | MorphoLex is a study in which root, prefix and suffixes of words are analyzed. With MorphoLex, many words can be analyzed according to certain rules and a useful database can be created. Due to the fact that Turkish is an agglutinative language and the richness of its language structure, it offers different analyses and results from previous studies in MorphoLex. In this study, we revealed the process of creating a database with 48,472 words and the results of the differences in language structure. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,734
inproceedings | oksal-etal-2022-time | Time Travel in {T}urkish: {W}ord{N}ets for {M}odern {T}urkish | Kernerman, Ilan and Krek, Simon | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.gwll-1.11/ | Oksal, Ceren and Oguz, Hikmet N. and Catal, Mert and Erbay, Nurkay and Yuzer, Ozgecan and Unsal, Ipek B. and Kuyrukcu, Oguzhan and Yenice, Arife B. and Kuzgun, Asl{\i} and Mar{\c{s}}an, B{\"u}{\c{s}}ra and San{\i}yar, Ezgi and Arican, Bilge and Dogan, Merve and Bakay, {\"O}zge and Y{\i}ld{\i}z, Olcay Taner | Proceedings of Globalex Workshop on Linked Lexicography within the 13th Language Resources and Evaluation Conference | 75--84 | Wordnets have been popular tools for providing and representing semantic and lexical relations of languages. They are useful tools for various purposes in NLP studies. Many researchers have created WordNets for different languages. For Turkish, there are two WordNets, namely the Turkish WordNet of BalkaNet and KeNet. In this paper, we present new WordNets for Turkish each of which is based on one of the first 9 editions of the Turkish dictionary starting from the 1944 edition. These WordNets are historical in nature and make implications for Modern Turkish. They are developed by extending KeNet, which was created based on the 2005 and 2011 editions of the Turkish dictionary. In this paper, we explain the steps in creating these 9 new WordNets for Turkish, discuss the challenges in the process and report comparative results about the WordNets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,735
inproceedings | dogan-etal-2022-wordnet | {W}ord{N}et and {W}ikipedia Connection in {T}urkish {W}ord{N}et {K}e{N}et | Kernerman, Ilan and Krek, Simon | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.gwll-1.12/ | Do{\u{g}}an, Merve and Oksal, Ceren and Yenice, Arife Bet{\"u}l and Beyhan, Fatih and Yeniterzi, Reyyan and Y{\i}ld{\i}z, Olcay Taner | Proceedings of Globalex Workshop on Linked Lexicography within the 13th Language Resources and Evaluation Conference | 85--89 | This paper aims to present WordNet and Wikipedia connection by linking synsets from Turkish WordNet KeNet with Wikipedia and thus, provide a better machine-readable dictionary to create an NLP model with rich data. For this purpose, manual mapping between two resources is realized and 11,478 synsets are linked to Wikipedia. In addition to this, automatic linking approaches are utilized to analyze possible connection suggestions. Baseline Approach and ElasticSearch Based Approach help identify the potential human annotation errors and analyze the effectiveness of these approaches in linking. Adopting both manual and automatic mapping provides us with an encompassing resource of WordNet and Wikipedia connections. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,736
inproceedings | maudslay-teufel-2022-homonymy | Homonymy Information for {E}nglish {W}ord{N}et | Kernerman, Ilan and Krek, Simon | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.gwll-1.13/ | Maudslay, Rowan Hall and Teufel, Simone | Proceedings of Globalex Workshop on Linked Lexicography within the 13th Language Resources and Evaluation Conference | 90--98 | A widely acknowledged shortcoming of WordNet is that it lacks a distinction between word meanings which are systematically related (polysemy), and those which are coincidental (homonymy). Several previous works have attempted to fill this gap, by inferring this information using computational methods. We revisit this task, and exploit recent advances in language modelling to synthesise homonymy annotation for Princeton WordNet. Previous approaches treat the problem using clustering methods; by contrast, our method works by linking WordNet to the Oxford English Dictionary, which contains the information we need. To perform this alignment, we pair definitions based on their proximity in an embedding space produced by a Transformer model. Despite the simplicity of this approach, our best model attains an F1 of .97 on an evaluation set that we annotate. The outcome of our work is a high-quality homonymy annotation layer for Princeton WordNet, which we release. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,737 |
inproceedings | salar-mohtaj-babak-naderi-2022-overview | Overview of the {G}erm{E}val 2022 Shared Task on Text Complexity Assessment of {G}erman Text | M{\"o}ller, Sebastian and Mohtaj, Salar and Naderi, Babak | sep | 2022 | Potsdam, Germany | Association for Computational Linguistics | https://aclanthology.org/2022.germeval-1.1/ | Mohtaj, Salar and Naderi, Babak and M{\"o}ller, Sebastian | Proceedings of the GermEval 2022 Workshop on Text Complexity Assessment of German Text | 1--9 | In this paper we present the GermEval 2022 shared task on Text Complexity Assessment of German text. Text forms an integral part of exchanging information and interacting with the world, correlating with quality and experience of life. Text complexity is one of the factors which affects a reader's understanding of a text. The mapping of a body of text to a mathematical unit quantifying the degree of readability is the basis of complexity assessment. As readability might be influenced by representation, we only target the text complexity for readers in this task. We designed the task as text regression in which participants developed models to predict complexity of pieces of text for a German learner in a range from 1 to 7. The shared task is organized in two phases: the development and the test phases. Among 24 participants who registered for the shared task, ten teams submitted their results on the test data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,739
inproceedings | hamster-2022-everybody | Everybody likes short sentences - A Data Analysis for the Text Complexity {DE} Challenge 2022 | M{\"o}ller, Sebastian and Mohtaj, Salar and Naderi, Babak | sep | 2022 | Potsdam, Germany | Association for Computational Linguistics | https://aclanthology.org/2022.germeval-1.2/ | Hamster, Ulf A. | Proceedings of the GermEval 2022 Workshop on Text Complexity Assessment of German Text | 10--14 | The German Text Complexity Assessment Shared Task in KONVENS 2022 explores how to predict a complexity score for sentence examples from language learners' perspective. Our modeling approach for this shared task utilizes off-the-shelf NLP tools for feature engineering and a Random Forest regression model. We identified the text length, or resp. the logarithm of a sentence's string length, as the most important feature to predict the complexity score. Further analysis showed that the Pearson correlation between text length and complexity score is about $\rho$ {\ensuremath{\approx}} 0.777. A sensitivity analysis on the loss function revealed that semantic SBert features impact the complexity score as well. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,740
inproceedings | asghari-hewett-2022-hiig | {HIIG} at {G}erm{E}val 2022: Best of Both Worlds Ensemble for Automatic Text Complexity Assessment | M{\"o}ller, Sebastian and Mohtaj, Salar and Naderi, Babak | sep | 2022 | Potsdam, Germany | Association for Computational Linguistics | https://aclanthology.org/2022.germeval-1.3/ | Asghari, Hadi and Hewett, Freya | Proceedings of the GermEval 2022 Workshop on Text Complexity Assessment of German Text | 15--20 | In this paper we explain HIIG's contribution to the shared task Text Complexity DE Challenge 2022. Our best-performing model for the task of automatically determining the complexity level of a German-language sentence is a combination of a transformer model and a classic feature-based model, which achieves a mapped root mean square error of 0.446 on the test data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,741
inproceedings | anschutz-groh-2022-tum | {TUM} Social Computing at {G}erm{E}val 2022: Towards the Significance of Text Statistics and Neural Embeddings in Text Complexity Prediction | M{\"o}ller, Sebastian and Mohtaj, Salar and Naderi, Babak | sep | 2022 | Potsdam, Germany | Association for Computational Linguistics | https://aclanthology.org/2022.germeval-1.4/ | Ansch{\"u}tz, Miriam and Groh, Georg | Proceedings of the GermEval 2022 Workshop on Text Complexity Assessment of German Text | 21--26 | In this paper, we describe our submission to the GermEval 2022 Shared Task on Text Complexity Assessment of German Text. It addresses the problem of predicting the complexity of German sentences on a continuous scale. While many related works still rely on handcrafted statistical features, neural networks have emerged as state-of-the-art in other natural language processing tasks. Therefore, we investigate how both can complement each other and which features are most relevant for text complexity prediction in German. We propose a fine-tuned German DistilBERT model enriched with statistical text features that achieved fourth place in the shared task with a RMSE of 0.481 on the competition's test data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,742
inproceedings | arps-etal-2022-hhuplexity | {HHU}plexity at Text Complexity {DE} Challenge 2022 | M{\"o}ller, Sebastian and Mohtaj, Salar and Naderi, Babak | sep | 2022 | Potsdam, Germany | Association for Computational Linguistics | https://aclanthology.org/2022.germeval-1.5/ | Arps, David and Kels, Jan and Kr{\"a}mer, Florian and Renz, Yunus and Stodden, Regina and Petersen, Wiebke | Proceedings of the GermEval 2022 Workshop on Text Complexity Assessment of German Text | 27--32 | In this paper, we describe our submission to the {\textquoteleft}Text Complexity DE Challenge 2022{\textquoteright} shared task on predicting the complexity of German sentences. We compare performance of different feature-based regression architectures and transformer language models. Our best candidate is a fine-tuned German Distilbert model that ignores linguistic features of the sentences. Our model ranks 7th place in the shared task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,743
inproceedings | kostic-etal-2022-pseudo | Pseudo-Labels Are All You Need | M{\"o}ller, Sebastian and Mohtaj, Salar and Naderi, Babak | sep | 2022 | Potsdam, Germany | Association for Computational Linguistics | https://aclanthology.org/2022.germeval-1.6/ | Kosti{\'c}, Bogdan and Lucka, Mathis and Risch, Julian | Proceedings of the GermEval 2022 Workshop on Text Complexity Assessment of German Text | 33--38 | Automatically estimating the complexity of texts for readers has a variety of applications, such as recommending texts with an appropriate complexity level to language learners or supporting the evaluation of text simplification approaches. In this paper, we present our submission to the Text Complexity DE Challenge 2022, a regression task where the goal is to predict the complexity of a German sentence for German learners at level B. Our approach relies on more than 220,000 pseudolabels created from the German Wikipedia and other corpora to train Transformer-based models, and refrains from any feature engineering or any additional, labeled data. We find that the pseudo-label-based approach gives impressive results yet requires little to no adjustment to the specific task and therefore could be easily adapted to other domains and tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,744
inproceedings | mosquera-2022-tackling | Tackling Data Drift with Adversarial Validation: An Application for {G}erman Text Complexity Estimation | M{\"o}ller, Sebastian and Mohtaj, Salar and Naderi, Babak | sep | 2022 | Potsdam, Germany | Association for Computational Linguistics | https://aclanthology.org/2022.germeval-1.7/ | Mosquera, Alejandro | Proceedings of the GermEval 2022 Workshop on Text Complexity Assessment of German Text | 39--44 | This paper describes the winning approach in the first automated German text complexity assessment shared task as part of KONVENS 2022. To solve this difficult problem, the evaluated system relies on an ensemble of regression models that successfully combines both traditional feature engineering and pre-trained resources. Moreover, the use of adversarial validation is proposed as a method for countering the data drift identified during the development phase, thus helping to select relevant models and features and avoid leaderboard overfitting. The best submission reached 0.43 mapped RMSE on the test set during the final phase of the competition. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,745
inproceedings | girrbach-2022-text | Text Complexity {DE} Challenge 2022 Submission Description: Pairwise Regression for Complexity Prediction | M{\"o}ller, Sebastian and Mohtaj, Salar and Naderi, Babak | sep | 2022 | Potsdam, Germany | Association for Computational Linguistics | https://aclanthology.org/2022.germeval-1.8/ | Girrbach, Leander | Proceedings of the GermEval 2022 Workshop on Text Complexity Assessment of German Text | 45--50 | This paper describes our submission to the Text Complexity DE Challenge 2022 (Mohtaj et al., 2022). We evaluate a pairwise regression model that predicts the relative difference in complexity of two sentences, instead of predicting a complexity score from a single sentence. In consequence, the model returns samples of scores (as many as there are training sentences) instead of a point estimate. Due to an error in the submission, test set results are unavailable. However, we show by cross-validation that pairwise regression does not improve performance over standard regression models using sentence embeddings taken from pretrained language models as input. Furthermore, we do not find the distribution standard deviations to reflect differences in {\textquotedblleft}uncertainty{\textquotedblright} of the model predictions in a useful way. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,746
inproceedings | vladika-etal-2022-tum | {TUM} sebis at {G}erm{E}val 2022: A Hybrid Model Leveraging {G}aussian Processes and Fine-Tuned {XLM}-{R}o{BERT}a for {G}erman Text Complexity Analysis | M{\"o}ller, Sebastian and Mohtaj, Salar and Naderi, Babak | sep | 2022 | Potsdam, Germany | Association for Computational Linguistics | https://aclanthology.org/2022.germeval-1.9/ | Vladika, Juraj and Meisenbacher, Stephen and Matthes, Florian | Proceedings of the GermEval 2022 Workshop on Text Complexity Assessment of German Text | 51--56 | The task of quantifying the complexity of written language presents an interesting endeavor, particularly in the opportunity that it presents for aiding language learners. In this pursuit, the question of what exactly about natural language contributes to its complexity (or lack thereof) is an interesting point of investigation. We propose a hybrid approach, utilizing shallow models to capture linguistic features, while leveraging a fine-tuned embedding model to encode the semantics of input text. By harmonizing these two methods, we achieve competitive scores in the given metric, and we demonstrate improvements over either singular method. In addition, we uncover the effectiveness of Gaussian processes in the training of shallow models for text complexity analysis. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,747
inproceedings | blaneck-etal-2022-automatic | Automatic Readability Assessment of {G}erman Sentences with Transformer Ensembles | M{\"o}ller, Sebastian and Mohtaj, Salar and Naderi, Babak | sep | 2022 | Potsdam, Germany | Association for Computational Linguistics | https://aclanthology.org/2022.germeval-1.10/ | Blaneck, Patrick Gustav and Bornheim, Tobias and Grieger, Niklas and Bialonski, Stephan | Proceedings of the GermEval 2022 Workshop on Text Complexity Assessment of German Text | 57--62 | Reliable methods for automatic readability assessment have the potential to impact a variety of fields, ranging from machine translation to self-informed learning. Recently, large language models for the German language (such as GBERT and GPT-2-Wechsel) have become available, allowing to develop Deep Learning based approaches that promise to further improve automatic readability assessment. In this contribution, we studied the ability of ensembles of fine-tuned GBERT and GPT-2-Wechsel models to reliably predict the readability of German sentences. We combined these models with linguistic features and investigated the dependence of prediction performance on ensemble size and composition. Mixed ensembles of GBERT and GPT-2-Wechsel performed better than ensembles of the same size consisting of only GBERT or GPT-2-Wechsel models. Our models were evaluated in the GermEval 2022 Shared Task on Text Complexity Assessment on data of German sentences. On out-of-sample data, our best ensemble achieved a root mean squared error of 0.435. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,748
inproceedings | pernes-etal-2022-improving | Improving abstractive summarization with energy-based re-ranking | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.1/ | Pernes, Diogo and Mendes, Afonso and Martins, Andr{\'e} F. T. | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 1--17 | Current abstractive summarization systems present important weaknesses which prevent their deployment in real-world applications, such as the omission of relevant information and the generation of factual inconsistencies (also known as hallucinations). At the same time, automatic evaluation metrics such as CTC scores (Deng et al., 2021) have been recently proposed that exhibit a higher correlation with human judgments than traditional lexical-overlap metrics such as ROUGE. In this work, we intend to close the loop by leveraging the recent advances in summarization metrics to create quality-aware abstractive summarizers. Namely, we propose an energy-based model that learns to re-rank summaries according to one or a combination of these metrics. We experiment using several metrics to train our energy-based re-ranker and show that it consistently improves the scores achieved by the predicted summaries. Nonetheless, human evaluation results show that the re-ranking approach should be used with care for highly abstractive summaries, as the available metrics are not yet sufficiently reliable for this purpose. | null | null | 10.18653/v1/2022.gem-1.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,750 |
inproceedings | golovneva-etal-2022-task | Task-driven augmented data evaluation | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.2/ | Golovneva, Olga and Wei, Pan and Abboud, Khadige and Peris, Charith and Tan, Lizhen and Yu, Haiyang | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 18--25 | In the area of data augmentation research, the main focus to date has been on the improvement of the generation models, while the examination and improvements to synthetic data evaluation methods remains less explored. In our work, we explore a number of sentence similarity measures in the context of data generation filtering, and evaluate their impact on the performance of the targeted Natural Language Understanding problem on the example of the intent classification and named entity recognition tasks. Our experiments on ATIS dataset show that the right choice of filtering technique can bring up to 33{\%} in sentence accuracy improvement for targeted underrepresented intents. | null | null | 10.18653/v1/2022.gem-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,751 |
inproceedings | cai-etal-2022-generating | Generating Coherent Narratives with Subtopic Planning to Answer How-to Questions | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.3/ | Cai, Pengshan and Yu, Mo and Liu, Fei and Yu, Hong | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 26--42 | Answering how-to questions remains a major challenge in question answering research. A vast number of narrow, long-tail questions cannot be readily answered using a search engine. Moreover, there is little to no annotated data available to develop such systems. This paper makes a first attempt at generating coherent, long-form answers for how-to questions. We propose new architectures, consisting of passage retrieval, subtopic planning and narrative generation, to consolidate multiple relevant passages into a coherent, explanatory answer. Our subtopic planning module aims to produce a set of relevant, diverse subtopics that serve as the backbone for answer generation to improve topic coherence. We present extensive experiments on a WikiHow dataset repurposed for long-form question answering. Empirical results demonstrate that generating narratives to answer how-to questions is a challenging task. Nevertheless, our architecture incorporated with subtopic planning can produce high-quality, diverse narratives evaluated using automatic metrics and human assessment. | null | null | 10.18653/v1/2022.gem-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,752 |
inproceedings | pal-etal-2022-weakly | Weakly Supervised Context-based Interview Question Generation | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.4/ | Pal, Samiran and Khan, Kaamraan and Singh, Avinash Kumar and Ghosh, Subhasish and Nayak, Tapas and Palshikar, Girish and Bhattacharya, Indrajit | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 43--53 | We explore the task of automated generation of technical interview questions from a given textbook. Such questions are different from those for reading comprehension studied in question generation literature. We curate a context based interview questions data set for Machine Learning and Deep Learning from two popular textbooks. We first explore the possibility of using a large generative language model (GPT-3) for this task in a zero shot setting. We then evaluate the performance of smaller generative models such as BART fine-tuned on weakly supervised data obtained using GPT-3 and hand-crafted templates. We deploy an automatic question importance assignment technique to figure out suitability of a question in a technical interview. It improves the evaluation results in many dimensions. We dissect the performance of these models for this task and also scrutinize the suitability of questions generated by them for use in technical interviews. | null | null | 10.18653/v1/2022.gem-1.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,753 |
inproceedings | kirstein-etal-2022-analyzing | Analyzing Multi-Task Learning for Abstractive Text Summarization | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.5/ | Kirstein, Frederic Thomas and Wahle, Jan Philip and Ruas, Terry and Gipp, Bela | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 54--77 | Despite the recent success of multi-task learning and pre-finetuning for natural language understanding, few works have studied the effects of task families on abstractive text summarization. Task families are a form of task grouping during the pre-finetuning stage to learn common skills, such as reading comprehension. To close this gap, we analyze the influence of multi-task learning strategies using task families for the English abstractive text summarization task. We group tasks into one of three strategies, i.e., sequential, simultaneous, and continual multi-task learning, and evaluate trained models through two downstream tasks. We find that certain combinations of task families (e.g., advanced reading comprehension and natural language inference) positively impact downstream performance. Further, we find that choice and combinations of task families influence downstream performance more than the training scheme, supporting the use of task families for abstractive text summarization. | null | null | 10.18653/v1/2022.gem-1.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,754
inproceedings | chuklin-etal-2022-clse | {CLSE}: Corpus of Linguistically Significant Entities | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.6/ | Chuklin, Aleksandr and Zhao, Justin and Kale, Mihir | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 78--96 | One of the biggest challenges of natural language generation (NLG) is the proper handling of named entities. Named entities are a common source of grammar mistakes such as wrong prepositions, wrong article handling, or incorrect entity inflection. Without factoring linguistic representation, such errors are often underrepresented when evaluating on a small set of arbitrarily picked argument values, or when translating a dataset from a linguistically simpler language, like English, to a linguistically complex language, like Russian. However, for some applications, broadly precise grammatical correctness is critical {--} native speakers may find entity-related grammar errors silly, jarring, or even offensive. To enable the creation of more linguistically diverse NLG datasets, we release a Corpus of Linguistically Significant Entities (CLSE) annotated by linguist experts. The corpus includes 34 languages and covers 74 different semantic types to support various applications from airline ticketing to video games. To demonstrate one possible use of CLSE, we produce an augmented version of the Schema-Guided Dialog Dataset, SGD-CLSE. Using the CLSE's entities and a small number of human translations, we create a linguistically representative NLG evaluation benchmark in three languages: French (high-resource), Marathi (low-resource), and Russian (highly inflected language). We establish quality baselines for neural, template-based, and hybrid NLG systems and discuss the strengths and weaknesses of each approach. | null | null | 10.18653/v1/2022.gem-1.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,755
inproceedings | glover-etal-2022-revisiting | Revisiting text decomposition methods for {NLI}-based factuality scoring of summaries | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.7/ | Glover, John and Fancellu, Federico and Jagannathan, Vasudevan and Gormley, Matthew R. and Schaaf, Thomas | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 97--105 | Scoring the factuality of a generated summary involves measuring the degree to which a target text contains factual information using the input document as support. Given the similarities in the problem formulation, previous work has shown that Natural Language Inference models can be effectively repurposed to perform this task. As these models are trained to score entailment at a sentence level, several recent studies have shown that decomposing either the input document or the summary into sentences helps with factuality scoring. But is fine-grained decomposition always a winning strategy? In this paper we systematically compare different granularities of decomposition - from document to sub-sentence level, and we show that the answer is no. Our results show that incorporating additional context can yield improvement, but that this does not necessarily apply to all datasets. We also show that small changes to previously proposed entailment-based scoring methods can result in better performance, highlighting the need for caution in model and methodology selection for downstream tasks. | null | null | 10.18653/v1/2022.gem-1.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,756 |
inproceedings | leung-etal-2022-semantic | Semantic Similarity as a Window into Vector- and Graph-Based Metrics | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.8/ | Leung, Wai Ching and Wein, Shira and Schneider, Nathan | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 106--115 | In this work, we use sentence similarity as a lens through which to investigate the representation of meaning in graphs vs. vectors. On semantic textual similarity data, we examine how similarity metrics based on vectors alone (SENTENCE-BERT and BERTSCORE) fare compared to metrics based on AMR graphs (SMATCH and S2MATCH). Quantitative and qualitative analyses show that the AMR-based metrics can better capture meanings dependent on sentence structures, but can also be distracted by structural differences{---}whereas the BERT-based metrics represent finer-grained meanings of individual words, but often fail to capture the ordering effect of words within sentences and suffer from interpretability problems. These findings contribute to our understanding of each approach to semantic representation and motivate distinct use cases for graph and vector-based representations. | null | null | 10.18653/v1/2022.gem-1.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,757 |