entry_type (4 classes) | citation_key (10-110 chars) | title (6-276 chars, nullable) | editor (723 classes) | month (69 classes) | year (dates 1963-2022) | address (202 classes) | publisher (41 classes) | url (34-62 chars) | author (6-2.07k chars, nullable) | booktitle (861 classes) | pages (1-12 chars, nullable) | abstract (302-2.4k chars) | journal (5 classes) | volume (24 classes) | doi (20-39 chars, nullable) | n (3 classes) | wer (1 class) | uas (null) | language (3 classes) | isbn (34 classes) | recall (null) | number (8 classes) | a (null) | b (null) | c (null) | k (null) | f1 (4 classes) | r (2 classes) | mci (1 class) | p (2 classes) | sd (1 class) | female (0 classes) | m (0 classes) | food (1 class) | f (1 class) | note (20 classes) | __index_level_0__ (int64, 22k-106k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | jiang-etal-2022-semantic | Semantic Simplification for Sentiment Classification | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.757/ | Jiang, Xiaotong and Wang, Zhongqing and Zhou, Guodong | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11022--11032 | Recent work on document-level sentiment classification has shown that the sentiment in the original text is often hard to capture, since the sentiment is usually either expressed implicitly or shifted due to the occurrences of negation and rhetorical words. To this end, we enhance the original text with a sentiment-driven simplified clause to intensify its sentiment. The simplified clause shares the same opinion with the original text but expresses the opinion much more simply. Meanwhile, we employ Abstract Meaning Representation (AMR) for generating simplified clauses, since AMR explicitly provides core semantic knowledge, and potentially offers core concepts and explicit structures of original texts. Empirical studies show the effectiveness of our proposed model over several strong baselines. The results also indicate the importance of simplified clauses for sentiment classification. | null | null | 10.18653/v1/2022.emnlp-main.757 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,887 |
inproceedings | ma-etal-2022-xprompt | {XP}rompt: Exploring the Extreme of Prompt Tuning | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.758/ | Ma, Fang and Zhang, Chen and Ren, Lei and Wang, Jingang and Wang, Qifan and Wu, Wei and Quan, Xiaojun and Song, Dawei | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11033--11047 | Prompt tuning learns soft prompts to condition the frozen Pre-trained Language Models (PLMs) for performing downstream tasks in a parameter-efficient manner. While prompt tuning has gradually reached the performance level of fine-tuning as the model scale increases, there is still a large performance gap between prompt tuning and fine-tuning for models of moderate and small scales (typically less than 11B parameters). In this paper, we empirically show that the trained prompt tokens can have a negative impact on a downstream task and thus degrade its performance. To bridge the gap, we propose a novel Prompt tuning model with an eXtremely small scale (XPrompt) under the regime of lottery tickets hypothesis. Specifically, XPrompt eliminates the negative prompt tokens at different granularity levels through a hierarchical structured pruning, yielding a more parameter-efficient prompt yet with a competitive performance. Comprehensive experiments are carried out on the SuperGLUE tasks, and the results indicate that XPrompt is able to close the performance gap at smaller model scales. | null | null | 10.18653/v1/2022.emnlp-main.758 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,888 |
inproceedings | min-etal-2022-rethinking | Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.759/ | Min, Sewon and Lyu, Xinxi and Holtzman, Ari and Artetxe, Mikel and Lewis, Mike and Hajishirzi, Hannaneh and Zettlemoyer, Luke | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11048--11064 | Large language models (LMs) are able to in-context learn{---}perform a new task via inference alone by conditioning on a few input-label pairs (demonstrations) and making predictions for new inputs. However, there has been little understanding of how the model learns and which aspects of the demonstrations contribute to end task performance. In this paper, we show that ground truth demonstrations are in fact not required{---}randomly replacing labels in the demonstrations barely hurts performance on a range of classification and multi-choce tasks, consistently over 12 different models including GPT-3. Instead, we find that other aspects of the demonstrations are the key drivers of endtask performance, including the fact that they provide a few examples of (1) the label space, (2) the distribution of the input text, and (3) the overall format of the sequence. Together, our analysis provides a new way of understanding how and why in-context learning works, while opening up new questions about how much can be learned from large language models through inference alone. | null | null | 10.18653/v1/2022.emnlp-main.759 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,889 |
inproceedings | stengel-eskin-van-durme-2022-curious | The Curious Case of Control | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.760/ | Stengel-Eskin, Elias and Van Durme, Benjamin | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11065--11076 | Children acquiring English make systematic errors on subject control sentences even after they have reached near-adult competence (Chomsky, 1969), possibly due to heuristics based on semantic roles (Maratsos, 1974).Given the advanced fluency of large generative language models, we ask whether model outputs are consistent with these heuristics, and to what degree different models are consistent with each other. We find that models can be categorized by behavior into three separate groups, with broad differences between the groups. The outputs of models in the largest group are consistent with positional heuristics that succeed on subject control but fail on object control. This result is surprising, given that object control is orders of magnitude more frequent in the text data used to train such models. We examine to what degree the models are sensitive to prompting with agent-patient information, finding that raising the salience of agent and patient relations results in significant changes in the outputs of most models. Based on this observation, we leverage an existing dataset of semantic proto-role annotations (White et al. 2020) to explore the connections between control and labeling event participants with properties typically associated with agents and patients. | null | null | 10.18653/v1/2022.emnlp-main.760 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,890 |
inproceedings | li-etal-2022-share | {SHARE}: a System for Hierarchical Assistive Recipe Editing | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.761/ | Li, Shuyang and Li, Yufei and Ni, Jianmo and McAuley, Julian | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11077--11090 | The large population of home cooks with dietary restrictions is under-served by existing cooking resources and recipe generation models. To help them, we propose the task of controllable recipe editing: adapt a base recipe to satisfy a user-specified dietary constraint. This task is challenging, and cannot be adequately solved with human-written ingredient substitution rules or existing end-to-end recipe generation models. We tackle this problem with SHARE: a System for Hierarchical Assistive Recipe Editing, which performs simultaneous ingredient substitution before generating natural-language steps using the edited ingredients. By decoupling ingredient and step editing, our step generator can explicitly integrate the available ingredients. Experiments on the novel RecipePairs dataset{---}83K pairs of similar recipes where each recipe satisfies one of seven dietary constraints{---}demonstrate that SHARE produces convincing, coherent recipes that are appropriate for a target dietary constraint. We further show through human evaluations and real-world cooking trials that recipes edited by SHARE can be easily followed by home cooks to create appealing dishes. | null | null | 10.18653/v1/2022.emnlp-main.761 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,891 |
inproceedings | jiang-etal-2022-im2 | {IM}2: an Interpretable and Multi-category Integrated Metric Framework for Automatic Dialogue Evaluation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.762/ | Jiang, Zhihua and Ye, Guanghui and Rao, Dongning and Wang, Di and Miao, Xin | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11091--11103 | Evaluation metrics shine the light on the best models and thus strongly influence the research directions, such as the recently developed dialogue metrics USR, FED, and GRADE. However, most current metrics evaluate the dialogue data as isolated and static because they only focus on a single quality or several qualities. To mitigate the problem, this paper proposes an interpretable, multi-faceted, and controllable framework IM{\textasciicircum}2 (Interpretable and Multi-category Integrated Metric) to combine a large number of metrics which are good at measuring different qualities. The IM{\textasciicircum}2 framework first divides current popular dialogue qualities into different categories and then applies or proposes dialogue metrics to measure the qualities within each category and finally generates an overall IM{\textasciicircum}2 score. An initial version of IM{\textasciicircum}2 was submitted to the AAAI 2022 Track5.1@DSTC10 challenge and took the 2{\textasciicircum}nd place on both of the development and test leaderboard. After the competition, we develop more metrics and improve the performance of our model. We compare IM{\textasciicircum}2 with other 13 current dialogue metrics and experimental results show that IM{\textasciicircum}2 correlates more strongly with human judgments than any of them on each evaluated dataset. | null | null | 10.18653/v1/2022.emnlp-main.762 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,892 |
inproceedings | yao-etal-2022-pevl | {PEVL}: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.763/ | Yao, Yuan and Chen, Qianyu and Zhang, Ao and Ji, Wei and Liu, Zhiyuan and Chua, Tat-Seng and Sun, Maosong | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11104--11117 | Vision-language pre-training (VLP) has shown impressive performance on a wide range of cross-modal tasks, where VLP models without reliance on object detectors are becoming the mainstream due to their superior computation efficiency and competitive performance. However, the removal of object detectors also deprives the capability of VLP models in explicit object modeling, which is essential to various position-sensitive vision-language (VL) tasks, such as referring expression comprehension and visual commonsense reasoning. To address the challenge, we introduce PEVL that enhances the pre-training and prompt tuning of VLP models with explicit object position modeling. Specifically, PEVL reformulates discretized object positions and language in a unified language modeling framework, which facilitates explicit VL alignment during pre-training, and also enables flexible prompt tuning for various downstream tasks. We show that PEVL enables state-of-the-art performance of detector-free VLP models on position-sensitive tasks such as referring expression comprehension and phrase grounding, and also improves the performance on position-insensitive tasks with grounded inputs. We make the data and code for this paper publicly available at https://github.com/thunlp/PEVL. | null | null | 10.18653/v1/2022.emnlp-main.763 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,893 |
inproceedings | li-etal-2022-pre-training | Pre-training Language Models with Deterministic Factual Knowledge | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.764/ | Li, Shaobo and Li, Xiaoguang and Shang, Lifeng and Sun, Chengjie and Liu, Bingquan and Ji, Zhenzhou and Jiang, Xin and Liu, Qun | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11118--11131 | Previous works show that Pre-trained Language Models (PLMs) can capture factual knowledge. However, some analyses reveal that PLMs fail to perform it robustly, e.g., being sensitive to the changes of prompts when extracting factual knowledge. To mitigate this issue, we propose to let PLMs learn the deterministic relationship between the remaining context and the masked content. The deterministic relationship ensures that the masked factual content can be deterministically inferable based on the existing clues in the context. That would provide more stable patterns for PLMs to capture factual knowledge than randomly masking. Two pre-training tasks are further introduced to motivate PLMs to rely on the deterministic relationship when filling masks. Specifically, we use an external Knowledge Base (KB) to identify deterministic relationships and continuously pre-train PLMs with the proposed methods. The factual knowledge probing experiments indicate that the continuously pre-trained PLMs achieve better robustness in factual knowledge capturing. Further experiments on question-answering datasets show that trying to learn a deterministic relationship with the proposed methods can also help other knowledge-intensive tasks. | null | null | 10.18653/v1/2022.emnlp-main.764 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,894 |
inproceedings | wang-etal-2022-finding-skill | Finding Skill Neurons in Pre-trained Transformer-based Language Models | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.765/ | Wang, Xiaozhi and Wen, Kaiyue and Zhang, Zhengyan and Hou, Lei and Liu, Zhiyuan and Li, Juanzi | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11132--11152 | Transformer-based pre-trained language models have demonstrated superior performance on various natural language processing tasks. However, it remains unclear how the skills required to handle these tasks distribute among model parameters. In this paper, we find that after prompt tuning for specific tasks, the activations of some neurons within pre-trained Transformers are highly predictive of the task labels. We dub these neurons skill neurons and confirm they encode task-specific skills by finding that: (1) Skill neurons are crucial for handling tasks. Performances of pre-trained Transformers on a task significantly drop when corresponding skill neurons are perturbed. (2) Skill neurons are task-specific. Similar tasks tend to have similar distributions of skill neurons. Furthermore, we demonstrate the skill neurons are most likely generated in pre-training rather than fine-tuning by showing that the skill neurons found with prompt tuning are also crucial for other fine-tuning methods freezing neuron weights, such as the adapter-based tuning and BitFit. We also explore the applications of skill neurons, including accelerating Transformers with network pruning and building better transferability indicators. These findings may promote further research on understanding Transformers. The source code can be obtained from https://github.com/THU-KEG/Skill-Neuron. | null | null | 10.18653/v1/2022.emnlp-main.765 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,895 |
inproceedings | zhao-etal-2022-prompt | Prompt Conditioned {VAE}: Enhancing Generative Replay for Lifelong Learning in Task-Oriented Dialogue | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.766/ | Zhao, Yingxiu and Zheng, Yinhe and Tian, Zhiliang and Gao, Chang and Sun, Jian and Zhang, Nevin L. | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11153--11169 | Lifelong learning (LL) is vital for advanced task-oriented dialogue (ToD) systems. To address the catastrophic forgetting issue of LL, generative replay methods are widely employed to consolidate past knowledge with generated pseudo samples. However, most existing generative replay methods use only a single task-specific token to control their models. This scheme is usually not strong enough to constrain the generative model due to insufficient information involved. In this paper, we propose a novel method, prompt conditioned VAE for lifelong learning (PCLL), to enhance generative replay by incorporating tasks' statistics. PCLL captures task-specific distributions with a conditional variational autoencoder, conditioned on natural language prompts to guide the pseudo-sample generation. Moreover, it leverages a distillation process to further consolidate past knowledge by alleviating the noise in pseudo samples. Experiments on natural language understanding tasks of ToD systems demonstrate that PCLL significantly outperforms competitive baselines in building lifelong learning models. | null | null | 10.18653/v1/2022.emnlp-main.766 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,896 |
inproceedings | don-yehiya-etal-2022-prequel | {P}re{Q}u{EL}: Quality Estimation of Machine Translation Outputs in Advance | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.767/ | Don-Yehiya, Shachar and Choshen, Leshem and Abend, Omri | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11170--11183 | We present the task of PreQuEL, Pre-(Quality-Estimation) Learning. A PreQuEL system predicts how well a given sentence will be translated, without recourse to the actual translation, thus eschewing unnecessary resource allocation when translation quality is bound to be low. PreQuEL can be defined relative to a given MT system (e.g., some industry service) or generally relative to the state-of-the-art.From a theoretical perspective, PreQuEL places the focus on the source text, tracing properties, possibly linguistic features, that make a sentence harder to machine translate.We develop a baseline model for the task and analyze its performance. We also develop a data augmentation method (from parallel corpora), that improves results substantially. We show that this augmentation method can improve the performance of the Quality-Estimation task as well.We investigate the properties of the input text that our model is sensitive to, by testing it on challenge sets and different languages. We conclude that it is aware of syntactic and semantic distinctions, and correlates and even over-emphasizes the importance of standard NLP features. | null | null | 10.18653/v1/2022.emnlp-main.767 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,897 |
inproceedings | schlegel-etal-2022-transformers | Can Transformers Reason in Fragments of Natural Language? | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.768/ | Schlegel, Viktor and Pavlov, Kamen and Pratt-Hartmann, Ian | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11184--11199 | State-of-the-art deep-learning-based approaches to Natural Language Processing (NLP) are credited with various capabilities that involve reasoning with natural language texts. {\%}However, reasoning in this setting is often ill-defined and shallow. In this paper we carry out a large-scale empirical study investigating the detection of formally valid inferences in controlled fragments of natural language for which the satisfiability problem becomes increasingly complex. We find that, while transformer-based language models perform surprisingly well in these scenarios, a deeper analysis reveals that they appear to overfit to superficial patterns in the data rather than acquiring the logical principles governing the reasoning in these fragments. | null | null | 10.18653/v1/2022.emnlp-main.768 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,898 |
inproceedings | kreuk-etal-2022-textless | Textless Speech Emotion Conversion using Discrete {\&} Decomposed Representations | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.769/ | Kreuk, Felix and Polyak, Adam and Copet, Jade and Kharitonov, Eugene and Nguyen, Tu Anh and Rivi{\`e}re, Morgan and Hsu, Wei-Ning and Mohamed, Abdelrahman and Dupoux, Emmanuel and Adi, Yossi | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11200--11214 | Speech emotion conversion is the task of modifying the perceived emotion of a speech utterance while preserving the lexical content and speaker identity. In this study, we cast the problem of emotion conversion as a spoken language translation task. We use a decomposition of the speech signal into discrete learned representations, consisting of phonetic-content units, prosodic features, speaker, and emotion. First, we modify the speech content by translating the phonetic-content units to a target emotion, and then predict the prosodic features based on these units. Finally, the speech waveform is generated by feeding the predicted representations into a neural vocoder. Such a paradigm allows us to go beyond spectral and parametric changes of the signal, and model non-verbal vocalizations, such as laughter insertion, yawning removal, etc. We demonstrate objectively and subjectively that the proposed method is vastly superior to current approaches and even beats text-based systems in terms of perceived emotion and audio quality. We rigorously evaluate all components of such a complex system and conclude with an extensive model analysis and ablation study to better emphasize the architectural choices, strengths and weaknesses of the proposed method. Samples are available under the following link: https://speechbot.github.io/emotion | null | null | 10.18653/v1/2022.emnlp-main.769 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,899 |
inproceedings | chen-etal-2022-textual | Textual Backdoor Attacks Can Be More Harmful via Two Simple Tricks | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.770/ | Chen, Yangyi and Qi, Fanchao and Gao, Hongcheng and Liu, Zhiyuan and Sun, Maosong | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11215--11221 | Backdoor attacks are a kind of emergent security threat in deep learning. After being injected with a backdoor, a deep neural model will behave normally on standard inputs but give adversary-specified predictions once the input contains specific backdoor triggers. In this paper, we find two simple tricks that can make existing textual backdoor attacks much more harmful. The first trick is to add an extra training task to distinguish poisoned and clean data during the training of the victim model, and the second one is to use all the clean training data rather than remove the original clean data corresponding to the poisoned data. These two tricks are universally applicable to different attack models. We conduct experiments in three tough situations including clean data fine-tuning, low-poisoning-rate, and label-consistent attacks. Experimental results show that the two tricks can significantly improve attack performance. This paper exhibits the great potential harmfulness of backdoor attacks. All the code and data can be obtained at \url{https://github.com/thunlp/StyleAttack}. | null | null | 10.18653/v1/2022.emnlp-main.770 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,900 |
inproceedings | chen-etal-2022-adversarial | Why Should Adversarial Perturbations be Imperceptible? Rethink the Research Paradigm in Adversarial {NLP} | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.771/ | Chen, Yangyi and Gao, Hongcheng and Cui, Ganqu and Qi, Fanchao and Huang, Longtao and Liu, Zhiyuan and Sun, Maosong | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11222--11237 | Textual adversarial samples play important roles in multiple subfields of NLP research, including security, evaluation, explainability, and data augmentation. However, most work mixes all these roles, obscuring the problem definitions and research goals of the security role that aims to reveal the practical concerns of NLP models. In this paper, we rethink the research paradigm of textual adversarial samples in security scenarios. We discuss the deficiencies in previous work and propose our suggestions that the research on the Security-oriented adversarial NLP (SoadNLP) should: (1) evaluate their methods on security tasks to demonstrate the real-world concerns; (2) consider real-world attackers' goals, instead of developing impractical methods. To this end, we first collect, process, and release a security datasets collection Advbench. Then, we reformalize the task and adjust the emphasis on different goals in SoadNLP. Next, we propose a simple method based on heuristic rules that can easily fulfill the actual adversarial goals to simulate real-world attack methods. We conduct experiments on both the attack and the defense sides on Advbench. Experimental results show that our method has higher practical value, indicating that the research paradigm in SoadNLP may start from our new benchmark. All the code and data of Advbench can be obtained at \url{https://github.com/thunlp/Advbench}. | null | null | 10.18653/v1/2022.emnlp-main.771 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,901 |
inproceedings | lin-byrne-2022-retrieval | Retrieval Augmented Visual Question Answering with Outside Knowledge | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.772/ | Lin, Weizhe and Byrne, Bill | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11238--11254 | Outside-Knowledge Visual Question Answering (OK-VQA) is a challenging VQA task that requires retrieval of external knowledge to answer questions about images. Recent OK-VQA systems use Dense Passage Retrieval (DPR) to retrieve documents from external knowledge bases, such as Wikipedia, but with DPR trained separately from answer generation, introducing a potential limit on the overall system performance.Instead, we propose a joint training scheme which includes differentiable DPR integrated with answer generation so that the system can be trained in an end-to-end fashion. Our experiments show that our scheme outperforms recent OK-VQA systems with strong DPR for retrieval. We also introduce new diagnostic metrics to analyze how retrieval and generation interact. The strong retrieval ability of our model significantly reduces the number of retrieved documents needed in training, yielding significant benefits in answer quality and computation required for training. | null | null | 10.18653/v1/2022.emnlp-main.772 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,902 |
inproceedings | zhang-etal-2022-instance | Instance Regularization for Discriminative Language Model Pre-training | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.773/ | Zhang, Zhuosheng and Zhao, Hai and Zhou, Ming | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11255--11265 | Discriminative pre-trained language models (PrLMs) can be generalized as denoising auto-encoders that work with two procedures, ennoising and denoising. First, an ennoising process corrupts texts with arbitrary noising functions to construct training instances. Then, a denoising language model is trained to restore the corrupted tokens. Existing studies have made progress by optimizing independent strategies of either ennoising or denosing. They treat training instances equally throughout the training process, with little attention on the individual contribution of those instances. To model explicit signals of instance contribution, this work proposes to estimate the complexity of restoring the original sentences from corrupted ones in language model pre-training. The estimations involve the corruption degree in the ennoising data construction process and the prediction confidence in the denoising counterpart. Experimental results on natural language understanding and reading comprehension benchmarks show that our approach improves pre-training efficiency, effectiveness, and robustness. Code is publicly available at https://github.com/cooelf/InstanceReg. | null | null | 10.18653/v1/2022.emnlp-main.773 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,903 |
inproceedings | xu-etal-2022-guofeng | {G}uo{F}eng: A Benchmark for Zero Pronoun Recovery and Translation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.774/ | Xu, Mingzhou and Wang, Longyue and Wong, Derek F. and Liu, Hongye and Song, Linfeng and Chao, Lidia S. and Shi, Shuming and Tu, Zhaopeng | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11266--11278 | The phenomenon of zero pronoun (ZP) has attracted increasing interest in the machine translation (MT) community due to its importance and difficulty. However, previous studies generally evaluate the quality of translating ZPs with BLEU scores on MT testsets, which is not expressive or sensitive enough for accurate assessment. To bridge the data and evaluation gaps, we propose a benchmark testset for target evaluation on Chinese-English ZP translation. The human-annotated testset covers five challenging genres, which reveal different characteristics of ZPs for comprehensive evaluation. We systematically revisit eight advanced models on ZP translation and identify current challenges for future exploration. We release data, code, models and annotation guidelines, which we hope can significantly promote research in this field (https://github.com/longyuewangdcu/mZPRT). | null | null | 10.18653/v1/2022.emnlp-main.774 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,904 |
inproceedings | wang-etal-2022-scienceworld | {S}cience{W}orld: Is your Agent Smarter than a 5th Grader? | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.775/ | Wang, Ruoyao and Jansen, Peter and C{\^o}t{\'e}, Marc-Alexandre and Ammanabrolu, Prithviraj | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11279--11298 | We present ScienceWorld, a benchmark to test agents' scientific reasoning abilities in a new interactive text environment at the level of a standard elementary school science curriculum. Despite the transformer-based progress seen in question-answering and scientific text processing, we find that current models cannot reason about or explain learned science concepts in novel contexts. For instance, models can easily answer what the conductivity of a known material is but struggle when asked how they would conduct an experiment in a grounded environment to find the conductivity of an unknown material. This begs the question of whether current models are simply retrieving answers by way of seeing a large number of similar examples or if they have learned to reason about concepts in a reusable manner. We hypothesize that agents need to be grounded in interactive environments to achieve such reasoning capabilities. Our experiments provide empirical evidence supporting this hypothesis {--} showing that a 1.5 million parameter agent trained interactively for 100k steps outperforms a 11 billion parameter model statically trained for scientific question-answering and reasoning from millions of expert demonstrations. | null | null | 10.18653/v1/2022.emnlp-main.775 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,905 |
inproceedings | murrugarra-llerena-etal-2022-improving | Improving Embeddings Representations for Comparing Higher Education Curricula: A Use Case in Computing | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.776/ | Murrugarra-Llerena, Jeffri and Alva-Manchego, Fernando and Murrugarra-LLerena, Nils | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11299--11307 | We propose an approach for comparing curricula of study programs in higher education. Pre-trained word embeddings are fine-tuned in a study program classification task, where each curriculum is represented by the names and content of its courses. By combining metric learning with a novel course-guided attention mechanism, our method obtains more accurate curriculum representations than strong baselines. Experiments on a new dataset with curricula of computing programs demonstrate the intuitive power of our approach via attention weights, topic modeling, and embeddings visualizations. We also present a use case comparing computing curricula from USA and Latin America to showcase the capabilities of our improved embeddings representations. | null | null | 10.18653/v1/2022.emnlp-main.776 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,906 |
inproceedings | udomcharoenchaikit-etal-2022-mitigating | Mitigating Spurious Correlation in Natural Language Understanding with Counterfactual Inference | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.777/ | Udomcharoenchaikit, Can and Ponwitayarat, Wuttikorn and Payoungkhamdee, Patomporn and Masuk, Kanruethai and Buaphet, Weerayut and Chuangsuwanich, Ekapol and Nutanong, Sarana | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11308--11321 | Despite their promising results on standard benchmarks, NLU models are still prone to make predictions based on shortcuts caused by unintended bias in the dataset. For example, an NLI model may use lexical overlap as a shortcut to make entailment predictions due to repetitive data generation patterns from annotators, also called annotation artifacts. In this paper, we propose a causal analysis framework to help debias NLU models. We show that (1) by defining causal relationships, we can introspect how much annotation artifacts affect the outcomes. (2) We can utilize counterfactual inference to mitigate bias with this knowledge. We found that viewing a model as a treatment can mitigate bias more effectively than viewing annotation artifacts as treatment. (3) In addition to bias mitigation, we can interpret how much each debiasing strategy is affected by annotation artifacts. Our experimental results show that using counterfactual inference can improve out-of-distribution performance in all settings while maintaining high in-distribution performance. | null | null | 10.18653/v1/2022.emnlp-main.777 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,907 |
inproceedings | han-etal-2022-balancing | Balancing out Bias: Achieving Fairness Through Balanced Training | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.779/ | Han, Xudong and Baldwin, Timothy and Cohn, Trevor | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11335--11350 | Group bias in natural language processing tasks manifests as disparities in system error rates across texts authored by different demographic groups, typically disadvantaging minority groups. Dataset balancing has been shown to be effective at mitigating bias, however, existing approaches do not directly account for correlations between author demographics and linguistic variables, limiting their effectiveness. To achieve Equal Opportunity fairness, such as equal job opportunity without regard to demographics, this paper introduces a simple, but highly effective, objective for countering bias using balanced training. We extend the method in the form of a gated model, which incorporates protected attributes as input, and show that it is effective at reducing bias in predictions through demographic input perturbation, outperforming all other bias mitigation techniques when combined with balanced training. | null | null | 10.18653/v1/2022.emnlp-main.779 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,909
inproceedings | xia-etal-2022-prompting | Prompting {ELECTRA}: Few-Shot Learning with Discriminative Pre-Trained Models | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.780/ | Xia, Mengzhou and Artetxe, Mikel and Du, Jingfei and Chen, Danqi and Stoyanov, Veselin | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11351--11361 | Pre-trained masked language models successfully perform few-shot learning by formulating downstream tasks as text infilling. However, as a strong alternative in full-shot settings, discriminative pre-trained models like ELECTRA do not fit into the paradigm. In this work, we adapt prompt-based few-shot learning to ELECTRA and show that it outperforms masked language models in a wide range of tasks. ELECTRA is pre-trained to distinguish if a token is generated or original. We naturally extend that to prompt-based few-shot learning by training to score the originality of the target options without introducing new parameters. Our method can be easily adapted to tasks involving multi-token predictions without extra computation overhead. Analysis shows that ELECTRA learns distributions that align better with downstream tasks. | null | null | 10.18653/v1/2022.emnlp-main.780 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,910
inproceedings | jiang-riloff-2022-identifying | Identifying Physical Object Use in Sentences | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.781/ | Jiang, Tianyu and Riloff, Ellen | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11362--11372 | Commonsense knowledge about the typical functions of physical objects allows people to make inferences during sentence understanding. For example, we infer that {\textquotedblleft}Sam enjoyed the book{\textquotedblright} means that Sam enjoyed reading the book, even though the action is implicit. Prior research has focused on learning the prototypical functions of physical objects in order to enable inferences about implicit actions. But many sentences refer to objects even when they are not used (e.g., {\textquotedblleft}The book fell{\textquotedblright}). We argue that NLP systems need to recognize whether an object is being used before inferring how the object is used. We define a new task called Object Use Classification that determines whether a physical object mentioned in a sentence was used or likely will be used. We introduce a new dataset for this task and present a classification model that exploits data augmentation methods and FrameNet when fine-tuning a pre-trained language model. We also show that object use classification combined with knowledge about the prototypical functions of objects has the potential to yield very good inferences about implicit and anticipated actions. | null | null | 10.18653/v1/2022.emnlp-main.781 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,911
inproceedings | de-raedt-etal-2022-robustifying | Robustifying Sentiment Classification by Maximally Exploiting Few Counterfactuals | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.783/ | De Raedt, Maarten and Godin, Fr{\'e}deric and Develder, Chris and Demeester, Thomas | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11386--11400 | For text classification tasks, finetuned language models perform remarkably well. Yet, they tend to rely on spurious patterns in training data, thus limiting their performance on out-of-distribution (OOD) test data. Among recent models aiming to avoid this spurious pattern problem, adding extra counterfactual samples to the training data has proven to be very effective. Yet, counterfactual data generation is costly since it relies on human annotation. Thus, we propose a novel solution that only requires annotation of a small fraction (e.g., 1{\%}) of the original training data, and uses automatic generation of extra counterfactuals in an encoding vector space. We demonstrate the effectiveness of our approach in sentiment classification, using IMDb data for training and other sets for OOD tests (i.e., Amazon, SemEval and Yelp). We achieve noticeable accuracy improvements by adding only 1{\%} manual counterfactuals: +3{\%} compared to adding +100{\%} in-distribution training samples, +1.3{\%} compared to alternate counterfactual approaches. | null | null | 10.18653/v1/2022.emnlp-main.783 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,913 |
inproceedings | gabbolini-etal-2022-data | Data-Efficient Playlist Captioning With Musical and Linguistic Knowledge | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.784/ | Gabbolini, Giovanni and Hennequin, Romain and Epure, Elena | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11401--11415 | Music streaming services feature billions of playlists created by users, professional editors or algorithms. In this content overload scenario, it is crucial to characterise playlists, so that music can be effectively organised and accessed. Playlist titles and descriptions are proposed in natural language either manually by music editors and users or automatically from pre-defined templates. However, the former is time-consuming while the latter is limited by the vocabulary and covered music themes. In this work, we propose PlayNTell, a data-efficient multi-modal encoder-decoder model for automatic playlist captioning. Compared to existing music captioning algorithms, PlayNTell leverages also linguistic and musical knowledge to generate correct and thematic captions. We benchmark PlayNTell on a new editorial playlists dataset collected from two major music streaming services. PlayNTell yields 2x-3x higher BLEU@4 and CIDEr than state of the art captioning algorithms. | null | null | 10.18653/v1/2022.emnlp-main.784 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,914
inproceedings | sorokin-2022-improved | Improved grammatical error correction by ranking elementary edits | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.785/ | Sorokin, Alexey | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11416--11429 | We offer a two-stage reranking method for grammatical error correction: the first model serves as edit generator, while the second classifies the proposed edits as correct or false. We show how to use both encoder-decoder and sequence labeling models for the first step of our pipeline. We achieve state-of-the-art quality on BEA 2019 English dataset even using weak BERT-GEC edit generator. Combining our roberta-base scorer with state-of-the-art GECToR edit generator, we surpass GECToR by 2-3{\%}. With a larger model we establish a new SOTA on BEA development and test sets. Our model also sets a new SOTA on Russian, despite using smaller models and less data than the previous approaches. | null | null | 10.18653/v1/2022.emnlp-main.785 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,915 |
inproceedings | khashabi-etal-2022-genie | {GENIE}: Toward Reproducible and Standardized Human Evaluation for Text Generation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.787/ | Khashabi, Daniel and Stanovsky, Gabriel and Bragg, Jonathan and Lourie, Nicholas and Kasai, Jungo and Choi, Yejin and Smith, Noah A. and Weld, Daniel | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11444--11458 | While often assumed a gold standard, effective human evaluation of text generation remains an important, open area for research. We revisit this problem with a focus on producing consistent evaluations that are reproducible{---}over time and across different populations. We study this goal in different stages of the human evaluation pipeline. In particular, we consider design choices for the annotation interface used to elicit human judgments and their impact on reproducibility. Furthermore, we develop an automated mechanism for maintaining annotator quality via a probabilistic model that detects and excludes noisy annotators. Putting these lessons together, we introduce GENIE: a system for running standardized human evaluations across different generation tasks. We instantiate GENIE with datasets representing four core challenges in text generation: machine translation, summarization, commonsense reasoning, and machine comprehension. For each task, GENIE offers a leaderboard that automatically crowdsources annotations for submissions, evaluating them along axes such as correctness, conciseness, and fluency. We have made the GENIE leaderboards publicly available, and have already ranked 50 submissions from 10 different research groups. We hope GENIE encourages further progress toward effective, standardized evaluations for text generation. | null | null | 10.18653/v1/2022.emnlp-main.787 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,917
inproceedings | pimentel-etal-2022-attentional | The Architectural Bottleneck Principle | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.788/ | Pimentel, Tiago and Valvoda, Josef and Stoehr, Niklas and Cotterell, Ryan | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11459--11472 | In this paper, we seek to measure how much information a component in a neural network could extract from the representations fed into it. Our work stands in contrast to prior probing work, most of which investigates how much information a model's representations contain. This shift in perspective leads us to propose a new principle for probing, the architectural bottleneck principle: In order to estimate how much information a given component could extract, a probe should look exactly like the component. Relying on this principle, we estimate how much syntactic information is available to transformers through our attentional probe, a probe that exactly resembles a transformer's self-attention head. Experimentally, we find that, in three models (BERT, ALBERT, and RoBERTa), a sentence's syntax tree is mostly extractable by our probe, suggesting these models have access to syntactic information while composing their contextual representations. Whether this information is actually used by these models, however, remains an open question. | null | null | 10.18653/v1/2022.emnlp-main.788 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,918
inproceedings | stengel-eskin-etal-2022-data | When More Data Hurts: A Troubling Quirk in Developing Broad-Coverage Natural Language Understanding Systems | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.789/ | Stengel-Eskin, Elias and Platanios, Emmanouil Antonios and Pauls, Adam and Thomson, Sam and Fang, Hao and Van Durme, Benjamin and Eisner, Jason and Su, Yu | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11473--11487 | In natural language understanding (NLU) production systems, users' evolving needs necessitate the addition of new features over time, indexed by new symbols added to the meaning representation space. This requires additional training data and results in ever-growing datasets. We present the first systematic investigation into this incremental symbol learning scenario. Our analysis reveals a troubling quirk in building broad-coverage NLU systems: as the training dataset grows, performance on a small set of new symbols often decreases. We show that this trend holds for multiple mainstream models on two common NLU tasks: intent recognition and semantic parsing. Rejecting class imbalance as the sole culprit, we reveal that the trend is closely associated with an effect we call source signal dilution, where strong lexical cues for the new symbol become diluted as the training dataset grows. Selectively dropping training examples to prevent dilution often reverses the trend, showing the over-reliance of mainstream neural NLU models on simple lexical cues. | null | null | 10.18653/v1/2022.emnlp-main.789 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,919 |
inproceedings | huang-etal-2022-zero | Zero-shot Cross-lingual Transfer of Prompt-based Tuning with a Unified Multilingual Prompt | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.790/ | Huang, Lianzhe and Ma, Shuming and Zhang, Dongdong and Wei, Furu and Wang, Houfeng | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11488--11497 | Prompt-based tuning has been proven effective for pretrained language models (PLMs). While most of the existing work focuses on the monolingual prompts, we study the multilingual prompts for multilingual PLMs, especially in the zero-shot cross-lingual setting. To alleviate the effort of designing different prompts for multiple languages, we propose a novel model that uses a unified prompt for all languages, called UniPrompt. Different from the discrete prompts and soft prompts, the unified prompt is model-based and language-agnostic. Specifically, the unified prompt is initialized by a multilingual PLM to produce language-independent representation, after which is fused with the text input. During inference, the prompts can be pre-computed so that no extra computation cost is needed. To collocate with the unified prompt, we propose a new initialization method for the target label word to further improve the model's transferability across languages. Extensive experiments show that our proposed methods can significantly outperform the strong baselines across different languages. We release data and code to facilitate future research. | null | null | 10.18653/v1/2022.emnlp-main.790 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,920
inproceedings | pujari-etal-2022-three | Three Real-World Datasets and Neural Computational Models for Classification Tasks in Patent Landscaping | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.791/ | Pujari, Subhash and Str{\"o}tgen, Jannik and Giereth, Mark and Gertz, Michael and Friedrich, Annemarie | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11498--11513 | Patent Landscaping, one of the central tasks of intellectual property management, includes selecting and grouping patents according to user-defined technical or application-oriented criteria. While recent transformer-based models have been shown to be effective for classifying patents into taxonomies such as CPC or IPC, there is yet little research on how to support real-world Patent Landscape Studies (PLSs) using natural language processing methods. With this paper, we release three labeled datasets for PLS-oriented classification tasks covering two diverse domains. We provide a qualitative analysis and report detailed corpus statistics. Most research on neural models for patents has been restricted to leveraging titles and abstracts. We compare strong neural and non-neural baselines, proposing a novel model that takes into account textual information from the patents' full texts as well as embeddings created based on the patents' CPC labels. We find that for PLS-oriented classification tasks, going beyond title and abstract is crucial, CPC labels are an effective source of information, and combining all features yields the best results. | null | null | 10.18653/v1/2022.emnlp-main.791 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,921
inproceedings | byrne-etal-2022-topic | Topic Modeling With Topological Data Analysis | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.792/ | Byrne, Ciar{\'a}n and Horak, Danijela and Moilanen, Karo and Mabona, Amandla | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11514--11533 | Recent unsupervised topic modelling approaches that use clustering techniques on word, token or document embeddings can extract coherent topics. A common limitation of such approaches is that they reveal nothing about inter-topic relationships which are essential in many real-world application domains. We present an unsupervised topic modelling method which harnesses Topological Data Analysis (TDA) to extract a topological skeleton of the manifold upon which contextualised word embeddings lie. We demonstrate that our approach, which performs on par with a recent baseline, is able to construct a network of coherent topics together with meaningful relationships between them. | null | null | 10.18653/v1/2022.emnlp-main.792 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,922
inproceedings | zhu-etal-2022-predicting | Predicting Fine-Tuning Performance with Probing | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.793/ | Zhu, Zining and Shahtalebi, Soroosh and Rudzicz, Frank | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11534--11547 | Large NLP models have recently shown impressive performance in language understanding tasks, typically evaluated by their fine-tuned performance. Alternatively, probing has received increasing attention as being a lightweight method for interpreting the intrinsic mechanisms of large NLP models. In probing, post-hoc classifiers are trained on {\textquotedblleft}out-of-domain{\textquotedblright} datasets that diagnose specific abilities. While probing the language models has led to insightful findings, they appear disjointed from the development of models. This paper explores the utility of probing deep NLP models to extract a proxy signal widely used in model development {--} the fine-tuning performance. We find that it is possible to use the accuracies of only three probing tests to predict the fine-tuning performance with errors 40{\%} - 80{\%} smaller than baselines. We further discuss possible avenues where probing can empower the development of deep NLP models. | null | null | 10.18653/v1/2022.emnlp-main.793 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,923 |
inproceedings | awasthi-etal-2022-diverse | Diverse Parallel Data Synthesis for Cross-Database Adaptation of Text-to-{SQL} Parsers | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.794/ | Awasthi, Abhijeet and Sathe, Ashutosh and Sarawagi, Sunita | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11548--11562 | Text-to-SQL parsers typically struggle with databases unseen during the train time. Adapting Text-to-SQL parsers to new database schemas is a challenging problem owing to a vast diversity of schemas and zero availability of natural language queries in new schemas. We present ReFill, a framework for synthesizing high-quality and textually diverse parallel datasets for adapting Text-to-SQL parsers. Unlike prior methods that utilize SQL-to-Text generation, ReFill learns to retrieve-and-edit text queries in existing schemas and transfer them to the new schema. ReFill utilizes a simple method for retrieving diverse existing text, masking their schema-specific tokens, and refilling with tokens relevant to the new schema. We show that this process leads to significantly more diverse text queries than achievable by standard SQL-to-Text generation models. Through experiments on several databases, we show that adapting a parser by finetuning it on datasets synthesized by ReFill consistently outperforms prior data-augmentation methods. | null | null | 10.18653/v1/2022.emnlp-main.794 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,924 |
inproceedings | sancheti-etal-2022-agent | Agent-Specific Deontic Modality Detection in Legal Language | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.795/ | Sancheti, Abhilasha and Garimella, Aparna and Srinivasan, Balaji Vasan and Rudinger, Rachel | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11563--11579 | Legal documents are typically long and written in legalese, which makes it particularly difficult for laypeople to understand their rights and duties. While natural language understanding technologies can be valuable in supporting such understanding in the legal domain, the limited availability of datasets annotated for deontic modalities in the legal domain, due to the cost of hiring experts and privacy issues, is a bottleneck. To this end, we introduce, LEXDEMOD, a corpus of English contracts annotated with deontic modality expressed with respect to a contracting party or agent along with the modal triggers. We benchmark this dataset on two tasks: (i) agent-specific multi-label deontic modality classification, and (ii) agent-specific deontic modality and trigger span detection using Transformer-based (Vaswani et al., 2017) language models. Transfer learning experiments show that the linguistic diversity of modal expressions in LEXDEMOD generalizes reasonably from lease to employment and rental agreements. A small case study indicates that a model trained on LEXDEMOD can detect red flags with high recall. We believe our work offers a new research direction for deontic modality detection in the legal domain. | null | null | 10.18653/v1/2022.emnlp-main.795 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,925
inproceedings | deng-etal-2022-cold | {COLD}: A Benchmark for {C}hinese Offensive Language Detection | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.796/ | Deng, Jiawen and Zhou, Jingyan and Sun, Hao and Zheng, Chujie and Mi, Fei and Meng, Helen and Huang, Minlie | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11580--11599 | Offensive language detection is increasingly crucial for maintaining a civilized social media platform and deploying pre-trained language models. However, this task in Chinese is still under exploration due to the scarcity of reliable datasets. To this end, we propose a benchmark {--}COLD for Chinese offensive language analysis, including a Chinese Offensive Language Dataset {--}COLDATASET and a baseline detector {--}COLDETECTOR which is trained on the dataset. We show that the COLD benchmark contributes to Chinese offensive language detection which is challenging for existing resources. We then deploy the COLDETECTOR and conduct detailed analyses on popular Chinese pre-trained language models. We first analyze the offensiveness of existing generative models and show that these models inevitably expose varying degrees of offensive issues. Furthermore, we investigate the factors that influence the offensive generations, and we find that anti-bias contents and keywords referring to certain groups or revealing negative attitudes trigger offensive outputs easier. | null | null | 10.18653/v1/2022.emnlp-main.796 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,926 |
inproceedings | murty-etal-2022-fixing | Fixing Model Bugs with Natural Language Patches | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.797/ | Murty, Shikhar and Manning, Christopher and Lundberg, Scott and Ribeiro, Marco Tulio | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11600--11613 | Current approaches for fixing systematic problems in NLP models (e.g., regex patches, finetuning on more data) are either brittle, or labor-intensive and liable to shortcuts. In contrast, humans often provide corrections to each other through natural language. Taking inspiration from this, we explore natural language patches{---}declarative statements that allow developers to provide corrective feedback at the right level of abstraction, either overriding the model ({\textquotedblleft}if a review gives 2 stars, the sentiment is negative{\textquotedblright}) or providing additional information the model may lack ({\textquotedblleft}if something is described as the bomb, then it is good{\textquotedblright}). We model the task of determining if a patch applies separately from the task of integrating patch information, and show that with a small amount of synthetic data, we can teach models to effectively use real patches on real data{---}1 to 7 patches improve accuracy by {\textasciitilde}1{--}4 accuracy points on different slices of a sentiment analysis dataset, and F1 by 7 points on a relation extraction dataset. Finally, we show that finetuning on as many as 100 labeled examples may be needed to match the performance of a small set of language patches. | null | null | 10.18653/v1/2022.emnlp-main.797 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,927 |
inproceedings | jin-etal-2022-wedef | {W}e{D}ef: Weakly Supervised Backdoor Defense for Text Classification | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.798/ | Jin, Lesheng and Wang, Zihan and Shang, Jingbo | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11614--11626 | Existing backdoor defense methods are only effective for limited trigger types. To defend different trigger types at once, we start from the class-irrelevant nature of the poisoning process and propose a novel weakly supervised backdoor defense framework WeDef. Recent advances in weak supervision make it possible to train a reasonably accurate text classifier using only a small number of user-provided, class-indicative seed words. Such seed words shall be considered independent of the triggers. Therefore, a weakly supervised text classifier trained by only the poisoned documents without their labels will likely have no backdoor. Inspired by this observation, in WeDef, we define the reliability of samples based on whether the predictions of the weak classifier agree with their labels in the poisoned training set. We further improve the results through a two-phase sanitization: (1) iteratively refine the weak classifier based on the reliable samples and (2) train a binary poison classifier by distinguishing the most unreliable samples from the most reliable samples. Finally, we train the sanitized model on the samples that the poison classifier predicts as benign. Extensive experiments show that WeDef is effective against popular trigger-based attacks (e.g., words, sentences, and paraphrases), outperforming existing defense methods. | null | null | 10.18653/v1/2022.emnlp-main.798 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,928
inproceedings | yu-etal-2022-interventional | Interventional Training for Out-Of-Distribution Natural Language Understanding | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.799/ | Yu, Sicheng and Jiang, Jing and Zhang, Hao and Niu, Yulei and Sun, Qianru and Bing, Lidong | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11627--11638 | Out-of-distribution (OOD) settings are used to measure a model`s performance when the distribution of the test data is different from that of the training data. NLU models are known to suffer in OOD. We study this issue from the perspective of causality, which sees confounding bias as the reason for models to learn spurious correlations. While a common solution is to perform intervention, existing methods handle only known and single confounder, but in many NLU tasks the confounders can be both unknown and multifactorial. In this paper, we propose a novel interventional training method called Bottom-up Automatic Intervention (BAI) that performs multi-granular intervention with identified multifactorial confounders. Our experiments on three NLU tasks, namely, natural language inference, fact verification and paraphrase identification, show the effectiveness of BAI for tackling OOD settings. | null | null | 10.18653/v1/2022.emnlp-main.799 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,929 |
inproceedings | kim-etal-2022-pseudo | Pseudo-Relevance for Enhancing Document Representation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.800/ | Kim, Jihyuk and Hwang, Seung-won and Song, Seoho and Ko, Hyeseon and Song, Young-In | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11639--11652 | This paper studies how to enhance the document representation for the bi-encoder approach in dense document retrieval. The bi-encoder, separately encoding a query and a document as a single vector, is favored for high efficiency in large-scale information retrieval, compared to more effective but complex architectures. To combine the strength of the two, the multi-vector representation of documents for bi-encoder, such as ColBERT preserving all token embeddings, has been widely adopted. Our contribution is to reduce the size of the multi-vector representation, without compromising the effectiveness, supervised by query logs. Our proposed solution decreases the latency and the memory footprint, up to 8- and 3-fold, validated on MSMARCO and real-world search query logs. | null | null | 10.18653/v1/2022.emnlp-main.800 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,930 |
inproceedings | ye-etal-2022-zerogen | {Z}ero{G}en: Efficient Zero-shot Learning via Dataset Generation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.801/ | Ye, Jiacheng and Gao, Jiahui and Li, Qintong and Xu, Hang and Feng, Jiangtao and Wu, Zhiyong and Yu, Tao and Kong, Lingpeng | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11653--11669 | There is a growing interest in dataset generation recently due to the superior generative capacity of large pre-trained language models (PLMs). In this paper, we study a flexible and efficient zero-shot learning method, ZeroGen. Given a zero-shot task, we first generate a dataset from scratch using PLMs in an unsupervised manner. Then, we train a tiny task model (e.g., LSTM) under the supervision of the synthesized dataset. This approach allows highly efficient inference as the final task model only has orders of magnitude fewer parameters compared to PLMs (e.g., GPT2-XL). Apart from being annotation-free and efficient, we argue that ZeroGen can also provide useful insights from the perspective of data-free model-agnostic knowledge distillation, and unreferenced text generation evaluation. Experiments and analysis on different NLP tasks, namely, text classification, question answering, and natural language inference, show the effectiveness of ZeroGen. | null | null | 10.18653/v1/2022.emnlp-main.801 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,931
inproceedings | ostendorff-etal-2022-neighborhood | Neighborhood Contrastive Learning for Scientific Document Representations with Citation Embeddings | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.802/ | Ostendorff, Malte and Rethmeier, Nils and Augenstein, Isabelle and Gipp, Bela and Rehm, Georg | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11670--11688 | Learning scientific document representations can be substantially improved through contrastive learning objectives, where the challenge lies in creating positive and negative training samples that encode the desired similarity semantics. Prior work relies on discrete citation relations to generate contrast samples. However, discrete citations enforce a hard cut-off to similarity. This is counter-intuitive to similarity-based learning and ignores that scientific papers can be very similar despite lacking a direct citation - a core problem of finding related research. Instead, we use controlled nearest neighbor sampling over citation graph embeddings for contrastive learning. This control allows us to learn continuous similarity, to sample hard-to-learn negatives and positives, and also to avoid collisions between negative and positive samples by controlling the sampling margin between them. The resulting method SciNCL outperforms the state-of-the-art on the SciDocs benchmark. Furthermore, we demonstrate that it can train (or tune) language models sample-efficiently and that it can be combined with recent training-efficient methods. Perhaps surprisingly, even training a general-domain language model this way outperforms baselines pretrained in-domain. | null | null | 10.18653/v1/2022.emnlp-main.802 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,932
inproceedings | li-etal-2022-spe | {SPE}: Symmetrical Prompt Enhancement for Fact Probing | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.803/ | Li, Yiyuan and Che, Tong and Wang, Yezhen and Jiang, Zhengbao and Xiong, Caiming and Chaturvedi, Snigdha | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11689--11698 | Pretrained language models (PLMs) have been shown to accumulate factual knowledge during pretraining (Petroni et al. 2019). Recent works probe PLMs for the extent of this knowledge through prompts either in discrete or continuous forms. However, these methods do not consider symmetry of the task: object prediction and subject prediction. In this work, we propose Symmetrical Prompt Enhancement (SPE), a continuous prompt-based method for factual probing in PLMs that leverages the symmetry of the task by constructing symmetrical prompts for subject and object prediction. Our results on a popular factual probing dataset, LAMA, show significant improvement of SPE over previous probing methods. | null | null | 10.18653/v1/2022.emnlp-main.803 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,933 |
inproceedings | artetxe-etal-2022-efficient | Efficient Large Scale Language Modeling with Mixtures of Experts | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.804/ | Artetxe, Mikel and Bhosale, Shruti and Goyal, Naman and Mihaylov, Todor and Ott, Myle and Shleifer, Sam and Lin, Xi Victoria and Du, Jingfei and Iyer, Srinivasan and Pasunuru, Ramakanth and Anantharaman, Giridharan and Li, Xian and Chen, Shuohui and Akin, Halil and Baines, Mandeep and Martin, Louis and Zhou, Xing and Koura, Punit Singh and O{'}Horo, Brian and Wang, Jeffrey and Zettlemoyer, Luke and Diab, Mona and Kozareva, Zornitsa and Stoyanov, Veselin | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11699--11732 | Mixture of Experts layers (MoEs) enable efficient scaling of language models through conditional computation. This paper presents a detailed empirical study of how autoregressive MoE language models scale in comparison with dense models in a wide range of settings: in- and out-of-domain language modeling, zero- and few-shot priming, and full-shot fine-tuning. With the exception of fine-tuning, we find MoEs to be substantially more compute efficient. At more modest training budgets, MoEs can match the performance of dense models using {\textasciitilde}4 times less compute. This gap narrows at scale, but our largest MoE model (1.1T parameters) consistently outperforms a compute-equivalent dense model (6.7B parameters). Overall, this performance gap varies greatly across tasks and domains, suggesting that MoE and dense models generalize differently in ways that are worthy of future study. We make our code and models publicly available for research use. | null | null | 10.18653/v1/2022.emnlp-main.804 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,934
inproceedings | kwon-etal-2022-medjex | {M}ed{JE}x: A Medical Jargon Extraction Model with {W}iki`s Hyperlink Span and Contextualized Masked Language Model Score | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.805/ | Kwon, Sunjae and Yao, Zonghai and Jordan, Harmon and Levy, David and Corner, Brian and Yu, Hong | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11733--11751 | This paper proposes a new natural language processing (NLP) application for identifying medical jargon terms potentially difficult for patients to comprehend from electronic health record (EHR) notes. We first present a novel and publicly available dataset with expert-annotated medical jargon terms from 18K+ EHR note sentences (MedJ). Then, we introduce a novel medical jargon extraction (MedJEx) model which has been shown to outperform existing state-of-the-art NLP models. First, MedJEx improved the overall performance when it was trained on an auxiliary Wikipedia hyperlink span dataset, where hyperlink spans provide additional Wikipedia articles to explain the spans (or terms), and then fine-tuned on the annotated MedJ data. Secondly, we found that a contextualized masked language model score was beneficial for detecting domain-specific unfamiliar jargon terms. Moreover, our results show that training on the auxiliary Wikipedia hyperlink span datasets improved six out of eight biomedical named entity recognition benchmark datasets. MedJEx is publicly available. | null | null | 10.18653/v1/2022.emnlp-main.805 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,935 |
inproceedings | ko-etal-2022-discourse | Discourse Comprehension: A Question Answering Framework to Represent Sentence Connections | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.806/ | Ko, Wei-Jen and Dalton, Cutter and Simmons, Mark and Fisher, Eliza and Durrett, Greg and Li, Junyi Jessy | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11752--11764 | While there has been substantial progress in text comprehension through simple factoid question answering, more holistic comprehension of a discourse still presents a major challenge (Dunietz et al., 2020). Someone critically reflecting on a text as they read it will pose curiosity-driven, often open-ended questions, which reflect deep understanding of the content and require complex reasoning to answer (Ko et al., 2020; Westera et al., 2020). A key challenge in building and evaluating models for this type of discourse comprehension is the lack of annotated data, especially since collecting answers to such questions requires high cognitive load for annotators. This paper presents a novel paradigm that enables scalable data collection targeting the comprehension of news documents, viewing these questions through the lens of discourse. The resulting corpus, DCQA (Discourse Comprehension by Question Answering), captures both discourse and semantic links between sentences in the form of free-form, open-ended questions. On an evaluation set that we annotated on questions from Ko et al. (2020), we show that DCQA provides valuable supervision for answering open-ended questions. We additionally design pre-training methods utilizing existing question-answering resources, and use synthetic data to accommodate unanswerable questions. | null | null | 10.18653/v1/2022.emnlp-main.806 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,936
inproceedings | bansal-etal-2022-learning | Learning to Generate Overlap Summaries through Noisy Synthetic Data | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.807/ | Bansal, Naman and Akter, Mousumi and Karmaker Santu, Shubhra Kanti | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11765--11777 | Semantic Overlap Summarization (SOS) is a novel and relatively under-explored seq-to-seq task which entails summarizing common information from multiple alternate narratives. One of the major challenges for solving this task is the lack of existing datasets for supervised training. To address this challenge, we propose a novel data augmentation technique, which allows us to create large amount of synthetic data for training a seq-to-seq model that can perform the SOS task. Through extensive experiments using narratives from the news domain, we show that the models fine-tuned using the synthetic dataset provide significant performance improvements over the pre-trained vanilla summarization techniques and are close to the models fine-tuned on the golden training data; which essentially demonstrates the effectiveness of our proposed data augmentation technique for training seq-to-seq models on the SOS task. | null | null | 10.18653/v1/2022.emnlp-main.807 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,937
inproceedings | jiang-etal-2022-mutual | Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.808/ | Jiang, Yichen and Zhou, Xiang and Bansal, Mohit | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11778--11793 | Recent datasets expose the lack of the systematic generalization ability in standard sequence-to-sequence models. In this work, we analyze this behavior of seq2seq models and identify two contributing factors: a lack of mutual exclusivity bias (one target sequence can only be mapped to one source sequence), and the tendency to memorize whole examples rather than separating structures from contents. We propose two techniques to address these two issues respectively: Mutual Exclusivity Training that prevents the model from producing seen generations when facing novel examples via an unlikelihood-based loss, and prim2primX data augmentation that automatically diversifies the arguments of every syntactic function to prevent memorizing and provide a compositional inductive bias without exposing test-set data. Combining these two techniques, we show substantial empirical improvements using standard sequence-to-sequence models (LSTMs and Transformers) on two widely-used compositionality datasets: SCAN and COGS. Finally, we provide analysis characterizing the improvements as well as the remaining challenges, and provide detailed ablations of our method. | null | null | 10.18653/v1/2022.emnlp-main.808 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,938 |
inproceedings | fortuna-etal-2022-directions | Directions for {NLP} Practices Applied to Online Hate Speech Detection | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.809/ | Fortuna, Paula and Dominguez, Monica and Wanner, Leo and Talat, Zeerak | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11794--11805 | Addressing hate speech in online spaces has been conceptualized as a classification task that uses Natural Language Processing (NLP) techniques. Through this conceptualization, the hate speech detection task has relied on common conventions and practices from NLP. For instance, inter-annotator agreement is conceptualized as a way to measure dataset quality and certain metrics and benchmarks are used to assure model generalization. However, hate speech is a deeply complex and situated concept that eludes such static and disembodied practices. In this position paper, we critically reflect on these methodologies for hate speech detection, we argue that many conventions in NLP are poorly suited for the problem and encourage researchers to develop methods that are more appropriate for the task. | null | null | 10.18653/v1/2022.emnlp-main.809 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,939 |
inproceedings | di-liello-etal-2022-pre | Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.810/ | Di Liello, Luca and Garg, Siddhant and Soldaini, Luca and Moschitti, Alessandro | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11806--11816 | An important task for designing QA systems is answer sentence selection (AS2): selecting the sentence containing (or constituting) the answer to a question from a set of retrieved relevant documents. In this paper, we propose three novel sentence-level transformer pre-training objectives that incorporate paragraph-level semantics within and across documents, to improve the performance of transformers for AS2, and mitigate the requirement of large labeled datasets. Specifically, the model is tasked to predict whether: (i) two sentences are extracted from the same paragraph, (ii) a given sentence is extracted from a given paragraph, and (iii) two paragraphs are extracted from the same document. Our experiments on three public and one industrial AS2 datasets demonstrate the empirical superiority of our pre-trained transformers over baseline models such as RoBERTa and ELECTRA for AS2. | null | null | 10.18653/v1/2022.emnlp-main.810 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,940 |
inproceedings | kantharaj-etal-2022-opencqa | {O}pen{CQA}: Open-ended Question Answering with Charts | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.811/ | Kantharaj, Shankar and Do, Xuan Long and Leong, Rixie Tiffany and Tan, Jia Qing and Hoque, Enamul and Joty, Shafiq | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11817--11837 | Charts are very popular to analyze data and convey important insights. People often analyze visualizations to answer open-ended questions that require explanatory answers. Answering such questions are often difficult and time-consuming as it requires a lot of cognitive and perceptual efforts. To address this challenge, we introduce a new task called OpenCQA, where the goal is to answer an open-ended question about a chart with descriptive texts. We present the annotation process and an in-depth analysis of our dataset. We implement and evaluate a set of baselines under three practical settings. In the first setting, a chart and the accompanying article is provided as input to the model. The second setting provides only the relevant paragraph(s) to the chart instead of the entire article, whereas the third setting requires the model to generate an answer solely based on the chart. Our analysis of the results show that the top performing models generally produce fluent and coherent text while they struggle to perform complex logical and arithmetic reasoning. | null | null | 10.18653/v1/2022.emnlp-main.811 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,941 |
inproceedings | li-etal-2022-systematic | A Systematic Investigation of Commonsense Knowledge in Large Language Models | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.812/ | Li, Xiang Lorraine and Kuncoro, Adhiguna and Hoffmann, Jordan and de Masson d{'}Autume, Cyprien and Blunsom, Phil and Nematzadeh, Aida | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11838--11855 | Language models (LMs) trained on large amounts of data have shown impressive performance on many NLP tasks under the zero-shot and few-shot setup. Here we aim to better understand the extent to which such models learn commonsense knowledge {---} a critical component of many NLP applications. We conduct a systematic and rigorous zero-shot and few-shot commonsense evaluation of large pre-trained LMs, where we: (i) carefully control for the LMs' ability to exploit potential surface cues and annotation artefacts, and (ii) account for variations in performance that arise from factors that are not related to commonsense knowledge. Our findings highlight the limitations of pre-trained LMs in acquiring commonsense knowledge without task-specific supervision; furthermore, using larger models or few-shot evaluation is insufficient to achieve human-level commonsense performance. | null | null | 10.18653/v1/2022.emnlp-main.812 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,942 |
inproceedings | raman-etal-2022-transforming | Transforming Sequence Tagging Into A {S}eq2{S}eq Task | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.813/ | Raman, Karthik and Naim, Iftekhar and Chen, Jiecao and Hashimoto, Kazuma and Yalasangi, Kiran and Srinivasan, Krishna | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11856--11874 | Pretrained, large, generative language models (LMs) have had great success in a wide range of sequence tagging and structured prediction tasks. Casting a sequence tagging task as a Seq2Seq one requires deciding the formats of the input and output sequences. However, we lack a principled understanding of the trade-offs associated with these formats (such as the effect on model accuracy, sequence length, multilingual generalization, hallucination). In this paper, we rigorously study different formats one could use for casting input text sentences and their output labels into the input and target (i.e., output) of a Seq2Seq model. Along the way, we introduce a new format, which we show to be both simpler and more effective. Additionally the new format demonstrates significant gains in the multilingual settings {--} both zero-shot transfer learning and joint training. Lastly, we find that the new format is more robust and almost completely devoid of hallucination {--} an issue we find common in existing formats. With well over a 1000 experiments studying 14 different formats, over 7 diverse public benchmarks {--} including 3 multilingual datasets spanning 7 languages {--} we believe our findings provide a strong empirical basis in understanding how we should tackle sequence tagging tasks. | null | null | 10.18653/v1/2022.emnlp-main.813 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,943
inproceedings | iovine-etal-2022-cyclekqr | {C}ycle{KQR}: Unsupervised Bidirectional Keyword-Question Rewriting | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.814/ | Iovine, Andrea and Fang, Anjie and Fetahu, Besnik and Zhao, Jie and Rokhlenko, Oleg and Malmasi, Shervin | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11875--11886 | Users expect their queries to be answered by search systems, regardless of the query`s surface form, which includes keyword queries and natural questions. Natural Language Understanding (NLU) components of Search and QA systems may fail to correctly interpret semantically equivalent inputs if these deviate from how the system was trained, leading to suboptimal understanding capabilities. We propose the keyword-question rewriting task to improve query understanding capabilities of NLU systems for all surface forms. To achieve this, we present CycleKQR, an unsupervised approach, enabling effective rewriting between keyword and question queries using non-parallel data. Empirically we show the impact on QA performance of unfamiliar query forms for open domain and Knowledge Base QA systems (trained on either keywords or natural language questions). We demonstrate how CycleKQR significantly improves QA performance by rewriting queries into the appropriate form, while at the same time retaining the original semantic meaning of input queries, allowing CycleKQR to improve performance by up to 3{\%} over supervised baselines. Finally, we release a dataset of 66k keyword-question pairs. | null | null | 10.18653/v1/2022.emnlp-main.814 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,944
inproceedings | deng-etal-2022-model | Model Criticism for Long-Form Text Generation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.815/ | Deng, Yuntian and Kuleshov, Volodymyr and Rush, Alexander | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11887--11912 | Language models have demonstrated the ability to generate highly fluent text; however, it remains unclear whether their output retains coherent high-level structure (e.g., story progression). Here, we propose to apply a statistical tool, model criticism in latent space, to evaluate the high-level structure of the generated text. Model criticism compares the distributions between real and generated data in a latent space obtained according to an assumptive generative process. Different generative processes identify specific failure modes of the underlying model. We perform experiments on three representative aspects of high-level discourse{---}coherence, coreference, and topicality{---}and find that transformer-based language models are able to capture topical structures but have a harder time maintaining structural coherence or modeling coreference. | null | null | 10.18653/v1/2022.emnlp-main.815 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,945 |
inproceedings | wang-etal-2022-improving | Improving Faithfulness by Augmenting Negative Summaries from Fake Documents | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.816/ | Wang, Tianshu and Ladhak, Faisal and Durmus, Esin and He, He | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11913--11921 | Current abstractive summarization systems tend to hallucinate content that is unfaithful to the source document, posing a risk of misinformation. To mitigate hallucination, we must teach the model to distinguish hallucinated summaries from faithful ones. However, the commonly used maximum likelihood training does not disentangle factual errors from other model errors. To address this issue, we propose a back-translation-style approach to augment negative samples that mimic factual errors made by the model. Specifically, we train an elaboration model that generates hallucinated documents given the reference summaries, and then generates negative summaries from the fake documents. We incorporate the negative samples into training through a controlled generator, which produces faithful/unfaithful summaries conditioned on the control codes. Additionally, we find that adding textual entailment data through multitasking further boosts the performance. Experiments on three datasets (XSum, Gigaword, and WikiHow) show that our method consistently improves faithfulness without sacrificing informativeness according to both human and automatic evaluation. | null | null | 10.18653/v1/2022.emnlp-main.816 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,946
inproceedings | chakrabarti-etal-2022-joint | Joint Completion and Alignment of Multilingual Knowledge Graphs | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.817/ | Chakrabarti, Soumen and Singh, Harkanwar and Lohiya, Shubham and Jain, Prachi and -, Mausam | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11922--11938 | Knowledge Graph Completion (KGC) predicts missing facts in an incomplete Knowledge Graph (KG). Multilingual KGs associate entities and relations with surface forms written in different languages. An entity or relation may be associated with distinct IDs in different KGs, necessitating entity alignment (EA) and relation alignment (RA). Many effective algorithms have been proposed for completion and alignment as separate tasks. Here we show that these tasks are synergistic and best solved together. Our multitask approach starts with a state-of-the-art KG embedding scheme, but adds a novel relation representation based on sets of embeddings of (subject, object) entity pairs. This representation leads to a new relation alignment loss term based on a maximal bipartite matching between two sets of embedding vectors. This loss is combined with traditional KGC loss and optionally, losses based on text embeddings of entity (and relation) names. In experiments over KGs in seven languages, we find that our system achieves large improvements in KGC compared to a strong completion model that combines known facts in all languages. It also outperforms strong EA and RA baselines, underscoring the value of joint alignment and completion. | null | null | 10.18653/v1/2022.emnlp-main.817 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,947 |
inproceedings | sia-etal-2022-offer | Offer a Different Perspective: Modeling the Belief Alignment of Arguments in Multi-party Debates | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.818/ | Sia, Suzanna and Jaidka, Kokil and Ahuja, Hansin and Chhaya, Niyati and Duh, Kevin | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11939--11950 | In contexts where debate and deliberation are the norm, the participants are regularly presented with new information that conflicts with their original beliefs. When required to update their beliefs (belief alignment), they may choose arguments that align with their worldview (confirmation bias). We test this and competing hypotheses in a constraint-based modeling approach to predict the winning arguments in multi-party interactions in the Reddit Change My View and Intelligence Squared debates datasets. We adopt a hierarchical generative Variational Autoencoder as our model and impose structural constraints that reflect competing hypotheses about the nature of argumentation. Our findings suggest that in most settings, predictive models that anticipate winning arguments to be further from the initial argument of the opinion holder are more likely to succeed. | null | null | 10.18653/v1/2022.emnlp-main.818 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,948 |
inproceedings | gandhi-etal-2022-federated | A Federated Approach to Predicting Emojis in {H}indi Tweets | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.819/ | Gandhi, Deep and Mehta, Jash and Parekh, Nirali and Waghela, Karan and D{'}Mello, Lynette and Talat, Zeerak | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11951--11961 | The use of emojis affords a visual modality to, often private, textual communication. The task of predicting emojis however provides a challenge for machine learning as emoji use tends to cluster into the frequently used and the rarely used emojis. Much of the machine learning research on emoji use has focused on high resource languages and has conceptualised the task of predicting emojis around traditional server-side machine learning approaches. However, traditional machine learning approaches for private communication can introduce privacy concerns, as these approaches require all data to be transmitted to a central storage. In this paper, we seek to address the dual concerns of emphasising high resource languages for emoji prediction and risking the privacy of people`s data. We introduce a new dataset of 118k tweets (augmented from 25k unique tweets) for emoji prediction in Hindi, and propose a modification to the federated learning algorithm, CausalFedGSD, which aims to strike a balance between model performance and user privacy. We show that our approach obtains comparative scores with more complex centralised models while reducing the amount of data required to optimise the models and minimising risks to user privacy. | null | null | 10.18653/v1/2022.emnlp-main.819 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,949
inproceedings | emelin-etal-2022-injecting | Injecting Domain Knowledge in Language Models for Task-oriented Dialogue Systems | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.820/ | Emelin, Denis and Bonadiman, Daniele and Alqahtani, Sawsan and Zhang, Yi and Mansour, Saab | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11962--11974 | Pre-trained language models (PLM) have advanced the state-of-the-art across NLP applications, but lack domain-specific knowledge that does not naturally occur in pre-training data. Previous studies augmented PLMs with symbolic knowledge for different downstream NLP tasks. However, knowledge bases (KBs) utilized in these studies are usually large-scale and static, in contrast to small, domain-specific, and modifiable knowledge bases that are prominent in real-world task-oriented dialogue (TOD) systems. In this paper, we showcase the advantages of injecting domain-specific knowledge prior to fine-tuning on TOD tasks. To this end, we utilize light-weight adapters that can be easily integrated with PLMs and serve as a repository for facts learned from different KBs. To measure the efficacy of proposed knowledge injection methods, we introduce Knowledge Probing using Response Selection (KPRS) {--} a probe designed specifically for TOD models. Experiments on KPRS and the response generation task show improvements of knowledge injection with adapters over strong baselines. | null | null | 10.18653/v1/2022.emnlp-main.820 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,950 |
inproceedings | cao-etal-2022-tasa | {TASA}: Deceiving Question Answering Models by Twin Answer Sentences Attack | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.821/ | Cao, Yu and Li, Dianqi and Fang, Meng and Zhou, Tianyi and Gao, Jun and Zhan, Yibing and Tao, Dacheng | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11975--11992 | We present Twin Answer Sentences Attack (TASA), an adversarial attack method for question answering (QA) models that produces fluent and grammatical adversarial contexts while maintaining gold answers. Despite phenomenal progress on general adversarial attacks, few works have investigated the vulnerability and attack specifically for QA models. In this work, we first explore the biases in the existing models and discover that they mainly rely on keyword matching between the question and context, and ignore the relevant contextual relations for answer prediction. Based on the two biases above, TASA attacks the target model in two folds: (1) lowering the model`s confidence on the gold answer with a perturbed answer sentence; (2) misguiding the model towards a wrong answer with a distracting answer sentence. Equipped with designed beam search and filtering methods, TASA can generate more effective attacks than existing textual attack methods while sustaining the quality of contexts, in extensive experiments on five QA datasets and human evaluations. | null | null | 10.18653/v1/2022.emnlp-main.821 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,951
inproceedings | hangya-etal-2022-improving | Improving Low-Resource Languages in Pre-Trained Multilingual Language Models | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.822/ | Hangya, Viktor and Saadi, Hossain Shaikh and Fraser, Alexander | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 11993--12006 | Pre-trained multilingual language models are the foundation of many NLP approaches, including cross-lingual transfer solutions. However, languages with small available monolingual corpora are often not well-supported by these models leading to poor performance. We propose an unsupervised approach to improve the cross-lingual representations of low-resource languages by bootstrapping word translation pairs from monolingual corpora and using them to improve language alignment in pre-trained language models. We perform experiments on nine languages, using contextual word retrieval and zero-shot named entity recognition to measure both intrinsic cross-lingual word representation quality and downstream task performance, showing improvements on both tasks. Our results show that it is possible to improve pre-trained multilingual language models by relying only on non-parallel resources. | null | null | 10.18653/v1/2022.emnlp-main.822 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,952 |
inproceedings | shaham-etal-2022-scrolls | {SCROLLS}: Standardized {C}ompa{R}ison Over Long Language Sequences | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.823/ | Shaham, Uri and Segal, Elad and Ivgi, Maor and Efrat, Avia and Yoran, Ori and Haviv, Adi and Gupta, Ankit and Xiong, Wenhan and Geva, Mor and Berant, Jonathan and Levy, Omer | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 12007--12021 | NLP benchmarks have largely focused on short texts, such as sentences and paragraphs, even though long texts comprise a considerable amount of natural language in the wild. We introduce SCROLLS, a suite of tasks that require reasoning over long texts. We examine existing long-text datasets, and handpick ones where the text is naturally long, while prioritizing tasks that involve synthesizing information across the input. SCROLLS contains summarization, question answering, and natural language inference tasks, covering multiple domains, including literature, science, business, and entertainment. Initial baselines, including Longformer Encoder-Decoder, indicate that there is ample room for improvement on SCROLLS. We make all datasets available in a unified text-to-text format and host a live leaderboard to facilitate research on model architecture and pretraining methods. | null | null | 10.18653/v1/2022.emnlp-main.823 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,953 |
inproceedings | feng-etal-2022-par | {PAR}: Political Actor Representation Learning with Social Context and Expert Knowledge | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.824/ | Feng, Shangbin and Tan, Zhaoxuan and Chen, Zilong and Wang, Ningnan and Yu, Peisheng and Zheng, Qinghua and Chang, Xiaojun and Luo, Minnan | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 12022--12036 | Modeling the ideological perspectives of political actors is an essential task in computational political science with applications in many downstream tasks. Existing approaches are generally limited to textual data and voting records, while they neglect the rich social context and valuable expert knowledge for holistic ideological analysis. In this paper, we propose PAR, a Political Actor Representation learning framework that jointly leverages social context and expert knowledge. Specifically, we retrieve and extract factual statements about legislators to leverage social context information. We then construct a heterogeneous information network to incorporate social context and use relational graph neural networks to learn legislator representations. Finally, we train PAR with three objectives to align representation learning with expert knowledge, model ideological stance consistency, and simulate the echo chamber phenomenon. Extensive experiments demonstrate that PAR is better at augmenting political text understanding and successfully advances the state-of-the-art in political perspective detection and roll call vote prediction. Further analysis proves that PAR learns representations that reflect the political reality and provide new insights into political behavior. | null | null | 10.18653/v1/2022.emnlp-main.824 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,954
inproceedings | zhao-etal-2022-jddc | {JDDC} 2.1: A Multimodal {C}hinese Dialogue Dataset with Joint Tasks of Query Rewriting, Response Generation, Discourse Parsing, and Summarization | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.825/ | Zhao, Nan and Li, Haoran and Wu, Youzheng and He, Xiaodong | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 12037--12051 | The popularity of multimodal dialogue has stimulated the need for a new generation of dialogue agents with multimodal interactivity. When users communicate with customer service, they may express their requirements by means of text, images, or even videos. Visual information usually acts as discriminators for product models, or indicators of product failures, which play an important role in the E-commerce scenario. On the other hand, detailed information provided by the images is limited, and typically, customer service systems cannot understand the intent of users without the input text. Thus, bridging the gap between the image and text is crucial for communicating with customers. In this paper, we construct JDDC 2.1, a large-scale multimodal multi-turn dialogue dataset collected from a mainstream Chinese E-commerce platform, containing about 246K dialogue sessions, 3M utterances, and 507K images, along with product knowledge bases and image category annotations. Over our dataset, we jointly define four tasks: the multimodal dialogue response generation task, the multimodal query rewriting task, the multimodal dialogue discourse parsing task, and the multimodal dialogue summarization task. JDDC 2.1 is the first corpus with annotations for all the above tasks over the same dialogue sessions, which facilitates the comprehensive research around the dialogue. In addition, we present several text-only and multimodal baselines and show the importance of visual information for these tasks. Our dataset and implementation will be publicly available. | null | null | 10.18653/v1/2022.emnlp-main.825 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,955
inproceedings | wu-etal-2022-pcl | {PCL}: Peer-Contrastive Learning with Diverse Augmentations for Unsupervised Sentence Embeddings | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.826/ | Wu, Qiyu and Tao, Chongyang and Shen, Tao and Xu, Can and Geng, Xiubo and Jiang, Daxin | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 12052--12066 | Learning sentence embeddings in an unsupervised manner is fundamental in natural language processing. Recent common practice is to couple pre-trained language models with unsupervised contrastive learning, whose success relies on augmenting a sentence with a semantically-close positive instance to construct contrastive pairs. Nonetheless, existing approaches usually depend on a mono-augmenting strategy, which causes learning shortcuts towards the augmenting biases and thus corrupts the quality of sentence embeddings. A straightforward solution is resorting to more diverse positives from a multi-augmenting strategy, while an open question remains about how to unsupervisedly learn from the diverse positives but with uneven augmenting qualities in the text field. As one answer, we propose a novel Peer-Contrastive Learning (PCL) with diverse augmentations. PCL constructs diverse contrastive positives and negatives at the group level for unsupervised sentence embeddings. PCL performs peer-positive contrast as well as peer-network cooperation, which offers an inherent anti-bias ability and an effective way to learn from diverse augmentations. Experiments on STS benchmarks verify the effectiveness of PCL against its competitors in unsupervised sentence embeddings. | null | null | 10.18653/v1/2022.emnlp-main.826 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,956
inproceedings | yan-etal-2022-digging | Digging Errors in {NMT}: Evaluating and Understanding Model Errors from Partial Hypothesis Space | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.827/ | Yan, Jianhao and Wu, Chenming and Meng, Fandong and Zhou, Jie | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 12067--12085 | Solid evaluation of neural machine translation (NMT) is key to its understanding and improvement. Current evaluation of an NMT system is usually built upon a heuristic decoding algorithm (e.g., beam search) and an evaluation metric assessing similarity between the translation and golden reference. However, this system-level evaluation framework is limited by evaluating only one best hypothesis and search errors brought by heuristic decoding algorithms. To better understand NMT models, we propose a novel evaluation protocol, which defines model errors with model`s ranking capability over hypothesis space. To tackle the problem of exponentially large space, we propose two approximation methods, top region evaluation along with an exact top-k decoding algorithm, which finds top-ranked hypotheses in the whole hypothesis space, and Monte Carlo sampling evaluation, which simulates hypothesis space from a broader perspective. To quantify errors, we define our NMT model errors by measuring distance between the hypothesis array ranked by the model and the ideally ranked hypothesis array. After confirming the strong correlation with human judgment, we apply our evaluation to various NMT benchmarks and model architectures. We show that the state-of-the-art Transformer models face serious ranking issues and only perform at the random chance level in the top region. We further analyze model errors on architectures with different depths and widths, as well as different data-augmentation techniques, showing how these factors affect model errors. Finally, we connect model errors with the search algorithms and provide interesting findings of beam search inductive bias and correlation with Minimum Bayes Risk (MBR) decoding. | null | null | 10.18653/v1/2022.emnlp-main.827 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,957
inproceedings | liu-etal-2022-dialogconv | {D}ialog{C}onv: A Lightweight Fully Convolutional Network for Multi-view Response Selection | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.828/ | Liu, Yongkang and Feng, Shi and Gao, Wei and Wang, Daling and Zhang, Yifei | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 12086--12098 | Current end-to-end retrieval-based dialogue systems are mainly based on Recurrent Neural Networks or Transformers with attention mechanisms. Although promising results have been achieved, these models often suffer from slow inference or a huge number of parameters. In this paper, we propose a novel lightweight fully convolutional architecture, called DialogConv, for response selection. DialogConv is exclusively built on top of convolution to extract matching features of context and response. Dialogues are modeled in 3D views, where DialogConv performs convolution operations on embedding view, word view and utterance view to capture richer semantic information from multiple contextual views. On the four benchmark datasets, compared with state-of-the-art baselines, DialogConv is on average about 8.5x smaller in size, and 79.39x and 10.64x faster on CPU and GPU devices, respectively. At the same time, DialogConv achieves the competitive effectiveness of response selection. | null | null | 10.18653/v1/2022.emnlp-main.828 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,958
inproceedings | flanigan-etal-2022-meaning | Meaning Representations for Natural Languages: Design, Models and Applications | El-Beltagy, Samhaa R. and Qiu, Xipeng | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-tutorials.1/ | Flanigan, Jeffrey and Jindal, Ishan and Li, Yunyao and O{'}Gorman, Tim and Palmer, Martha and Xue, Nianwen | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts | 1--8 | This tutorial reviews the design of common meaning representations, SoTA models for predicting meaning representations, and the applications of meaning representations in a wide range of downstream NLP tasks and real-world applications. Reporting by a diverse team of NLP researchers from academia and industry with extensive experience in designing, building and using meaning representations, our tutorial has three components: (1) an introduction to common meaning representations, including basic concepts and design challenges; (2) a review of SoTA methods on building models for meaning representations; and (3) an overview of applications of meaning representations in downstream NLP tasks and real-world applications. We will also present qualitative comparisons of common meaning representations and a quantitative study on how their differences impact model performance. Finally, we will share best practices in choosing the right meaning representation for downstream tasks. | null | null | 10.18653/v1/2022.emnlp-tutorials.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,960
inproceedings | habash-2022-arabic | {A}rabic Natural Language Processing | El-Beltagy, Samhaa R. and Qiu, Xipeng | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-tutorials.2/ | Habash, Nizar | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts | 9--10 | The Arabic language continues to be the focus of an increasing number of projects in natural language processing (NLP) and computational linguistics (CL). This tutorial provides NLP/CL system developers and researchers (computer scientists and linguists alike) with the necessary background information for working with Arabic in its various forms: Classical, Modern Standard and Dialectal. We discuss various Arabic linguistic phenomena and review the state-of-the-art in Arabic processing from enabling technologies and resources, to common tasks and applications. The tutorial will explain important concepts, common wisdom, and common pitfalls in Arabic processing. Given the wide range of possible issues, we invite tutorial attendees to bring up interesting challenges and problems they are working on to discuss during the tutorial. | null | null | 10.18653/v1/2022.emnlp-tutorials.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,961
inproceedings | baroni-etal-2022-emergent | Emergent Language-Based Coordination In Deep Multi-Agent Systems | El-Beltagy, Samhaa R. and Qiu, Xipeng | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-tutorials.3/ | Baroni, Marco and Dessi, Roberto and Lazaridou, Angeliki | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts | 11--16 | Large pre-trained deep networks are the standard building blocks of modern AI applications. This raises fundamental questions about how to control their behaviour and how to make them efficiently interact with each other. Deep net emergent communication tackles these challenges by studying how to induce communication protocols between neural network agents, and how to include humans in the communication loop. Traditionally, this research had focussed on relatively small-scale experiments where two networks had to develop a discrete code from scratch for referential communication. However, with the rise of large pre-trained language models that can work well on many tasks, the emphasis is now shifting on how to let these models interact through a language-like channel to engage in more complex behaviors. By reviewing several representative papers, we will provide an introduction to deep net emergent communication, cover various central topics from the present and recent past, and discuss current shortcomings and suggest future directions. The presentation is complemented by a hands-on section where participants will implement and analyze two emergent communications setups from the literature. The tutorial should be of interest to researchers wanting to develop more flexible AI systems, but also to cognitive scientists and linguists interested in the evolution of communication systems. | null | null | 10.18653/v1/2022.emnlp-tutorials.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,962
inproceedings | jin-etal-2022-causalnlp | {C}ausal{NLP} Tutorial: An Introduction to Causality for Natural Language Processing | El-Beltagy, Samhaa R. and Qiu, Xipeng | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-tutorials.4/ | Jin, Zhijing and Feder, Amir and Zhang, Kun | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts | 17--22 | Causal inference is becoming an increasingly important topic in deep learning, with the potential to help with critical deep learning problems such as model robustness, interpretability, and fairness. In addition, causality is naturally widely used in various disciplines of science, to discover causal relationships among variables and estimate causal effects of interest. In this tutorial, we introduce the fundamentals of causal discovery and causal effect estimation to the natural language processing (NLP) audience, provide an overview of causal perspectives to NLP problems, and aim to inspire novel approaches to NLP further. This tutorial is inclusive to a variety of audiences and is expected to facilitate the community`s developments in formulating and addressing new, important NLP problems in light of emerging causal principles and methodologies. | null | null | 10.18653/v1/2022.emnlp-tutorials.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,963
inproceedings | ruder-etal-2022-modular | Modular and Parameter-Efficient Fine-Tuning for {NLP} Models | El-Beltagy, Samhaa R. and Qiu, Xipeng | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-tutorials.5/ | Ruder, Sebastian and Pfeiffer, Jonas and Vuli{\'c}, Ivan | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts | 23--29 | State-of-the-art language models in NLP perform best when fine-tuned even on small datasets, but due to their increasing size, fine-tuning and downstream usage have become extremely compute-intensive. Being able to efficiently and effectively fine-tune the largest pre-trained models is thus key in order to reap the benefits of the latest advances in NLP. In this tutorial, we provide a comprehensive overview of parameter-efficient fine-tuning methods. We highlight their similarities and differences by presenting them in a unified view. We explore the benefits and usage scenarios of a neglected property of such parameter-efficient models{---}modularity{---}such as composition of modules to deal with previously unseen data conditions. We finally highlight how both properties{---}parameter efficiency and modularity{---}can be useful in the real-world setting of adapting pre-trained models to under-represented languages and domains with scarce annotated data for several downstream applications. | null | null | 10.18653/v1/2022.emnlp-tutorials.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,964
inproceedings | feng-shao-2022-non | Non-Autoregressive Models for Fast Sequence Generation | El-Beltagy, Samhaa R. and Qiu, Xipeng | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-tutorials.6/ | Feng, Yang and Shao, Chenze | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts | 30--35 | Autoregressive (AR) models have achieved great success in various sequence generation tasks. However, AR models can only generate target sequences word-by-word due to the AR mechanism and hence suffer from slow inference. Recently, non-autoregressive (NAR) models, which generate all the tokens in parallel by removing the sequential dependencies within the target sequence, have received increasing attention in sequence generation tasks such as neural machine translation (NMT), automatic speech recognition (ASR), and text to speech (TTS). In this tutorial, we will provide a comprehensive introduction to non-autoregressive sequence generation. | null | null | 10.18653/v1/2022.emnlp-tutorials.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,965
inproceedings | jin-etal-2022-cogktr | {C}og{KTR}: A Knowledge-Enhanced Text Representation Toolkit for Natural Language Understanding | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.1/ | Jin, Zhuoran and Men, Tianyi and Yuan, Hongbang and Zhou, Yuyang and Cao, Pengfei and Chen, Yubo and Xue, Zhipeng and Liu, Kang and Zhao, Jun | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 1--11 | As the first step of modern natural language processing, text representation encodes discrete texts as continuous embeddings. Pre-trained language models (PLMs) have demonstrated strong ability in text representation and significantly promoted the development of natural language understanding (NLU). However, existing PLMs represent a text solely by its context, which is not enough to support knowledge-intensive NLU tasks. Knowledge is power, and fusing external knowledge explicitly into PLMs can provide knowledgeable text representations. Because previous knowledge-enhanced methods differ in many aspects, it is difficult to reproduce existing methods, implement new ones, and transfer between them. It is highly desirable to have a unified paradigm to encompass all kinds of methods in one framework. In this paper, we propose CogKTR, a knowledge-enhanced text representation toolkit for natural language understanding. According to our proposed Unified Knowledge-Enhanced Paradigm (UniKEP), CogKTR consists of four key stages, including knowledge acquisition, knowledge representation, knowledge injection, and knowledge application. CogKTR currently supports easy-to-use knowledge acquisition interfaces, multi-source knowledge embeddings, diverse knowledge-enhanced models, and various knowledge-intensive NLU tasks. Our unified, knowledgeable and modular toolkit is publicly available at GitHub, with an online system and a short instruction video. | null | null | 10.18653/v1/2022.emnlp-demos.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,967
inproceedings | geva-etal-2022-lm | {LM}-Debugger: An Interactive Tool for Inspection and Intervention in Transformer-Based Language Models | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.2/ | Geva, Mor and Caciularu, Avi and Dar, Guy and Roit, Paul and Sadde, Shoval and Shlain, Micah and Tamir, Bar and Goldberg, Yoav | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 12--21 | The opaque nature and unexplained behavior of transformer-based language models (LMs) have spurred a wide interest in interpreting their predictions. However, current interpretation methods mostly focus on probing models from outside, executing behavioral tests, and analyzing salience input features, while the internal prediction construction process is largely not understood. In this work, we introduce LM-Debugger, an interactive debugger tool for transformer-based LMs, which provides a fine-grained interpretation of the model`s internal prediction process, as well as a powerful framework for intervening in LM behavior. For its backbone, LM-Debugger relies on a recent method that interprets the inner token representations and their updates by the feed-forward layers in the vocabulary space. We demonstrate the utility of LM-Debugger for single-prediction debugging, by inspecting the internal disambiguation process done by GPT2. Moreover, we show how easily LM-Debugger allows to shift model behavior in a direction of the user`s choice, by identifying a few vectors in the network and inducing effective interventions to the prediction process. We release LM-Debugger as an open-source tool and a demo over GPT2 models. | null | null | 10.18653/v1/2022.emnlp-demos.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,968 |
inproceedings | wang-etal-2022-easynlp | {E}asy{NLP}: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.3/ | Wang, Chengyu and Qiu, Minghui and Zhang, Taolin and Liu, Tingting and Li, Lei and Wang, Jianing and Wang, Ming and Huang, Jun and Lin, Wei | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 22--29 | Pre-Trained Models (PTMs) have reshaped the development of Natural Language Processing (NLP) and achieved significant improvement in various benchmarks. Yet, it is not easy for industrial practitioners to obtain high-performing PTM-based models without a large amount of labeled training data and deploy them online with fast inference speed. To bridge this gap, EasyNLP is designed to make it easy to build NLP applications, which supports a comprehensive suite of NLP algorithms. It further features knowledge-enhanced pre-training, knowledge distillation and few-shot learning functionalities, and provides a unified framework of model training, inference and deployment for real-world applications. EasyNLP has powered over ten business units within Alibaba Group and is seamlessly integrated to the Platform of AI (PAI) products on Alibaba Cloud. The source code of EasyNLP is released at GitHub (\url{https://github.com/alibaba/EasyNLP}). | null | null | 10.18653/v1/2022.emnlp-demos.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,969 |
inproceedings | zhao-etal-2022-explainable | An Explainable Toolbox for Evaluating Pre-trained Vision-Language Models | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.4/ | Zhao, Tiancheng and Zhang, Tianqi and Zhu, Mingwei and Shen, Haozhan and Lee, Kyusong and Lu, Xiaopeng and Yin, Jianwei | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 30--37 | We introduce VL-CheckList, a toolbox for evaluating Vision-Language Pretraining (VLP) models, including the preliminary datasets that deepen the image-texting ability of a VLP model. Most existing VLP works evaluated their systems by comparing the fine-tuned downstream task performance. However, only average downstream task accuracy provides little information about the pros and cons of each VLP method. In this paper, we demonstrate how minor input changes in language and vision will affect the prediction outputs. Then, we describe the detailed user guidelines to utilize and contribute to the community. We show new findings on one of the representative VLP models to provide an example analysis. The data/code is available at \url{https://github.com/om-ai-lab/VL-CheckList} | null | null | 10.18653/v1/2022.emnlp-demos.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,970 |
inproceedings | camacho-collados-etal-2022-tweetnlp | {T}weet{NLP}: Cutting-Edge Natural Language Processing for Social Media | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.5/ | Camacho-collados, Jose and Rezaee, Kiamehr and Riahi, Talayeh and Ushio, Asahi and Loureiro, Daniel and Antypas, Dimosthenis and Boisson, Joanne and Espinosa Anke, Luis and Liu, Fangyu and Mart{\'i}nez C{\'a}mara, Eugenio | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 38--49 | In this paper we present TweetNLP, an integrated platform for Natural Language Processing (NLP) in social media. TweetNLP supports a diverse set of NLP tasks, including generic focus areas such as sentiment analysis and named entity recognition, as well as social media-specific tasks such as emoji prediction and offensive language identification. Task-specific systems are powered by reasonably-sized Transformer-based language models specialized on social media text (in particular, Twitter) which can be run without the need for dedicated hardware or cloud services. The main contributions of TweetNLP are: (1) an integrated Python library for a modern toolkit supporting social media analysis using our various task-specific models adapted to the social domain; (2) an interactive online demo for codeless experimentation using our models; and (3) a tutorial covering a wide variety of typical social media applications. | null | null | 10.18653/v1/2022.emnlp-demos.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,971 |
inproceedings | ohta-etal-2022-joeys2t | {J}oey{S}2{T}: Minimalistic Speech-to-Text Modeling with {J}oey{NMT} | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.6/ | Ohta, Mayumi and Kreutzer, Julia and Riezler, Stefan | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 50--59 | JoeyS2T is a JoeyNMT extension for speech-to-text tasks such as automatic speech recognition and end-to-end speech translation. It inherits the core philosophy of JoeyNMT, a minimalist NMT toolkit built on PyTorch, seeking simplicity and accessibility. JoeyS2T`s workflow is self-contained, starting from data pre-processing, over model training and prediction to evaluation, and is seamlessly integrated into JoeyNMT`s compact and simple code base. On top of JoeyNMT`s state-of-the-art Transformer-based Encoder-Decoder architecture, JoeyS2T provides speech-oriented components such as convolutional layers, SpecAugment, CTC-loss, and WER evaluation. Despite its simplicity compared to prior implementations, JoeyS2T performs competitively on English speech recognition and English-to-German speech translation benchmarks. The implementation is accompanied by a walk-through tutorial and available on \url{https://github.com/may-/joeys2t}. | null | null | 10.18653/v1/2022.emnlp-demos.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,972 |
inproceedings | han-etal-2022-fairlib | {F}air{L}ib: A Unified Framework for Assessing and Improving Fairness | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.7/ | Han, Xudong and Shen, Aili and Li, Yitong and Frermann, Lea and Baldwin, Timothy and Cohn, Trevor | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 60--71 | This paper presents FairLib, an open-source python library for assessing and improving model fairness. It provides a systematic framework for quickly accessing benchmark datasets, reproducing existing debiasing baseline models, developing new methods, evaluating models with different metrics, and visualizing their results. Its modularity and extensibility enable the framework to be used for diverse types of inputs, including natural language, images, and audio. We implement 14 debiasing methods, including pre-processing,at-training-time, and post-processing approaches. The built-in metrics cover the most commonly acknowledged fairness criteria and can be further generalized and customized for fairness evaluation. | null | null | 10.18653/v1/2022.emnlp-demos.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,973 |
inproceedings | bast-etal-2022-elevant | {ELEVANT}: A Fully Automatic Fine-Grained Entity Linking Evaluation and Analysis Tool | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.8/ | Bast, Hannah and Hertel, Matthias and Prange, Natalie | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 72--79 | We present Elevant, a tool for the fully automatic fine-grained evaluation of a set of entity linkers on a set of benchmarks. Elevant provides an automatic breakdown of the performance by various error categories and by entity type. Elevant also provides a rich and compact, yet very intuitive and self-explanatory visualization of the results of a linker on a benchmark in comparison to the ground truth. A live demo, the link to the complete code base on GitHub and a link to a demo video are provided under \url{https://elevant.cs.uni-freiburg.de} . | null | null | 10.18653/v1/2022.emnlp-demos.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,974 |
inproceedings | maufe-etal-2022-pipeline | A Pipeline for Generating, Annotating and Employing Synthetic Data for Real World Question Answering | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.9/ | Maufe, Matt and Ravenscroft, James and Procter, Rob and Liakata, Maria | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 80--97 | Question Answering (QA) is a growing area of research, often used to facilitate the extraction of information from within documents. State-of-the-art QA models are usually pre-trained on domain-general corpora like Wikipedia and thus tend to struggle on out-of-domain documents without fine-tuning. We demonstrate that synthetic domain-specific datasets can be generated easily using domain-general models, while still providing significant improvements to QA performance. We present two new tools for this task: A flexible pipeline for validating the synthetic QA data and training down stream models on it, and an online interface to facilitate human annotation of this generated data. Using this interface, crowdworkers labelled 1117 synthetic QA pairs, which we then used to fine-tune downstream models and improve domain-specific QA performance by 8.75 F1. | null | null | 10.18653/v1/2022.emnlp-demos.9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,975 |
inproceedings | zhang-etal-2022-deepke | {D}eep{KE}: A Deep Learning Based Knowledge Extraction Toolkit for Knowledge Base Population | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.10/ | Zhang, Ningyu and Xu, Xin and Tao, Liankuan and Yu, Haiyang and Ye, Hongbin and Qiao, Shuofei and Xie, Xin and Chen, Xiang and Li, Zhoubo and Li, Lei | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 98--108 | We present an open-source and extensible knowledge extraction toolkit DeepKE, supporting complicated low-resource, document-level and multimodal scenarios in the knowledge base population. DeepKE implements various information extraction tasks, including named entity recognition, relation extraction and attribute extraction. With a unified framework, DeepKE allows developers and researchers to customize datasets and models to extract information from unstructured data according to their requirements. Specifically, DeepKE not only provides various functional modules and model implementation for different tasks and scenarios but also organizes all components by consistent frameworks to maintain sufficient modularity and extensibility. We release the source code at GitHub in \url{https://github.com/zjunlp/DeepKE} with Google Colab tutorials and comprehensive documents for beginners. Besides, we present an online system in \url{http://deepke.openkg.cn/EN/re_doc_show.html} for real-time extraction of various tasks, and a demo video. | null | null | 10.18653/v1/2022.emnlp-demos.10 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,976 |
inproceedings | kim-etal-2022-anemic | {A}n{EMIC}: A Framework for Benchmarking {ICD} Coding Models | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.11/ | Kim, Juyong and Sharma, Abheesht and Shanbhogue, Suhas and Weiss, Jeremy and Ravikumar, Pradeep | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 109--120 | Diagnostic coding, or ICD coding, is the task of assigning diagnosis codes defined by the ICD (International Classification of Diseases) standard to patient visits based on clinical notes. The current process of manual ICD coding is time-consuming and often error-prone, which suggests the need for automatic ICD coding. However, despite the long history of automatic ICD coding, there have been no standardized frameworks for benchmarking ICD coding models. We open-source an easy-to-use tool named \textit{AnEMIC}, which provides a streamlined pipeline for preprocessing, training, and evaluating for automatic ICD coding. We correct errors in preprocessing by existing works, and provide key models and weights trained on the correctly preprocessed datasets. We also provide an interactive demo performing real-time inference from custom inputs, and visualizations drawn from explainable AI to analyze the models. We hope the framework helps move the research of ICD coding forward and helps professionals explore the potential of ICD coding. The framework and the associated code are available here. | null | null | 10.18653/v1/2022.emnlp-demos.11 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,977 |
inproceedings | abhishek-etal-2022-spear | {SPEAR} : Semi-supervised Data Programming in Python | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.12/ | Abhishek, Guttu and Ingole, Harshad and Laturia, Parth and Dorna, Vineeth and Maheshwari, Ayush and Ramakrishnan, Ganesh and Iyer, Rishabh | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 121--127 | We present SPEAR, an open-source python library for data programming with semi supervision. The package implements several recent data programming approaches including facility to programmatically label and build training data. SPEAR facilitates weak supervision in the form of heuristics (or rules) and association of noisy labels to the training dataset. These noisy labels are aggregated to assign labels to the unlabeled data for downstream tasks. We have implemented several label aggregation approaches that aggregate the noisy labels and then train using the noisily labeled set in a cascaded manner. Our implementation also includes other approaches that jointly aggregate and train the model for text classification tasks. Thus, in our python package, we integrate several cascade and joint data-programming approaches while also providing the facility of data programming by letting the user define labeling functions or rules. The code and tutorial notebooks are available at \url{https://github.com/decile-team/spear}. Further, extensive documentation can be found at \url{https://spear-decile.readthedocs.io/}. Video tutorials demonstrating the usage of our package are available \url{https://youtube.com/playlist?list=PLW8agt_HvkVnOJoJAqBpaerFb-z-ZlqlP}. We also present some real-world use cases of SPEAR. | null | null | 10.18653/v1/2022.emnlp-demos.12 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,978
inproceedings | von-werra-etal-2022-evaluate | Evaluate {\&} Evaluation on the Hub: Better Best Practices for Data and Model Measurements | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.13/ | Von Werra, Leandro and Tunstall, Lewis and Thakur, Abhishek and Luccioni, Sasha and Thrush, Tristan and Piktus, Aleksandra and Marty, Felix and Rajani, Nazneen and Mustar, Victor and Ngo, Helen | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 128--136 | Evaluation is a key part of machine learning (ML), yet there is a lack of support and tooling to enable its informed and systematic practice. We introduce Evaluate and Evaluation on the Hub{---}a set of tools to facilitate the evaluation of models and datasets in ML. Evaluate is a library to support best practices for measurements, metrics, and comparisons of data and models. Its goal is to support reproducibility of evaluation, centralize and document the evaluation process, and broaden evaluation to cover more facets of model performance. It includes over 50 efficient canonical implementations for a variety of domains and scenarios, interactive documentation, and the ability to easily share implementations and outcomes. The library is available at \url{https://github.com/huggingface/evaluate}. In addition, we introduce Evaluation on the Hub, a platform that enables the large-scale evaluation of over 75,000 models and 11,000 datasets on the Hugging Face Hub, for free, at the click of a button. Evaluation on the Hub is available at \url{https://huggingface.co/autoevaluate}. | null | null | 10.18653/v1/2022.emnlp-demos.13 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,979 |
inproceedings | voigt-etal-2022-keywordscape | {K}eyword{S}cape: Visual Document Exploration using Contextualized Keyword Embeddings | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.14/ | Voigt, Henrik and Meuschke, Monique and Zarrie{\ss}, Sina and Lawonn, Kai | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 137--147 | Although contextualized word embeddings have led to great improvements in automatic language understanding, their potential for practical applications in document exploration and visualization has been little explored. Common visualization techniques used for, e.g., model analysis usually provide simple scatter plots of token-level embeddings that do not provide insight into their contextual use. In this work, we propose KeywordScape, a visual exploration tool that allows to overview, summarize, and explore the semantic content of documents based on their keywords. While existing keyword-based exploration tools assume that keywords have static meanings, our tool represents keywords in terms of their contextualized embeddings. Our application visualizes these embeddings in a semantic landscape that represents keywords as islands on a spherical map. This keeps keywords with similar context close to each other, allowing for a more precise search and comparison of documents. | null | null | 10.18653/v1/2022.emnlp-demos.14 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,980 |
inproceedings | xia-etal-2022-medconqa | {M}ed{C}on{QA}: Medical Conversational Question Answering System based on Knowledge Graphs | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.15/ | Xia, Fei and Li, Bin and Weng, Yixuan and He, Shizhu and Liu, Kang and Sun, Bin and Li, Shutao and Zhao, Jun | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 148--158 | The medical conversational system can relieve doctors' burden and improve healthcare efficiency, especially during the COVID-19 pandemic. However, the existing medical dialogue systems have the problems of weak scalability, insufficient knowledge, and poor controllability. Thus, we propose a medical conversational question-answering (CQA) system based on the knowledge graph, namely MedConQA, which is designed as a pipeline framework to maintain high flexibility. Our system utilizes automated medical procedures, including medical triage, consultation, image-text drug recommendation, and record. Each module has been open-sourced as a tool, which can be used alone or in combination, with robust scalability. Besides, to conduct knowledge-grounded dialogues with users, we first construct a Chinese Medical Knowledge Graph (CMKG) and collect a large-scale Chinese Medical CQA (CMCQA) dataset, and we design a series of methods for reasoning more intellectually. Finally, we use several state-of-the-art (SOTA) techniques to keep the final generated response more controllable, which is further assured by hospital and professional evaluations. We have open-sourced related code, datasets, web pages, and tools, hoping to advance future research. | null | null | 10.18653/v1/2022.emnlp-demos.15 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,981 |
inproceedings | shnarch-etal-2022-label | Label Sleuth: From Unlabeled Text to a Classifier in a Few Hours | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.16/ | Shnarch, Eyal and Halfon, Alon and Gera, Ariel and Danilevsky, Marina and Katsis, Yannis and Choshen, Leshem and Santillan Cooper, Martin and Epelboim, Dina and Zhang, Zheng and Wang, Dakuo and Yip, Lucy and Ein-Dor, Liat and Dankin, Lena and Shnayderman, Ilya and Aharonov, Ranit and Li, Yunyao and Liberman, Naftali and Levin Slesarev, Philip and Newton, Gwilym and Ofek-Koifman, Shila and Slonim, Noam and Katz, Yoav | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 159--168 | Label Sleuth is an open-source platform for building text classifiers which requires neither coding skills nor machine learning knowledge. Project website: \url{https://www.label-sleuth.org/}. Screencast video: \url{https://vimeo.com/735675461}. Text classification can be useful in many real-world scenarios, saving a lot of time for end users. However, building a classifier generally requires coding skills and ML knowledge, which poses a significant barrier for many potential users. To lift this barrier we introduce Label Sleuth, a free open-source system for labeling and creating text classifiers. The system is unique in (1) being a no-code system, making NLP accessible to non-experts; (2) guiding its users throughout the entire labeling process until they obtain their desired classifier, making the process efficient (from cold start to a classifier in a few hours); and (3) being open for configuration and extension by developers. By open-sourcing Label Sleuth we hope to build a community of users and developers that will widen the utilization of NLP models. | null | null | 10.18653/v1/2022.emnlp-demos.16 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,982
inproceedings | chan-etal-2022-agree | {AGR}e{E}: A system for generating Automated Grammar Reading Exercises | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.17/ | Chan, Sophia and Somasundaran, Swapna and Ghosh, Debanjan and Zhao, Mengxuan | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 169--177 | We describe the AGReE system, which takes user-submitted passages as input and automatically generates grammar practice exercises that can be completed while reading. Multiple-choice practice items are generated for a variety of different grammar constructs: punctuation, articles, conjunctions, pronouns, prepositions, verbs, and nouns. We also conducted a large-scale human evaluation with around 4,500 multiple-choice practice items. We notice that for 95{\%} of items, a majority of the five raters were able to identify the correct answer, and for 85{\%} of cases, raters agree that there is only one correct answer among the choices. Finally, the error analysis shows that raters made the most mistakes for punctuation and conjunctions. | null | null | 10.18653/v1/2022.emnlp-demos.17 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,983
inproceedings | wang-etal-2022-botsim | {B}ot{SIM}: An End-to-End Bot Simulation Framework for Commercial Task-Oriented Dialog Systems | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.18/ | Wang, Guangsen and Tan, Samson and Joty, Shafiq and Wu, Gang and Au, Jimmy and Hoi, Steven C.h. | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 178--190 | We present BotSIM, a data-efficient end-to-end Bot SIMulation framework for commercial task-oriented dialog (TOD) systems. BotSIM consists of three major components: 1) a Generator that can infer semantic-level dialog acts and entities from bot definitions and generate user queries via model-based paraphrasing; 2) an agenda-based dialog user Simulator (ABUS) to simulate conversations with the dialog agents; 3) a Remediator to analyze the simulated conversations, visualize the bot health reports and provide actionable remediation suggestions for bot troubleshooting and improvement. We demonstrate BotSIM`s effectiveness in end-to-end evaluation, remediation and multi-intent dialog generation via case studies on two commercial bot platforms. BotSIM`s {\textquotedblleft}generation-simulation-remediation{\textquotedblright} paradigm accelerates the end-to-end bot evaluation and iteration process by: 1) reducing manual test cases creation efforts; 2) enabling a holistic gauge of the bot in terms of NLU and end-to-end performance via extensive dialog simulation; 3) improving the bot troubleshooting process with actionable suggestions. A demo of our system can be found at \url{https://tinyurl.com/mryu74cd} and a demo video at \url{https://youtu.be/qLPJm6_UOKY}. | null | null | 10.18653/v1/2022.emnlp-demos.18 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,984 |
inproceedings | golobokov-etal-2022-deepgen | {D}eep{G}en: Diverse Search Ad Generation and Real-Time Customization | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.19/ | Golobokov, Konstantin and Chai, Junyi and Dong, Victor Ye and Gu, Mandy and Chi, Bingyu and Cao, Jie and Yan, Yulan and Liu, Yi | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 191--199 | Demo: \url{https://youtu.be/WQLL93TPB-c}. Abstract: We present DeepGen, a system deployed at web scale for automatically creating sponsored search advertisements (ads) for BingAds customers. We leverage state-of-the-art natural language generation (NLG) models to generate fluent ads from advertiser`s web pages in an abstractive fashion and solve practical issues such as factuality and inference speed. In addition, our system creates a customized ad in real-time in response to the user`s search query, therefore highlighting different aspects of the same product based on what the user is looking for. To achieve this, our system generates a diverse choice of smaller pieces of the ad ahead of time and, at query time, selects the most relevant ones to be stitched into a complete ad. We improve generation diversity by training a controllable NLG model to generate multiple ads for the same web page highlighting different selling points. Our system design further improves diversity horizontally by first running an ensemble of generation models trained with different objectives and then using a diversity sampling algorithm to pick a diverse subset of generation results for online selection. Experimental results show the effectiveness of our proposed system design. Our system is currently deployed in production, serving {\textasciitilde}4{\%} of global ads served in Bing. | null | null | 10.18653/v1/2022.emnlp-demos.19 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,985
inproceedings | murthy-etal-2022-accord | {ACC}o{RD}: A Multi-Document Approach to Generating Diverse Descriptions of Scientific Concepts | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.20/ | Murthy, Sonia and Lo, Kyle and King, Daniel and Bhagavatula, Chandra and Kuehl, Bailey and Johnson, Sophie and Borchardt, Jonathan and Weld, Daniel and Hope, Tom and Downey, Doug | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 200--213 | Systems that automatically define unfamiliar terms hold the promise of improving the accessibility of scientific texts, especially for readers who may lack prerequisite background knowledge. However, current systems assume a single {\textquotedblleft}best{\textquotedblright} description per concept, which fails to account for the many ways a concept can be described. We present ACCoRD, an end-to-end system tackling the novel task of generating sets of descriptions of scientific concepts. Our system takes advantage of the myriad ways a concept is mentioned across the scientific literature to produce distinct, diverse descriptions of target concepts in terms of different reference concepts. In a user study, we find that users prefer (1) descriptions produced by our end-to-end system, and (2) multiple descriptions to a single {\textquotedblleft}best{\textquotedblright} description. We release the ACCoRD corpus, which includes 1,275 labeled contexts and 1,787 expert-authored concept descriptions, to support research on our task. | null | null | 10.18653/v1/2022.emnlp-demos.20 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,986
inproceedings | zhang-etal-2022-automatic-comment | Automatic Comment Generation for {C}hinese Student Narrative Essays | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.21/ | Zhang, Zhexin and Guan, Jian and Xu, Guowei and Tian, Yixiang and Huang, Minlie | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 214--223 | Automatic essay evaluation can help reduce teachers' workload and enable students to refine their works rapidly. Previous studies focus mainly on giving discrete scores for either the holistic quality or several distinct traits. However, real-world teachers usually provide detailed comments in natural language, which are more informative than single scores. In this paper, we present the comment generation task, which aims to generate comments for specified segments from given student narrative essays. To tackle this task, we propose a planning-based generation model, which first plans a sequence of keywords, and then expands these keywords into a complete comment. To improve the correctness and informativeness of generated comments, we adopt the two following techniques: (1) training an error correction module to filter out incorrect keywords, and (2) recognizing fine-grained structured features from source essays to enrich the keywords. To support the evaluation of the task, we collect a human-written Chinese dataset, which contains 22,399 essay-comment pairs. Extensive experiments show that our model outperforms strong baselines significantly. Moreover, we exert explicit control on our model to generate comments to describe the strengths or weaknesses of inputs with a 91{\%} success rate. We deploy the model at \url{http://coai.cs.tsinghua.edu.cn/static/essayComment/}. A demo video is available at \url{https://youtu.be/IuFVk8dUxbI}. Our code and data are available at \url{https://github.com/thu-coai/EssayCommentGen}. | null | null | 10.18653/v1/2022.emnlp-demos.21 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,987
inproceedings | yu-etal-2022-mic | {MIC}: A Multi-task Interactive Curation Tool | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.22/ | Yu, Shi and Yang, Mingfeng and Parker, Jerrod and Brock, Stephen | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 224--231 | This paper introduces MIC, a Multi-task Interactive Curation tool, a human-machine collaborative curation tool for multiple NLP tasks. The tool aims to borrow recent advances in literature to solve pain-points in real NLP tasks. Firstly, it supports multiple projects with multiple users which enables collaborative annotations. Secondly, MIC allows easy integration of pre-trained models, rules, and dictionaries to auto label the text and speed up the labeling process. Thirdly, MIC supports annotation at different scales (span of characters and words, tokens and lines, or document) and different types (free text, sentence labels, entity labels, and relationship triplets) with easy GUI operations. | null | null | 10.18653/v1/2022.emnlp-demos.22 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,988 |
inproceedings | syed-etal-2022-summary | {SUMMARY} {WORKBENCH}: Unifying Application and Evaluation of Text Summarization Models | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.23/ | Syed, Shahbaz and Schwabe, Dominik and Potthast, Martin | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 232--241 | This paper presents Summary Workbench, a new tool for developing and evaluating text summarization models. New models and evaluation measures can be easily integrated as Docker-based plugins, allowing users to examine the quality of their summaries against any input and to evaluate them using various evaluation measures. Visual analyses combining multiple measures provide insights into the models' strengths and weaknesses. The tool is hosted at \url{https://tldr.demo.webis.de} and also supports local deployment for private resources. | null | null | 10.18653/v1/2022.emnlp-demos.23 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,989
inproceedings | hazim-etal-2022-arabic | {A}rabic Word-level Readability Visualization for Assisted Text Simplification | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.24/ | Hazim, Reem and Saddiki, Hind and Alhafni, Bashar and Al Khalil, Muhamed and Habash, Nizar | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 242--249 | This demo paper presents a Google Docs add-on for automatic Arabic word-level readability visualization. The add-on includes a lemmatization component that is connected to a five-level readability lexicon and Arabic WordNet-based substitution suggestions. The add-on can be used for assessing the reading difficulty of a text and identifying difficult words as part of the task of manual text simplification. We make our add-on and its code publicly available. | null | null | 10.18653/v1/2022.emnlp-demos.24 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,990 |
inproceedings | helwe-etal-2022-logitorch | {L}ogi{T}orch: A {P}y{T}orch-based library for logical reasoning on natural language | Che, Wanxiang and Shutova, Ekaterina | dec | 2022 | Abu Dhabi, UAE | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-demos.25/ | Helwe, Chadi and Clavel, Chlo{\'e} and Suchanek, Fabian | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 250--257 | Logical reasoning on natural language is one of the most challenging tasks for deep learning models. There has been an increasing interest in developing new benchmarks to evaluate the reasoning capabilities of language models such as BERT. In parallel, new models based on transformers have emerged to achieve ever better performance on these datasets. However, there is currently no library for logical reasoning that includes such benchmarks and models. This paper introduces LogiTorch, a PyTorch-based library that includes different logical reasoning benchmarks, different models, as well as utility functions such as co-reference resolution. This makes it easy to directly use the preprocessed datasets, to run the models, or to finetune them with different hyperparameters. LogiTorch is open source and can be found on GitHub. | null | null | 10.18653/v1/2022.emnlp-demos.25 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,991 |