Each row is a flattened bibliographic record: standard BibTeX-style fields (entry_type through note) plus a handful of sparsely populated metric-like columns and a pandas index column (__index_level_0__). Column types and value statistics:

column | type | values
---|---|---
entry_type | string | 4 classes
citation_key | string | 10-110 chars
title | string | 6-276 chars, nullable
editor | string | 723 classes
month | string | 69 classes
year | date | 1963-01-01 to 2022-01-01
address | string | 202 classes
publisher | string | 41 classes
url | string | 34-62 chars
author | string | 6-2.07k chars, nullable
booktitle | string | 861 classes
pages | string | 1-12 chars, nullable
abstract | string | 302-2.4k chars
journal | string | 5 classes
volume | string | 24 classes
doi | string | 20-39 chars, nullable
n | string | 3 classes
wer | string | 1 class
uas | null | always null
language | string | 3 classes
isbn | string | 34 classes
recall | null | always null
number | string | 8 classes
a | null | always null
b | null | always null
c | null | always null
k | null | always null
f1 | string | 4 classes
r | string | 2 classes
mci | string | 1 class
p | string | 2 classes
sd | string | 1 class
female | string | 0 classes (no non-null values)
m | string | 0 classes (no non-null values)
food | string | 1 class
f | string | 1 class
note | string | 20 classes
__index_level_0__ | int64 | 22k-106k
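For working with the table programmatically, here is a minimal sketch. It assumes the records have been exported to a local Parquet file named `acl_bib.parquet` (a hypothetical name; substitute whatever export or dataset you actually have); all column names come from the schema above.

```python
# Minimal sketch: load the records and pull out the core BibTeX fields.
# "acl_bib.parquet" is a hypothetical local export of this table.
import pandas as pd

df = pd.read_parquet("acl_bib.parquet")

core = ["entry_type", "citation_key", "title", "author", "booktitle",
        "year", "pages", "doi", "url"]
print(df[core].head())

# The metric-like columns (wer, uas, f1, ...) are null for almost all rows,
# so most uses will drop them up front.
bib = df.drop(columns=["n", "wer", "uas", "recall", "a", "b", "c", "k", "f1",
                       "r", "mci", "p", "sd", "female", "m", "food", "f"])

# Example filter: the EMNLP 2017 entries shown in the sample below.
emnlp17 = bib[bib["url"].str.startswith("https://aclanthology.org/D17-", na=False)]
print(len(emnlp17), "EMNLP 2017 records")
```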
Sample records:

entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | elsner-shain-2017-speech | Speech segmentation with a neural encoder model of working memory | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1112/ | Elsner, Micha and Shain, Cory | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1070--1080 | We present the first unsupervised LSTM speech segmenter as a cognitive model of the acquisition of words from unsegmented input. Cognitive biases toward phonological and syntactic predictability in speech are rooted in the limitations of human memory (Baddeley et al., 1998); compressed representations are easier to acquire and retain in memory. To model the biases introduced by these memory limitations, our system uses an LSTM-based encoder-decoder with a small number of hidden units, then searches for a segmentation that minimizes autoencoding loss. Linguistically meaningful segments (e.g. words) should share regular patterns of features that facilitate decoder performance in comparison to random segmentations, and we show that our learner discovers these patterns when trained on either phoneme sequences or raw acoustics. To our knowledge, ours is the first fully unsupervised system to be able to segment both symbolic and acoustic representations of speech. | null | null | 10.18653/v1/D17-1112 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,600 |
inproceedings | bulat-etal-2017-speaking | Speaking, Seeing, Understanding: Correlating semantic models with conceptual representation in the brain | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1113/ | Bulat, Luana and Clark, Stephen and Shutova, Ekaterina | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1081--1091 | Research in computational semantics is increasingly guided by our understanding of human semantic processing. However, semantic models are typically studied in the context of natural language processing system performance. In this paper, we present a systematic evaluation and comparison of a range of widely-used, state-of-the-art semantic models in their ability to predict patterns of conceptual representation in the human brain. Our results provide new insights both for the design of computational semantic models and for further research in cognitive neuroscience. | null | null | 10.18653/v1/D17-1113 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,601 |
inproceedings | li-etal-2017-multi | Multi-modal Summarization for Asynchronous Collection of Text, Image, Audio and Video | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1114/ | Li, Haoran and Zhu, Junnan and Ma, Cong and Zhang, Jiajun and Zong, Chengqing | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1092--1102 | The rapid increase of the multimedia data over the Internet necessitates multi-modal summarization from collections of text, image, audio and video. In this work, we propose an extractive Multi-modal Summarization (MMS) method which can automatically generate a textual summary given a set of documents, images, audios and videos related to a specific topic. The key idea is to bridge the semantic gaps between multi-modal contents. For audio information, we design an approach to selectively use its transcription. For vision information, we learn joint representations of texts and images using a neural network. Finally, all the multi-modal aspects are considered to generate the textual summary by maximizing the salience, non-redundancy, readability and coverage through budgeted optimization of submodular functions. We further introduce an MMS corpus in English and Chinese. The experimental results on this dataset demonstrate that our method outperforms other competitive baseline methods. | null | null | 10.18653/v1/D17-1114 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,602 |
inproceedings | zadeh-etal-2017-tensor | Tensor Fusion Network for Multimodal Sentiment Analysis | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1115/ | Zadeh, Amir and Chen, Minghai and Poria, Soujanya and Cambria, Erik and Morency, Louis-Philippe | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1103--1114 | Multimodal sentiment analysis is an increasingly popular research area, which extends the conventional language-based definition of sentiment analysis to a multimodal setup where other relevant modalities accompany language. In this paper, we pose the problem of multimodal sentiment analysis as modeling intra-modality and inter-modality dynamics. We introduce a novel model, termed Tensor Fusion Networks, which learns both such dynamics end-to-end. The proposed approach is tailored for the volatile nature of spoken language in online videos as well as accompanying gestures and voice. In the experiments, our model outperforms state-of-the-art approaches for both multimodal and unimodal sentiment analysis. | null | null | 10.18653/v1/D17-1115 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,603 |
inproceedings | joseph-etal-2017-constance | ConStance: Modeling Annotation Contexts to Improve Stance Classification | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1116/ | Joseph, Kenneth and Friedland, Lisa and Hobbs, William and Lazer, David and Tsur, Oren | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1115--1124 | Manual annotations are a prerequisite for many applications of machine learning. However, weaknesses in the annotation process itself are easy to overlook. In particular, scholars often choose what information to give to annotators without examining these decisions empirically. For subjective tasks such as sentiment analysis, sarcasm, and stance detection, such choices can impact results. Here, for the task of political stance detection on Twitter, we show that providing too little context can result in noisy and uncertain annotations, whereas providing too strong a context may cause it to outweigh other signals. To characterize and reduce these biases, we develop ConStance, a general model for reasoning about annotations across information conditions. Given conflicting labels produced by multiple annotators seeing the same instances with different contexts, ConStance simultaneously estimates gold standard labels and also learns a classifier for new instances. We show that the classifier learned by ConStance outperforms a variety of baselines at predicting political stance, while the model's interpretable parameters shed light on the effects of each context. | null | null | 10.18653/v1/D17-1116 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,604 |
inproceedings | pavlopoulos-etal-2017-deeper | Deeper Attention to Abusive User Content Moderation | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1117/ | Pavlopoulos, John and Malakasiotis, Prodromos and Androutsopoulos, Ion | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1125--1135 | Experimenting with a new dataset of 1.6M user comments from a news portal and an existing dataset of 115K Wikipedia talk page comments, we show that an RNN operating on word embeddings outperforms the previous state of the art in moderation, which used logistic regression or an MLP classifier with character or word n-grams. We also compare against a CNN operating on word embeddings, and a word-list baseline. A novel, deep, classification-specific attention mechanism improves the performance of the RNN further, and can also highlight suspicious words for free, without including highlighted words in the training data. We consider both fully automatic and semi-automatic moderation. | null | null | 10.18653/v1/D17-1117 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,605 |
inproceedings | dubossarsky-etal-2017-outta | Outta Control: Laws of Semantic Change and Inherent Biases in Word Representation Models | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1118/ | Dubossarsky, Haim and Weinshall, Daphna and Grossman, Eitan | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1136--1145 | This article evaluates three proposed laws of semantic change. Our claim is that in order to validate a putative law of semantic change, the effect should be observed in the genuine condition but absent or reduced in a suitably matched control condition, in which no change can possibly have taken place. Our analysis shows that the effects reported in recent literature must be substantially revised: (i) the proposed negative correlation between meaning change and word frequency is shown to be largely an artefact of the models of word representation used; (ii) the proposed negative correlation between meaning change and prototypicality is shown to be much weaker than what has been claimed in prior art; and (iii) the proposed positive correlation between meaning change and polysemy is largely an artefact of word frequency. These empirical observations are corroborated by analytical proofs that show that count representations introduce an inherent dependence on word frequency, and thus word frequency cannot be evaluated as an independent factor with these representations. | null | null | 10.18653/v1/D17-1118 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,606 |
inproceedings | lynn-etal-2017-human | Human Centered NLP with User-Factor Adaptation | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1119/ | Lynn, Veronica and Son, Youngseo and Kulkarni, Vivek and Balasubramanian, Niranjan and Schwartz, H. Andrew | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1146--1155 | We pose the general task of user-factor adaptation: adapting supervised learning models to real-valued user factors inferred from a background of their language, reflecting the idea that a piece of text should be understood within the context of the user that wrote it. We introduce a continuous adaptation technique, suited for real-valued user factors that are common in social science and bringing us closer to personalized NLP, adapting to each user uniquely. We apply this technique with known user factors including age, gender, and personality traits, as well as latent factors, evaluating over five tasks: POS tagging, PP-attachment, sentiment analysis, sarcasm detection, and stance detection. Adaptation provides statistically significant benefits for 3 of the 5 tasks: up to +1.2 points for PP-attachment, +3.4 points for sarcasm, and +3.0 points for stance. | null | null | 10.18653/v1/D17-1119 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,607 |
inproceedings | raganato-etal-2017-neural | Neural Sequence Learning Models for Word Sense Disambiguation | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1120/ | Raganato, Alessandro and Delli Bovi, Claudio and Navigli, Roberto | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1156--1167 | Word Sense Disambiguation models exist in many flavors. Even though supervised ones tend to perform best in terms of accuracy, they often lose ground to more flexible knowledge-based solutions, which do not require training by a word expert for every disambiguation target. To bridge this gap we adopt a different perspective and rely on sequence learning to frame the disambiguation problem: we propose and study in depth a series of end-to-end neural architectures directly tailored to the task, from bidirectional Long Short-Term Memory to encoder-decoder models. Our extensive evaluation over standard benchmarks and in multiple languages shows that sequence learning enables more versatile all-words models that consistently lead to state-of-the-art results, even against word experts with engineered features. | null | null | 10.18653/v1/D17-1120 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,608 |
inproceedings | rosin-etal-2017-learning | Learning Word Relatedness over Time | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1121/ | Rosin, Guy D. and Adar, Eytan and Radinsky, Kira | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1168--1178 | Search systems are often focused on providing relevant results for the "now", assuming both corpora and user needs that focus on the present. However, many corpora today reflect significant longitudinal collections ranging from 20 years of the Web to hundreds of years of digitized newspapers and books. Understanding the temporal intent of the user and retrieving the most relevant historical content has become a significant challenge. Common search features, such as query expansion, leverage the relationship between terms but cannot function well across all times when relationships vary temporally. In this work, we introduce a temporal relationship model that is extracted from longitudinal data collections. The model supports the task of identifying, given two words, when they relate to each other. We present an algorithmic framework for this task and show its application for the task of query expansion, achieving high gain. | null | null | 10.18653/v1/D17-1121 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,609 |
inproceedings | shen-etal-2017-inter | Inter-Weighted Alignment Network for Sentence Pair Modeling | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1122/ | Shen, Gehui and Yang, Yunlun and Deng, Zhi-Hong | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1179--1189 | Sentence pair modeling is a crucial problem in the field of natural language processing. In this paper, we propose a model to measure the similarity of a sentence pair focusing on the interaction information. We utilize the word-level similarity matrix to discover fine-grained alignment of two sentences. It should be emphasized that each word in a sentence has a different importance from the perspective of semantic composition, so we exploit two novel and efficient strategies to explicitly calculate a weight for each word. Although the proposed model only uses a sequential LSTM for sentence modeling, without any external resources such as syntactic parse trees or additional lexicon features, experimental results show that our model achieves state-of-the-art performance on three datasets of two tasks. | null | null | 10.18653/v1/D17-1122 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,610 |
inproceedings | wang-etal-2017-short | A Short Survey on Taxonomy Learning from Text Corpora: Issues, Resources and Recent Advances | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1123/ | Wang, Chengyu and He, Xiaofeng and Zhou, Aoying | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1190--1203 | A taxonomy is a semantic hierarchy, consisting of concepts linked by is-a relations. While a large number of taxonomies have been constructed from human-compiled resources (e.g., Wikipedia), learning taxonomies from text corpora has received a growing interest and is essential for long-tailed and domain-specific knowledge acquisition. In this paper, we overview recent advances on taxonomy construction from free texts, reorganizing relevant subtasks into a complete framework. We also overview resources for evaluation and discuss challenges for future research. | null | null | 10.18653/v1/D17-1123 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,611 |
inproceedings | liu-etal-2017-idiom | Idiom-Aware Compositional Distributed Semantics | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1124/ | Liu, Pengfei and Qian, Kaiyu and Qiu, Xipeng and Huang, Xuanjing | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1204--1213 | Idioms are peculiar linguistic constructions that impose great challenges for representing the semantics of language, especially in current prevailing end-to-end neural models, which assume that the semantics of a phrase or sentence can be literally composed from its constitutive words. In this paper, we propose an idiom-aware distributed semantic model to build representation of sentences on the basis of understanding their contained idioms. Our models are grounded in the literal-first psycholinguistic hypothesis, which can adaptively learn semantic compositionality of a phrase literally or idiomatically. To better evaluate our models, we also construct an idiom-enriched sentiment classification dataset with considerable scale and abundant peculiarities of idioms. The qualitative and quantitative experimental analyses demonstrate the efficacy of our models. | null | null | 10.18653/v1/D17-1124 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,612 |
inproceedings | zhang-etal-2017-macro | Macro Grammars and Holistic Triggering for Efficient Semantic Parsing | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1125/ | Zhang, Yuchen and Pasupat, Panupong and Liang, Percy | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1214--1223 | To learn a semantic parser from denotations, a learning algorithm must search over a combinatorially large space of logical forms for ones consistent with the annotated denotations. We propose a new online learning algorithm that searches faster as training progresses. The two key ideas are using macro grammars to cache the abstract patterns of useful logical forms found thus far, and holistic triggering to efficiently retrieve the most relevant patterns based on sentence similarity. On the WikiTableQuestions dataset, we first expand the search space of an existing model to improve the state-of-the-art accuracy from 38.7% to 42.7%, and then use macro grammars and holistic triggering to achieve an 11x speedup and an accuracy of 43.7%. | null | null | 10.18653/v1/D17-1125 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,613 |
inproceedings | lan-etal-2017-continuously | A Continuously Growing Dataset of Sentential Paraphrases | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1126/ | Lan, Wuwei and Qiu, Siyu and He, Hua and Xu, Wei | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1224--1234 | A major challenge in paraphrase research is the lack of parallel corpora. In this paper, we present a new method to collect large-scale sentential paraphrases from Twitter by linking tweets through shared URLs. The main advantage of our method is its simplicity, as it gets rid of the classifier or human in the loop needed to select data before annotation and subsequent application of paraphrase identification algorithms in the previous work. We present the largest human-labeled paraphrase corpus to date of 51,524 sentence pairs and the first cross-domain benchmarking for automatic paraphrase identification. In addition, we show that more than 30,000 new sentential paraphrases can be easily and continuously captured every month at ~70% precision, and demonstrate their utility for downstream NLP tasks through phrasal paraphrase extraction. We make our code and data freely available. | null | null | 10.18653/v1/D17-1126 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,614 |
inproceedings | su-yan-2017-cross | Cross-domain Semantic Parsing via Paraphrasing | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1127/ | Su, Yu and Yan, Xifeng | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1235--1246 | Existing studies on semantic parsing mainly focus on the in-domain setting. We formulate cross-domain semantic parsing as a domain adaptation problem: train a semantic parser on some source domains and then adapt it to the target domain. Due to the diversity of logical forms in different domains, this problem presents unique and intriguing challenges. By converting logical forms into canonical utterances in natural language, we reduce semantic parsing to paraphrasing, and develop an attentive sequence-to-sequence paraphrase model that is general and flexible to adapt to different domains. We discover two problems, small micro variance and large macro variance, of pre-trained word embeddings that hinder their direct use in neural networks, and propose standardization techniques as a remedy. On the popular Overnight dataset, which contains eight domains, we show that both cross-domain training and standardized pre-trained word embeddings can bring significant improvement. | null | null | 10.18653/v1/D17-1127 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,615 |
inproceedings | yang-mitchell-2017-joint | A Joint Sequential and Relational Model for Frame-Semantic Parsing | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1128/ | Yang, Bishan and Mitchell, Tom | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1247--1256 | We introduce a new method for frame-semantic parsing that significantly improves the prior state of the art. Our model leverages the advantages of a deep bidirectional LSTM network which predicts semantic role labels word by word and a relational network which predicts semantic roles for individual text expressions in relation to a predicate. The two networks are integrated into a single model via knowledge distillation, and a unified graphical model is employed to jointly decode frames and semantic roles during inference. Experiments on the standard FrameNet data show that our model significantly outperforms existing neural and non-neural approaches, achieving a 5.7 F1 gain over the current state of the art, for full frame structure extraction. | null | null | 10.18653/v1/D17-1128 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,616 |
inproceedings | wang-xue-2017-getting | Getting the Most out of AMR Parsing | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1129/ | Wang, Chuan and Xue, Nianwen | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1257--1268 | This paper proposes to tackle the AMR parsing bottleneck by improving two components of an AMR parser: concept identification and alignment. We first build a Bidirectional LSTM based concept identifier that is able to incorporate richer contextual information to learn sparse AMR concept labels. We then extend an HMM-based word-to-concept alignment model with graph distance distortion and a rescoring method during decoding to incorporate the structural information in the AMR graph. We show integrating the two components into an existing AMR parser results in consistently better performance over the state of the art on various datasets. | null | null | 10.18653/v1/D17-1129 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,617 |
inproceedings | ballesteros-al-onaizan-2017-amr | AMR Parsing using Stack-LSTMs | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1130/ | Ballesteros, Miguel and Al-Onaizan, Yaser | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1269--1275 | We present a transition-based AMR parser that directly generates AMR parses from plain text. We use Stack-LSTMs to represent our parser state and make decisions greedily. In our experiments, we show that our parser achieves very competitive scores on English using only AMR training data. Adding additional information, such as POS tags and dependency trees, improves the results further. | null | null | 10.18653/v1/D17-1130 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,618 |
inproceedings | zhao-etal-2017-end | An End-to-End Deep Framework for Answer Triggering with a Novel Group-Level Objective | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1131/ | Zhao, Jie and Su, Yu and Guan, Ziyu and Sun, Huan | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1276--1282 | Given a question and a set of answer candidates, answer triggering determines whether the candidate set contains any correct answers. If yes, it then outputs a correct one. In contrast to existing pipeline methods which first consider individual candidate answers separately and then make a prediction based on a threshold, we propose an end-to-end deep neural network framework, which is trained by a novel group-level objective function that directly optimizes the answer triggering performance. Our objective function penalizes three potential types of error and allows training the framework in an end-to-end manner. Experimental results on the WikiQA benchmark show that our framework outperforms the state of the art by a 6.6% absolute gain under the F1 measure. | null | null | 10.18653/v1/D17-1131 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,619 |
inproceedings | liu-lapata-2017-learning | Learning Contextually Informed Representations for Linear-Time Discourse Parsing | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1133/ | Liu, Yang and Lapata, Mirella | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1289--1298 | Recent advances in RST discourse parsing have focused on two modeling paradigms: (a) high order parsers which jointly predict the tree structure of the discourse and the relations it encodes; or (b) linear-time parsers which are efficient but mostly based on local features. In this work, we propose a linear-time parser with a novel way of representing discourse constituents based on neural networks which takes into account global contextual information and is able to capture long-distance dependencies. Experimental results show that our parser obtains state-of-the-art performance on benchmark datasets, while being efficient (with time complexity linear in the number of sentences in the document) and requiring minimal feature engineering. | null | null | 10.18653/v1/D17-1133 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,621 |
inproceedings | lan-etal-2017-multi | Multi-task Attention-based Neural Networks for Implicit Discourse Relationship Representation and Identification | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1134/ | Lan, Man and Wang, Jianxiang and Wu, Yuanbin and Niu, Zheng-Yu and Wang, Haifeng | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1299--1308 | We present a novel multi-task attention-based neural network model to address implicit discourse relationship representation and identification through two types of representation learning, an attention-based neural network for learning discourse relationship representation with two arguments and a multi-task framework for learning knowledge from annotated and unannotated corpora. Extensive experiments have been performed on two benchmark corpora (i.e., PDTB and CoNLL-2016 datasets). Experimental results show that our proposed model outperforms the state-of-the-art systems on benchmark corpora. | null | null | 10.18653/v1/D17-1134 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,622 |
inproceedings | yin-etal-2017-chinese | Chinese Zero Pronoun Resolution with Deep Memory Network | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1135/ | Yin, Qingyu and Zhang, Yu and Zhang, Weinan and Liu, Ting | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1309--1318 | Existing approaches for Chinese zero pronoun resolution typically utilize only syntactical and lexical features while ignoring semantic information. The fundamental reason is that zero pronouns have no descriptive information, which brings difficulty in explicitly capturing their semantic similarities with antecedents. Meanwhile, representing zero pronouns is challenging since they are merely gaps that convey no actual content. In this paper, we address this issue by building a deep memory network that is capable of encoding zero pronouns into vector representations with information obtained from their contexts and potential antecedents. Consequently, our resolver takes advantage of semantic information by using these continuous distributed representations. Experiments on the OntoNotes 5.0 dataset show that the proposed memory network could substantially outperform the state-of-the-art systems in various experimental settings. | null | null | 10.18653/v1/D17-1135 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,623 |
inproceedings | morey-etal-2017-much | How much progress have we made on RST discourse parsing? A replication study of recent results on the RST-DT | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1136/ | Morey, Mathieu and Muller, Philippe and Asher, Nicholas | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1319--1324 | This article evaluates purported progress over the past years in RST discourse parsing. Several studies report a relative error reduction of 24 to 51% on all metrics that authors attribute to the introduction of distributed representations of discourse units. We replicate the standard evaluation of 9 parsers, 5 of which use distributed representations, from 8 studies published between 2013 and 2017, using their predictions on the test set of the RST-DT. Our main finding is that most recently reported increases in RST discourse parser performance are an artefact of differences in implementations of the evaluation procedure. We evaluate all these parsers with the standard Parseval procedure to provide a more accurate picture of the actual RST discourse parsers' performance in standard evaluation settings. Under this more stringent procedure, the gains attributable to distributed representations represent at most a 16% relative error reduction on fully-labelled structures. | null | null | 10.18653/v1/D17-1136 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,624 |
inproceedings | loaiciga-etal-2017-disambiguating | What is it? Disambiguating the different readings of the pronoun 'it' | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1137/ | Loáiciga, Sharid and Guillou, Liane and Hardmeier, Christian | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1325--1331 | In this paper, we address the problem of predicting one of three functions for the English pronoun 'it': anaphoric, event reference or pleonastic. This disambiguation is valuable in the context of machine translation and coreference resolution. We present experiments using a MAXENT classifier trained on gold-standard data and self-training experiments of an RNN trained on silver-standard data, annotated using the MAXENT classifier. Lastly, we report on an analysis of the strengths of these two models. | null | null | 10.18653/v1/D17-1137 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,625 |
inproceedings | heinzerling-etal-2017-revisiting | Revisiting Selectional Preferences for Coreference Resolution | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1138/ | Heinzerling, Benjamin and Moosavi, Nafise Sadat and Strube, Michael | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1332--1339 | Selectional preferences have long been claimed to be essential for coreference resolution. However, they are modeled only implicitly by current coreference resolvers. We propose a dependency-based embedding model of selectional preferences which allows fine-grained compatibility judgments with high coverage. Incorporating our model improves performance, matching state-of-the-art results of a more complex system. However, it comes with a cost that makes it debatable how worthwhile such improvements are. | null | null | 10.18653/v1/D17-1138 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,626 |
inproceedings | wang-etal-2017-learning | Learning to Rank Semantic Coherence for Topic Segmentation | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1139/ | Wang, Liang and Li, Sujian and Lv, Yajuan and Wang, Houfeng | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1340--1344 | Topic segmentation plays an important role for discourse parsing and information retrieval. Due to the absence of training data, previous work mainly adopts unsupervised methods to rank semantic coherence between paragraphs for topic segmentation. In this paper, we present an intuitive and simple idea to automatically create a "quasi" training dataset, which includes a large amount of text pairs from the same or different documents with different semantic coherence. With the training corpus, we design a symmetric CNN neural network to model text pairs and rank the semantic coherence within the learning to rank framework. Experiments show that our algorithm is able to achieve competitive performance over strong baselines on several real-world datasets. | null | null | 10.18653/v1/D17-1139 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,627 |
inproceedings | shnarch-etal-2017-grasp | GRASP: Rich Patterns for Argumentation Mining | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1140/ | Shnarch, Eyal and Levy, Ran and Raykar, Vikas and Slonim, Noam | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1345--1350 | GRASP (GReedy Augmented Sequential Patterns) is an algorithm for automatically extracting patterns that characterize subtle linguistic phenomena. To that end, GRASP augments each term of input text with multiple layers of linguistic information. These different facets of the text terms are systematically combined to reveal rich patterns. We report highly promising experimental results in several challenging text analysis tasks within the field of Argumentation Mining. We believe that GRASP is general enough to be useful for other domains too. For example, each of the following sentences includes a claim for a [topic]: 1. Opponents often argue that the open primary is unconstitutional. [Open Primaries] 2. Prof. Smith suggested that affirmative action devalues the accomplishments of the chosen. [Affirmative Action] 3. The majority stated that the First Amendment does not guarantee the right to offend others. [Freedom of Speech] These sentences share almost no words in common, however, they are similar at a more abstract level. A human observer may notice the following underlying common structure, or pattern: [someone][argue/suggest/state][that][topic term][sentiment term]. GRASP aims to automatically capture such underlying structures of the given data. For the above examples it finds the pattern [noun][express][that][noun,topic][sentiment], where [express] stands for all its (in)direct hyponyms, and [noun,topic] means a noun which is also related to the topic. | null | null | 10.18653/v1/D17-1140 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,628 |
inproceedings | al-khatib-etal-2017-patterns | Patterns of Argumentation Strategies across Topics | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1141/ | Al-Khatib, Khalid and Wachsmuth, Henning and Hagen, Matthias and Stein, Benno | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1351--1357 | This paper presents an analysis of argumentation strategies in news editorials within and across topics. Given nearly 29,000 argumentative editorials from the New York Times, we develop two machine learning models, one for determining an editorial`s topic, and one for identifying evidence types in the editorial. Based on the distribution and structure of the identified types, we analyze the usage patterns of argumentation strategies among 12 different topics. We detect several common patterns that provide insights into the manifestation of argumentation strategies. Also, our experiments reveal clear correlations between the topics and the detected patterns. | null | null | 10.18653/v1/D17-1141 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,629 |
inproceedings | liu-etal-2017-using | Using Argument-based Features to Predict and Analyse Review Helpfulness | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1142/ | Liu, Haijing and Gao, Yang and Lv, Pin and Li, Mengxue and Geng, Shiqiang and Li, Minglan and Wang, Hao | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1358--1363 | We study the helpful product reviews identification problem in this paper. We observe that the evidence-conclusion discourse relations, also known as arguments, often appear in product reviews, and we hypothesise that some argument-based features, e.g. the percentage of argumentative sentences, the evidence-conclusion ratios, are good indicators of helpful reviews. To validate this hypothesis, we manually annotate arguments in 110 hotel reviews, and investigate the effectiveness of several combinations of argument-based features. Experiments suggest that, when being used together with the argument-based features, the state-of-the-art baseline features can enjoy a performance boost (in terms of F1) of 11.01% on average. | null | null | 10.18653/v1/D17-1142 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,630 |
inproceedings | potash-etal-2017-heres | Here's My Point: Joint Pointer Architecture for Argument Mining | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1143/ | Potash, Peter and Romanov, Alexey and Rumshisky, Anna | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1364--1373 | In order to determine argument structure in text, one must understand how individual components of the overall argument are linked. This work presents the first neural network-based approach to link extraction in argument mining. Specifically, we propose a novel architecture that applies Pointer Network sequence-to-sequence attention modeling to structural prediction in discourse parsing tasks. We then develop a joint model that extends this architecture to simultaneously address the link extraction task and the classification of argument components. The proposed joint model achieves state-of-the-art results on two separate evaluation corpora, showing far superior performance than the previously proposed corpus-specific and heavily feature-engineered models. Furthermore, our results demonstrate that jointly optimizing for both tasks is crucial for high performance. | null | null | 10.18653/v1/D17-1143 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,631 |
inproceedings | cocarascu-toni-2017-identifying | Identifying attack and support argumentative relations using deep learning | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1144/ | Cocarascu, Oana and Toni, Francesca | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1374--1379 | We propose a deep learning architecture to capture argumentative relations of attack and support from one piece of text to another, of the kind that naturally occur in a debate. The architecture uses two (unidirectional or bidirectional) Long Short-Term Memory networks and (trained or non-trained) word embeddings, and considerably improves upon existing techniques that use syntactic features and supervised classifiers for the same form of (relation-based) argument mining. | null | null | 10.18653/v1/D17-1144 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,632 |
inproceedings | sperber-etal-2017-neural | Neural Lattice-to-Sequence Models for Uncertain Inputs | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1145/ | Sperber, Matthias and Neubig, Graham and Niehues, Jan and Waibel, Alex | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1380--1389 | The input to a neural sequence-to-sequence model is often determined by an up-stream system, e.g. a word segmenter, part of speech tagger, or speech recognizer. These up-stream models are potentially error-prone. Representing inputs through word lattices allows making this uncertainty explicit by capturing alternative sequences and their posterior probabilities in a compact form. In this work, we extend the TreeLSTM (Tai et al., 2015) into a LatticeLSTM that is able to consume word lattices, and can be used as encoder in an attentional encoder-decoder model. We integrate lattice posterior scores into this architecture by extending the TreeLSTM's child-sum and forget gates and introducing a bias term into the attention mechanism. We experiment with speech translation lattices and report consistent improvements over baselines that translate either the 1-best hypothesis or the lattice without posterior scores. | null | null | 10.18653/v1/D17-1145 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,633 |
inproceedings | feng-etal-2017-memory | Memory-augmented Neural Machine Translation | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1146/ | Feng, Yang and Zhang, Shiyue and Zhang, Andi and Wang, Dong and Abel, Andrew | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1390--1399 | Neural machine translation (NMT) has achieved notable success in recent times, however it is also widely recognized that this approach has limitations with handling infrequent words and word pairs. This paper presents a novel memory-augmented NMT (M-NMT) architecture, which stores knowledge about how words (usually infrequently encountered ones) should be translated in a memory and then utilizes them to assist the neural model. We use this memory mechanism to combine the knowledge learned from a conventional statistical machine translation system and the rules learned by an NMT system, and also propose a solution for out-of-vocabulary (OOV) words based on this framework. Our experiments on two Chinese-English translation tasks demonstrated that the M-NMT architecture outperformed the NMT baseline by 9.0 and 2.7 BLEU points on the two tasks, respectively. Additionally, we found this architecture resulted in a much more effective OOV treatment compared to competitive methods. | null | null | 10.18653/v1/D17-1146 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,634 |
inproceedings | van-der-wees-etal-2017-dynamic | Dynamic Data Selection for Neural Machine Translation | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1147/ | van der Wees, Marlies and Bisazza, Arianna and Monz, Christof | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1400--1410 | Intelligent selection of training data has proven a successful technique to simultaneously increase training efficiency and translation performance for phrase-based machine translation (PBMT). With the recent increase in popularity of neural machine translation (NMT), we explore in this paper to what extent and how NMT can also benefit from data selection. While state-of-the-art data selection (Axelrod et al., 2011) consistently performs well for PBMT, we show that gains are substantially lower for NMT. Next, we introduce 'dynamic data selection' for NMT, a method in which we vary the selected subset of training data between different training epochs. Our experiments show that the best results are achieved when applying a technique we call 'gradual fine-tuning', with improvements up to +2.6 BLEU over the original data selection approach and up to +3.1 BLEU over a general baseline. | null | null | 10.18653/v1/D17-1147 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,635 |
inproceedings | dahlmann-etal-2017-neural | Neural Machine Translation Leveraging Phrase-based Models in a Hybrid Search | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1148/ | Dahlmann, Leonard and Matusov, Evgeny and Petrushkov, Pavel and Khadivi, Shahram | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1411--1420 | In this paper, we introduce a hybrid search for attention-based neural machine translation (NMT). A target phrase learned with statistical MT models extends a hypothesis in the NMT beam search when the attention of the NMT model focuses on the source words translated by this phrase. Phrases added in this way are scored with the NMT model, but also with SMT features including phrase-level translation probabilities and a target language model. Experimental results on German-to-English news domain and English-to-Russian e-commerce domain translation tasks show that using phrase-based models in NMT search improves MT quality by up to 2.3% BLEU absolute as compared to a strong NMT baseline. | null | null | 10.18653/v1/D17-1148 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,636 |
inproceedings | wang-etal-2017-translating | Translating Phrases in Neural Machine Translation | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1149/ | Wang, Xing and Tu, Zhaopeng and Xiong, Deyi and Zhang, Min | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1421--1431 | Phrases play an important role in natural language understanding and machine translation (Sag et al., 2002; Villavicencio et al., 2005). However, it is difficult to integrate them into current neural machine translation (NMT) which reads and generates sentences word by word. In this work, we propose a method to translate phrases in NMT by integrating a phrase memory storing target phrases from a phrase-based statistical machine translation (SMT) system into the encoder-decoder architecture of NMT. At each decoding step, the phrase memory is first re-written by the SMT model, which dynamically generates relevant target phrases with contextual information provided by the NMT model. Then the proposed model reads the phrase memory to make probability estimations for all phrases in the phrase memory. If phrase generation is carried on, the NMT decoder selects an appropriate phrase from the memory to perform phrase translation and updates its decoding state by consuming the words in the selected phrase. Otherwise, the NMT decoder generates a word from the vocabulary as the general NMT decoder does. Experiment results on the Chinese to English translation show that the proposed model achieves significant improvements over the baseline on various test sets. | null | null | 10.18653/v1/D17-1149 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,637 |
inproceedings | yang-etal-2017-towards | Towards Bidirectional Hierarchical Representations for Attention-based Neural Machine Translation | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1150/ | Yang, Baosong and Wong, Derek F. and Xiao, Tong and Chao, Lidia S. and Zhu, Jingbo | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1432--1441 | This paper proposes a hierarchical attentional neural translation model which focuses on enhancing source-side hierarchical representations by covering both local and global semantic information using a bidirectional tree-based encoder. To maximize the predictive likelihood of target words, a weighted variant of an attention mechanism is used to balance the attentive information between lexical and phrase vectors. Using a tree-based rare word encoding, the proposed model is extended to sub-word level to alleviate the out-of-vocabulary (OOV) problem. Empirical results reveal that the proposed model significantly outperforms sequence-to-sequence attention-based and tree-based neural translation models in English-Chinese translation tasks. | null | null | 10.18653/v1/D17-1150 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,638 |
inproceedings | britz-etal-2017-massive | Massive Exploration of Neural Machine Translation Architectures | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1151/ | Britz, Denny and Goldie, Anna and Luong, Minh-Thang and Le, Quoc | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1442--1451 | Neural Machine Translation (NMT) has shown remarkable progress over the past few years, with production systems now being deployed to end-users. As the field is moving rapidly, it has become unclear which elements of NMT architectures have a significant impact on translation quality. In this work, we present a large-scale analysis of the sensitivity of NMT architectures to common hyperparameters. We report empirical results and variance numbers for several hundred experimental runs, corresponding to over 250,000 GPU hours on a WMT English to German translation task. Our experiments provide practical insights into the relative importance of factors such as embedding size, network depth, RNN cell type, residual connections, attention mechanism, and decoding heuristics. As part of this contribution, we also release an open-source NMT framework in TensorFlow to make it easy for others to reproduce our results and perform their own experiments. | null | null | 10.18653/v1/D17-1151 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,639 |
inproceedings | wijaya-etal-2017-learning | Learning Translations via Matrix Completion | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1152/ | Wijaya, Derry Tanti and Callahan, Brendan and Hewitt, John and Gao, Jie and Ling, Xiao and Apidianaki, Marianna and Callison-Burch, Chris | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1452--1463 | Bilingual Lexicon Induction is the task of learning word translations without bilingual parallel corpora. We model this task as a matrix completion problem, and present an effective and extendable framework for completing the matrix. This method harnesses diverse bilingual and monolingual signals, each of which may be incomplete or noisy. Our model achieves state-of-the-art performance for both high and low resource languages. | null | null | 10.18653/v1/D17-1152 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,640 |
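Modeling lexicon induction as matrix completion can be illustrated with a toy low-rank factorization over a partially observed (source word x target word) translation matrix. This is a generic sketch of the matrix-completion idea under made-up data, not the paper's multi-signal framework:

```python
import numpy as np

# Toy matrix-completion sketch: recover missing translation scores by
# low-rank factorization of a partially observed source x target matrix.
rng = np.random.default_rng(0)
n_src, n_tgt, rank = 50, 60, 8
M = (rng.random((n_src, n_tgt)) < 0.02).astype(float)  # observed translation pairs
observed = rng.random((n_src, n_tgt)) < 0.3            # mask of known cells

U = 0.1 * rng.standard_normal((n_src, rank))
V = 0.1 * rng.standard_normal((n_tgt, rank))
lr = 0.05
for _ in range(200):
    R = (U @ V.T - M) * observed    # reconstruction error on observed cells only
    gU, gV = R @ V, R.T @ U
    U -= lr * gU
    V -= lr * gV

scores = U @ V.T                    # completed matrix: translation candidates
```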
inproceedings | nguyen-etal-2017-reinforcement | Reinforcement Learning for Bandit Neural Machine Translation with Simulated Human Feedback | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1153/ | Nguyen, Khanh and Daum{\'e} III, Hal and Boyd-Graber, Jordan | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1464--1474 | Machine translation is a natural candidate problem for reinforcement learning from human feedback: users provide quick, dirty ratings on candidate translations to guide a system to improve. Yet, current neural machine translation training focuses on expensive human-generated reference translations. We describe a reinforcement learning algorithm that improves neural machine translation systems from simulated human feedback. Our algorithm combines the advantage actor-critic algorithm (Mnih et al., 2016) with the attention-based neural encoder-decoder architecture (Luong et al., 2015). This algorithm (a) is well-designed for problems with a large action space and delayed rewards, (b) effectively optimizes traditional corpus-level machine translation metrics, and (c) is robust to skewed, high-variance, granular feedback modeled after actual human behaviors. | null | null | 10.18653/v1/D17-1153 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,641 |
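The advantage actor-critic update at the heart of this approach can be sketched in a few lines: sample an action from the policy, treat a simulated rating as the reward, and use the critic's value estimate as a baseline. The toy policy and critic below stand in for the NMT decoder and are assumptions for illustration only:

```python
import torch
import torch.nn as nn

# Schematic A2C step with bandit feedback: one decoder state, one sampled
# word, one scalar "human" rating. Dimensions and the reward are toy values.
vocab, hidden = 100, 32
policy = nn.Linear(hidden, vocab)   # stands in for the NMT decoder output layer
critic = nn.Linear(hidden, 1)       # predicts the expected rating (baseline)
opt = torch.optim.Adam(list(policy.parameters()) + list(critic.parameters()), lr=1e-3)

state = torch.randn(1, hidden)                        # decoder state
dist = torch.distributions.Categorical(logits=policy(state))
action = dist.sample()                                # sampled target word
reward = torch.tensor([0.7])                          # simulated noisy rating

value = critic(state).squeeze(-1)
advantage = (reward - value).detach()                 # reward minus baseline
loss = -dist.log_prob(action) * advantage + (reward - value).pow(2)
opt.zero_grad()
loss.mean().backward()
opt.step()
```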
inproceedings | zhang-etal-2017-towards | Towards Compact and Fast Neural Machine Translation Using a Combined Method | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1154/ | Zhang, Xiaowei and Chen, Wei and Wang, Feng and Xu, Shuang and Xu, Bo | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1475--1481 | Neural Machine Translation (NMT) places a heavy burden on computation and memory. It is a challenge to deploy NMT models on devices with limited computation and memory budgets. This paper presents a four-stage pipeline to compress the model and speed up decoding for NMT. Our method first introduces a compact architecture based on a convolutional encoder and weight-shared embeddings. Then weight pruning is applied to obtain a sparse model. Next, we propose a fast sequence interpolation approach which enables greedy decoding to achieve performance on par with beam search. Hence, the time-consuming beam search can be replaced by simple greedy decoding. Finally, vocabulary selection is used to reduce the computation of the softmax layer. Our final model achieves a 10x speedup and a 17x reduction in parameters, requires less than 35MB of storage, and performs comparably to the baseline model. | null | null | 10.18653/v1/D17-1154 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,642
inproceedings | wang-etal-2017-instance | Instance Weighting for Neural Machine Translation Domain Adaptation | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1155/ | Wang, Rui and Utiyama, Masao and Liu, Lemao and Chen, Kehai and Sumita, Eiichiro | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1482--1488 | Instance weighting has been widely applied to phrase-based machine translation domain adaptation. However, it is challenging to apply it directly to Neural Machine Translation (NMT), because NMT is not a linear model. In this paper, two instance weighting techniques, i.e., sentence weighting and domain weighting with a dynamic weight learning strategy, are proposed for NMT domain adaptation. Empirical results on the IWSLT English-German/French tasks show that the proposed methods can substantially improve NMT performance by up to 2.7-6.7 BLEU points, outperforming the existing baselines by up to 1.6-3.6 BLEU points. | null | null | 10.18653/v1/D17-1155 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,643
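Sentence weighting of this kind amounts to scaling each sentence's training loss by a domain-relevance weight. A minimal sketch with placeholder weights (real systems would derive them from signals such as in-domain vs. out-of-domain language model scores):

```python
import torch
import torch.nn.functional as F

# Instance-weighted NMT loss sketch: one weight per sentence scales its
# cross-entropy. Logits, targets, and weights are random placeholders.
logits = torch.randn(4, 7, 100)                # (batch, target_len, vocab)
targets = torch.randint(0, 100, (4, 7))
weights = torch.tensor([1.0, 0.2, 0.8, 0.5])   # domain-relevance per sentence

tok_loss = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none")
sent_loss = tok_loss.mean(dim=1)               # per-sentence loss
loss = (weights * sent_loss).sum() / weights.sum()
```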
inproceedings | miceli-barone-etal-2017-regularization | Regularization techniques for fine-tuning in neural machine translation | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1156/ | Miceli Barone, Antonio Valerio and Haddow, Barry and Germann, Ulrich and Sennrich, Rico | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1489--1494 | We investigate techniques for supervised domain adaptation for neural machine translation where an existing model trained on a large out-of-domain dataset is adapted to a small in-domain dataset. In this scenario, overfitting is a major challenge. We investigate a number of techniques to reduce overfitting and improve transfer learning, including regularization techniques such as dropout and L2-regularization towards an out-of-domain prior. In addition, we introduce tuneout, a novel regularization technique inspired by dropout. We apply these techniques, alone and in combination, to neural machine translation, obtaining improvements on IWSLT datasets for English{\textrightarrow}German and English{\textrightarrow}Russian. We also investigate the amounts of in-domain training data needed for domain adaptation in NMT, and find a logarithmic relationship between the amount of training data and gain in BLEU score. | null | null | 10.18653/v1/D17-1156 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,644 |
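One of the regularizers studied, L2-regularization toward an out-of-domain prior, penalizes drift from the pretrained parameters rather than from zero. A minimal sketch, with the coefficient `lam` as an illustrative constant:

```python
import torch

def l2_to_prior(model, prior_params, lam=1e-4):
    """Penalize squared distance to the out-of-domain (pretrained) weights."""
    reg = 0.0
    for p, p0 in zip(model.parameters(), prior_params):
        reg = reg + (p - p0.detach()).pow(2).sum()
    return lam * reg

# usage sketch during fine-tuning:
#   prior = [p.clone() for p in model.parameters()]  # snapshot before adaptation
#   loss = task_loss + l2_to_prior(model, prior)
```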
inproceedings | chang-collins-2017-source | Source-Side Left-to-Right or Target-Side Left-to-Right? An Empirical Comparison of Two Phrase-Based Decoding Algorithms | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1157/ | Chang, Yin-Wen and Collins, Michael | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1495--1499 | This paper describes an empirical study of the phrase-based decoding algorithm proposed by Chang and Collins (2017). The algorithm produces a translation by processing the source-language sentence in strictly left-to-right order, differing from commonly used approaches that build the target-language sentence in left-to-right order. Our results show that the new algorithm is competitive with Moses (Koehn et al., 2007) in terms of both speed and BLEU scores. | null | null | 10.18653/v1/D17-1157 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,645 |
inproceedings | domhan-hieber-2017-using | Using Target-side Monolingual Data for Neural Machine Translation through Multi-task Learning | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1158/ | Domhan, Tobias and Hieber, Felix | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1500--1505 | The performance of Neural Machine Translation (NMT) models relies heavily on the availability of sufficient amounts of parallel data, and an efficient and effective way of leveraging the vastly available amounts of monolingual data has yet to be found. We propose to modify the decoder in a neural sequence-to-sequence model to enable multi-task learning for two strongly related tasks: target-side language modeling and translation. The decoder predicts the next target word through two channels, a target-side language model on the lowest layer, and an attentional recurrent model which is conditioned on the source representation. This architecture allows joint training on both large amounts of monolingual and moderate amounts of bilingual data to improve NMT performance. Initial results in the news domain for three language pairs show moderate but consistent improvements over a baseline trained on bilingual data only. | null | null | 10.18653/v1/D17-1158 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,646 |
inproceedings | marcheggiani-titov-2017-encoding | Encoding Sentences with Graph Convolutional Networks for Semantic Role Labeling | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1159/ | Marcheggiani, Diego and Titov, Ivan | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1506--1515 | Semantic role labeling (SRL) is the task of identifying the predicate-argument structure of a sentence. It is typically regarded as an important step in the standard NLP pipeline. As the semantic representations are closely related to syntactic ones, we exploit syntactic information in our model. We propose a version of graph convolutional networks (GCNs), a recent class of neural networks operating on graphs, suited to model syntactic dependency graphs. GCNs over syntactic dependency trees are used as sentence encoders, producing latent feature representations of words in a sentence. We observe that GCN layers are complementary to LSTM ones: when we stack both GCN and LSTM layers, we obtain a substantial improvement over an already state-of-the-art LSTM SRL model, resulting in the best reported scores on the standard benchmark (CoNLL-2009) both for Chinese and English. | null | null | 10.18653/v1/D17-1159 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,647 |
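The core computation, a GCN layer over a dependency graph, can be sketched as each word aggregating transformed vectors from its syntactic neighbours. The simplified layer below uses a single adjacency matrix; the full model additionally distinguishes edge directions and labels and adds edge-wise gates:

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """Simplified syntactic GCN layer: mean-aggregate neighbour messages."""
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(dim, dim)

    def forward(self, h, adj):
        # h:   (batch, n_words, dim) word representations
        # adj: (batch, n_words, n_words) dependency adjacency with self-loops
        msg = adj @ self.W(h)                        # sum messages from neighbours
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        return torch.relu(msg / deg)                 # normalized update
```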
inproceedings | krishnamurthy-etal-2017-neural | Neural Semantic Parsing with Type Constraints for Semi-Structured Tables | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1160/ | Krishnamurthy, Jayant and Dasigi, Pradeep and Gardner, Matt | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1516--1526 | We present a new semantic parsing model for answering compositional questions on semi-structured Wikipedia tables. Our parser is an encoder-decoder neural network with two key technical innovations: (1) a grammar for the decoder that only generates well-typed logical forms; and (2) an entity embedding and linking module that identifies entity mentions while generalizing across tables. We also introduce a novel method for training our neural model with question-answer supervision. On the WikiTableQuestions data set, our parser achieves a state-of-the-art accuracy of 43.3{\%} for a single model and 45.9{\%} for a 5-model ensemble, improving on the best prior score of 38.7{\%} set by a 15-model ensemble. These results suggest that type constraints and entity linking are valuable components to incorporate in neural semantic parsers. | null | null | 10.18653/v1/D17-1160 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,648 |
inproceedings | srivastava-etal-2017-joint | Joint Concept Learning and Semantic Parsing from Natural Language Explanations | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1161/ | Srivastava, Shashank and Labutov, Igor and Mitchell, Tom | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1527--1536 | Natural language constitutes a predominant medium for much of human learning and pedagogy. We consider the problem of concept learning from natural language explanations, and a small number of labeled examples of the concept. For example, in learning the concept of a phishing email, one might say {\textquoteleft}this is a phishing email because it asks for your bank account number{\textquoteright}. Solving this problem involves both learning to interpret open ended natural language statements, and learning the concept itself. We present a joint model for (1) language interpretation (semantic parsing) and (2) concept learning (classification) that does not require labeling statements with logical forms. Instead, the model prefers discriminative interpretations of statements in context of observable features of the data as a weak signal for parsing. On a dataset of email-related concepts, our approach yields across-the-board improvements in classification performance, with a 30{\%} relative improvement in F1 score over competitive methods in the low data regime. | null | null | 10.18653/v1/D17-1161 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,649
inproceedings | rei-etal-2017-grasping | Grasping the Finer Point: A Supervised Similarity Network for Metaphor Detection | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1162/ | Rei, Marek and Bulat, Luana and Kiela, Douwe and Shutova, Ekaterina | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1537--1546 | The ubiquity of metaphor in our everyday communication makes it an important problem for natural language understanding. Yet, the majority of metaphor processing systems to date rely on hand-engineered features and there is still no consensus in the field as to which features are optimal for this task. In this paper, we present the first deep learning architecture designed to capture metaphorical composition. Our results demonstrate that it outperforms the existing approaches in the metaphor identification task. | null | null | 10.18653/v1/D17-1162 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,650 |
inproceedings | keith-etal-2017-identifying | Identifying civilians killed by police with distantly supervised entity-event extraction | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1163/ | Keith, Katherine and Handler, Abram and Pinkham, Michael and Magliozzi, Cara and McDuffie, Joshua and O{'}Connor, Brendan | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1547--1557 | We propose a new, socially-impactful task for natural language processing: from a news corpus, extract names of persons who have been killed by police. We present a newly collected police fatality corpus, which we release publicly, and present a model to solve this problem that uses EM-based distant supervision with logistic regression and convolutional neural network classifiers. Our model outperforms two off-the-shelf event extractor systems, and it can suggest candidate victim names in some cases faster than one of the major manually-collected police fatality databases. | null | null | 10.18653/v1/D17-1163 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,651 |
inproceedings | zhang-etal-2017-asking | Asking too much? The rhetorical role of questions in political discourse | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1164/ | Zhang, Justine and Spirling, Arthur and Danescu-Niculescu-Mizil, Cristian | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1558--1572 | Questions play a prominent role in social interactions, performing rhetorical functions that go beyond that of simple informational exchange. The surface form of a question can signal the intention and background of the person asking it, as well as the nature of their relation with the interlocutor. While the informational nature of questions has been extensively examined in the context of question-answering applications, their rhetorical aspects have been largely understudied. In this work we introduce an unsupervised methodology for extracting surface motifs that recur in questions, and for grouping them according to their latent rhetorical role. By applying this framework to the setting of question sessions in the UK parliament, we show that the resulting typology encodes key aspects of the political discourse{---}such as the bifurcation in questioning behavior between government and opposition parties{---}and reveals new insights into the effects of a legislator's tenure and political career ambitions. | null | null | 10.18653/v1/D17-1164 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,652
inproceedings | vilares-he-2017-detecting | Detecting Perspectives in Political Debates | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1165/ | Vilares, David and He, Yulan | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1573--1582 | We explore how to detect people's perspectives that occupy a certain proposition. We propose a Bayesian modelling approach where topics (or propositions) and their associated perspectives (or viewpoints) are modeled as latent variables. Words associated with topics or perspectives follow different generative routes. Based on the extracted perspectives, we can extract the top associated sentences from text to generate a succinct summary which allows a quick glimpse of the main viewpoints in a document. The model is evaluated on debates from the House of Commons of the UK Parliament, revealing perspectives from the debates without the use of labelled data and obtaining better results than previous related solutions under a variety of evaluations. | null | null | 10.18653/v1/D17-1165 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,653
inproceedings | swamy-etal-2017-feeling | {\textquotedblleft}i have a feeling trump will win..................{\textquotedblright}: Forecasting Winners and Losers from User Predictions on {T}witter | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1166/ | Swamy, Sandesh and Ritter, Alan and de Marneffe, Marie-Catherine | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1583--1592 | Social media users often make explicit predictions about upcoming events. Such statements vary in the degree of certainty the author expresses toward the outcome: {\textquotedblleft}Leonardo DiCaprio will win Best Actor{\textquotedblright} vs. {\textquotedblleft}Leonardo DiCaprio may win{\textquotedblright} or {\textquotedblleft}No way Leonardo wins!{\textquotedblright}. Can popular beliefs on social media predict who will win? To answer this question, we build a corpus of tweets annotated for veridicality on which we train a log-linear classifier that detects positive veridicality with high precision. We then forecast uncertain outcomes using the wisdom of crowds, by aggregating users' explicit predictions. Our method for forecasting winners is fully automated, relying only on a set of contenders as input. It requires no training data of past outcomes and outperforms sentiment and tweet volume baselines on a broad range of contest prediction tasks. We further demonstrate how our approach can be used to measure the reliability of individual accounts' predictions and retrospectively identify surprise outcomes. | null | null | 10.18653/v1/D17-1166 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,654 |
inproceedings | gui-etal-2017-question | A Question Answering Approach for Emotion Cause Extraction | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1167/ | Gui, Lin and Hu, Jiannan and He, Yulan and Xu, Ruifeng and Lu, Qin and Du, Jiachen | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1593--1602 | Emotion cause extraction aims to identify the reasons behind a certain emotion expressed in text. It is a much more difficult task than emotion classification. Inspired by recent advances in using deep memory networks for question answering (QA), we propose a new approach which considers emotion cause identification as a reading comprehension task in QA. Building on convolutional neural networks, we propose a new mechanism to store relevant context in different memory slots to model context information. Our proposed approach can extract both word-level sequence features and lexical features. Performance evaluation shows that our method achieves state-of-the-art performance on a recently released emotion cause dataset, outperforming a number of competitive baselines by at least 3.01{\%} in F-measure. | null | null | 10.18653/v1/D17-1167 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,655
inproceedings | chaturvedi-etal-2017-story | Story Comprehension for Predicting What Happens Next | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1168/ | Chaturvedi, Snigdha and Peng, Haoruo and Roth, Dan | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1603--1614 | Automatic story comprehension is a fundamental challenge in Natural Language Understanding, and can enable computers to learn about social norms, human behavior and commonsense. In this paper, we present a story comprehension model that explores three distinct semantic aspects: (i) the sequence of events described in the story, (ii) its emotional trajectory, and (iii) its plot consistency. We judge the model's understanding of real-world stories by inquiring if, like humans, it can develop an expectation of what will happen next in a given story. Specifically, we use it to predict the correct ending of a given short story from possible alternatives. The model uses a hidden variable to weigh the semantic aspects in the context of the story. Our experiments demonstrate the potential of our approach to characterize these semantic aspects, and the strength of the hidden variable based approach. The model outperforms the state-of-the-art approaches and achieves the best results on a publicly available dataset. | null | null | 10.18653/v1/D17-1168 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,656
inproceedings | felbo-etal-2017-using | Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1169/ | Felbo, Bjarke and Mislove, Alan and S{\o}gaard, Anders and Rahwan, Iyad and Lehmann, Sune | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1615--1625 | NLP tasks are often limited by scarcity of manually annotated data. In social media sentiment analysis and related tasks, researchers have therefore used binarized emoticons and specific hashtags as forms of distant supervision. Our paper shows that by extending the distant supervision to a more diverse set of noisy labels, the models can learn richer representations. Through emoji prediction on a dataset of 1246 million tweets containing one of 64 common emojis, we obtain state-of-the-art performance on 8 benchmark datasets within emotion, sentiment and sarcasm detection using a single pretrained model. Our analyses confirm that the diversity of our emotional labels yields a performance improvement over previous distant supervision approaches. | null | null | 10.18653/v1/D17-1169 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,657
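The distant-supervision setup itself is simple to illustrate: any tweet containing one of the tracked emojis becomes a training example whose noisy label is that emoji. A toy version with a made-up emoji set (not the paper's 64-emoji inventory):

```python
# Toy emoji distant supervision: harvest (text, label) pairs by using an
# emoji occurrence as the noisy label and removing it from the text.
EMOJIS = ["😂", "😍", "😭", "🔥"]

def emoji_label(tweet):
    for i, e in enumerate(EMOJIS):
        if e in tweet:
            return tweet.replace(e, "").strip(), i   # (text, label id)
    return None                                      # no tracked emoji: skip

print(emoji_label("this paper is great 🔥"))          # ('this paper is great', 3)
```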
inproceedings | wang-zhang-2017-opinion | Opinion Recommendation Using A Neural Model | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1170/ | Wang, Zhongqing and Zhang, Yue | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1626--1637 | We present opinion recommendation, a novel task of jointly generating a review with a rating score that a certain user would give to a certain product which is unreviewed by the user, given existing reviews to the product by other users, and the reviews that the user has given to other products. A characteristic of opinion recommendation is the reliance on multiple data sources for multi-task joint learning. We use a single neural network to model users and products, generating customised product representations using a deep memory network, from which customised ratings and reviews are constructed jointly. Results show that our opinion recommendation system gives ratings that are closer to real user ratings on Yelp.com data compared with Yelp's own ratings, and our method gives better results than several pipeline baselines. | null | null | 10.18653/v1/D17-1170 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,658
inproceedings | cai-etal-2017-crf | {CRF} Autoencoder for Unsupervised Dependency Parsing | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1171/ | Cai, Jiong and Jiang, Yong and Tu, Kewei | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1638--1643 | Unsupervised dependency parsing, which tries to discover linguistic dependency structures from unannotated data, is a very challenging task. Almost all previous work on this task focuses on learning generative models. In this paper, we develop an unsupervised dependency parsing model based on the CRF autoencoder. The encoder part of our model is discriminative and globally normalized which allows us to use rich features as well as universal linguistic priors. We propose an exact algorithm for parsing as well as a tractable learning algorithm. We evaluated the performance of our model on eight multilingual treebanks and found that our model achieved comparable performance with state-of-the-art approaches. | null | null | 10.18653/v1/D17-1171 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,659 |
inproceedings | corro-etal-2017-efficient | Efficient Discontinuous Phrase-Structure Parsing via the Generalized Maximum Spanning Arborescence | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1172/ | Corro, Caio and Le Roux, Joseph and Lacroix, Mathieu | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1644--1654 | We present a new method for the joint task of tagging and non-projective dependency parsing. We demonstrate its usefulness with an application to discontinuous phrase-structure parsing where decoding lexicalized spines and syntactic derivations is performed jointly. The main contributions of this paper are (1) a reduction from joint tagging and non-projective dependency parsing to the Generalized Maximum Spanning Arborescence problem, and (2) a novel decoding algorithm for this problem through Lagrangian relaxation. We evaluate this model and obtain state-of-the-art results despite strong independence assumptions. | null | null | 10.18653/v1/D17-1172 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,660 |
inproceedings | zheng-2017-incremental | Incremental Graph-based Neural Dependency Parsing | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1173/ | Zheng, Xiaoqing | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1655--1665 | Very recently, some studies on neural dependency parsers have shown advantage over the traditional ones on a wide variety of languages. However, graph-based neural dependency parsing systems either count on the long-term memory and attention mechanism to implicitly capture high-order features or give up global exhaustive inference algorithms in order to harness features over a rich history of parsing decisions. The former might miss out on important features for specific headword predictions without the help of explicit structural information, and the latter may suffer from error propagation as false early structural constraints are used to create features when making future predictions. We explore the feasibility of explicitly taking high-order features into account while retaining the main advantage of global inference and learning for graph-based parsing. The proposed parser first forms an initial parse tree by head-modifier predictions based on the first-order factorization. High-order features (such as grandparent, sibling, and uncle) can then be defined over the initial tree, and used to refine the parse tree in an iterative fashion. Experimental results showed that our model (called INDP) achieved performance competitive with existing benchmark parsers on both English and Chinese datasets. | null | null | 10.18653/v1/D17-1173 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,661
inproceedings | stanojevic-alhama-2017-neural | Neural Discontinuous Constituency Parsing | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1174/ | Stanojevi{\'c}, Milo{\v{s}} and Alhama, Raquel G. | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1666--1676 | One of the most pressing issues in discontinuous constituency transition-based parsing is that the relevant information for parsing decisions could be located in any part of the stack or the buffer. In this paper, we propose a solution to this problem by replacing the structured perceptron model with a recursive neural model that computes a global representation of the configuration, therefore allowing even the most remote parts of the configuration to influence the parsing decisions. We also provide a detailed analysis of how this representation should be built out of sub-representations of its core elements (words, trees and stack). Additionally, we investigate how different types of swap oracles influence the results. Our model is the first neural discontinuous constituency parser, and it outperforms all the previously published models on three out of four datasets, while on the fourth it obtains second place by a tiny margin. | null | null | 10.18653/v1/D17-1174 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,662
inproceedings | zhang-etal-2017-stack | Stack-based Multi-layer Attention for Transition-based Dependency Parsing | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1175/ | Zhang, Zhirui and Liu, Shujie and Li, Mu and Zhou, Ming and Chen, Enhong | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1677--1682 | Although sequence-to-sequence (seq2seq) network has achieved significant success in many NLP tasks such as machine translation and text summarization, simply applying this approach to transition-based dependency parsing cannot yield a comparable performance gain as in other state-of-the-art methods, such as stack-LSTM and head selection. In this paper, we propose a stack-based multi-layer attention model for seq2seq learning to better leverage structural linguistics information. In our method, two binary vectors are used to track the decoding stack in transition-based parsing, and multi-layer attention is introduced to capture multiple word dependencies in partial trees. We conduct experiments on PTB and CTB datasets, and the results show that our proposed model achieves state-of-the-art accuracy and significant improvement in labeled precision with respect to the baseline seq2seq model. | null | null | 10.18653/v1/D17-1175 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,663 |
inproceedings | han-etal-2017-dependency | Dependency Grammar Induction with Neural Lexicalization and Big Training Data | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1176/ | Han, Wenjuan and Jiang, Yong and Tu, Kewei | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1683--1688 | We study the impact of big models (in terms of the degree of lexicalization) and big data (in terms of the training corpus size) on dependency grammar induction. We experimented with L-DMV, a lexicalized version of Dependency Model with Valence (Klein and Manning, 2004) and L-NDMV, our lexicalized extension of the Neural Dependency Model with Valence (Jiang et al., 2016). We find that L-DMV only benefits from very small degrees of lexicalization and moderate sizes of training corpora. L-NDMV can benefit from big training data and lexicalization of greater degrees, especially when enhanced with good model initialization, and it achieves a result that is competitive with the current state-of-the-art. | null | null | 10.18653/v1/D17-1176 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,664 |
inproceedings | jiang-etal-2017-combining | Combining Generative and Discriminative Approaches to Unsupervised Dependency Parsing via Dual Decomposition | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1177/ | Jiang, Yong and Han, Wenjuan and Tu, Kewei | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1689--1694 | Unsupervised dependency parsing aims to learn a dependency parser from unannotated sentences. Existing work focuses on either learning generative models using the expectation-maximization algorithm and its variants, or learning discriminative models using the discriminative clustering algorithm. In this paper, we propose a new learning strategy that learns a generative model and a discriminative model jointly based on the dual decomposition method. Our method is simple and general, yet effective to capture the advantages of both models and improve their learning results. We tested our method on the UD treebank and achieved a state-of-the-art performance on thirty languages. | null | null | 10.18653/v1/D17-1177 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,665 |
inproceedings | stern-etal-2017-effective | Effective Inference for Generative Neural Parsing | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1178/ | Stern, Mitchell and Fried, Daniel and Klein, Dan | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1695--1700 | Generative neural models have recently achieved state-of-the-art results for constituency parsing. However, without a feasible search procedure, their use has so far been limited to reranking the output of external parsers in which decoding is more tractable. We describe an alternative to the conventional action-level beam search used for discriminative neural models that enables us to decode directly in these generative models. We then show that by improving our basic candidate selection strategy and using a coarse pruning function, we can improve accuracy while exploring significantly less of the search space. Applied to the model of Choe and Charniak (2016), our inference procedure obtains 92.56 F1 on section 23 of the Penn Treebank, surpassing prior state-of-the-art results for single-model systems. | null | null | 10.18653/v1/D17-1178 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,666 |
inproceedings | zhang-etal-2017-semi | Semi-supervised Structured Prediction with Neural {CRF} Autoencoder | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1179/ | Zhang, Xiao and Jiang, Yong and Peng, Hao and Tu, Kewei and Goldwasser, Dan | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1701--1711 | In this paper we propose an end-to-end neural CRF autoencoder (NCRF-AE) model for semi-supervised learning of sequential structured prediction problems. Our NCRF-AE consists of two parts: an encoder which is a CRF model enhanced by deep neural networks, and a decoder which is a generative model trying to reconstruct the input. Our model has a unified structure with different loss functions for labeled and unlabeled data with shared parameters. We developed a variation of the EM algorithm for optimizing both the encoder and the decoder simultaneously by decoupling their parameters. Our experimental results on the part-of-speech (POS) tagging task across eight different languages show that our model can outperform competitive systems in both supervised and semi-supervised scenarios. | null | null | 10.18653/v1/D17-1179 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,667
inproceedings | kasai-etal-2017-tag | {TAG} Parsing with Neural Networks and Vector Representations of Supertags | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1180/ | Kasai, Jungo and Frank, Bob and McCoy, Tom and Rambow, Owen and Nasr, Alexis | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1712--1722 | We present supertagging-based models for Tree Adjoining Grammar parsing that use neural network architectures and dense vector representation of supertags (elementary trees) to achieve state-of-the-art performance in unlabeled and labeled attachment scores. The shift-reduce parsing model eschews lexical information entirely, and uses only the 1-best supertags to parse a sentence, providing further support for the claim that supertagging is {\textquotedblleft}almost parsing.{\textquotedblright} We demonstrate that the embedding vector representations the parser induces for supertags possess linguistically interpretable structure, supporting analogies between grammatical structures like those familiar from recent work in distributional semantics. This dense representation of supertags overcomes the drawbacks for statistical models of TAG as compared to CCG parsing, raising the possibility that TAG is a viable alternative for NLP tasks that require the assignment of richer structural descriptions to sentences. | null | null | 10.18653/v1/D17-1180 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,668 |
inproceedings | adel-schutze-2017-global | Global Normalization of Convolutional Neural Networks for Joint Entity and Relation Classification | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1181/ | Adel, Heike and Sch{\"u}tze, Hinrich | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1723--1729 | We introduce globally normalized convolutional neural networks for joint entity classification and relation extraction. In particular, we propose a way to utilize a linear-chain conditional random field output layer for predicting entity types and relations between entities at the same time. Our experiments show that global normalization outperforms a locally normalized softmax layer on a benchmark dataset. | null | null | 10.18653/v1/D17-1181 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,669
inproceedings | zhang-etal-2017-end | End-to-End Neural Relation Extraction with Global Optimization | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1182/ | Zhang, Meishan and Zhang, Yue and Fu, Guohong | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1730--1740 | Neural networks have shown promising results for relation extraction. State-of-the-art models cast the task as an end-to-end problem, solved incrementally using a local classifier. Yet previous work using statistical models has demonstrated that global optimization can achieve better performance than local classification. We build a globally optimized neural model for end-to-end relation extraction, proposing novel LSTM features in order to better learn context representations. In addition, we present a novel method to integrate syntactic information to facilitate global learning, yet requiring little background on syntactic grammars and thus being easy to extend. Experimental results show that our proposed model is highly effective, achieving the best performance on two standard benchmarks. | null | null | 10.18653/v1/D17-1182 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,670
inproceedings | ojha-talukdar-2017-kgeval | {KGE}val: Accuracy Estimation of Automatically Constructed Knowledge Graphs | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1183/ | Ojha, Prakhar and Talukdar, Partha | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1741--1750 | Automatic construction of large knowledge graphs (KG) by mining web-scale text datasets has received considerable attention recently. Estimating accuracy of such automatically constructed KGs is a challenging problem due to their size and diversity. This important problem has largely been ignored in prior research {--} we fill this gap and propose KGEval. KGEval uses coupling constraints to bind facts and crowdsources those few that can infer large parts of the graph. We demonstrate that the objective optimized by KGEval is submodular and NP-hard, allowing guarantees for our approximation algorithm. Through experiments on real-world datasets, we demonstrate that KGEval best estimates KG accuracy compared to other baselines, while requiring a significantly smaller number of human evaluations. | null | null | 10.18653/v1/D17-1183 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,671
inproceedings | pujara-etal-2017-sparsity | Sparsity and Noise: Where Knowledge Graph Embeddings Fall Short | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1184/ | Pujara, Jay and Augustine, Eriq and Getoor, Lise | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1751--1756 | Knowledge graph (KG) embedding techniques use structured relationships between entities to learn low-dimensional representations of entities and relations. One prominent goal of these approaches is to improve the quality of knowledge graphs by removing errors and adding missing facts. Surprisingly, most embedding techniques have been evaluated on benchmark datasets consisting of dense and reliable subsets of human-curated KGs, which tend to be fairly complete and have few errors. In this paper, we consider the problem of applying embedding techniques to KGs extracted from text, which are often incomplete and contain errors. We compare the sparsity and unreliability of different KGs and perform empirical experiments demonstrating how embedding approaches degrade as sparsity and unreliability increase. | null | null | 10.18653/v1/D17-1184 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,672 |
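For context, translation-based embeddings such as TransE are one representative family the paper evaluates: a triple (head, relation, tail) is plausible when head + relation lands near tail. A minimal scorer as a sketch, not code from the paper:

```python
import torch

# Minimal TransE-style scorer: lower ||h + r - t||_1 means more plausible.
# Entity/relation counts and dimensionality are arbitrary toy values.
n_ent, n_rel, dim = 1000, 50, 64
E = torch.nn.Embedding(n_ent, dim)
R = torch.nn.Embedding(n_rel, dim)

def transe_score(h, r, t):
    return (E(h) + R(r) - E(t)).abs().sum(-1)

score = transe_score(torch.tensor([0]), torch.tensor([3]), torch.tensor([42]))
```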
inproceedings | glavas-ponzetto-2017-dual | Dual Tensor Model for Detecting Asymmetric Lexico-Semantic Relations | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1185/ | Glava{\v{s}}, Goran and Ponzetto, Simone Paolo | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1757--1767 | Detection of lexico-semantic relations is one of the central tasks of computational semantics. Although some fundamental relations (e.g., hypernymy) are asymmetric, most existing models account for asymmetry only implicitly and use the same concept representations to support detection of symmetric and asymmetric relations alike. In this work, we propose the Dual Tensor model, a neural architecture with which we explicitly model the asymmetry and capture the translation between unspecialized and specialized word embeddings via a pair of tensors. Although our Dual Tensor model needs only unspecialized embeddings as input, our experiments on hypernymy and meronymy detection suggest that it can outperform more complex and resource-intensive models. We further demonstrate that the model can account for polysemy and that it exhibits stable performance across languages. | null | null | 10.18653/v1/D17-1185 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,673 |
inproceedings | zeng-etal-2017-incorporating | Incorporating Relation Paths in Neural Relation Extraction | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1186/ | Zeng, Wenyuan and Lin, Yankai and Liu, Zhiyuan and Sun, Maosong | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1768--1777 | Distantly supervised relation extraction has been widely used to find novel relational facts from plain text. To predict the relation between a pair of target entities, existing methods solely rely on those direct sentences containing both entities. In fact, there are also many sentences containing only one of the target entities, which also provide rich useful information that has not yet been employed for relation extraction. To address this issue, we build inference chains between two target entities via intermediate entities, and propose a path-based neural relation extraction model to encode the relational semantics from both direct sentences and inference chains. Experimental results on real-world datasets show that our model can make full use of those sentences containing only one target entity, and achieves significant and consistent improvements on relation extraction as compared with strong baselines. The source code of this paper can be obtained from \url{https://github.com/thunlp/PathNRE}. | null | null | 10.18653/v1/D17-1186 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,674
inproceedings | wu-etal-2017-adversarial | Adversarial Training for Relation Extraction | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1187/ | Wu, Yi and Bamman, David and Russell, Stuart | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1778--1783 | Adversarial training is a means of regularizing classification algorithms by adding adversarial noise to the training data. We apply adversarial training to relation extraction within the multi-instance multi-label learning framework. We evaluate various neural network architectures on two different datasets. Experimental results demonstrate that adversarial training is generally effective for both CNN and RNN models and significantly improves the precision of predicted relations. | null | null | 10.18653/v1/D17-1187 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,675
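The general mechanism, perturbing word embeddings in the gradient direction that most increases the loss and training on the perturbed input, can be sketched as follows; `model` and `loss_fn` are assumed stand-ins for any relation extraction classifier and its loss:

```python
import torch

def adversarial_loss(model, emb, labels, loss_fn, eps=0.01):
    """Fast-gradient adversarial perturbation on input embeddings (sketch)."""
    emb = emb.detach().requires_grad_(True)
    loss = loss_fn(model(emb), labels)
    grad, = torch.autograd.grad(loss, emb)
    delta = eps * grad / (grad.norm() + 1e-12)   # worst-case direction, scaled
    return loss_fn(model(emb + delta.detach()), labels)
```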
inproceedings | sorokin-gurevych-2017-context | Context-Aware Representations for Knowledge Base Relation Extraction | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1188/ | Sorokin, Daniil and Gurevych, Iryna | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1784--1789 | We demonstrate that for sentence-level relation extraction it is beneficial to consider other relations in the sentential context while predicting the target relation. Our architecture uses an LSTM-based encoder to jointly learn representations for all relations in a single sentence. We combine the context representations with an attention mechanism to make the final prediction. We use the Wikidata knowledge base to construct a dataset of multiple relations per sentence and to evaluate our approach. Compared to a baseline system, our method results in an average error reduction of 24 on a held-out set of relations. The code and the dataset to replicate the experiments are made available at \url{https://github.com/ukplab/}. | null | null | 10.18653/v1/D17-1188 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,676 |
inproceedings | liu-etal-2017-soft | A Soft-label Method for Noise-tolerant Distantly Supervised Relation Extraction | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1189/ | Liu, Tianyu and Wang, Kexiang and Chang, Baobao and Sui, Zhifang | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1790--1795 | Distantly supervised relation extraction inevitably suffers from wrong labeling problems because it heuristically labels relational facts with knowledge bases. Previous sentence-level denoising models do not achieve satisfactory performance because they use hard labels, which are determined by distant supervision and immutable during training. To this end, we introduce an entity-pair-level denoising method which exploits semantic information from correctly labeled entity pairs to correct wrong labels dynamically during training. We propose a joint score function which combines the relational scores based on the entity-pair representation and the confidence of the hard label to obtain a new label, namely a soft label, for a certain entity pair. During training, soft labels instead of hard labels serve as gold labels. Experiments on the benchmark dataset show that our method dramatically reduces noisy instances and outperforms other state-of-the-art systems. | null | null | 10.18653/v1/D17-1189 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,677
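Schematically, a soft label mixes the distant-supervision hard label with the model's own relation posteriors and retargets training on the result. The mixing rule below is a simplified stand-in for the paper's joint score function:

```python
import torch
import torch.nn.functional as F

def soft_label(rel_scores, hard_label, confidence=0.9):
    # rel_scores: (batch, n_relations) model posteriors for each entity pair
    # hard_label: (batch,) noisy labels from distant supervision
    one_hot = F.one_hot(hard_label, rel_scores.size(-1)).float()
    joint = confidence * one_hot + (1 - confidence) * rel_scores
    return joint.argmax(-1)   # corrected label used as the training target
```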
inproceedings | choubey-huang-2017-sequential | A Sequential Model for Classifying Temporal Relations between Intra-Sentence Events | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1190/ | Choubey, Prafulla Kumar and Huang, Ruihong | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1796--1802 | We present a sequential model for temporal relation classification between intra-sentence events. The key observation is that the overall syntactic structure and compositional meanings of the multi-word context between events are important for distinguishing among fine-grained temporal relations. Specifically, our approach first extracts a sequence of context words that indicates the temporal relation between two events and aligns well with the dependency path between the two event mentions. The context word sequence, together with a part-of-speech tag sequence and a dependency relation sequence generated to correspond with the word sequence, is then provided as input to bidirectional recurrent neural network (LSTM) models. The neural nets learn compositional syntactic and semantic representations of contexts surrounding the two events and predict the temporal relation between them. Evaluation of the proposed approach on the TimeBank corpus shows that sequential modeling is capable of accurately recognizing temporal relations between events, outperforming a neural net model that imitates previous feature-based models by taking various discrete features as input. | null | null | 10.18653/v1/D17-1190 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,678
inproceedings | huang-wang-2017-deep | Deep Residual Learning for Weakly-Supervised Relation Extraction | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1191/ | Huang, Yi Yao and Wang, William Yang | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1803--1807 | Deep residual learning (ResNet) is a new method for training very deep neural networks using identity mapping for shortcut connections. ResNet won the ImageNet ILSVRC 2015 classification task and achieved state-of-the-art performance in many computer vision tasks. However, the effect of residual learning on noisy natural language processing tasks is still not well understood. In this paper, we design a novel convolutional neural network (CNN) with residual learning, and investigate its impact on the task of distantly supervised noisy relation extraction. Contrary to the popular belief that ResNet only works well for very deep networks, we found that even with 9 layers of CNNs, using identity mapping could significantly improve performance for distantly supervised relation extraction. | null | null | 10.18653/v1/D17-1191 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,679
inproceedings | zhang-wang-2017-noise | Noise-Clustered Distant Supervision for Relation Extraction: A Nonparametric {B}ayesian Perspective | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1192/ | Zhang, Qing and Wang, Houfeng | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1808--1813 | For the task of relation extraction, distant supervision is an efficient approach to generating labeled data by aligning a knowledge base with free text. At its essence, this is a challenging incomplete multi-label classification problem with sparse and noisy features. To address the challenge, this work presents a novel nonparametric Bayesian formulation for the task. Experimental results show substantial improvements in top precision over the traditional state-of-the-art approaches. | null | null | 10.18653/v1/D17-1192 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,680
inproceedings | gabor-etal-2017-exploring | Exploring Vector Spaces for Semantic Relations | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1193/ | G{\'a}bor, Kata and Zargayouna, Ha{\"i}fa and Tellier, Isabelle and Buscaldi, Davide and Charnois, Thierry | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1814--1823 | Word embeddings are used with success for a variety of tasks involving lexical semantic similarities between individual words. Using unsupervised methods and just cosine similarity, encouraging results have been obtained for analogical similarities. In this paper, we explore the potential of pre-trained word embeddings to identify generic types of semantic relations in an unsupervised experiment. We propose a new relational similarity measure based on the combination of word2vec's CBOW input and output vectors, which outperforms concurrent vector representations when used for unsupervised clustering on SemEval 2010 Relation Classification data. | null | null | 10.18653/v1/D17-1193 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,681
inproceedings | kutuzov-etal-2017-temporal | Temporal dynamics of semantic relations in word embeddings: an application to predicting armed conflict participants | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1194/ | Kutuzov, Andrey and Velldal, Erik and {\O}vrelid, Lilja | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1824--1829 | This paper deals with using word embedding models to trace the temporal dynamics of semantic relations between pairs of words. The set-up is similar to the well-known analogies task, but expanded with a time dimension. To this end, we apply incremental updating of the models with new training texts, including incremental vocabulary expansion, coupled with learned transformation matrices that let us map between members of the relation. The proposed approach is evaluated on the task of predicting insurgent armed groups based on geographical locations. The gold standard data for the time span 1994{--}2010 is extracted from the UCDP Armed Conflicts dataset. The results show that the method is feasible and outperforms the baselines, but also that important work still remains to be done. | null | null | 10.18653/v1/D17-1194 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,682 |
inproceedings | ji-etal-2017-dynamic | Dynamic Entity Representations in Neural Language Models | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1195/ | Ji, Yangfeng and Tan, Chenhao and Martschat, Sebastian and Choi, Yejin and Smith, Noah A. | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1830--1839 | Understanding a long document requires tracking how entities are introduced and evolve over time. We present a new type of language model, EntityNLM, that can explicitly model entities, dynamically update their representations, and contextually generate their mentions. Our model is generative and flexible; it can model an arbitrary number of entities in context while generating each entity mention at an arbitrary length. In addition, it can be used for several different tasks such as language modeling, coreference resolution, and entity prediction. Experimental results with all these tasks demonstrate that our model consistently outperforms strong baselines and prior work. | null | null | 10.18653/v1/D17-1195 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,683 |
inproceedings | basile-tamburini-2017-towards | Towards Quantum Language Models | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1196/ | Basile, Ivano and Tamburini, Fabio | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1840--1849 | This paper presents a new approach for building language models using quantum probability theory: a Quantum Language Model (QLM). It shows that, relying on this probability calculus, it is possible to build stochastic models able to benefit from quantum correlations due to interference and entanglement. We extensively tested our approach, showing its superior performance compared with state-of-the-art language modelling techniques, both in terms of model perplexity and when inserted into an automatic speech recognition evaluation setting. | null | null | 10.18653/v1/D17-1196 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,684
inproceedings | yang-etal-2017-reference | Reference-Aware Language Models | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1197/ | Yang, Zichao and Blunsom, Phil and Dyer, Chris and Ling, Wang | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1850--1859 | We propose a general class of language models that treat reference as discrete stochastic latent variables. This decision allows for the creation of entity mentions by accessing external databases of referents (required by, e.g., dialogue generation) or past internal state (required to explicitly model coreferentiality). Beyond simple copying, our coreference model can additionally refer to a referent using varied mention forms (e.g., a reference to {\textquotedblleft}Jane{\textquotedblright} can be realized as {\textquotedblleft}she{\textquotedblright}), a characteristic feature of reference in natural languages. Experiments on three representative applications show our model variants outperform models based on deterministic attention and standard language modeling baselines. | null | null | 10.18653/v1/D17-1197 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,685 |
inproceedings | melamud-etal-2017-simple | A Simple Language Model based on {PMI} Matrix Approximations | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1198/ | Melamud, Oren and Dagan, Ido and Goldberger, Jacob | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1860--1865 | In this study, we introduce a new approach for learning language models by training them to estimate word-context pointwise mutual information (PMI), and then deriving the desired conditional probabilities from PMI at test time. Specifically, we show that with minor modifications to word2vec's algorithm, we get principled language models that are closely related to the well-established Noise Contrastive Estimation (NCE) based language models. A compelling aspect of our approach is that our models are trained with the same simple negative sampling objective function that is commonly used in word2vec to learn word embeddings. | null | null | 10.18653/v1/D17-1198 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,686
inproceedings | frermann-szarvas-2017-inducing | Inducing Semantic Micro-Clusters from Deep Multi-View Representations of Novels | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1200/ | Frermann, Lea and Szarvas, Gy{\"o}rgy | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1873--1883 | Automatically understanding the plot of novels is important both for informing literary scholarship and applications such as summarization or recommendation. Various models have addressed this task, but their evaluation has remained largely intrinsic and qualitative. Here, we propose a principled and scalable framework leveraging expert-provided semantic tags (e.g., mystery, pirates) to evaluate plot representations in an extrinsic fashion, assessing their ability to produce locally coherent groupings of novels (micro-clusters) in model space. We present a deep recurrent autoencoder model that learns richly structured multi-view plot representations, and show that they i) yield better micro-clusters than less structured representations; and ii) are interpretable, and thus useful for further literary analysis or labeling of the emerging micro-clusters. | null | null | 10.18653/v1/D17-1200 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,688
inproceedings | li-etal-2017-initializing | Initializing Convolutional Filters with Semantic Features for Text Classification | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1201/ | Li, Shen and Zhao, Zhe and Liu, Tao and Hu, Renfen and Du, Xiaoyong | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1884--1889 | Convolutional Neural Networks (CNNs) are widely used in NLP tasks. This paper presents a novel weight initialization method to improve the CNNs for text classification. Instead of randomly initializing the convolutional filters, we encode semantic features into them, which helps the model focus on learning useful features at the beginning of the training. Experiments demonstrate the effectiveness of the initialization technique on seven text classification tasks, including sentiment analysis and topic classification. | null | null | 10.18653/v1/D17-1201 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,689 |
inproceedings | nikolentzos-etal-2017-shortest | Shortest-Path Graph Kernels for Document Similarity | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1202/ | Nikolentzos, Giannis and Meladianos, Polykarpos and Rousseau, Fran{\c{c}}ois and Stavrakas, Yannis and Vazirgiannis, Michalis | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1890--1900 | In this paper, we present a novel document similarity measure based on the definition of a graph kernel between pairs of documents. The proposed measure takes into account both the terms contained in the documents and the relationships between them. By representing each document as a graph-of-words, we are able to model these relationships and then determine how similar two documents are by using a modified shortest-path graph kernel. We evaluate our approach on two tasks and compare it against several baseline approaches using various performance metrics such as DET curves and macro-average F1-score. Experimental results on a range of datasets showed that our proposed approach outperforms traditional techniques and is capable of measuring more accurately the similarity between two documents. | null | null | 10.18653/v1/D17-1202 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,690 |
inproceedings | yang-etal-2017-adapting | Adapting Topic Models using Lexical Associations with Tree Priors | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1203/ | Yang, Weiwei and Boyd-Graber, Jordan and Resnik, Philip | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1901--1906 | Models work best when they are optimized taking into account the evaluation criteria that people care about. For topic models, people often care about interpretability, which can be approximated using measures of lexical association. We integrate lexical association into topic optimization using tree priors, which provide a flexible framework that can take advantage of both first order word associations and the higher-order associations captured by word embeddings. Tree priors improve topic interpretability without hurting extrinsic performance. | null | null | 10.18653/v1/D17-1203 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,691 |
inproceedings | parde-nielsen-2017-finding | Finding Patterns in Noisy Crowds: Regression-based Annotation Aggregation for Crowdsourced Data | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1204/ | Parde, Natalie and Nielsen, Rodney | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1907--1912 | Crowdsourcing offers a convenient means of obtaining labeled data quickly and inexpensively. However, crowdsourced labels are often noisier than expert-annotated data, making it difficult to aggregate them meaningfully. We present an aggregation approach that learns a regression model from crowdsourced annotations to predict aggregated labels for instances that have no expert adjudications. The predicted labels achieve a correlation of 0.594 with expert labels on our data, outperforming the best alternative aggregation method by 11.9{\%}. Our approach also outperforms the alternatives on third-party datasets. | null | null | 10.18653/v1/D17-1204 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,692 |
inproceedings | wang-etal-2017-crowd | {CROWD}-{IN}-{THE}-{LOOP}: A Hybrid Approach for Annotating Semantic Roles | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1205/ | Wang, Chenguang and Akbik, Alan and Chiticariu, Laura and Li, Yunyao and Xia, Fei and Xu, Anbang | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1913--1922 | Crowdsourcing has proven to be an effective method for generating labeled data for a range of NLP tasks. However, multiple recent attempts at using crowdsourcing to generate gold-labeled training data for semantic role labeling (SRL) reported only modest results, indicating that SRL is perhaps too difficult a task to be effectively crowdsourced. In this paper, we postulate that while producing SRL annotation does require expert involvement in general, a large subset of SRL labeling tasks is in fact appropriate for the crowd. We present a novel workflow in which we employ a classifier to identify difficult annotation tasks and route each task either to experts or crowd workers according to its difficulty. Our experimental evaluation shows that the proposed approach reduces the workload for experts by over two-thirds, and thus significantly reduces the cost of producing SRL annotation with little loss in quality. | null | null | 10.18653/v1/D17-1205 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,693
inproceedings | hashimoto-etal-2017-joint | A Joint Many-Task Model: Growing a Neural Network for Multiple {NLP} Tasks | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1206/ | Hashimoto, Kazuma and Xiong, Caiming and Tsuruoka, Yoshimasa and Socher, Richard | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1923--1933 | Transfer and multi-task learning have traditionally focused on either a single source-target pair or very few, similar tasks. Ideally, the linguistic levels of morphology, syntax and semantics would benefit each other by being trained in a single model. We introduce a joint many-task model together with a strategy for successively growing its depth to solve increasingly complex tasks. Higher layers include shortcut connections to lower-level task predictions to reflect linguistic hierarchies. We use a simple regularization term to allow for optimizing all model weights to improve one task's loss without exhibiting catastrophic interference with the other tasks. Our single end-to-end model obtains state-of-the-art or competitive results on five different tasks spanning tagging, parsing, relatedness, and entailment. | null | null | 10.18653/v1/D17-1206 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,694
inproceedings | zhang-etal-2017-earth | Earth Mover's Distance Minimization for Unsupervised Bilingual Lexicon Induction | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1207/ | Zhang, Meng and Liu, Yang and Luan, Huanbo and Sun, Maosong | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1934--1945 | Cross-lingual natural language processing hinges on the premise that there exists invariance across languages. At the word level, researchers have identified such invariance in the word embedding semantic spaces of different languages. However, in order to connect the separate spaces, cross-lingual supervision encoded in parallel data is typically required. In this paper, we attempt to establish the cross-lingual connection without relying on any cross-lingual supervision. By viewing word embedding spaces as distributions, we propose to minimize their earth mover's distance, a measure of divergence between distributions. We demonstrate the success on the unsupervised bilingual lexicon induction task. In addition, we reveal an interesting finding that the earth mover's distance shows potential as a measure of language difference. | null | null | 10.18653/v1/D17-1207 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,695
inproceedings | stahlberg-byrne-2017-unfolding | Unfolding and Shrinking Neural Machine Translation Ensembles | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1208/ | Stahlberg, Felix and Byrne, Bill | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1946--1956 | Ensembling is a well-known technique in neural machine translation (NMT) to improve system performance. Instead of a single neural net, multiple neural nets with the same topology are trained separately, and the decoder generates predictions by averaging over the individual models. Ensembling often improves the quality of the generated translations drastically. However, it is not suitable for production systems because it is cumbersome and slow. This work aims to reduce the runtime to be on par with a single system without compromising the translation quality. First, we show that the ensemble can be unfolded into a single large neural network which imitates the output of the ensemble system. We show that unfolding can already improve the runtime in practice since more work can be done on the GPU. We proceed by describing a set of techniques to shrink the unfolded network by reducing the dimensionality of layers. On Japanese-English we report that the resulting network has the size and decoding speed of a single NMT network but performs on the level of a 3-ensemble system. | null | null | 10.18653/v1/D17-1208 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,696 |
inproceedings | bastings-etal-2017-graph | Graph Convolutional Encoders for Syntax-aware Neural Machine Translation | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1209/ | Bastings, Jasmijn and Titov, Ivan and Aziz, Wilker and Marcheggiani, Diego and Sima{'}an, Khalil | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1957--1967 | We present a simple and effective approach to incorporating syntactic structure into neural attention-based encoder-decoder models for machine translation. We rely on graph-convolutional networks (GCNs), a recent class of neural networks developed for modeling graph-structured data. Our GCNs use predicted syntactic dependency trees of source sentences to produce representations of words (i.e. hidden states of the encoder) that are sensitive to their syntactic neighborhoods. GCNs take word representations as input and produce word representations as output, so they can easily be incorporated as layers into standard encoders (e.g., on top of bidirectional RNNs or convolutional neural networks). We evaluate their effectiveness with English-German and English-Czech translation experiments for different types of encoders and observe substantial improvements over their syntax-agnostic versions in all the considered setups. | null | null | 10.18653/v1/D17-1209 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,697 |
inproceedings | gu-etal-2017-trainable | Trainable Greedy Decoding for Neural Machine Translation | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1210/ | Gu, Jiatao and Cho, Kyunghyun and Li, Victor O.K. | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1968--1978 | Recent research in neural machine translation has largely focused on two aspects: neural network architectures and end-to-end learning algorithms. The problem of decoding, however, has received relatively little attention from the research community. In this paper, we solely focus on the problem of decoding given a trained neural machine translation model. Instead of trying to build a new decoding algorithm for any specific decoding objective, we propose the idea of a trainable decoding algorithm in which we train a decoding algorithm to find a translation that maximizes an arbitrary decoding objective. More specifically, we design an actor that observes and manipulates the hidden state of the neural machine translation decoder and propose to train it using a variant of deterministic policy gradient. We extensively evaluate the proposed algorithm using four language pairs and two decoding objectives and show that we can indeed train a trainable greedy decoder that generates a better translation (in terms of a target decoding objective) with minimal computational overhead. | null | null | 10.18653/v1/D17-1210 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,698
inproceedings | yang-etal-2017-satirical | Satirical News Detection and Analysis using Attention Mechanism and Linguistic Features | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1211/ | Yang, Fan and Mukherjee, Arjun and Dragut, Eduard | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1979--1989 | Satirical news is considered to be entertainment, but it is potentially deceptive and harmful. Even though the genre is embedded in the article, not everyone can recognize the satirical cues, and some readers therefore believe the news to be true. We observe that satirical cues are often reflected in certain paragraphs rather than the whole document. Existing works only consider document-level features to detect the satire, which could be limited. We consider paragraph-level linguistic features to unveil the satire by incorporating a neural network and an attention mechanism. We investigate the difference between paragraph-level features and document-level features, and analyze them on a large satirical news dataset. The evaluation shows that the proposed model detects satirical news effectively and reveals what features are important at which level. | null | null | 10.18653/v1/D17-1211 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,699
inproceedings | fetahu-etal-2017-fine | Fine Grained Citation Span for References in {W}ikipedia | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1212/ | Fetahu, Besnik and Markert, Katja and Anand, Avishek | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 1990--1999 | Verifiability is one of the core editing principles in Wikipedia, where editors are encouraged to provide citations for the added content. For a Wikipedia article, determining what content is covered by a citation, i.e., the citation span, is not trivial, yet it is important for automatically finding citations for uncovered content and for fact assessment. We address the problem of determining the citation span in Wikipedia articles. We approach this problem by classifying which textual fragments in an article are covered by or hold true given a citation. We propose a sequence classification approach where, for a paragraph and a citation, we determine the citation span at a fine-grained level. We provide a thorough experimental evaluation and compare our approach against baselines adopted from the scientific domain, where we show improvement for all evaluation metrics. | null | null | 10.18653/v1/D17-1212 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,700
inproceedings | yang-etal-2017-identifying-semantic | Identifying Semantic Edit Intentions from Revisions in {W}ikipedia | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1213/ | Yang, Diyi and Halfaker, Aaron and Kraut, Robert and Hovy, Eduard | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 2000--2010 | Most studies on human editing focus merely on syntactic revision operations, failing to capture the intentions behind revision changes, which are essential for facilitating both individual and collaborative writing processes. In this work, we develop, in collaboration with Wikipedia editors, a 13-category taxonomy of the semantic intentions behind edits in Wikipedia articles. Using labeled article edits, we build a computational classifier of intentions that achieved a micro-averaged F1 score of 0.621. We use this model to investigate edit intention effectiveness: how different types of edits predict the retention of newcomers and changes in the quality of articles, two key concerns for Wikipedia today. Our analysis shows that the types of edits that users make in their first session predict their subsequent survival as Wikipedia editors, and articles in different stages need different types of edits. | null | null | 10.18653/v1/D17-1213 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,701