Column              Dtype          Values
entry_type          stringclasses  4 values
citation_key        stringlengths  10-110 chars
title               stringlengths  6-276 chars
editor              stringclasses  723 values
month               stringclasses  69 values
year                stringdate     1963-01-01 to 2022-01-01
address             stringclasses  202 values
publisher           stringclasses  41 values
url                 stringlengths  34-62 chars
author              stringlengths  6-2.07k chars
booktitle           stringclasses  861 values
pages               stringlengths  1-12 chars
abstract            stringlengths  302-2.4k chars
journal             stringclasses  5 values
volume              stringclasses  24 values
doi                 stringlengths  20-39 chars
n                   stringclasses  3 values
wer                 stringclasses  1 value
uas                 null           -
language            stringclasses  3 values
isbn                stringclasses  34 values
recall              null           -
number              stringclasses  8 values
a                   null           -
b                   null           -
c                   null           -
k                   null           -
f1                  stringclasses  4 values
r                   stringclasses  2 values
mci                 stringclasses  1 value
p                   stringclasses  2 values
sd                  stringclasses  1 value
female              stringclasses  0 values
m                   stringclasses  0 values
food                stringclasses  1 value
f                   stringclasses  1 value
note                stringclasses  20 values
__index_level_0__   int64          22k-106k
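This is the column listing of a Hugging Face `datasets`-style corpus of BibTeX records; only the bibliographic columns are densely populated, while the metric-style columns (n, wer, uas, f1, recall, ...) are null in almost every row. A minimal sketch of loading the corpus and checking that sparsity, assuming it is published on the Hub (the path "user/acl-bib-corpus" is a placeholder, not the real dataset ID):

```python
# Sketch only: "user/acl-bib-corpus" is a hypothetical dataset path.
from datasets import load_dataset

ds = load_dataset("user/acl-bib-corpus", split="train")

# Count how sparsely the metric-style columns are populated.
for col in ["n", "wer", "uas", "f1", "recall", "note"]:
    non_null = sum(v is not None for v in ds[col])
    print(f"{col}: {non_null}/{len(ds)} non-null")
```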
@inproceedings{pinter-etal-2017-mimicking,
    title = "Mimicking Word Embeddings using Subword {RNN}s",
    author = "Pinter, Yuval and Guthrie, Robert and Eisenstein, Jacob",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1010/",
    doi = "10.18653/v1/D17-1010",
    pages = "102--112",
    abstract = "Word embeddings improve generalization over lexical features by placing each word in a lower-dimensional space, using distributional information obtained from unlabeled data. However, the effectiveness of word embeddings for downstream NLP tasks is limited by out-of-vocabulary (OOV) words, for which embeddings do not exist. In this paper, we present MIMICK, an approach to generating OOV word embeddings compositionally, by learning a function from spellings to distributional embeddings. Unlike prior work, MIMICK does not require re-training on the original word embedding corpus; instead, learning is performed at the type level. Intrinsic and extrinsic evaluations demonstrate the power of this simple approach. On 23 languages, MIMICK improves performance over a word-based baseline for tagging part-of-speech and morphosyntactic attributes. It is competitive with (and complementary to) a supervised character-based model in low resource settings.",
}
% __index_level_0__: 57,498
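Every row repeats the same 38-column layout, with only the bibliographic fields set and the rest null; the records in this section are rendered as BibTeX entries by mapping non-null columns to fields. A minimal sketch of that mapping, assuming `row` is a dict-like dataset row (e.g. `ds[0]` from the loading example above):

```python
# Sketch of rendering one dataset row as a BibTeX entry; field order follows
# ACL Anthology convention. Assumes `row` has the schema shown above.
FIELDS = ["title", "author", "editor", "booktitle", "journal", "volume",
          "number", "month", "year", "address", "publisher", "url",
          "doi", "pages", "isbn", "language", "note", "abstract"]

def row_to_bibtex(row: dict) -> str:
    lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
    for field in FIELDS:
        value = row.get(field)
        if value is None:
            continue
        if field == "month":
            lines.append(f"    month = {value},")  # month is an unquoted macro (e.g. sep)
        else:
            lines.append(f'    {field} = "{value}",')
    lines.append("}")
    return "\n".join(lines)
```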
@inproceedings{asgari-schutze-2017-past,
    title = "Past, Present, Future: A Computational Investigation of the Typology of Tense in 1000 Languages",
    author = "Asgari, Ehsaneddin and Sch{\"u}tze, Hinrich",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1011/",
    doi = "10.18653/v1/D17-1011",
    pages = "113--124",
    abstract = "We present SuperPivot, an analysis method for low-resource languages that occur in a superparallel corpus, i.e., in a corpus that contains an order of magnitude more languages than parallel corpora currently in use. We show that SuperPivot performs well for the crosslingual analysis of the linguistic phenomenon of tense. We produce analysis results for more than 1000 languages, conducting {--} to the best of our knowledge {--} the largest crosslingual computational study performed to date. We extend existing methodology for leveraging parallel corpora for typological analysis by overcoming a limiting assumption of earlier work: We only require that a linguistic feature is overtly marked in a few of thousands of languages as opposed to requiring that it be marked in all languages under investigation.",
}
% __index_level_0__: 57,499
@inproceedings{hashimoto-tsuruoka-2017-neural,
    title = "Neural Machine Translation with Source-Side Latent Graph Parsing",
    author = "Hashimoto, Kazuma and Tsuruoka, Yoshimasa",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1012/",
    doi = "10.18653/v1/D17-1012",
    pages = "125--135",
    abstract = "This paper presents a novel neural machine translation model which jointly learns translation and source-side latent graph representations of sentences. Unlike existing pipelined approaches using syntactic parsers, our end-to-end model learns a latent graph parser as part of the encoder of an attention-based neural machine translation model, and thus the parser is optimized according to the translation objective. In experiments, we first show that our model compares favorably with state-of-the-art sequential and pipelined syntax-based NMT models. We also show that the performance of our model can be further improved by pre-training it with a small amount of treebank annotations. Our final ensemble model significantly outperforms the previous best models on the standard English-to-Japanese translation dataset.",
}
% __index_level_0__: 57,500
@inproceedings{weng-etal-2017-neural,
    title = "Neural Machine Translation with Word Predictions",
    author = "Weng, Rongxiang and Huang, Shujian and Zheng, Zaixiang and Dai, Xinyu and Chen, Jiajun",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1013/",
    doi = "10.18653/v1/D17-1013",
    pages = "136--145",
    abstract = "In the encoder-decoder architecture for neural machine translation (NMT), the hidden states of the recurrent structures in the encoder and decoder carry the crucial information about the sentence. These vectors are generated by parameters which are updated by back-propagation of translation errors through time. We argue that propagating errors through the end-to-end recurrent structures is not a direct way of controlling the hidden vectors. In this paper, we propose to use word predictions as a mechanism for direct supervision. More specifically, we require these vectors to be able to predict the vocabulary in the target sentence. Our simple mechanism ensures better representations in the encoder and decoder without using any extra data or annotation. It is also helpful in reducing the target side vocabulary and improving the decoding efficiency. Experiments on the Chinese-English machine translation task show an average BLEU improvement of 4.53.",
}
% __index_level_0__: 57,501
@inproceedings{hoang-etal-2017-towards,
    title = "Towards Decoding as Continuous Optimisation in Neural Machine Translation",
    author = "Hoang, Cong Duy Vu and Haffari, Gholamreza and Cohn, Trevor",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1014/",
    doi = "10.18653/v1/D17-1014",
    pages = "146--156",
    abstract = "We propose a novel decoding approach for neural machine translation (NMT) based on continuous optimisation. We reformulate decoding, a discrete optimization problem, into a continuous problem, such that optimization can make use of efficient gradient-based techniques. Our powerful decoding framework allows for more accurate decoding for standard neural machine translation models, as well as enabling decoding in intractable models such as the intersection of several different NMT models. Our empirical results show that our decoding framework is effective, and can lead to substantial improvements in translations, especially in situations where greedy search and beam search are not feasible. Finally, we show how the technique is highly competitive with, and complementary to, reranking.",
}
% __index_level_0__: 57,502
@inproceedings{kitaev-klein-2017-misty,
    title = "Where is Misty? Interpreting Spatial Descriptors by Modeling Regions in Space",
    author = "Kitaev, Nikita and Klein, Dan",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1015/",
    doi = "10.18653/v1/D17-1015",
    pages = "157--166",
    abstract = "We present a model for locating regions in space based on natural language descriptions. Starting with a 3D scene and a sentence, our model is able to associate words in the sentence with regions in the scene, interpret relations such as {\textquoteleft}on top of' or {\textquoteleft}next to,' and finally locate the region described in the sentence. All components form a single neural network that is trained end-to-end without prior knowledge of object segmentation. To evaluate our model, we construct and release a new dataset consisting of Minecraft scenes with crowdsourced natural language descriptions. We achieve a 32{\%} relative error reduction compared to a strong neural baseline.",
}
% __index_level_0__: 57,503
@inproceedings{rahimi-etal-2017-continuous,
    title = "Continuous Representation of Location for Geolocation and Lexical Dialectology using Mixture Density Networks",
    author = "Rahimi, Afshin and Baldwin, Timothy and Cohn, Trevor",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1016/",
    doi = "10.18653/v1/D17-1016",
    pages = "167--176",
    abstract = "We propose a method for embedding two-dimensional locations in a continuous vector space using a neural network-based model incorporating mixtures of Gaussian distributions, presenting two model variants for text-based geolocation and lexical dialectology. Evaluated over Twitter data, the proposed model outperforms conventional regression-based geolocation and provides a better estimate of uncertainty. We also show the effectiveness of the representation for predicting words from location in lexical dialectology, and evaluate it using the DARE dataset.",
}
% __index_level_0__: 57,504
@inproceedings{yin-ordonez-2017-obj2text,
    title = "{O}bj2{T}ext: Generating Visually Descriptive Language from Object Layouts",
    author = "Yin, Xuwang and Ordonez, Vicente",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1017/",
    doi = "10.18653/v1/D17-1017",
    pages = "177--187",
    abstract = "Generating captions for images is a task that has recently received considerable attention. Another type of visual inputs are abstract scenes or object layouts where the only information provided is a set of objects and their locations. This type of imagery is commonly found in many applications in computer graphics, virtual reality, and storyboarding. We explore in this paper OBJ2TEXT, a sequence-to-sequence model that encodes a set of objects and their locations as an input sequence using an LSTM network, and decodes this representation using an LSTM language model. We show in our paper that this model, despite using a sequence encoder, can effectively represent complex spatial object-object relationships and produce descriptions that are globally coherent and semantically relevant. We test our approach for the task of describing object layouts in the MS-COCO dataset by producing sentences given only object annotations. We additionally show that our model combined with a state-of-the-art object detector can improve the accuracy of an image captioning model.",
}
% __index_level_0__: 57,505
@inproceedings{lee-etal-2017-end,
    title = "End-to-end Neural Coreference Resolution",
    author = "Lee, Kenton and He, Luheng and Lewis, Mike and Zettlemoyer, Luke",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1018/",
    doi = "10.18653/v1/D17-1018",
    pages = "188--197",
    abstract = "We introduce the first end-to-end coreference resolution model and show that it significantly outperforms all previous work without using a syntactic parser or hand-engineered mention detector. The key idea is to directly consider all spans in a document as potential mentions and learn distributions over possible antecedents for each. The model computes span embeddings that combine context-dependent boundary representations with a head-finding attention mechanism. It is trained to maximize the marginal likelihood of gold antecedent spans from coreference clusters and is factored to enable aggressive pruning of potential mentions. Experiments demonstrate state-of-the-art performance, with a gain of 1.5 F1 on the OntoNotes benchmark and 3.1 F1 using a 5-model ensemble, despite the fact that this is the first approach to be successfully trained with no external resources.",
}
% __index_level_0__: 57,506
@inproceedings{li-jurafsky-2017-neural,
    title = "Neural Net Models of Open-domain Discourse Coherence",
    author = "Li, Jiwei and Jurafsky, Dan",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1019/",
    doi = "10.18653/v1/D17-1019",
    pages = "198--209",
    abstract = "Discourse coherence is strongly associated with text quality, making it important to natural language generation and understanding. Yet existing models of coherence focus on measuring individual aspects of coherence (lexical overlap, rhetorical structure, entity centering) in narrow domains. In this paper, we describe domain-independent neural models of discourse coherence that are capable of measuring multiple aspects of coherence in existing sentences and can maintain coherence while generating new sentences. We study both discriminative models that learn to distinguish coherent from incoherent discourse, and generative models that produce coherent text, including a novel neural latent-variable Markovian generative model that captures the latent discourse dependencies between sentences in a text. Our work achieves state-of-the-art performance on multiple coherence evaluations, and marks an initial step in generating coherent texts given discourse contexts.",
}
% __index_level_0__: 57,507
@inproceedings{wang-etal-2017-affinity,
    title = "Affinity-Preserving Random Walk for Multi-Document Summarization",
    author = "Wang, Kexiang and Liu, Tianyu and Sui, Zhifang and Chang, Baobao",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1020/",
    doi = "10.18653/v1/D17-1020",
    pages = "210--220",
    abstract = "Multi-document summarization provides users with a short text that summarizes the information in a set of related documents. This paper introduces affinity-preserving random walk to the summarization task, which preserves the affinity relations of sentences by an absorbing random walk model. Meanwhile, we put forward adjustable affinity-preserving random walk to enforce the diversity constraint of summarization in the random walk process. The ROUGE evaluations on DUC 2003 topic-focused summarization task and DUC 2004 generic summarization task show the good performance of our method, which has the best ROUGE-2 recall among the graph-based ranking methods.",
}
% __index_level_0__: 57,508
@inproceedings{marasovic-etal-2017-mention,
    title = "A Mention-Ranking Model for Abstract Anaphora Resolution",
    author = "Marasovi{\'c}, Ana and Born, Leo and Opitz, Juri and Frank, Anette",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1021/",
    doi = "10.18653/v1/D17-1021",
    pages = "221--232",
    abstract = "Resolving abstract anaphora is an important, but difficult task for text understanding. Yet, with recent advances in representation learning this task becomes a more tangible aim. A central property of abstract anaphora is that it establishes a relation between the anaphor embedded in the anaphoric sentence and its (typically non-nominal) antecedent. We propose a mention-ranking model that learns how abstract anaphors relate to their antecedents with an LSTM-Siamese Net. We overcome the lack of training data by generating artificial anaphoric sentence{--}antecedent pairs. Our model outperforms state-of-the-art results on shell noun resolution. We also report first benchmark results on an abstract anaphora subset of the ARRAU corpus. This corpus presents a greater challenge due to a mixture of nominal and pronominal anaphors and a greater range of confounders. We found model variants that outperform the baselines for nominal anaphors, without training on individual anaphor data, but still lag behind for pronominal anaphors. Our model selects syntactically plausible candidates and {--} if disregarding syntax {--} discriminates candidates using deeper features.",
}
% __index_level_0__: 57,509
@inproceedings{nguyen-etal-2017-hierarchical,
    title = "Hierarchical Embeddings for Hypernymy Detection and Directionality",
    author = "Nguyen, Kim Anh and K{\"o}per, Maximilian and Schulte im Walde, Sabine and Vu, Ngoc Thang",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1022/",
    doi = "10.18653/v1/D17-1022",
    pages = "233--243",
    abstract = "We present a novel neural model HyperVec to learn hierarchical embeddings for hypernymy detection and directionality. While previous embeddings have shown limitations on prototypical hypernyms, HyperVec represents an unsupervised measure where embeddings are learned in a specific order and capture the hypernym{--}hyponym distributional hierarchy. Moreover, our model is able to generalize over unseen hypernymy pairs, when using only small sets of training data, and by mapping to other languages. Results on benchmark datasets show that HyperVec outperforms both state-of-the-art unsupervised measures and embedding models on hypernymy detection and directionality, and on predicting graded lexical entailment.",
}
% __index_level_0__: 57,510
@inproceedings{zhao-etal-2017-ngram2vec,
    title = "{N}gram2vec: Learning Improved Word Representations from Ngram Co-occurrence Statistics",
    author = "Zhao, Zhe and Liu, Tao and Li, Shen and Li, Bofang and Du, Xiaoyong",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1023/",
    doi = "10.18653/v1/D17-1023",
    pages = "244--253",
    abstract = "The existing word representation methods mostly limit their information source to word co-occurrence statistics. In this paper, we introduce ngrams into four representation methods: SGNS, GloVe, PPMI matrix, and its SVD factorization. Comprehensive experiments are conducted on word analogy and similarity tasks. The results show that improved word representations are learned from ngram co-occurrence statistics. We also demonstrate that the trained ngram representations are useful in many aspects such as finding antonyms and collocations. Besides, a novel approach of building co-occurrence matrix is proposed to alleviate the hardware burdens brought by ngrams.",
}
% __index_level_0__: 57,511
@inproceedings{tissier-etal-2017-dict2vec,
    title = "{D}ict2vec : Learning Word Embeddings using Lexical Dictionaries",
    author = "Tissier, Julien and Gravier, Christophe and Habrard, Amaury",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1024/",
    doi = "10.18653/v1/D17-1024",
    pages = "254--263",
    abstract = "Learning word embeddings on large unlabeled corpus has been shown to be successful in improving many natural language tasks. The most efficient and popular approaches learn or retrofit such representations using additional external data. Resulting embeddings are generally better than their corpus-only counterparts, although such resources cover a fraction of words in the vocabulary. In this paper, we propose a new approach, Dict2vec, based on one of the largest yet refined datasource for describing words {--} natural language dictionaries. Dict2vec builds new word pairs from dictionary entries so that semantically-related words are moved closer, and negative sampling filters out pairs whose words are unrelated in dictionaries. We evaluate the word representations obtained using Dict2vec on eleven datasets for the word similarity task and on four datasets for a text classification task.",
}
% __index_level_0__: 57,512
@inproceedings{su-lee-2017-learning,
    title = "Learning {C}hinese Word Representations From Glyphs Of Characters",
    author = "Su, Tzu-Ray and Lee, Hung-Yi",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1025/",
    doi = "10.18653/v1/D17-1025",
    pages = "264--273",
    abstract = "In this paper, we propose new methods to learn Chinese word representations. Chinese characters are composed of graphical components, which carry rich semantics. It is common for a Chinese learner to comprehend the meaning of a word from these graphical components. As a result, we propose models that enhance word representations by character glyphs. The character glyph features are directly learned from the bitmaps of characters by a convolutional auto-encoder (convAE), and the glyph features improve Chinese word representations which are already enhanced by character embeddings. Another contribution in this paper is that we created several evaluation datasets in traditional Chinese and made them public.",
}
% __index_level_0__: 57,513
@inproceedings{wieting-etal-2017-learning,
    title = "Learning Paraphrastic Sentence Embeddings from Back-Translated Bitext",
    author = "Wieting, John and Mallinson, Jonathan and Gimpel, Kevin",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1026/",
    doi = "10.18653/v1/D17-1026",
    pages = "274--285",
    abstract = "We consider the problem of learning general-purpose, paraphrastic sentence embeddings in the setting of Wieting et al. (2016b). We use neural machine translation to generate sentential paraphrases via back-translation of bilingual sentence pairs. We evaluate the paraphrase pairs by their ability to serve as training data for learning paraphrastic sentence embeddings. We find that the data quality is stronger than prior work based on bitext and on par with manually-written English paraphrase pairs, with the advantage that our approach can scale up to generate large training sets for many languages and domains. We experiment with several language pairs and data sources, and develop a variety of data filtering techniques. In the process, we explore how neural machine translation output differs from human-written sentences, finding clear differences in length, the amount of repetition, and the use of rare words.",
}
% __index_level_0__: 57,514
@inproceedings{yu-etal-2017-joint,
    title = "Joint Embeddings of {C}hinese Words, Characters, and Fine-grained Subcharacter Components",
    author = "Yu, Jinxing and Jian, Xun and Xin, Hao and Song, Yangqiu",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1027/",
    doi = "10.18653/v1/D17-1027",
    pages = "286--291",
    abstract = "Word embeddings have attracted much attention recently. Different from alphabetic writing systems, Chinese characters are often composed of subcharacter components which are also semantically informative. In this work, we propose an approach to jointly embed Chinese words as well as their characters and fine-grained subcharacter components. We use three likelihoods to evaluate whether the context words, characters, and components can predict the current target word, and collected 13,253 subcharacter components to demonstrate the existing approaches of decomposing Chinese characters are not enough. Evaluation on both word similarity and word analogy tasks demonstrates the superior performance of our model.",
}
% __index_level_0__: 57,515
@inproceedings{gupta-etal-2017-exploiting,
    title = "Exploiting Morphological Regularities in Distributional Word Representations",
    author = "Gupta, Arihant and Akhtar, Syed Sarfaraz and Vajpayee, Avijit and Srivastava, Arjit and Jhanwar, Madan Gopal and Shrivastava, Manish",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1028/",
    doi = "10.18653/v1/D17-1028",
    pages = "292--297",
    abstract = "We present an unsupervised, language agnostic approach for exploiting morphological regularities present in high dimensional vector spaces. We propose a novel method for generating embeddings of words from their morphological variants using morphological transformation operators. We evaluate this approach on MSR word analogy test set with an accuracy of 85{\%} which is 12{\%} higher than the previous best known system.",
}
% __index_level_0__: 57,516
@inproceedings{wang-etal-2017-exploiting,
    title = "Exploiting Word Internal Structures for Generic {C}hinese Sentence Representation",
    author = "Wang, Shaonan and Zhang, Jiajun and Zong, Chengqing",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1029/",
    doi = "10.18653/v1/D17-1029",
    pages = "298--303",
    abstract = "We introduce a novel mixed character-word architecture to improve Chinese sentence representations, by utilizing rich semantic information of word internal structures. Our architecture uses two key strategies. The first is a mask gate on characters, learning the relation among characters in a word. The second is a maxpooling operation on words, adaptively finding the optimal mixture of the atomic and compositional word representations. Finally, the proposed architecture is applied to various sentence composition models, which achieves substantial performance gains over baseline models on sentence similarity task.",
}
% __index_level_0__: 57,517
@inproceedings{herbelot-baroni-2017-high,
    title = "High-risk learning: acquiring new word vectors from tiny data",
    author = "Herbelot, Aur{\'e}lie and Baroni, Marco",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1030/",
    doi = "10.18653/v1/D17-1030",
    pages = "304--309",
    abstract = "Distributional semantics models are known to struggle with small data. It is generally accepted that in order to learn {\textquoteleft}a good vector' for a word, a model must have sufficient examples of its usage. This contradicts the fact that humans can guess the meaning of a word from a few occurrences only. In this paper, we show that a neural language model such as Word2Vec only necessitates minor modifications to its standard architecture to learn new terms from tiny data, using background knowledge from a previously learnt semantic space. We test our model on word definitions and on a nonce task involving 2-6 sentences' worth of context, showing a large increase in performance over state-of-the-art models on the definitional task.",
}
% __index_level_0__: 57,518
@inproceedings{sanu-etal-2017-word,
    title = "Word Embeddings based on Fixed-Size Ordinally Forgetting Encoding",
    author = "Sanu, Joseph and Xu, Mingbin and Jiang, Hui and Liu, Quan",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1031/",
    doi = "10.18653/v1/D17-1031",
    pages = "310--315",
    abstract = "In this paper, we propose to learn word embeddings based on the recent fixed-size ordinally forgetting encoding (FOFE) method, which can almost uniquely encode any variable-length sequence into a fixed-size representation. We use FOFE to fully encode the left and right context of each word in a corpus to construct a novel word-context matrix, which is further weighted and factorized using truncated SVD to generate low-dimension word embedding vectors. We evaluate this alternate method in encoding word-context statistics and show the new FOFE method has a notable effect on the resulting word embeddings. Experimental results on several popular word similarity tasks have demonstrated that the proposed method outperforms other SVD models that use canonical count based techniques to generate word context matrices.",
}
% __index_level_0__: 57,519
@inproceedings{fernandez-etal-2017-vecshare,
    title = "{V}ec{S}hare: A Framework for Sharing Word Representation Vectors",
    author = "Fernandez, Jared and Yu, Zhaocheng and Downey, Doug",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1032/",
    doi = "10.18653/v1/D17-1032",
    pages = "316--320",
    abstract = "Many Natural Language Processing (NLP) models rely on distributed vector representations of words. Because the process of training word vectors can require large amounts of data and computation, NLP researchers and practitioners often utilize pre-trained embeddings downloaded from the Web. However, finding the best embeddings for a given task is difficult, and can be computationally prohibitive. We present a framework, called VecShare, that makes it easy to share and retrieve word embeddings on the Web. The framework leverages a public data-sharing infrastructure to host embedding sets, and provides automated mechanisms for retrieving the embeddings most similar to a given corpus. We perform an experimental evaluation of VecShare's similarity strategies, and show that they are effective at efficiently retrieving embeddings that boost accuracy in a document classification task. Finally, we provide an open-source Python library for using the VecShare framework.",
}
% __index_level_0__: 57,520
@inproceedings{hasan-curry-2017-word,
    title = "Word Re-Embedding via Manifold Dimensionality Retention",
    author = "Hasan, Souleiman and Curry, Edward",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1033/",
    doi = "10.18653/v1/D17-1033",
    pages = "321--326",
    abstract = "Word embeddings seek to recover a Euclidean metric space by mapping words into vectors, starting from words co-occurrences in a corpus. Word embeddings may underestimate the similarity between nearby words, and overestimate it between distant words in the Euclidean metric space. In this paper, we re-embed pre-trained word embeddings with a stage of manifold learning which retains dimensionality. We show that this approach is theoretically founded in the metric recovery paradigm, and empirically show that it can improve on state-of-the-art embeddings in word similarity tasks by 0.5-5.0{\%} points depending on the original space.",
}
% __index_level_0__: 57,521
@inproceedings{lee-chen-2017-muse,
    title = "{MUSE}: Modularizing Unsupervised Sense Embeddings",
    author = "Lee, Guang-He and Chen, Yun-Nung",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1034/",
    doi = "10.18653/v1/D17-1034",
    pages = "327--337",
    abstract = "This paper proposes to address the word sense ambiguity issue in an unsupervised manner, where word sense representations are learned along a word sense selection mechanism given contexts. Prior work focused on designing a single model to deliver both mechanisms, and thus suffered from either coarse-grained representation learning or inefficient sense selection. The proposed modular approach, MUSE, implements flexible modules to optimize distinct mechanisms, achieving the first purely sense-level representation learning system with linear-time sense selection. We leverage reinforcement learning to enable joint training on the proposed modules, and introduce various exploration techniques on sense selection for better robustness. The experiments on benchmark data show that the proposed approach achieves the state-of-the-art performance on synonym selection as well as on contextual word similarities in terms of MaxSimC.",
}
% __index_level_0__: 57,522
@inproceedings{reimers-gurevych-2017-reporting,
    title = "Reporting Score Distributions Makes a Difference: Performance Study of {LSTM}-networks for Sequence Tagging",
    author = "Reimers, Nils and Gurevych, Iryna",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1035/",
    doi = "10.18653/v1/D17-1035",
    pages = "338--348",
    abstract = "In this paper we show that reporting a single performance score is insufficient to compare non-deterministic approaches. We demonstrate for common sequence tagging tasks that the seed value for the random number generator can result in statistically significant ($p < 10^{-4}$) differences for state-of-the-art systems. For two recent systems for NER, we observe an absolute difference of one percentage point F1-score depending on the selected seed value, making these systems perceived either as state-of-the-art or mediocre. Instead of publishing and reporting single performance scores, we propose to compare score distributions based on multiple executions. Based on the evaluation of 50.000 LSTM-networks for five sequence tagging tasks, we present network architectures that both produce superior performance and are more stable with respect to the remaining hyperparameters.",
}
% __index_level_0__: 57,523
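The practice this abstract advocates is simple to operationalise: report a distribution over seeded runs rather than a single score. A toy illustration of the idea (the `train_and_eval` function below is a hypothetical stand-in for a real seeded training run, and the mock scores are made up):

```python
# Compare score distributions over multiple seeds instead of one number.
import random
import statistics

def train_and_eval(seed: int) -> float:
    # Placeholder: returns a mock F1 score; substitute a real training run.
    random.seed(seed)
    return 0.90 + random.gauss(0, 0.005)

scores = [train_and_eval(seed) for seed in range(10)]
print(f"F1 over 10 seeds: mean={statistics.mean(scores):.4f}, "
      f"stdev={statistics.stdev(scores):.4f}")
```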
@inproceedings{martins-kreutzer-2017-learning,
    title = "Learning What's Easy: Fully Differentiable Neural Easy-First Taggers",
    author = "Martins, Andr{\'e} F. T. and Kreutzer, Julia",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1036/",
    doi = "10.18653/v1/D17-1036",
    pages = "349--362",
    abstract = "We introduce a novel neural easy-first decoder that learns to solve sequence tagging tasks in a flexible order. In contrast to previous easy-first decoders, our models are end-to-end differentiable. The decoder iteratively updates a {\textquotedblleft}sketch{\textquotedblright} of the predictions over the sequence. At its core is an attention mechanism that controls which parts of the input are strategically the best to process next. We present a new constrained softmax transformation that ensures the same cumulative attention to every word, and show how to efficiently evaluate and backpropagate over it. Our models compare favourably to BILSTM taggers on three sequence tagging tasks.",
}
% __index_level_0__: 57,524
@inproceedings{kaji-kobayashi-2017-incremental,
    title = "Incremental Skip-gram Model with Negative Sampling",
    author = "Kaji, Nobuhiro and Kobayashi, Hayato",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1037/",
    doi = "10.18653/v1/D17-1037",
    pages = "363--371",
    abstract = "This paper explores an incremental training strategy for the skip-gram model with negative sampling (SGNS) from both empirical and theoretical perspectives. Existing methods of neural word embeddings, including SGNS, are multi-pass algorithms and thus cannot perform incremental model update. To address this problem, we present a simple incremental extension of SGNS and provide a thorough theoretical analysis to demonstrate its validity. Empirical experiments demonstrated the correctness of the theoretical analysis as well as the practical usefulness of the incremental algorithm.",
}
% __index_level_0__: 57,525
@inproceedings{ruder-plank-2017-learning,
    title = "Learning to select data for transfer learning with {B}ayesian Optimization",
    author = "Ruder, Sebastian and Plank, Barbara",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1038/",
    doi = "10.18653/v1/D17-1038",
    pages = "372--382",
    abstract = "Domain similarity measures can be used to gauge adaptability and select suitable data for transfer learning, but existing approaches define ad hoc measures that are deemed suitable for respective tasks. Inspired by work on curriculum learning, we propose to learn data selection measures using Bayesian Optimization and evaluate them across models, domains and tasks. Our learned measures outperform existing domain similarity measures significantly on three tasks: sentiment analysis, part-of-speech tagging, and parsing. We show the importance of complementing similarity with diversity, and that learned measures are {--} to some degree {--} transferable across models, domains, and even tasks.",
}
% __index_level_0__: 57,526
@inproceedings{ramachandran-etal-2017-unsupervised,
    title = "Unsupervised Pretraining for Sequence to Sequence Learning",
    author = "Ramachandran, Prajit and Liu, Peter and Le, Quoc",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1039/",
    doi = "10.18653/v1/D17-1039",
    pages = "383--391",
    abstract = "This work presents a general unsupervised learning method to improve the accuracy of sequence to sequence (seq2seq) models. In our method, the weights of the encoder and decoder of a seq2seq model are initialized with the pretrained weights of two language models and then fine-tuned with labeled data. We apply this method to challenging benchmarks in machine translation and abstractive summarization and find that it significantly improves the subsequent supervised models. Our main result is that pretraining improves the generalization of seq2seq models. We achieve state-of-the-art results on the WMT English{\textrightarrow}German task, surpassing a range of methods using both phrase-based machine translation and neural machine translation. Our method achieves a significant improvement of 1.3 BLEU from the previous best models on both WMT'14 and WMT'15 English{\textrightarrow}German. We also conduct human evaluations on abstractive summarization and find that our method outperforms a purely supervised learning baseline in a statistically significant manner.",
}
% __index_level_0__: 57,527
@inproceedings{britz-etal-2017-efficient,
    title = "Efficient Attention using a Fixed-Size Memory Representation",
    author = "Britz, Denny and Guan, Melody and Luong, Minh-Thang",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1040/",
    doi = "10.18653/v1/D17-1040",
    pages = "392--400",
    abstract = "The standard content-based attention mechanism typically used in sequence-to-sequence models is computationally expensive as it requires the comparison of large encoder and decoder states at each time step. In this work, we propose an alternative attention mechanism based on a fixed size memory representation that is more efficient. Our technique predicts a compact set of K attention contexts during encoding and lets the decoder compute an efficient lookup that does not need to consult the memory. We show that our approach performs on-par with the standard attention mechanism while yielding inference speedups of 20{\%} for real-world translation tasks and more for tasks with longer sequences. By visualizing attention scores we demonstrate that our models learn distinct, meaningful alignments.",
}
% __index_level_0__: 57,528
@inproceedings{park-etal-2017-rotated,
    title = "Rotated Word Vector Representations and their Interpretability",
    author = "Park, Sungjoon and Bak, JinYeong and Oh, Alice",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1041/",
    doi = "10.18653/v1/D17-1041",
    pages = "401--411",
    abstract = "Vector representation of words improves performance in various NLP tasks, but the high dimensional word vectors are very difficult to interpret. We apply several rotation algorithms to the vector representation of words to improve the interpretability. Unlike previous approaches that induce sparsity, the rotated vectors are interpretable while preserving the expressive performance of the original vectors. Furthermore, any prebuilt word vector representation can be rotated for improved interpretability. We apply rotation to skipgrams and glove and compare the expressive power and interpretability with the original vectors and the sparse overcomplete vectors. The results show that the rotated vectors outperform the original and the sparse overcomplete vectors for interpretability and expressiveness tasks.",
}
% __index_level_0__: 57,529
@inproceedings{alvarez-melis-jaakkola-2017-causal,
    title = "A causal framework for explaining the predictions of black-box sequence-to-sequence models",
    author = "Alvarez-Melis, David and Jaakkola, Tommi",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1042/",
    doi = "10.18653/v1/D17-1042",
    pages = "412--421",
    abstract = "We interpret the predictions of any black-box structured input-structured output model around a specific input-output pair. Our method returns an {\textquotedblleft}explanation{\textquotedblright} consisting of groups of input-output tokens that are causally related. These dependencies are inferred by querying the model with perturbed inputs, generating a graph over tokens from the responses, and solving a partitioning problem to select the most relevant components. We focus the general approach on sequence-to-sequence problems, adopting a variational autoencoder to yield meaningful input perturbations. We test our method across several NLP sequence generation tasks.",
}
% __index_level_0__: 57,530
@inproceedings{lavergne-yvon-2017-learning,
    title = "Learning the Structure of Variable-Order {CRF}s: a finite-state perspective",
    author = "Lavergne, Thomas and Yvon, Fran{\c{c}}ois",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1044/",
    doi = "10.18653/v1/D17-1044",
    pages = "433--439",
    abstract = "The computational complexity of linear-chain Conditional Random Fields (CRFs) makes it difficult to deal with very large label sets and long range dependencies. Such situations are not rare and arise when dealing with morphologically rich languages or joint labelling tasks. We extend here recent proposals to consider variable order CRFs. Using an effective finite-state representation of variable-length dependencies, we propose new ways to perform feature selection at large scale and report experimental results where we outperform strong baselines on a tagging task.",
}
% __index_level_0__: 57,532
@inproceedings{aji-heafield-2017-sparse,
    title = "Sparse Communication for Distributed Gradient Descent",
    author = "Aji, Alham Fikri and Heafield, Kenneth",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1045/",
    doi = "10.18653/v1/D17-1045",
    pages = "440--445",
    abstract = "We make distributed stochastic gradient descent faster by exchanging sparse updates instead of dense updates. Gradient updates are positively skewed as most updates are near zero, so we map the 99{\%} smallest updates (by absolute value) to zero then exchange sparse matrices. This method can be combined with quantization to further improve the compression. We explore different configurations and apply them to neural machine translation and MNIST image classification tasks. Most configurations work on MNIST, whereas different configurations reduce convergence rate on the more complex translation task. Our experiments show that we can achieve up to 49{\%} speed up on MNIST and 22{\%} on NMT without damaging the final accuracy or BLEU.",
}
% __index_level_0__: 57,533
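The sparsification step this abstract describes (zeroing the 99{\%} smallest updates by absolute value before exchange) can be sketched in a few lines. This is an illustrative reading of the abstract, not the authors' code, and the 1{\%} keep ratio is the only parameter taken from it:

```python
# Keep only the largest 1% of gradient updates by magnitude, zeroing the rest,
# so the exchanged matrix is sparse.
import numpy as np

def sparsify(grad: np.ndarray, keep_ratio: float = 0.01) -> np.ndarray:
    flat = np.abs(grad).ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, -k)[-k]  # magnitude of the k-th largest update
    return np.where(np.abs(grad) >= threshold, grad, 0.0)
```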
@inproceedings{lu-etal-2017-adagrad,
    title = "Why {ADAGRAD} Fails for Online Topic Modeling",
    author = "Lu, You and Lund, Jeffrey and Boyd-Graber, Jordan",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1046/",
    doi = "10.18653/v1/D17-1046",
    pages = "446--451",
    abstract = "Online topic modeling, i.e., topic modeling with stochastic variational inference, is a powerful and efficient technique for analyzing large datasets, and ADAGRAD is a widely-used technique for tuning learning rates during online gradient optimization. However, these two techniques do not work well together. We show that this is because ADAGRAD uses accumulation of previous gradients as the learning rates' denominators. For online topic modeling, the magnitude of gradients is very large. It causes learning rates to shrink very quickly, so the parameters cannot fully converge until the training ends.",
}
% __index_level_0__: 57,534
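The failure mode this abstract points to follows directly from the ADAGRAD update, where the effective step size is the base rate divided by the root of the accumulated squared gradients; with large gradients the denominator grows quickly and the step size collapses. A toy numeric illustration (the gradient magnitudes below are made up):

```python
# ADAGRAD's effective learning rate: eta / sqrt(sum of squared gradients).
import math

eta = 0.1      # base learning rate
accum = 0.0    # running sum of squared gradients
for step, grad in enumerate([50.0, 45.0, 40.0, 38.0], start=1):  # large gradients
    accum += grad ** 2
    effective_lr = eta / math.sqrt(accum)
    print(f"step {step}: effective lr = {effective_lr:.5f}")  # shrinks fast
```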
@inproceedings{chen-etal-2017-recurrent,
    title = "Recurrent Attention Network on Memory for Aspect Sentiment Analysis",
    author = "Chen, Peng and Sun, Zhongqian and Bing, Lidong and Yang, Wei",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1047/",
    doi = "10.18653/v1/D17-1047",
    pages = "452--461",
    abstract = "We propose a novel framework based on neural networks to identify the sentiment of opinion targets in a comment/review. Our framework adopts multiple-attention mechanism to capture sentiment features separated by a long distance, so that it is more robust against irrelevant information. The results of multiple attentions are non-linearly combined with a recurrent neural network, which strengthens the expressive power of our model for handling more complications. The weighted-memory mechanism not only helps us avoid the labor-intensive feature engineering work, but also provides a tailor-made memory for different opinion targets of a sentence. We examine the merit of our model on four datasets: two are from SemEval2014, i.e. reviews of restaurants and laptops; a twitter dataset, for testing its performance on social media data; and a Chinese news comment dataset, for testing its language sensitivity. The experimental results show that our model consistently outperforms the state-of-the-art methods on different types of data.",
}
% __index_level_0__: 57,535
@inproceedings{long-etal-2017-cognition,
    title = "A Cognition Based Attention Model for Sentiment Analysis",
    author = "Long, Yunfei and Lu, Qin and Xiang, Rong and Li, Minglei and Huang, Chu-Ren",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1048/",
    doi = "10.18653/v1/D17-1048",
    pages = "462--471",
    abstract = "Attention models are proposed in sentiment analysis because some words are more important than others. However, most existing methods either use local context based text information or user preference information. In this work, we propose a novel attention model trained by cognition grounded eye-tracking data. A reading prediction model is first built using eye-tracking data as dependent data and other features in the context as independent data. The predicted reading time is then used to build a cognition based attention (CBA) layer for neural sentiment analysis. As a comprehensive model, we can capture attentions of words in sentences as well as sentences in documents. Different attention mechanisms can also be incorporated to capture other aspects of attentions. Evaluations show the CBA based method outperforms the state-of-the-art local context based attention methods significantly. This brings insight to how cognition grounded data can be brought into NLP tasks.",
}
% __index_level_0__: 57,536
@inproceedings{poddar-etal-2017-author,
    title = "Author-aware Aspect Topic Sentiment Model to Retrieve Supporting Opinions from Reviews",
    author = "Poddar, Lahari and Hsu, Wynne and Lee, Mong Li",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1049/",
    doi = "10.18653/v1/D17-1049",
    pages = "472--481",
    abstract = "User generated content about products and services in the form of reviews are often diverse and even contradictory. This makes it difficult for users to know if an opinion in a review is prevalent or biased. We study the problem of searching for supporting opinions in the context of reviews. We propose a framework called SURF, that first identifies opinions expressed in a review, and then finds similar opinions from other reviews. We design a novel probabilistic graphical model that captures opinions as a combination of aspect, topic and sentiment dimensions, takes into account the preferences of individual authors, as well as the quality of the entity under review, and encodes the flow of thoughts in a review by constraining the aspect distribution dynamically among successive review segments. We derive a similarity measure that considers both lexical and semantic similarity to find supporting opinions. Experiments on TripAdvisor hotel reviews and Yelp restaurant reviews show that our model outperforms existing methods for modeling opinions, and the proposed framework is effective in finding supporting opinions.",
}
% __index_level_0__: 57,537
@inproceedings{ghosh-veale-2017-magnets,
    title = "Magnets for Sarcasm: Making Sarcasm Detection Timely, Contextual and Very Personal",
    author = "Ghosh, Aniruddha and Veale, Tony",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1050/",
    doi = "10.18653/v1/D17-1050",
    pages = "482--491",
    abstract = "Sarcasm is a pervasive phenomenon in social media, permitting the concise communication of meaning, affect and attitude. Concision requires wit to produce and wit to understand, which demands from each party knowledge of norms, context and a speaker's mindset. Insight into a speaker's psychological profile at the time of production is a valuable source of context for sarcasm detection. Using a neural architecture, we show significant gains in detection accuracy when knowledge of the speaker's mood at the time of production can be inferred. Our focus is on sarcasm detection on Twitter, and we show that the mood exhibited by a speaker over tweets leading up to a new post is as useful a cue for sarcasm as the topical context of the post itself. The work opens the door to an empirical exploration not just of sarcasm in text but of the sarcastic state of mind.",
}
% __index_level_0__: 57,538
@inproceedings{morales-zhai-2017-identifying,
    title = "Identifying Humor in Reviews using Background Text Sources",
    author = "Morales, Alex and Zhai, Chengxiang",
    editor = "Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1051/",
    doi = "10.18653/v1/D17-1051",
    pages = "492--501",
    abstract = "We study the problem of automatically identifying humorous text from a new kind of text data, i.e., online reviews. We propose a generative language model, based on the theory of incongruity, to model humorous text, which allows us to leverage background text sources, such as Wikipedia entry descriptions, and enables construction of multiple features for identifying humorous reviews. Evaluation of these features using supervised learning for classifying reviews into humorous and non-humorous reviews shows that the features constructed based on the proposed generative model are much more effective than the major features proposed in the existing literature, allowing us to achieve almost 86{\%} accuracy. These humorous review predictions can also supply good indicators for identifying helpful reviews.",
}
% __index_level_0__: 57,539
inproceedings
wang-xia-2017-sentiment
Sentiment Lexicon Construction with Representation Learning Based on Hierarchical Sentiment Supervision
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1052/
Wang, Leyi and Xia, Rui
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
502--510
A sentiment lexicon is an important tool for identifying the sentiment polarity of words and texts. How to automatically construct sentiment lexicons has become a research topic in the field of sentiment analysis and opinion mining. Recently there have been some attempts to employ representation learning algorithms to construct a sentiment lexicon with sentiment-aware word embeddings. However, these methods were normally trained under document-level sentiment supervision only. In this paper, we develop a neural architecture that trains a sentiment-aware word embedding by integrating sentiment supervision at both the document and word levels, to enhance the quality of the word embedding as well as the sentiment lexicon. Experiments on the SemEval 2013-2016 datasets indicate that the sentiment lexicon generated by our approach achieves state-of-the-art performance in both supervised and unsupervised sentiment classification, in comparison with several strong sentiment lexicon construction methods.
null
null
10.18653/v1/D17-1052
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,540
inproceedings
xu-wan-2017-towards
Towards a Universal Sentiment Classifier in Multiple languages
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1053/
Xu, Kui and Wan, Xiaojun
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
511--520
Existing sentiment classifiers usually work for only one specific language, and different classification models are used in different languages. In this paper we aim to build a universal sentiment classifier with a single classification model that works across multiple languages. To achieve this goal, we propose to learn multilingual sentiment-aware word embeddings simultaneously, based only on labeled reviews in English and unlabeled parallel data available in a few language pairs. Parallel data is not required between English and every other language, because the sentiment information can be transferred into any language via pivot languages. We present evaluation results for our universal sentiment classifier in five languages, and the results are very promising even when parallel data between English and the target languages is not used. Furthermore, the universal single classifier is compared with several cross-language sentiment classifiers that rely on direct parallel data between the source and target languages, and the results show that its performance is very promising compared to that of the different cross-language classifiers in multiple target languages.
null
null
10.18653/v1/D17-1053
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,541
inproceedings
dou-2017-capturing
Capturing User and Product Information for Document Level Sentiment Analysis with Deep Memory Network
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1054/
Dou, Zi-Yi
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
521--526
Document-level sentiment classification is a fundamental problem which aims to predict a user's overall sentiment about a product in a document. Several methods have been proposed to tackle the problem, but most of them fail to consider the influence of the users who express the sentiment and the products which are evaluated. To address this issue, we propose a deep memory network for document-level sentiment classification which can capture user and product information at the same time. To prove the effectiveness of our algorithm, we conduct experiments on the IMDB and Yelp datasets, and the results indicate that our model achieves better performance than several existing methods.
null
null
10.18653/v1/D17-1054
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,542
inproceedings
yang-etal-2017-identifying
Identifying and Tracking Sentiments and Topics from Social Media Texts during Natural Disasters
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1055/
Yang, Min and Mei, Jincheng and Ji, Heng and Zhao, Wei and Zhao, Zhou and Chen, Xiaojun
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
527--533
We study the problem of identifying topics and sentiments and tracking their shifts in social media texts from different geographical regions during emergencies and disasters. We propose a location-based dynamic sentiment-topic model (LDST) which can jointly model topic, sentiment, time and geolocation information. The experimental results demonstrate that LDST performs very well at discovering topics and sentiments from social media and tracking their shifts in different geographical regions during emergencies and disasters. We will release the data and source code after this work is published.
null
null
10.18653/v1/D17-1055
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,543
inproceedings
yu-etal-2017-refining
Refining Word Embeddings for Sentiment Analysis
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1056/
Yu, Liang-Chih and Wang, Jin and Lai, K. Robert and Zhang, Xuejie
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
534--539
Word embeddings that can capture semantic and syntactic information from contexts have been extensively used for various natural language processing tasks. However, existing methods for learning context-based word embeddings typically fail to capture sufficient sentiment information. This may result in words with similar vector representations having an opposite sentiment polarity (e.g., good and bad), thus degrading sentiment analysis performance. Therefore, this study proposes a word vector refinement model that can be applied to any pre-trained word vectors (e.g., Word2vec and GloVe). The refinement model is based on adjusting the vector representations of words such that they can be closer to both semantically and sentimentally similar words and further away from sentimentally dissimilar words. Experimental results show that the proposed method can improve conventional word embeddings and outperform previously proposed sentiment embeddings for both binary and fine-grained classification on Stanford Sentiment Treebank (SST).
null
null
10.18653/v1/D17-1056
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,544
inproceedings
akhtar-etal-2017-multilayer
A Multilayer Perceptron based Ensemble Technique for Fine-grained Financial Sentiment Analysis
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1057/
Akhtar, Md Shad and Kumar, Abhishek and Ghosal, Deepanway and Ekbal, Asif and Bhattacharyya, Pushpak
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
540--546
In this paper, we propose a novel method for combining deep learning and classical feature-based models using a Multi-Layer Perceptron (MLP) network for financial sentiment analysis. We develop various deep learning models based on Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU). These are trained on top of pre-trained, autoencoder-based financial word embeddings and lexicon features. An ensemble is constructed by combining these deep learning models and a classical supervised model based on Support Vector Regression (SVR). We evaluate our proposed technique on the benchmark dataset of the SemEval-2017 shared task on financial sentiment analysis. The proposed model shows impressive results on two datasets, i.e., the microblogs and news headlines datasets. Comparisons show that our proposed model performs better than the existing state-of-the-art systems on the above two datasets by 2.0 and 4.1 cosine points, respectively.
null
null
10.18653/v1/D17-1057
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,545
inproceedings
sharma-etal-2017-sentiment
Sentiment Intensity Ranking among Adjectives Using Sentiment Bearing Word Embeddings
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1058/
Sharma, Raksha and Somani, Arpan and Kumar, Lakshya and Bhattacharyya, Pushpak
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
547--552
Identification of intensity ordering among polar (positive or negative) words which have the same semantics can lead to fine-grained sentiment analysis. For example, {\textquoteleft}master', {\textquoteleft}seasoned' and {\textquoteleft}familiar' point to different intensity levels, though they all convey the same meaning (semantics), i.e., expertise: having a good knowledge of something. In this paper, we propose a semi-supervised technique that uses sentiment-bearing word embeddings to produce a continuous ranking among adjectives that share common semantics. Our system demonstrates a strong Spearman's rank correlation of 0.83 with the gold standard ranking. We show that sentiment-bearing word embeddings facilitate a more accurate intensity ranking system than other standard word embeddings (word2vec and GloVe). Word2vec is the state of the art for the intensity ordering task.
null
null
10.18653/v1/D17-1058
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,546
inproceedings
wang-etal-2017-sentiment
Sentiment Lexicon Expansion Based on Neural {PU} Learning, Double Dictionary Lookup, and Polarity Association
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1059/
Wang, Yasheng and Zhang, Yang and Liu, Bing
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
553--563
Although many sentiment lexicons exist in different languages, most are not comprehensive. In a recent sentiment analysis application, we used a large Chinese sentiment lexicon and found that it missed a large number of sentiment words used in social media. This prompted us to make a new attempt to study sentiment lexicon expansion. This paper first poses the problem as a PU learning problem, which is a new formulation. It then proposes a new PU learning method suitable for our problem using a neural network. The results are further enhanced with a new dictionary-based technique and a novel polarity classification technique. Experimental results show that the proposed approach greatly outperforms baseline methods.
null
null
10.18653/v1/D17-1059
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,547
inproceedings
xiong-etal-2017-deeppath
{D}eep{P}ath: A Reinforcement Learning Method for Knowledge Graph Reasoning
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1060/
Xiong, Wenhan and Hoang, Thien and Wang, William Yang
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
564--573
We study the problem of learning to reason in large scale knowledge graphs (KGs). More specifically, we describe a novel reinforcement learning framework for learning multi-hop relational paths: we use a policy-based agent with continuous states based on knowledge graph embeddings, which reasons in a KG vector-space by sampling the most promising relation to extend its path. In contrast to prior work, our approach includes a reward function that takes the accuracy, diversity, and efficiency into consideration. Experimentally, we show that our proposed method outperforms a path-ranking based algorithm and knowledge graph embedding methods on Freebase and Never-Ending Language Learning datasets.
null
null
10.18653/v1/D17-1060
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,548
inproceedings
nogueira-cho-2017-task
Task-Oriented Query Reformulation with Reinforcement Learning
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1061/
Nogueira, Rodrigo and Cho, Kyunghyun
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
574--583
Search engines play an important role in our everyday lives by assisting us in finding the information we need. When we input a complex query, however, results are often far from satisfactory. In this work, we introduce a query reformulation system based on a neural network that rewrites a query to maximize the number of relevant documents returned. We train this neural network with reinforcement learning. The actions correspond to selecting terms to build a reformulated query, and the reward is the document recall. We evaluate our approach on three datasets against strong baselines and show a relative improvement of 5-20{\%} in terms of recall. Furthermore, we present a simple method to estimate a conservative upper-bound performance of a model in a particular environment and verify that there is still large room for improvements.
null
null
10.18653/v1/D17-1061
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,549
inproceedings
zhang-lapata-2017-sentence
Sentence Simplification with Deep Reinforcement Learning
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1062/
Zhang, Xingxing and Lapata, Mirella
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
584--594
Sentence simplification aims to make sentences easier to read and understand. Most recent approaches draw on insights from machine translation to learn simplification rewrites from monolingual corpora of complex and simple sentences. We address the simplification problem with an encoder-decoder model coupled with a deep reinforcement learning framework. Our model, which we call DRESS (as shorthand for \textbf{D}eep \textbf{RE}inforcement \textbf{S}entence \textbf{S}implification), explores the space of possible simplifications while learning to optimize a reward function that encourages outputs which are simple, fluent, and preserve the meaning of the input. Experiments on three datasets demonstrate that our model outperforms competitive simplification systems.
null
null
10.18653/v1/D17-1062
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,550
inproceedings
fang-etal-2017-learning
Learning how to Active Learn: A Deep Reinforcement Learning Approach
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1063/
Fang, Meng and Li, Yuan and Cohn, Trevor
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
595--605
Active learning aims to select a small subset of data for annotation such that a classifier learned on the data is highly accurate. This is usually done using heuristic selection methods; however, the effectiveness of such methods is limited, and moreover, the performance of heuristics varies between datasets. To address these shortcomings, we introduce a novel formulation that reframes active learning as a reinforcement learning problem and explicitly learns a data selection policy, where the policy takes the role of the active learning heuristic. Importantly, our method allows the selection policy learned by simulation on one language to be transferred to other languages. We demonstrate our method on cross-lingual named entity recognition, observing uniform improvements over traditional active learning algorithms.
null
null
10.18653/v1/D17-1063
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,551
inproceedings
narayan-etal-2017-split
Split and Rephrase
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1064/
Narayan, Shashi and Gardent, Claire and Cohen, Shay B. and Shimorina, Anastasia
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
606--616
We propose a new sentence simplification task (Split-and-Rephrase) where the aim is to split a complex sentence into a meaning-preserving sequence of shorter sentences. Like sentence simplification, splitting-and-rephrasing has the potential to benefit both natural language processing and societal applications. Because shorter sentences are generally better processed by NLP systems, it could be used as a preprocessing step which facilitates and improves the performance of parsers, semantic role labellers and machine translation systems. It should also be of use for people with reading disabilities because it allows the conversion of longer sentences into shorter ones. This paper makes two contributions towards this new task. First, we create and make available a benchmark consisting of 1,066,115 tuples mapping a single complex sentence to a sequence of sentences expressing the same meaning. Second, we propose five models (from vanilla sequence-to-sequence to semantically motivated models) to understand the difficulty of the proposed task.
null
null
10.18653/v1/D17-1064
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,552
inproceedings
xu-etal-2017-neural
Neural Response Generation via {GAN} with an Approximate Embedding Layer
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1065/
Xu, Zhen and Liu, Bingquan and Wang, Baoxun and Sun, Chengjie and Wang, Xiaolong and Wang, Zhuoran and Qi, Chao
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
617--626
This paper presents a Generative Adversarial Network (GAN) to model single-turn short-text conversations, which trains a sequence-to-sequence (Seq2Seq) network for response generation simultaneously with a discriminative classifier that measures the differences between human-produced responses and machine-generated ones. In addition, the proposed method introduces an approximate embedding layer to solve the non-differentiable problem caused by the sampling-based output decoding procedure in the Seq2Seq generative model. The GAN setup provides an effective way to avoid noninformative responses (a.k.a. {\textquotedblleft}safe responses{\textquotedblright}), which are frequently observed in traditional neural response generators. The experimental results show that the proposed approach significantly outperforms existing neural response generation models in diversity metrics, with slight increases in relevance scores as well, when evaluated on both a Mandarin corpus and an English corpus.
null
null
10.18653/v1/D17-1065
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,553
inproceedings
semeniuta-etal-2017-hybrid
A Hybrid Convolutional Variational Autoencoder for Text Generation
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1066/
Semeniuta, Stanislau and Severyn, Aliaksei and Barth, Erhardt
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
627--637
In this paper we explore the effect of architectural choices on learning a variational autoencoder (VAE) for text generation. In contrast to the previously introduced VAE model for text, where both the encoder and decoder are RNNs, we propose a novel hybrid architecture that blends fully feed-forward convolutional and deconvolutional components with a recurrent language model. Our architecture exhibits several attractive properties, such as faster run time and convergence and the ability to better handle long sequences; more importantly, it helps to avoid the issue of the VAE collapsing to a deterministic model.
null
null
10.18653/v1/D17-1066
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,554
inproceedings
hossain-etal-2017-filling
Filling the Blanks (hint: plural noun) for Mad {L}ibs Humor
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1067/
Hossain, Nabil and Krumm, John and Vanderwende, Lucy and Horvitz, Eric and Kautz, Henry
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
638--647
Computerized generation of humor is a notoriously difficult AI problem. We develop an algorithm called Libitum that helps humans generate humor in a Mad Lib, which is a popular fill-in-the-blank game. The algorithm is based on a machine learned classifier that determines whether a potential fill-in word is funny in the context of the Mad Lib story. We use Amazon Mechanical Turk to create ground truth data and to judge humor for our classifier to mimic, and we make this data freely available. Our testing shows that Libitum successfully aids humans in filling in Mad Libs that are usually judged funnier than those filled in by humans with no computerized help. We go on to analyze why some words are better than others at making a Mad Lib funny.
null
null
10.18653/v1/D17-1067
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,555
inproceedings
santus-etal-2017-measuring
Measuring Thematic Fit with Distributional Feature Overlap
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1068/
Santus, Enrico and Chersoni, Emmanuele and Lenci, Alessandro and Blache, Philippe
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
648--658
In this paper, we introduce a new distributional method for modeling predicate-argument thematic fit judgments. We use a syntax-based DSM to build a prototypical representation of verb-specific roles: for every verb, we extract the most salient second order contexts for each of its roles (i.e. the most salient dimensions of typical role fillers), and then we compute thematic fit as a weighted overlap between the top features of candidate fillers and role prototypes. Our experiments show that our method consistently outperforms a baseline re-implementing a state-of-the-art system, and achieves better or comparable results to those reported in the literature for the other unsupervised systems. Moreover, it provides an explicit representation of the features characterizing verb-specific semantic roles.
null
null
10.18653/v1/D17-1068
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,556
inproceedings
mekala-etal-2017-scdv
{SCDV} : Sparse Composite Document Vectors using soft clustering over distributional representations
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1069/
Mekala, Dheeraj and Gupta, Vivek and Paranjape, Bhargavi and Karnick, Harish
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
659--669
We present a feature vector formation technique for documents - Sparse Composite Document Vector (SCDV) - which overcomes several shortcomings of the current distributional paragraph vector representations that are widely used for text representation. In SCDV, word embeddings are clustered to capture multiple semantic contexts in which words occur. They are then chained together to form document topic-vectors that can express complex, multi-topic documents. Through extensive experiments on multi-class and multi-label classification tasks, we outperform the previous state-of-the-art method, NTSG. We also show that SCDV embeddings perform well on heterogeneous tasks like topic coherence, context-sensitive learning and information retrieval. Moreover, we achieve a significant reduction in training and prediction times compared to other representation methods. SCDV achieves the best of both worlds - better performance with lower time and space complexity.
null
null
10.18653/v1/D17-1069
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,557
inproceedings
conneau-etal-2017-supervised
Supervised Learning of Universal Sentence Representations from Natural Language Inference Data
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1070/
Conneau, Alexis and Kiela, Douwe and Schwenk, Holger and Barrault, Lo{\"ic and Bordes, Antoine
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
670--680
Many modern NLP systems rely on word embeddings, previously trained in an unsupervised manner on large corpora, as base features. Efforts to obtain embeddings for larger chunks of text, such as sentences, have however not been as successful. Several attempts at learning unsupervised representations of sentences have not reached performance satisfactory enough to be widely adopted. In this paper, we show how universal sentence representations trained using the supervised data of the Stanford Natural Language Inference datasets can consistently outperform unsupervised methods like SkipThought vectors on a wide range of transfer tasks. Much like how computer vision uses ImageNet to obtain features which can then be transferred to other tasks, our work indicates the suitability of natural language inference for transfer learning to other NLP tasks. Our encoder is publicly available.
null
null
10.18653/v1/D17-1070
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,558
inproceedings
yanaka-etal-2017-determining
Determining Semantic Textual Similarity using Natural Deduction Proofs
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1071/
Yanaka, Hitomi and Mineshima, Koji and Mart{\'i}nez-G{\'o}mez, Pascual and Bekki, Daisuke
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
681--691
Determining semantic textual similarity is a core research subject in natural language processing. Since vector-based models for sentence representation often use shallow information, capturing accurate semantics is difficult. By contrast, logical semantic representations capture deeper levels of sentence semantics, but their symbolic nature does not offer graded notions of textual similarity. We propose a method for determining semantic textual similarity by combining shallow features with features extracted from natural deduction proofs of bidirectional entailment relations between sentence pairs. For the natural deduction proofs, we use ccg2lambda, a higher-order automatic inference system, which converts Combinatory Categorial Grammar (CCG) derivation trees into semantic representations and conducts natural deduction proofs. Experiments show that our system was able to outperform other logic-based systems and that features derived from the proofs are effective for learning textual similarity.
null
null
10.18653/v1/D17-1071
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,559
inproceedings
gong-etal-2017-multi
Multi-Grained {C}hinese Word Segmentation
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1072/
Gong, Chen and Li, Zhenghua and Zhang, Min and Jiang, Xinzhou
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
692--703
Traditionally, word segmentation (WS) adopts the single-grained formalism, where a sentence corresponds to a single word sequence. However, Sproat et al. (1997) show that the inter-native-speaker consistency ratio over Chinese word boundaries is only 76{\%}, indicating single-grained WS (SWS) imposes unnecessary challenges on both manual annotation and statistical modeling. Moreover, WS results of different granularities can be complementary and beneficial for high-level applications. This work proposes and addresses multi-grained WS (MWS). We build a large-scale pseudo MWS dataset for model training and tuning by leveraging the annotation heterogeneity of three SWS datasets. Then we manually annotate 1,500 test sentences with true MWS annotations. Finally, we propose three benchmark approaches by casting MWS as constituent parsing and sequence labeling. Experiments and analysis lead to many interesting findings.
null
null
10.18653/v1/D17-1072
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,560
inproceedings
zalmout-habash-2017-dont
Don't Throw Those Morphological Analyzers Away Just Yet: Neural Morphological Disambiguation for {A}rabic
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1073/
Zalmout, Nasser and Habash, Nizar
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
704--713
This paper presents a model for Arabic morphological disambiguation based on Recurrent Neural Networks (RNN). We train Long Short-Term Memory (LSTM) cells in several configurations and embedding levels to model the various morphological features. Our experiments show that these models outperform state-of-the-art systems without explicit use of feature engineering. However, adding features from a morphological analyzer to model the space of possible analyses provides additional improvement. We make use of the resulting morphological models for scoring and ranking the analyses of the morphological analyzer for morphological disambiguation. The results show significant gains in accuracy across several evaluation metrics. Our system results in a 4.4{\%} absolute increase over the state-of-the-art in full morphological analysis accuracy (30.6{\%} relative error reduction), and 10.6{\%} (31.5{\%} relative error reduction) for out-of-vocabulary words.
null
null
10.18653/v1/D17-1073
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,561
inproceedings
cotterell-etal-2017-paradigm
Paradigm Completion for Derivational Morphology
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1074/
Cotterell, Ryan and Vylomova, Ekaterina and Khayrallah, Huda and Kirov, Christo and Yarowsky, David
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
714--720
The generation of complex derived word forms has been an overlooked problem in NLP; we fill this gap by applying neural sequence-to-sequence models to the task. We overview the theoretical motivation for a paradigmatic treatment of derivational morphology, and introduce the task of derivational paradigm completion as a parallel to inflectional paradigm completion. State-of-the-art neural models adapted from the inflection task are able to learn the range of derivation patterns, and outperform a non-neural baseline by 16.4{\%}. However, due to semantic, historical, and lexical considerations involved in derivational morphology, future work will be needed to achieve performance parity with inflection-generating systems.
null
null
10.18653/v1/D17-1074
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,562
inproceedings
stratos-2017-sub
A Sub-Character Architecture for {K}orean Language Processing
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1075/
Stratos, Karl
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
721--726
We introduce a novel sub-character architecture that exploits a unique compositional structure of the Korean language. Our method decomposes each character into a small set of primitive phonetic units called jamo letters from which character- and word-level representations are induced. The jamo letters divulge syntactic and semantic information that is difficult to access with conventional character-level units. They greatly alleviate the data sparsity problem, reducing the observation space to 1.6{\%} of the original while increasing accuracy in our experiments. We apply our architecture to dependency parsing and achieve dramatic improvement over strong lexical baselines.
null
null
10.18653/v1/D17-1075
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,563
inproceedings
horsmann-zesch-2017-lstms
Do {LSTM}s really work so well for {P}o{S} tagging? {--} A replication study
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1076/
Horsmann, Tobias and Zesch, Torsten
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
727--736
A recent study by Plank et al. (2016) found that LSTM-based PoS taggers considerably improve over the current state-of-the-art when evaluated on the corpora of the Universal Dependencies project that use a coarse-grained tagset. We replicate this study using a fresh collection of 27 corpora of 21 languages that are annotated with fine-grained tagsets of varying size. Our replication confirms the result in general, and we additionally find that the advantage of LSTMs is even bigger for larger tagsets. However, we also find that for the very large tagsets of morphologically rich languages, hand-crafted morphological lexicons are still necessary to reach state-of-the-art performance.
null
null
10.18653/v1/D17-1076
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,564
inproceedings
mcconnaughey-etal-2017-labeled
The Labeled Segmentation of Printed Books
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1077/
McConnaughey, Lara and Dai, Jennifer and Bamman, David
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
737--747
We introduce the task of book structure labeling: segmenting and assigning a fixed category (such as Table of Contents, Preface, Index) to the document structure of printed books. We manually annotate the page-level structural categories for a large dataset totaling 294,816 pages in 1,055 books evenly sampled from 1750-1922, and present empirical results comparing the performance of several classes of models. The best-performing model, a bidirectional LSTM with rich features, achieves an overall accuracy of 95.8 and a class-balanced macro F-score of 71.4.
null
null
10.18653/v1/D17-1077
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,565
inproceedings
cotterell-heigold-2017-cross
Cross-lingual Character-Level Neural Morphological Tagging
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1078/
Cotterell, Ryan and Heigold, Georg
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
748--759
Even for common NLP tasks, sufficient supervision is not available in many languages {--} morphological tagging is no exception. In the work presented here, we explore a transfer learning scheme, whereby we train character-level recurrent neural taggers to predict morphological taggings for high-resource languages and low-resource languages together. Learning joint character representations among multiple related languages successfully enables knowledge transfer from the high-resource languages to the low-resource ones.
null
null
10.18653/v1/D17-1078
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,566
inproceedings
oshikiri-2017-segmentation
Segmentation-Free Word Embedding for Unsegmented Languages
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1080/
Oshikiri, Takamasa
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
767--772
In this paper, we propose a new pipeline of word embedding for unsegmented languages, called segmentation-free word embedding, which does not require word segmentation as a preprocessing step. Unlike space-delimited languages, unsegmented languages, such as Chinese and Japanese, require word segmentation as a preprocessing step. However, word segmentation, which often requires manually annotated resources, is difficult and expensive, and unavoidable errors in word segmentation affect downstream tasks. To avoid these problems in learning word vectors for unsegmented languages, we consider word co-occurrence statistics over all possible candidate segmentations based on frequent character n-grams, instead of segmented sentences provided by conventional word segmenters. Our experiments on noun category prediction tasks over raw Twitter, Weibo, and Wikipedia corpora show that the proposed method outperforms conventional approaches that require word segmenters.
null
null
10.18653/v1/D17-1080
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,568
inproceedings
sachan-etal-2017-textbooks
From Textbooks to Knowledge: A Case Study in Harvesting Axiomatic Knowledge from Textbooks to Solve Geometry Problems
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1081/
Sachan, Mrinmaya and Dubey, Kumar and Xing, Eric
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
773--784
Textbooks are rich sources of information. Harvesting structured knowledge from textbooks is a key challenge in many educational applications. As a case study, we present an approach for harvesting structured axiomatic knowledge from math textbooks. Our approach uses rich contextual and typographical features extracted from raw textbooks. It leverages the redundancy and shared ordering across multiple textbooks to further refine the harvested axioms. These axioms are then parsed into rules that are used to improve the state-of-the-art in solving geometry problems.
null
null
10.18653/v1/D17-1081
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,569
inproceedings
lai-etal-2017-race
{RACE}: Large-scale {R}e{A}ding Comprehension Dataset From Examinations
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1082/
Lai, Guokun and Xie, Qizhe and Liu, Hanxiao and Yang, Yiming and Hovy, Eduard
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
785--794
We present RACE, a new dataset for benchmark evaluation of methods for the reading comprehension task. Collected from the English exams for middle and high school Chinese students between 12 and 18 years of age, RACE consists of nearly 28,000 passages and nearly 100,000 questions generated by human experts (English instructors), and covers a variety of topics carefully designed for evaluating students' ability in understanding and reasoning. In particular, the proportion of questions that require reasoning is much larger in RACE than in other benchmark datasets for reading comprehension, and there is a significant gap between the performance of state-of-the-art models (43{\%}) and the ceiling human performance (95{\%}). We hope this new dataset can serve as a valuable resource for research and evaluation in machine comprehension. The dataset is freely available at \url{http://www.cs.cmu.edu/~glai1/data/race/} and the code is available at \url{https://github.com/qizhex/RACE_AR_baselines}.
null
null
10.18653/v1/D17-1082
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,570
inproceedings
hopkins-etal-2017-beyond
Beyond Sentential Semantic Parsing: Tackling the Math {SAT} with a Cascade of Tree Transducers
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1083/
Hopkins, Mark and Petrescu-Prahova, Cristian and Levin, Roie and Le Bras, Ronan and Herrasti, Alvaro and Joshi, Vidur
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
795--804
We present an approach for answering questions that span multiple sentences and exhibit sophisticated cross-sentence anaphoric phenomena, evaluating on a rich source of such questions {--} the math portion of the Scholastic Aptitude Test (SAT). By using a tree transducer cascade as its basic architecture, our system propagates uncertainty from multiple sources (e.g. coreference resolution or verb interpretation) until it can be confidently resolved. Experiments show the first-ever results (43{\%} recall and 91{\%} precision) on SAT algebra word problems. We also apply our system to the public Dolphin algebra question set, and improve the state-of-the-art F1-score from 73.9{\%} to 77.0{\%}.
null
null
10.18653/v1/D17-1083
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,571
inproceedings
huang-etal-2017-learning
Learning Fine-Grained Expressions to Solve Math Word Problems
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1084/
Huang, Danqing and Shi, Shuming and Lin, Chin-Yew and Yin, Jian
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
805--814
This paper presents a novel template-based method to solve math word problems. This method learns the mappings between math concept phrases in math word problems and their math expressions from training data. For each equation template, we automatically construct a rich template sketch by aggregating information from various problems with the same template. Our approach is implemented in a two-stage system. It first retrieves a few relevant equation system templates and aligns numbers in math word problems to those templates for candidate equation generation. It then does a fine-grained inference to obtain the final answer. Experiment results show that our method achieves an accuracy of 28.4{\%} on the linear Dolphin18K benchmark, which is 10{\%} (54{\%} relative) higher than previous state-of-the-art systems while achieving an accuracy increase of 12{\%} (59{\%} relative) on the TS6 benchmark subset.
null
null
10.18653/v1/D17-1084
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,572
inproceedings
liu-etal-2017-structural
Structural Embedding of Syntactic Trees for Machine Comprehension
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1085/
Liu, Rui and Hu, Junjie and Wei, Wei and Yang, Zi and Nyberg, Eric
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
815--824
Deep neural networks for machine comprehension typically utilize only word or character embeddings, without explicitly taking advantage of structured linguistic information such as constituency trees and dependency trees. In this paper, we propose structural embedding of syntactic trees (SEST), an algorithmic framework that utilizes structured information and encodes it into vector representations that can boost the performance of algorithms for machine comprehension. We evaluate our approach using a state-of-the-art neural attention model on the SQuAD dataset. Experimental results demonstrate that our model can accurately identify the syntactic boundaries of sentences and extract answers that are more syntactically coherent than those of the baseline methods.
null
null
10.18653/v1/D17-1085
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,573
inproceedings
long-etal-2017-world
World Knowledge for Reading Comprehension: Rare Entity Prediction with Hierarchical {LSTM}s Using External Descriptions
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1086/
Long, Teng and Bengio, Emmanuel and Lowe, Ryan and Cheung, Jackie Chi Kit and Precup, Doina
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
825--834
Humans interpret texts with respect to some background information, or world knowledge, and we would like to develop automatic reading comprehension systems that can do the same. In this paper, we introduce a task and several models to drive progress towards this goal. In particular, we propose the task of rare entity prediction: given a web document with several entities removed, models are tasked with predicting the correct missing entities conditioned on the document context and the lexical resources. This task is challenging due to the diversity of language styles and the extremely large number of rare entities. We propose two recurrent neural network architectures which make use of external knowledge in the form of entity descriptions. Our experiments show that our hierarchical LSTM model performs significantly better at the rare entity prediction task than those that do not make use of external resources.
null
null
10.18653/v1/D17-1086
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,574
inproceedings
golub-etal-2017-two
Two-Stage Synthesis Networks for Transfer Learning in Machine Comprehension
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1087/
Golub, David and Huang, Po-Sen and He, Xiaodong and Deng, Li
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
835--844
We develop a technique for transfer learning in machine comprehension (MC) using a novel two-stage synthesis network. Given a high performing MC model in one domain, our technique aims to answer questions about documents in another domain, where we use no labeled data of question-answer pairs. Using the proposed synthesis network with a pretrained model on the SQuAD dataset, we achieve an F1 measure of 46.6{\%} on the challenging NewsQA dataset, approaching performance of in-domain models (F1 measure of 50.0{\%}) and outperforming the out-of-domain baseline by 7.6{\%}, without use of provided annotations.
null
null
10.18653/v1/D17-1087
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,575
inproceedings
wang-etal-2017-deep
Deep Neural Solver for Math Word Problems
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1088/
Wang, Yan and Liu, Xiaojiang and Shi, Shuming
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
845--854
This paper presents a deep neural solver to automatically solve math word problems. In contrast to previous statistical learning approaches, we directly translate math word problems to equation templates using a recurrent neural network (RNN) model, without sophisticated feature engineering. We further design a hybrid model that combines the RNN model and a similarity-based retrieval model to achieve additional performance improvement. Experiments conducted on a large dataset show that the RNN model and the hybrid model significantly outperform state-of-the-art statistical learning methods for math word problem solving.
null
null
10.18653/v1/D17-1088
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,576
inproceedings
p-etal-2017-latent
Latent Space Embedding for Retrieval in Question-Answer Archives
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1089/
P, Deepak and Garg, Dinesh and Shevade, Shirish
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
855--865
Community-driven Question Answering (CQA) systems such as Yahoo! Answers have become valuable sources of reusable information. CQA retrieval enables the use of historical CQA archives to solve new questions posed by users. This task has received much recent attention, with methods building upon literature from translation models, topic models, and deep learning. In this paper, we devise a CQA retrieval technique, LASER-QA, which embeds question-answer pairs within a unified latent space preserving the local neighborhood structure of the question and answer spaces. The idea is that such a space mirrors semantic similarity among questions as well as answers, thereby enabling high quality retrieval. Through an empirical analysis on various real-world QA datasets, we illustrate the improved effectiveness of LASER-QA over state-of-the-art methods.
null
null
10.18653/v1/D17-1089
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,577
inproceedings
duan-etal-2017-question
Question Generation for Question Answering
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1090/
Duan, Nan and Tang, Duyu and Chen, Peng and Zhou, Ming
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
866--874
This paper presents how to generate questions from given passages using neural networks, where large-scale QA pairs are automatically crawled and processed from a community QA website and used as training data. The contribution of the paper is twofold: First, two types of question generation approaches are proposed, one a retrieval-based method using a convolutional neural network (CNN), the other a generation-based method using a recurrent neural network (RNN); Second, we show how to leverage the generated questions to improve existing question answering systems. We evaluate our question generation method on the answer sentence selection task on three benchmark datasets, including SQuAD, MS MARCO, and WikiQA. Experimental results show that, by using generated questions as an extra signal, significant QA improvement can be achieved.
null
null
10.18653/v1/D17-1090
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,578
inproceedings
dong-etal-2017-learning
Learning to Paraphrase for Question Answering
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1091/
Dong, Li and Mallinson, Jonathan and Reddy, Siva and Lapata, Mirella
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
875--886
Question answering (QA) systems are sensitive to the many different ways natural language expresses the same information need. In this paper we turn to paraphrases as a means of capturing this knowledge and present a general framework which learns felicitous paraphrases for various QA tasks. Our method is trained end-to-end using question-answer pairs as a supervision signal. A question and its paraphrases serve as input to a neural scoring model which assigns higher weights to linguistic expressions most likely to yield correct answers. We evaluate our approach on QA over Freebase and answer sentence selection. Experimental results on three datasets show that our framework consistently improves performance, achieving competitive results despite the use of simple QA models.
null
null
10.18653/v1/D17-1091
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,579
inproceedings
meng-etal-2017-temporal
Temporal Information Extraction for Question Answering Using Syntactic Dependencies in an {LSTM}-based Architecture
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1092/
Meng, Yuanliang and Rumshisky, Anna and Romanov, Alexey
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
887--896
In this paper, we propose to use a set of simple LSTM-based models, uniform in architecture, to recover different kinds of temporal relations from text. Using the shortest dependency path between entities as input, the same architecture is used to extract intra-sentence, cross-sentence, and document creation time relations. A {\textquotedblleft}double-checking{\textquotedblright} technique reverses entity pairs in classification, boosting the recall of positive cases and reducing misclassifications between opposite classes. An efficient pruning algorithm resolves conflicts globally. Evaluated on QA-TempEval (SemEval-2015 Task 5), our proposed technique outperforms state-of-the-art methods by a large margin. We also conduct intrinsic evaluation and post state-of-the-art results on TimeBank-Dense.
null
null
10.18653/v1/D17-1092
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,580
inproceedings
tymoshenko-etal-2017-ranking
Ranking Kernels for Structures and Embeddings: A Hybrid Preference and Classification Model
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1093/
Tymoshenko, Kateryna and Bonadiman, Daniele and Moschitti, Alessandro
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
897--902
Recent work has shown that Tree Kernels (TKs) and Convolutional Neural Networks (CNNs) obtain the state of the art in answer sentence reranking. Additionally, their combination used in Support Vector Machines (SVMs) is promising as it can exploit both the syntactic patterns captured by TKs and the embeddings learned by CNNs. However, the embeddings are constructed according to a classification function, which is not directly exploitable in the preference ranking algorithm of SVMs. In this work, we propose a new hybrid approach combining preference ranking applied to TKs and pointwise ranking applied to CNNs. We show that our approach produces better results on two well-known and rather different datasets: WikiQA for answer sentence selection and SemEval cQA for comment selection in Community Question Answering.
null
null
10.18653/v1/D17-1093
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,581
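The tymoshenko-etal-2017-ranking entry combines preference ranking with pointwise classification in SVMs. A minimal sketch of the standard preference-pair transformation behind SVM ranking, with synthetic feature vectors standing in for the TK/CNN representations (linear kernel only; actual tree kernels would need a kernelized SVM):

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical feature vectors for (correct, incorrect) answer
# candidates paired under the same question.
pos = rng.normal(1.0, 1.0, size=(50, 10))
neg = rng.normal(0.0, 1.0, size=(50, 10))

# Preference-pair transformation: a ranking example is the difference
# of the two candidates' vectors; swapping the pair flips the label.
# A linear SVM on these differences learns a scoring function w.x
# whose ordering agrees with the observed preferences.
X = np.vstack([pos - neg, neg - pos])
y = np.array([1] * 50 + [-1] * 50)

ranker = LinearSVC(C=1.0).fit(X, y)
score = ranker.decision_function  # rank candidates by w.x
print(score(pos[:3]) > score(neg[:3]))
```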
inproceedings
yavuz-etal-2017-recovering
Recovering Question Answering Errors via Query Revision
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1094/
Yavuz, Semih and Gur, Izzeddin and Su, Yu and Yan, Xifeng
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
903--909
The existing factoid QA systems often lack a post-inspection component that can help models recover from their own mistakes. In this work, we propose to crosscheck the corresponding KB relations behind the predicted answers and identify potential inconsistencies. Instead of developing a new model that accepts evidence collected from these relations, we choose to plug them back into the original questions directly and check if the revised question makes sense or not. A bidirectional LSTM is applied to encode revised questions. We develop a scoring mechanism over the revised question encodings to refine the predictions of a base QA system. This approach can improve the F1 score of STAGG (Yih et al., 2015), one of the leading QA systems, from 52.5{\%} to 53.9{\%} on WEBQUESTIONS data.
null
null
10.18653/v1/D17-1094
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,582
inproceedings
delbrouck-dupont-2017-empirical
An empirical study on the effectiveness of images in Multimodal Neural Machine Translation
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1095/
Delbrouck, Jean-Benoit and Dupont, St{\'e}phane
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
910--919
In state-of-the-art Neural Machine Translation (NMT), an attention mechanism is used during decoding to enhance the translation. At every step, the decoder uses this mechanism to focus on different parts of the source sentence to gather the most useful information before outputting its target word. Recently, the effectiveness of the attention mechanism has also been explored for multi-modal tasks, where it becomes possible to focus both on sentence parts and image regions that they describe. In this paper, we compare several attention mechanisms on the multi-modal translation task (English, image {\textrightarrow} German) and evaluate the ability of the model to make use of images to improve translation. While we surpass state-of-the-art scores on the Multi30k data set, we nevertheless identify and report different misbehaviors of the machine while translating.
null
null
10.18653/v1/D17-1095
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,583
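The delbrouck-dupont-2017-empirical entry compares attention mechanisms over image regions during decoding. A sketch of plain dot-product soft attention, the common core of the variants compared; the sizes and features below are made up:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

d = 8                          # shared hidden size (toy value)
rng = np.random.default_rng(1)

h = rng.normal(size=d)         # decoder state before emitting a word
V = rng.normal(size=(49, d))   # 7x7 grid of image-region features

# Dot-product soft attention: score each region against the decoder
# state, normalize, and take the expected region feature as context.
alpha = softmax(V @ h)
context = alpha @ V            # fed to the decoder together with h

print(round(alpha.sum(), 6), context.shape)  # 1.0 (8,)
```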
inproceedings
vijayakumar-etal-2017-sound
Sound-{W}ord2{V}ec: Learning Word Representations Grounded in Sounds
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1096/
Vijayakumar, Ashwin and Vedantam, Ramakrishna and Parikh, Devi
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
920--925
To be able to interact better with humans, it is crucial for machines to understand sound {--} a primary modality of human perception. Previous works have used sound to learn embeddings for improved generic semantic similarity assessment. In this work, we treat sound as a first-class citizen, studying downstream textual tasks which require aural grounding. To this end, we propose sound-word2vec {--} a new embedding scheme that learns specialized word embeddings grounded in sounds. For example, we learn that two seemingly (semantically) unrelated concepts, like leaves and paper, are similar due to the similar rustling sounds they make. Our embeddings prove useful in textual tasks requiring aural reasoning like text-based sound retrieval and discovering Foley sound effects (used in movies). Moreover, our embedding space captures interesting dependencies between words and onomatopoeia and outperforms prior work on aurally-relevant word relatedness datasets such as AMEN and ASLex.
null
null
10.18653/v1/D17-1096
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,584
inproceedings
mahendru-etal-2017-promise
The Promise of Premise: Harnessing Question Premises in Visual Question Answering
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1097/
Mahendru, Aroma and Prabhu, Viraj and Mohapatra, Akrit and Batra, Dhruv and Lee, Stefan
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
926--935
In this paper, we make a simple observation that questions about images often contain premises {--} objects and relationships implied by the question {--} and that reasoning about premises can help Visual Question Answering (VQA) models respond more intelligently to irrelevant or previously unseen questions. When presented with a question that is irrelevant to an image, state-of-the-art VQA models will still answer purely based on learned language biases, resulting in non-sensical or even misleading answers. We note that a visual question is irrelevant to an image if at least one of its premises is false (i.e. not depicted in the image). We leverage this observation to construct a dataset for Question Relevance Prediction and Explanation (QRPE) by searching for false premises. We train novel question relevance detection models and show that models that reason about premises consistently outperform models that do not. We also find that forcing standard VQA models to reason about premises during training can lead to improvements on tasks requiring compositional reasoning.
null
null
10.18653/v1/D17-1097
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,585
inproceedings
anderson-etal-2017-guided
Guided Open Vocabulary Image Captioning with Constrained Beam Search
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1098/
Anderson, Peter and Fernando, Basura and Johnson, Mark and Gould, Stephen
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
936--945
Existing image captioning models do not generalize well to out-of-domain images containing novel scenes or objects. This limitation severely hinders the use of these models in real world applications dealing with images in the wild. We address this problem using a flexible approach that enables existing deep captioning architectures to take advantage of image taggers at test time, without re-training. Our method uses constrained beam search to force the inclusion of selected tag words in the output, and fixed, pretrained word embeddings to facilitate vocabulary expansion to previously unseen tag words. Using this approach we achieve state of the art results for out-of-domain captioning on MSCOCO (and improved results for in-domain captioning). Perhaps surprisingly, our results significantly outperform approaches that incorporate the same tag predictions into the learning algorithm. We also show that we can significantly improve the quality of generated ImageNet captions by leveraging ground-truth labels.
null
null
10.18653/v1/D17-1098
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,586
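The anderson-etal-2017-guided entry forces selected tag words into the caption via constrained beam search. A self-contained toy version: beams are tracked separately per constraint state, and only beams that have produced the required word may emit the end token. The bigram "model" and vocabulary are invented stand-ins for a real captioning decoder:

```python
vocab = ["a", "dog", "cat", "runs", "<eos>"]

def logp(prev, w):
    # Invented bigram log-probabilities; a real model would be an
    # RNN/transformer decoder conditioned on the image.
    table = {("<s>", "a"): -0.2, ("a", "dog"): -0.5, ("a", "cat"): -1.5,
             ("dog", "runs"): -0.3, ("cat", "runs"): -0.4,
             ("runs", "<eos>"): -0.1}
    return table.get((prev, w), -6.0)

def constrained_beam_search(must_include, beam=3, max_len=5):
    # Keep separate beams per constraint state (required word seen or
    # not); only constraint-satisfying beams may finish.
    beams = {False: [(0.0, ["<s>"])], True: []}
    for _ in range(max_len):
        nxt = {False: [], True: []}
        for sat, group in beams.items():
            for score, seq in group:
                if seq[-1] == "<eos>":          # finished: carry over
                    nxt[sat].append((score, seq))
                    continue
                for w in vocab:
                    if w == "<eos>" and not sat:
                        continue                # cannot end yet
                    s2 = sat or (w == must_include)
                    nxt[s2].append((score + logp(seq[-1], w), seq + [w]))
        beams = {k: sorted(v, reverse=True)[:beam] for k, v in nxt.items()}
    done = [cand for cand in beams[True] if cand[1][-1] == "<eos>"]
    return max(done)[1] if done else None

print(constrained_beam_search("cat"))  # ['<s>', 'a', 'cat', 'runs', '<eos>']
```

Note how the constraint changes the output: unconstrained search would prefer "a dog runs", but only sequences containing "cat" are allowed to terminate.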
inproceedings
zellers-choi-2017-zero
Zero-Shot Activity Recognition with Verb Attribute Induction
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1099/
Zellers, Rowan and Choi, Yejin
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
946--958
In this paper, we investigate large-scale zero-shot activity recognition by modeling the visual and linguistic attributes of action verbs. For example, the verb {\textquotedblleft}salute{\textquotedblright} has several properties, such as being a light movement, a social act, and short in duration. We use these attributes as the internal mapping between visual and textual representations to reason about a previously unseen action. In contrast to much prior work that assumes access to gold standard attributes for zero-shot classes and focuses primarily on object attributes, our model uniquely learns to infer action attributes from dictionary definitions and distributed word representations. Experimental results confirm that action attributes inferred from language can provide a predictive signal for zero-shot prediction of previously unseen activities.
null
null
10.18653/v1/D17-1099
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,587
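The zellers-choi-2017-zero entry maps verbs and videos into a shared attribute space for zero-shot recognition. A sketch of the matching step, with invented attribute signatures in place of ones inferred from dictionary definitions and word embeddings:

```python
import numpy as np

# Hypothetical attribute signatures (duration, motion energy, social)
# that would be inferred from definitions and distributed embeddings.
verb_attrs = {
    "salute": np.array([0.2, 0.3, 0.9]),
    "sprint": np.array([0.4, 0.9, 0.1]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# A visual model predicts the same attributes from a video clip; the
# unseen activity is the verb whose signature matches best.
predicted = np.array([0.3, 0.85, 0.2])
best = max(verb_attrs, key=lambda v: cosine(verb_attrs[v], predicted))
print(best)  # 'sprint'
```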
inproceedings
zarriess-schlangen-2017-deriving
Deriving continous grounded meaning representations from referentially structured multimodal contexts
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1100/
Zarrie{\ss}, Sina and Schlangen, David
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
959--965
Corpora of referring expressions paired with their visual referents are a good source for learning word meanings directly grounded in visual representations. Here, we explore additional ways of extracting from them word representations linked to multi-modal context: through expressions that refer to the same object, and through expressions that refer to different objects in the same scene. We show that continuous meaning representations derived from these contexts capture complementary aspects of similarity, even if not outperforming textual embeddings trained on very large amounts of raw text when tested on standard similarity benchmarks. We propose a new task for evaluating grounded meaning representations{---}detection of potentially co-referential phrases{---}and show that it requires precise denotational representations of attribute meanings, which our method provides.
null
null
10.18653/v1/D17-1100
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,588
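The zarriess-schlangen-2017-deriving entry derives word representations from the visual referents of referring expressions. One simple instantiation of the idea (not necessarily the paper's exact estimator) averages the visual features of every object a word was used to refer to:

```python
import numpy as np
from collections import defaultdict

# Referring expressions paired with visual features of their
# referents (toy values and a 2-d feature space for illustration).
pairs = [("red ball", np.array([0.9, 0.1])),
         ("red car", np.array([0.8, 0.3])),
         ("blue ball", np.array([0.1, 0.9]))]

# A word's grounded vector: the mean visual feature over all
# objects it helped refer to.
sums = defaultdict(lambda: np.zeros(2))
counts = defaultdict(int)
for phrase, feat in pairs:
    for word in phrase.split():
        sums[word] += feat
        counts[word] += 1

grounded = {w: sums[w] / counts[w] for w in sums}
print(grounded["red"], grounded["ball"])
```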
inproceedings
yu-etal-2017-hierarchically
Hierarchically-Attentive {RNN} for Album Summarization and Storytelling
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1101/
Yu, Licheng and Bansal, Mohit and Berg, Tamara
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
966--971
We address the problem of end-to-end visual storytelling. Given a photo album, our model first selects the most representative (summary) photos, and then composes a natural language story for the album. For this task, we make use of the Visual Storytelling dataset and a model composed of three hierarchically-attentive Recurrent Neural Nets (RNNs) to: encode the album photos, select representative (summary) photos, and compose the story. Automatic and human evaluations show our model achieves better performance on selection, generation, and retrieval than baselines.
null
null
10.18653/v1/D17-1101
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,589
inproceedings
fu-etal-2017-video
Video Highlight Prediction Using Audience Chat Reactions
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1102/
Fu, Cheng-Yang and Lee, Joon and Bansal, Mohit and Berg, Alexander
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
972--978
Sports channel video portals offer an exciting domain for research on multimodal, multilingual analysis. We present methods addressing the problem of automatic video highlight prediction based on joint visual features and textual analysis of the real-world audience discourse with complex slang, in both English and traditional Chinese. We present a novel dataset based on League of Legends championships recorded from North American and Taiwanese Twitch.tv channels (will be released for further research), and demonstrate strong results on these using multimodal, character-level CNN-RNN model architectures.
null
null
10.18653/v1/D17-1102
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,590
inproceedings
pasunuru-bansal-2017-reinforced
Reinforced Video Captioning with Entailment Rewards
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1103/
Pasunuru, Ramakanth and Bansal, Mohit
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
979--985
Sequence-to-sequence models have shown promising improvements on the temporal task of video captioning, but they optimize word-level cross-entropy loss during training. First, using policy gradient and mixed-loss methods for reinforcement learning, we directly optimize sentence-level task-based metrics (as rewards), achieving significant improvements over the baseline, based on both automatic metrics and human evaluation on multiple datasets. Next, we propose a novel entailment-enhanced reward (CIDEnt) that corrects phrase-matching based metrics (such as CIDEr) to only allow for logically-implied partial matches and avoid contradictions, achieving further significant improvements over the CIDEr-reward model. Overall, our CIDEnt-reward model achieves the new state-of-the-art on the MSR-VTT dataset.
null
null
10.18653/v1/D17-1103
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,591
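The pasunuru-bansal-2017-reinforced entry corrects a phrase-matching reward (CIDEr) with an entailment score and trains with a mixed RL/cross-entropy loss. A sketch of both pieces; the threshold/penalty scheme below is illustrative, not the paper's exact formulation:

```python
# A minimal sketch of an entailment-corrected reward, assuming a
# phrase-match score (e.g. CIDEr) and an entailment classifier
# probability are already computed elsewhere.
def cident_reward(cider, entail_prob, threshold=0.5, penalty=1.0):
    """Penalize captions whose content is not entailed by the reference."""
    if entail_prob < threshold:
        return cider - penalty   # logically inconsistent partial match
    return cider

# Mixed loss used during policy-gradient training: mostly the RL
# objective, stabilized by a small cross-entropy component.
def mixed_loss(loss_rl, loss_xent, gamma=0.99):
    return gamma * loss_rl + (1.0 - gamma) * loss_xent

print(cident_reward(0.8, 0.2), cident_reward(0.8, 0.9))  # -0.2 0.8
```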
inproceedings
mu-etal-2017-evaluating
Evaluating Hierarchies of Verb Argument Structure with Hierarchical Clustering
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1104/
Mu, Jesse and Hartshorne, Joshua K. and O{'}Donnell, Timothy
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
986--991
Verbs can only be used with a few specific arrangements of their arguments (syntactic frames). Most theorists note that verbs can be organized into a hierarchy of verb classes based on the frames they admit. Here we show that such a hierarchy is objectively well-supported by the patterns of verbs and frames in English, since a systematic hierarchical clustering algorithm converges on the same structure as the handcrafted taxonomy of VerbNet, a broad-coverage verb lexicon. We also show that the hierarchies capture meaningful psychological dimensions of generalization by predicting novel verb coercions by human participants. We discuss limitations of a simple hierarchical representation and suggest similar approaches for identifying the representations underpinning verb argument structure.
null
null
10.18653/v1/D17-1104
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,592
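The mu-etal-2017-evaluating entry clusters verbs by the syntactic frames they admit and compares the result to VerbNet. A runnable scipy sketch on a toy verb-by-frame matrix; real inputs would come from a broad-coverage verb lexicon:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

verbs = ["give", "send", "break", "shatter"]

# Binary verb-by-frame matrix: 1 if the verb admits the frame.
# Toy frames: NP-V-NP-NP, NP-V-NP-PP, NP-V-NP, NP-V.
frames = np.array([
    [1, 1, 1, 0],   # give
    [1, 1, 1, 0],   # send
    [0, 0, 1, 1],   # break
    [0, 0, 1, 1],   # shatter
])

# Agglomerative clustering over frame signatures; the resulting tree
# can then be compared against a handcrafted taxonomy like VerbNet.
Z = linkage(frames, method="average", metric="hamming")
print([verbs[i] for i in leaves_list(Z)])
```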
inproceedings
calixto-liu-2017-incorporating
Incorporating Global Visual Features into Attention-based Neural Machine Translation.
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1105/
Calixto, Iacer and Liu, Qun
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
992--1003
We introduce multi-modal, attention-based neural machine translation (NMT) models which incorporate visual features into different parts of both the encoder and the decoder. Global image features are extracted using a pre-trained convolutional neural network and are incorporated (i) as words in the source sentence, (ii) to initialise the encoder hidden state, and (iii) as additional data to initialise the decoder hidden state. In our experiments, we evaluate translations into English and German, assessing how different strategies to incorporate global image features compare and which ones perform best. We also study the impact that adding synthetic multi-modal, multilingual data brings and find that the additional data have a positive impact on multi-modal NMT models. We report new state-of-the-art results and our best models also significantly improve on a comparable phrase-based Statistical MT (PBSMT) model trained on the Multi30k data set according to all metrics evaluated. To the best of our knowledge, it is the first time a purely neural model significantly improves over a PBSMT model on all metrics evaluated on this data set.
null
null
10.18653/v1/D17-1105
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,593
inproceedings
misra-etal-2017-mapping
Mapping Instructions and Visual Observations to Actions with Reinforcement Learning
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1106/
Misra, Dipendra and Langford, John and Artzi, Yoav
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
1004--1015
We propose to directly map raw visual observations and text input to actions for instruction execution. While existing approaches assume access to structured environment representations or use a pipeline of separately trained models, we learn a single model to jointly reason about linguistic and visual input. We use reinforcement learning in a contextual bandit setting to train a neural network agent. To guide the agent`s exploration, we use reward shaping with different forms of supervision. Our approach does not require intermediate representations, planning procedures, or training different models. We evaluate in a simulated environment, and show significant improvements over supervised learning and common reinforcement learning variants.
null
null
10.18653/v1/D17-1106
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,594
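The misra-etal-2017-mapping entry trains a single neural agent with policy gradients in a contextual bandit setting, guided by reward shaping. A single REINFORCE-style update in PyTorch; the feature sizes and the shaping term are illustrative:

```python
import torch

# One contextual-bandit policy-gradient step: sample an action from
# the policy, observe a shaped reward, and reinforce the log-prob.
policy = torch.nn.Linear(16, 4)   # joint text+image features -> actions
opt = torch.optim.SGD(policy.parameters(), lr=0.01)

features = torch.randn(16)        # encoded instruction + observation
logits = policy(features)
dist = torch.distributions.Categorical(logits=logits)
action = dist.sample()

env_reward = 0.0                  # sparse task reward
shaping = 0.3                     # e.g. a distance-based potential term
reward = env_reward + shaping

loss = -dist.log_prob(action) * reward   # REINFORCE objective
opt.zero_grad()
loss.backward()
opt.step()
print(action.item(), loss.item())
```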
inproceedings
fraser-etal-2017-analysis
An analysis of eye-movements during reading for the detection of mild cognitive impairment
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1107/
Fraser, Kathleen C. and Lundholm Fors, Kristina and Kokkinakis, Dimitrios and Nordlund, Arto
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
1016--1026
We present a machine learning analysis of eye-tracking data for the detection of mild cognitive impairment, a decline in cognitive abilities that is associated with an increased risk of developing dementia. We compare two experimental configurations (reading aloud versus reading silently), as well as two methods of combining information from the two trials (concatenation and merging). Additionally, we annotate the words being read with information about their frequency and syntactic category, and use these annotations to generate new features. Ultimately, we are able to distinguish between participants with and without cognitive impairment with up to 86{\%} accuracy.
null
null
10.18653/v1/D17-1107
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,595
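The fraser-etal-2017-analysis entry classifies participants from engineered eye-movement features. A minimal sklearn pipeline of the usual shape; the features and labels below are synthetic, so the score is near chance by construction:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical per-participant features: fixation counts, saccade
# lengths, regressions, etc., with a binary impairment label.
X = rng.normal(size=(60, 12))
y = rng.integers(0, 2, size=60)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```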
inproceedings
ning-etal-2017-structured
A Structured Learning Approach to Temporal Relation Extraction
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1108/
Ning, Qiang and Feng, Zhili and Roth, Dan
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
1027--1037
Identifying temporal relations between events is an essential step towards natural language understanding. However, the temporal relation between two events in a story depends on, and is often dictated by, relations among other events. Consequently, effectively identifying temporal relations between events is a challenging problem even for human annotators. This paper suggests that it is important to take these dependencies into account while learning to identify these relations and proposes a structured learning approach to address this challenge. As a byproduct, this provides a new perspective on handling missing relations, a known issue that hurts existing methods. As we show, the proposed approach results in significant improvements on the two commonly used data sets for this problem.
null
null
10.18653/v1/D17-1108
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,596
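The ning-etal-2017-structured entry argues for joint, globally consistent inference over pairwise temporal predictions. A toy illustration: exhaustive search over consistent assignments (a stand-in for the paper's structured inference) beats taking each pairwise argmax independently:

```python
from itertools import product

# Pairwise classifier probabilities for relations between events
# (invented numbers).  The local argmax labels (a, c) as AFTER,
# contradicting the chain a BEFORE b BEFORE c.
scores = {
    ("a", "b"): {"BEFORE": 0.9, "AFTER": 0.1},
    ("b", "c"): {"BEFORE": 0.8, "AFTER": 0.2},
    ("a", "c"): {"BEFORE": 0.4, "AFTER": 0.6},
}

def consistent(assign):
    # Simplified transitivity check over BEFORE chains only.
    for (x, y), r1 in assign.items():
        for (y2, z), r2 in assign.items():
            if y == y2 and r1 == r2 == "BEFORE" and (x, z) in assign \
                    and assign[(x, z)] != "BEFORE":
                return False
    return True

def joint(assign):
    p = 1.0
    for pair, rel in assign.items():
        p *= scores[pair][rel]
    return p

pairs = list(scores)
best = max(
    (dict(zip(pairs, combo))
     for combo in product(["BEFORE", "AFTER"], repeat=3)),
    key=lambda a: joint(a) if consistent(a) else -1.0,
)
print(best)  # all BEFORE: the global optimum overrides the local argmax
```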
inproceedings
chaganty-etal-2017-importance
Importance sampling for unbiased on-demand evaluation of knowledge base population
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1109/
Chaganty, Arun and Paranjape, Ashwin and Liang, Percy and Manning, Christopher D.
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
1038--1048
Knowledge base population (KBP) systems take in a large document corpus and extract entities and their relations. Thus far, KBP evaluation has relied on judgements on the pooled predictions of existing systems. We show that this evaluation is problematic: when a new system predicts a previously unseen relation, it is penalized even if it is correct. This leads to significant bias against new systems, which counterproductively discourages innovation in the field. Our first contribution is a new importance-sampling based evaluation which corrects for this bias by annotating a new system`s predictions on-demand via crowdsourcing. We show this eliminates bias and reduces variance using data from the 2015 TAC KBP task. Our second contribution is an implementation of our method made publicly available as an online KBP evaluation service. We pilot the service by testing diverse state-of-the-art systems on the TAC KBP 2016 corpus and obtain accurate scores in a cost effective manner.
null
null
10.18653/v1/D17-1109
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,597
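The chaganty-etal-2017-importance entry estimates a new system's precision from a small, non-uniformly sampled set of annotations. A sketch of the unbiased importance-sampling estimator on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pool of a new system's predictions with hidden
# correctness labels; only a few sampled instances get annotated.
n = 10_000
correct = (rng.random(n) < 0.3).astype(float)

p = np.full(n, 1.0 / n)            # target: a uniform average (precision)
q = rng.random(n); q /= q.sum()    # any proposal with full support

# Sample annotations according to q and reweight by p/q: the
# estimator stays unbiased even though the sampling is non-uniform.
idx = rng.choice(n, size=500, replace=True, p=q)
estimate = np.mean((p[idx] / q[idx]) * correct[idx])
print(f"estimate={estimate:.3f}  truth={correct.mean():.3f}")
```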
inproceedings
hui-etal-2017-pacrr
{PACRR}: A Position-Aware Neural {IR} Model for Relevance Matching
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1110/
Hui, Kai and Yates, Andrew and Berberich, Klaus and de Melo, Gerard
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
1049--1058
In order to adopt deep learning for information retrieval, models are needed that can capture all relevant information required to assess the relevance of a document to a given user query. While previous works have successfully captured unigram term matches, how to fully employ position-dependent information such as proximity and term dependencies has been insufficiently explored. In this work, we propose a novel neural IR model named PACRR aiming at better modeling position-dependent interactions between a query and a document. Extensive experiments on six years' TREC Web Track data confirm that the proposed model yields better results under multiple benchmarks.
null
null
10.18653/v1/D17-1110
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,598
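The hui-etal-2017-pacrr entry builds position-preserving query-document interactions. A sketch of the input similarity matrix and the k-max pooling step; the convolutional layers PACRR applies in between are omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
Q = rng.normal(size=(4, d))    # query term embeddings
D = rng.normal(size=(100, d))  # document term embeddings

def normalize(M):
    return M / np.linalg.norm(M, axis=1, keepdims=True)

# Query-document cosine similarity matrix that preserves term
# positions, so proximity and n-gram matches remain recoverable
# by CNNs of several kernel sizes run over it.
sim = normalize(Q) @ normalize(D).T        # shape (4, 100)

# k-max pooling per query term keeps the strongest match signals
# regardless of where in the document they occur.
k = 3
kmax = np.sort(sim, axis=1)[:, -k:][:, ::-1]
print(kmax.shape)  # (4, 3)
```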
inproceedings
raiman-miller-2017-globally
Globally Normalized Reader
Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian
sep
2017
Copenhagen, Denmark
Association for Computational Linguistics
https://aclanthology.org/D17-1111/
Raiman, Jonathan and Miller, John
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
1059--1069
Rapid progress has been made towards question answering (QA) systems that can extract answers from text. Existing neural approaches make use of expensive bi-directional attention mechanisms or score all possible answer spans, limiting scalability. We propose instead to cast extractive QA as an iterative search problem: select the answer`s sentence, start word, and end word. This representation reduces the space of each search step and allows computation to be conditionally allocated to promising search paths. We show that globally normalizing the decision process and back-propagating through beam search makes this representation viable and learning efficient. We empirically demonstrate the benefits of this approach using our model, Globally Normalized Reader (GNR), which achieves the second highest single model performance on the Stanford Question Answering Dataset (68.4 EM, 76.21 F1 dev) and is 24.7x faster than bi-attention-flow. We also introduce a data-augmentation method to produce semantically valid examples by aligning named entities to a knowledge base and swapping them with new entities of the same type. This method improves the performance of all models considered in this work and is of independent interest for a variety of NLP tasks.
null
null
10.18653/v1/D17-1111
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,599
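The raiman-miller-2017-globally entry casts extractive QA as an iterative search (pick the answer's sentence, then its start word, then its end word) with globally normalized path scores. A toy version that enumerates all paths instead of beam-pruning; all step scores are random placeholders for the model's outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy document: 3 sentences of 5 tokens each, with hypothetical step
# scores for the (sentence, start word, end word) decisions.
sent_scores = rng.normal(size=3)
start_scores = rng.normal(size=(3, 5))
end_scores = rng.normal(size=(3, 5, 5))

paths = []
for s in range(3):
    for i in range(5):
        for j in range(i, 5):
            # The score of a full search path is the *sum* of its
            # step scores; normalization happens once, globally.
            paths.append((sent_scores[s] + start_scores[s, i]
                          + end_scores[s, i, j], (s, i, j)))

scores = np.array([p[0] for p in paths])
probs = np.exp(scores - scores.max())
probs /= probs.sum()                      # global softmax over paths
print(paths[int(np.argmax(probs))][1])    # most probable answer span
```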