Dataset column schema (38 columns; types, class counts, and length ranges as reported by the dataset viewer):

entry_type: stringclasses, 4 values
citation_key: stringlengths, 10 to 110
title: stringlengths, 6 to 276
editor: stringclasses, 723 values
month: stringclasses, 69 values
year: stringdate, 1963-01-01 00:00:00 to 2022-01-01 00:00:00
address: stringclasses, 202 values
publisher: stringclasses, 41 values
url: stringlengths, 34 to 62
author: stringlengths, 6 to 2.07k
booktitle: stringclasses, 861 values
pages: stringlengths, 1 to 12
abstract: stringlengths, 302 to 2.4k
journal: stringclasses, 5 values
volume: stringclasses, 24 values
doi: stringlengths, 20 to 39
n: stringclasses, 3 values
wer: stringclasses, 1 value
uas: null
language: stringclasses, 3 values
isbn: stringclasses, 34 values
recall: null
number: stringclasses, 8 values
a: null
b: null
c: null
k: null
f1: stringclasses, 4 values
r: stringclasses, 2 values
mci: stringclasses, 1 value
p: stringclasses, 2 values
sd: stringclasses, 1 value
female: stringclasses, 0 values
m: stringclasses, 0 values
food: stringclasses, 1 value
f: stringclasses, 1 value
note: stringclasses, 20 values
__index_level_0__: int64, 22k to 106k
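The rows below were flattened from BibTeX records, so a natural way to work with them is to render each row (a column-to-value mapping per the schema above) back into a BibTeX entry, skipping the null columns. The following is a minimal sketch of that conversion; the field order, the helper name `row_to_bibtex`, and the example row are illustrative assumptions, not part of the dataset itself.

```python
# Sketch (hypothetical helper): render one dataset row back into an
# ACL-Anthology-style BibTeX entry. Columns holding None are skipped.
BIBTEX_FIELDS = ["title", "author", "editor", "booktitle", "journal",
                 "month", "year", "address", "publisher", "url", "volume",
                 "number", "pages", "doi", "isbn", "abstract", "note"]

def row_to_bibtex(row):
    """Build a BibTeX entry string from a row dict with
    'entry_type' and 'citation_key' plus any of BIBTEX_FIELDS."""
    lines = ["@%s{%s," % (row["entry_type"], row["citation_key"])]
    for field in BIBTEX_FIELDS:
        value = row.get(field)
        if value is None:
            continue
        if field == "month":
            # BibTeX month macros (e.g. apr) are written unquoted.
            lines.append("    month = %s," % value)
        else:
            lines.append('    %s = "%s",' % (field, value))
    lines.append("}")
    return "\n".join(lines)

# Example using an abbreviated version of the first row in this split.
row = {
    "entry_type": "inproceedings",
    "citation_key": "pu-etal-2017-consistent",
    "title": "Consistent Translation of Repeated Nouns using Syntactic and Semantic Cues",
    "author": "Pu, Xiao and Mascarell, Laura and Popescu-Belis, Andrei",
    "month": "apr",
    "year": "2017",
}
print(row_to_bibtex(row))
```

Note that real rows also carry escaped LaTeX (e.g. `{\%}`, `{E}uropean`), which this sketch passes through unchanged.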
@inproceedings{pu-etal-2017-consistent,
    title = "Consistent Translation of Repeated Nouns using Syntactic and Semantic Cues",
    author = "Pu, Xiao and Mascarell, Laura and Popescu-Belis, Andrei",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1089/",
    pages = "948--957",
    abstract = "We propose a method to decide whether two occurrences of the same noun in a source text should be translated consistently, i.e. using the same noun in the target text as well. We train and test classifiers that predict consistent translations based on lexical, syntactic, and semantic features. We first evaluate the accuracy of our classifiers intrinsically, in terms of the accuracy of consistency predictions, over a subset of the UN Corpus. Then, we also evaluate them in combination with phrase-based statistical MT systems for Chinese-to-English and German-to-English. We compare the automatic post-editing of noun translations with the re-ranking of the translation hypotheses based on the classifiers' output, and also use these methods in combination. This improves over the baseline and closes up to 50{\%} of the gap in BLEU scores between the baseline and an oracle classifier.",
}
% __index_level_0__: 57,287
@inproceedings{howcroft-demberg-2017-psycholinguistic,
    title = "Psycholinguistic Models of Sentence Processing Improve Sentence Readability Ranking",
    author = "Howcroft, David M. and Demberg, Vera",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1090/",
    pages = "958--968",
    abstract = "While previous research on readability has typically focused on document-level measures, recent work in areas such as natural language generation has pointed out the need for sentence-level readability measures. Much of psycholinguistics has focused for many years on processing measures that provide difficulty estimates on a word-by-word basis. However, these psycholinguistic measures have not yet been tested on sentence readability ranking tasks. In this paper, we use four psycholinguistic measures: idea density, surprisal, integration cost, and embedding depth, to test whether these features are predictive of readability levels. We find that psycholinguistic features significantly improve performance by up to 3 percentage points over a standard document-level readability metric baseline.",
}
% __index_level_0__: 57,288
@inproceedings{das-etal-2017-web,
    title = "Web-Scale Language-Independent Cataloging of Noisy Product Listings for {E}-Commerce",
    author = "Das, Pradipto and Xia, Yandi and Levine, Aaron and Di Fabbrizio, Giuseppe and Datta, Ankur",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1091/",
    pages = "969--979",
    abstract = "The cataloging of product listings through taxonomy categorization is a fundamental problem for any e-commerce marketplace, with applications ranging from personalized search recommendations to query understanding. However, manual and rule based approaches to categorization are not scalable. In this paper, we compare several classifiers for categorizing listings in both English and Japanese product catalogs. We show empirically that a combination of words from product titles, navigational breadcrumbs, and list prices, when available, improves results significantly. We outline a novel method using correspondence topic models and a lightweight manual process to reduce noise from mis-labeled data in the training set. We contrast linear models, gradient boosted trees (GBTs) and convolutional neural networks (CNNs), and show that GBTs and CNNs yield the highest gains in error reduction. Finally, we show GBTs applied in a language-agnostic way on a large-scale Japanese e-commerce dataset have improved taxonomy categorization performance over current state-of-the-art based on deep belief network models.",
}
% __index_level_0__: 57,289
@inproceedings{stab-gurevych-2017-recognizing,
    title = "Recognizing Insufficiently Supported Arguments in Argumentative Essays",
    author = "Stab, Christian and Gurevych, Iryna",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1092/",
    pages = "980--990",
    abstract = "In this paper, we propose a new task for assessing the quality of natural language arguments. The premises of a well-reasoned argument should provide enough evidence for accepting or rejecting its claim. Although this criterion, known as sufficiency, is widely adopted in argumentation theory, there are no empirical studies on its applicability to real arguments. In this work, we show that human annotators substantially agree on the sufficiency criterion and introduce a novel annotated corpus. Furthermore, we experiment with feature-rich SVMs and Convolutional Neural Networks and achieve 84{\%} accuracy for automatically identifying insufficiently supported arguments. The final corpus as well as the annotation guideline are freely available for encouraging future research on argument quality.",
}
% __index_level_0__: 57,290
@inproceedings{sato-etal-2017-distributed,
    title = "Distributed Document and Phrase Co-embeddings for Descriptive Clustering",
    author = "Sato, Motoki and Brockmeier, Austin J. and Kontonatsios, Georgios and Mu, Tingting and Goulermas, John Y. and Tsujii, Jun{'}ichi and Ananiadou, Sophia",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1093/",
    pages = "991--1001",
    abstract = "Descriptive document clustering aims to automatically discover groups of semantically related documents and to assign a meaningful label to characterise the content of each cluster. In this paper, we present a descriptive clustering approach that employs a distributed representation model, namely the paragraph vector model, to capture semantic similarities between documents and phrases. The proposed method uses a joint representation of phrases and documents (i.e., a co-embedding) to automatically select a descriptive phrase that best represents each document cluster. We evaluate our method by comparing its performance to an existing state-of-the-art descriptive clustering method that also uses co-embedding but relies on a bag-of-words representation. Results obtained on benchmark datasets demonstrate that the paragraph vector-based method obtains superior performance over the existing approach in both identifying clusters and assigning appropriate descriptive labels to them.",
}
% __index_level_0__: 57,291
@inproceedings{farra-mckeown-2017-smarties,
    title = "{SMART}ies: Sentiment Models for {A}rabic Target entities",
    author = "Farra, Noura and McKeown, Kathy",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1094/",
    pages = "1002--1013",
    abstract = "We consider entity-level sentiment analysis in Arabic, a morphologically rich language with increasing resources. We present a system that is applied to complex posts written in response to Arabic newspaper articles. Our goal is to identify important entity {\textquotedblleft}targets{\textquotedblright} within the post along with the polarity expressed about each target. We achieve significant improvements over multiple baselines, demonstrating that the use of specific morphological representations improves the performance of identifying both important targets and their sentiment, and that the use of distributional semantic clusters further boosts performances for these representations, especially when richer linguistic resources are not available.",
}
% __index_level_0__: 57,292
@inproceedings{segura-bedmar-etal-2017-exploring,
    title = "Exploring Convolutional Neural Networks for Sentiment Analysis of {S}panish tweets",
    author = "Segura-Bedmar, Isabel and Quir{\'o}s, Antonio and Mart{\'i}nez, Paloma",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1095/",
    pages = "1014--1022",
    abstract = "Spanish is the third-most used language on the internet, after English and Chinese, with a total of 7.7{\%} of users (more than 277 million) and internet growth of more than 1,400{\%}. However, most work on sentiment analysis has focused on English. This paper describes a deep learning system for Spanish sentiment analysis. To the best of our knowledge, this is the first work that explores the use of a convolutional neural network for polarity classification of Spanish tweets.",
}
% __index_level_0__: 57,293
@inproceedings{mousa-schuller-2017-contextual,
    title = "Contextual Bidirectional Long Short-Term Memory Recurrent Neural Network Language Models: A Generative Approach to Sentiment Analysis",
    author = "Mousa, Amr and Schuller, Bj{\"o}rn",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1096/",
    pages = "1023--1032",
    abstract = "Traditional learning-based approaches to sentiment analysis of written text use the concept of bag-of-words or bag-of-n-grams, where a document is viewed as a set of terms or short combinations of terms disregarding grammar rules or word order. Novel approaches de-emphasize this concept and view the problem as a sequence classification problem. In this context, recurrent neural networks (RNNs) have achieved significant success. The idea is to use RNNs as discriminative binary classifiers to predict a positive or negative sentiment label at every word position then perform a type of pooling to get a sentence-level polarity. Here, we investigate a novel generative approach in which a separate probability distribution is estimated for every sentiment using language models (LMs) based on long short-term memory (LSTM) RNNs. We introduce a novel type of LM using a modified version of bidirectional LSTM (BLSTM) called contextual BLSTM (cBLSTM), where the probability of a word is estimated based on its full left and right contexts. Our approach is compared with a BLSTM binary classifier. Significant improvements are observed in classifying the IMDB movie review dataset. Further improvements are achieved via model combination.",
}
% __index_level_0__: 57,294
@inproceedings{sun-etal-2017-large,
    title = "Large-scale Opinion Relation Extraction with Distantly Supervised Neural Network",
    author = "Sun, Changzhi and Wu, Yuanbin and Lan, Man and Sun, Shiliang and Zhang, Qi",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1097/",
    pages = "1033--1043",
    abstract = "We investigate the task of open domain opinion relation extraction. Different from work on manually labeled corpora, we propose an efficient distantly supervised framework based on pattern matching and neural network classifiers. The patterns are designed to automatically generate training data, and the deep learning model is designed to capture various lexical and syntactic features. The resulting algorithm is fast and scalable on large-scale corpora. We test the system on the Amazon online review dataset. The results show that our model is able to achieve promising performance without any human annotations.",
}
% __index_level_0__: 57,295
@inproceedings{argueta-chiang-2017-decoding,
    title = "Decoding with Finite-State Transducers on {GPU}s",
    author = "Argueta, Arturo and Chiang, David",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1098/",
    pages = "1044--1052",
    abstract = "Weighted finite automata and transducers (including hidden Markov models and conditional random fields) are widely used in natural language processing (NLP) to perform tasks such as morphological analysis, part-of-speech tagging, chunking, named entity recognition, speech recognition, and others. Parallelizing finite state algorithms on graphics processing units (GPUs) would benefit many areas of NLP. Although researchers have implemented GPU versions of basic graph algorithms, no work, to our knowledge, has been done on GPU algorithms for weighted finite automata. We introduce a GPU implementation of the Viterbi and forward-backward algorithms, achieving speedups of up to 4x over our serial implementations running on different computer architectures and 3335x over widely used tools such as OpenFST.",
}
% __index_level_0__: 57,296
@inproceedings{gu-etal-2017-learning,
    title = "Learning to Translate in Real-time with Neural Machine Translation",
    author = "Gu, Jiatao and Neubig, Graham and Cho, Kyunghyun and Li, Victor O.K.",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1099/",
    pages = "1053--1062",
    abstract = "Translating in real-time, a.k.a. simultaneous translation, outputs translation words before the input sentence ends, which is a challenging problem for conventional machine translation methods. We propose a neural machine translation (NMT) framework for simultaneous translation in which an agent learns to make decisions on when to translate from the interaction with a pre-trained NMT environment. To trade off quality and delay, we extensively explore various targets for delay and design a method for beam-search applicable in the simultaneous MT setting. Experiments against state-of-the-art baselines on two language pairs demonstrate the efficacy of the proposed framework both quantitatively and qualitatively.",
}
% __index_level_0__: 57,297
@inproceedings{toral-sanchez-cartagena-2017-multifaceted,
    title = "A Multifaceted Evaluation of Neural versus Phrase-Based Machine Translation for 9 Language Directions",
    author = "Toral, Antonio and S{\'a}nchez-Cartagena, V{\'i}ctor M.",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1100/",
    pages = "1063--1073",
    abstract = "We aim to shed light on the strengths and weaknesses of the newly introduced neural machine translation paradigm. To that end, we conduct a multifaceted evaluation in which we compare outputs produced by state-of-the-art neural machine translation and phrase-based machine translation systems for 9 language directions across a number of dimensions. Specifically, we measure the similarity of the outputs, their fluency and amount of reordering, the effect of sentence length and performance across different error categories. We find that translations produced by neural machine translation systems are considerably different, more fluent and more accurate in terms of word order compared to those produced by phrase-based systems. Neural machine translation systems are also more accurate at producing inflected forms, but they perform poorly when translating very long sentences.",
}
% __index_level_0__: 57,298
@inproceedings{rabinovich-etal-2017-personalized,
    title = "Personalized Machine Translation: Preserving Original Author Traits",
    author = "Rabinovich, Ella and Patel, Raj Nath and Mirkin, Shachar and Specia, Lucia and Wintner, Shuly",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1101/",
    pages = "1074--1084",
    abstract = "The language that we produce reflects our personality, and various personal and demographic characteristics can be detected in natural language texts. We focus on one particular personal trait of the author, gender, and study how it is manifested in original texts and in translations. We show that the author's gender has a powerful, clear signal in original texts, but this signal is obfuscated in human and machine translation. We then propose simple domain-adaptation techniques that help retain the original gender traits in the translation, without harming the quality of the translation, thereby creating more personalized machine translation systems.",
}
% __index_level_0__: 57,299
@inproceedings{heyman-etal-2017-bilingual,
    title = "Bilingual Lexicon Induction by Learning to Combine Word-Level and Character-Level Representations",
    author = "Heyman, Geert and Vuli{\'c}, Ivan and Moens, Marie-Francine",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1102/",
    pages = "1085--1095",
    abstract = "We study the problem of bilingual lexicon induction (BLI) in a setting where some translation resources are available, but unknown translations are sought for certain, possibly domain-specific terminology. We frame BLI as a classification problem for which we design a neural network based classification architecture composed of recurrent long short-term memory and deep feed forward networks. The results show that word- and character-level representations each improve state-of-the-art results for BLI, and the best results are obtained by exploiting the synergy between these word- and character-level representations in the classification model.",
}
% __index_level_0__: 57,300
@inproceedings{escoter-etal-2017-grouping,
    title = "Grouping business news stories based on salience of named entities",
    author = "Escoter, Lloren{\c{c}} and Pivovarova, Lidia and Du, Mian and Katinskaia, Anisia and Yangarber, Roman",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1103/",
    pages = "1096--1106",
    abstract = "In news aggregation systems focused on broad news domains, certain stories may appear in multiple articles. Depending on the relative importance of the story, the number of versions can reach dozens or hundreds within a day. The text in these versions may be nearly identical or quite different. Linking multiple versions of a story into a single group brings several important benefits to the end-user{--}reducing the cognitive load on the reader, as well as signaling the relative importance of the story. We present a grouping algorithm, and explore several vector-based representations of input documents: from a baseline using keywords, to a method using salience{--}a measure of importance of named entities in the text. We demonstrate that features beyond keywords yield substantial improvements, verified on a manually-annotated corpus of business news stories.",
}
% __index_level_0__: 57,301
@inproceedings{conneau-etal-2017-deep,
    title = "Very Deep Convolutional Networks for Text Classification",
    author = "Conneau, Alexis and Schwenk, Holger and Barrault, Lo{\"i}c and Lecun, Yann",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1104/",
    pages = "1107--1116",
    abstract = "The dominant approaches for many NLP tasks are recurrent neural networks, in particular LSTMs, and convolutional neural networks. However, these architectures are rather shallow in comparison to the deep convolutional networks which have pushed the state-of-the-art in computer vision. We present a new architecture (VDCNN) for text processing which operates directly at the character level and uses only small convolutions and pooling operations. We are able to show that the performance of this model increases with the depth: using up to 29 convolutional layers, we report improvements over the state-of-the-art on several public text classification tasks. To the best of our knowledge, this is the first time that very deep convolutional nets have been applied to text processing.",
}
% __index_level_0__: 57,302
@inproceedings{wachsmuth-etal-2017-pagerank,
    title = "{\textquotedblleft}{P}age{R}ank{\textquotedblright} for Argument Relevance",
    author = "Wachsmuth, Henning and Stein, Benno and Ajjour, Yamen",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1105/",
    pages = "1117--1127",
    abstract = "Future search engines are expected to deliver pro and con arguments in response to queries on controversial topics. While argument mining is now a focus of research, the question of how to retrieve the relevant arguments remains open. This paper proposes a radical model to assess relevance objectively at web scale: the relevance of an argument's conclusion is decided by what other arguments reuse it as a premise. We build an argument graph for this model that we analyze with a recursive weighting scheme, adapting key ideas of PageRank. In experiments on a large ground-truth argument graph, the resulting relevance scores correlate with human average judgments. We outline what natural language challenges must be faced at web scale in order to stepwise bring argument relevance to web search engines.",
}
% __index_level_0__: 57,303
@inproceedings{perez-rosas-etal-2017-predicting,
    title = "Predicting Counselor Behaviors in Motivational Interviewing Encounters",
    author = "P{\'e}rez-Rosas, Ver{\'o}nica and Mihalcea, Rada and Resnicow, Kenneth and Singh, Satinder and An, Lawrence and Goggin, Kathy J. and Catley, Delwyn",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1106/",
    pages = "1128--1137",
    abstract = "As the number of people receiving psycho-therapeutic treatment increases, the automatic evaluation of counseling practice arises as an important challenge in the clinical domain. In this paper, we address the automatic evaluation of counseling performance by analyzing counselors' language during their interaction with clients. In particular, we present a model towards the automation of Motivational Interviewing (MI) coding, which is the current gold standard to evaluate MI counseling. First, we build a dataset of hand labeled MI encounters; second, we use text-based methods to extract and analyze linguistic patterns associated with counselor behaviors; and third, we develop an automatic system to predict these behaviors. We introduce a new set of features based on semantic information and syntactic patterns, and show that they lead to accuracy figures of up to 90{\%}, which represent a significant improvement with respect to features used in the past.",
}
% __index_level_0__: 57,304
@inproceedings{stamatatos-2017-authorship,
    title = "Authorship Attribution Using Text Distortion",
    author = "Stamatatos, Efstathios",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1107/",
    pages = "1138--1149",
    abstract = "Authorship attribution is associated with important applications in forensics and humanities research. A crucial point in this field is to quantify the personal style of writing, ideally in a way that is not affected by changes in topic or genre. In this paper, we present a novel method that enhances authorship attribution effectiveness by introducing a text distortion step before extracting stylometric measures. The proposed method attempts to mask topic-specific information that is not related to the personal style of authors. Based on experiments on two main tasks in authorship attribution, closed-set attribution and authorship verification, we demonstrate that the proposed approach can enhance existing methods especially under cross-topic conditions, where the training and test corpora do not match in topic.",
}
% __index_level_0__: 57,305
@inproceedings{leeuwenberg-moens-2017-structured,
    title = "Structured Learning for Temporal Relation Extraction from Clinical Records",
    author = "Leeuwenberg, Artuur and Moens, Marie-Francine",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1108/",
    pages = "1150--1158",
    abstract = "We propose a scalable structured learning model that jointly predicts temporal relations between events and temporal expressions (TLINKS), and the relation between these events and the document creation time (DCTR). We employ a structured perceptron, together with integer linear programming constraints for document-level inference during training and prediction, exploiting relational properties of temporality and learning the relations globally at the document level. Moreover, this study gives insights into the results of integrating constraints for temporal relation extraction when using structured learning and prediction. Our best system outperforms the state-of-the-art on both the CONTAINS TLINK task and the DCTR task.",
}
% __index_level_0__: 57,306
@inproceedings{yadav-etal-2017-entity,
    title = "Entity Extraction in Biomedical Corpora: An Approach to Evaluate Word Embedding Features with {PSO} based Feature Selection",
    author = "Yadav, Shweta and Ekbal, Asif and Saha, Sriparna and Bhattacharyya, Pushpak",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1109/",
    pages = "1159--1170",
    abstract = "Text mining has drawn significant attention in the recent past due to the rapid growth in biomedical and clinical records. Entity extraction is one of the fundamental components for biomedical text mining. In this paper, we propose a novel approach of feature selection for entity extraction that exploits the concept of deep learning and Particle Swarm Optimization (PSO). The system utilizes word embedding features along with several other features extracted by studying the properties of the datasets. We obtain an interesting observation that compact word embedding features as determined by PSO are more effective compared to the entire word embedding feature set for entity extraction. The proposed system is evaluated on three benchmark biomedical datasets, namely GENIA, GENETAG, and AiMed. The effectiveness of the proposed approach is evident with significant performance gains over the baseline models as well as the other existing systems. We observe improvements of 7.86{\%}, 5.27{\%} and 7.25{\%} F-measure points over the baseline models for the GENIA, GENETAG, and AiMed datasets, respectively.",
}
% __index_level_0__: 57,307
@inproceedings{quirk-poon-2017-distant,
    title = "Distant Supervision for Relation Extraction beyond the Sentence Boundary",
    author = "Quirk, Chris and Poon, Hoifung",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-1110/",
    pages = "1171--1182",
    abstract = "The growing demand for structured knowledge has led to great interest in relation extraction, especially in cases with limited supervision. However, existing distant supervision approaches only extract relations expressed in single sentences. In general, cross-sentence relation extraction is under-explored, even in the supervised-learning setting. In this paper, we propose the first approach for applying distant supervision to cross-sentence relation extraction. At the core of our approach is a graph representation that can incorporate both standard dependencies and discourse relations, thus providing a unifying way to model relations within and across sentences. We extract features from multiple paths in this graph, increasing accuracy and robustness when confronted with linguistic variation and analysis error. Experiments on an important extraction task for precision medicine show that our approach can learn an accurate cross-sentence extractor, using only a small existing knowledge base and unlabeled text from biomedical research articles. Compared to the existing distant supervision paradigm, our approach extracted twice as many relations at similar precision, thus demonstrating the prevalence of cross-sentence relations and the promise of our approach.",
}
% __index_level_0__: 57,308
inproceedings
yaghoobzadeh-etal-2017-noise
Noise Mitigation for Neural Entity Typing and Relation Extraction
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1111/
Yaghoobzadeh, Yadollah and Adel, Heike and Sch{\"u}tze, Hinrich
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
1183--1194
In this paper, we address two different types of noise in information extraction models: noise from distant supervision and noise from pipeline input features. Our target tasks are entity typing and relation extraction. For the first noise type, we introduce multi-instance multi-label learning algorithms using neural network models, and apply them to fine-grained entity typing for the first time. Our model outperforms the state-of-the-art supervised approach which uses global embeddings of entities. For the second noise type, we propose ways to improve the integration of noisy entity type predictions into relation extraction. Our experiments show that probabilistic predictions are more robust than discrete predictions and that joint training of the two tasks performs best.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,309
inproceedings
takamura-etal-2017-analyzing
Analyzing Semantic Change in {J}apanese Loanwords
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1112/
Takamura, Hiroya and Nagata, Ryo and Kawasaki, Yoshifumi
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
1195--1204
We analyze semantic changes in loanwords from English that are used in Japanese (Japanese loanwords). Specifically, we create word embeddings of English and Japanese and map the Japanese embeddings into the English space so that we can calculate the similarity of each Japanese word and each English word. We then attempt to find loanwords that are semantically different from their original, see if known meaning changes are correctly captured, and show the possibility of using our methodology in language education.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,310
inproceedings
jager-etal-2017-using
Using support vector machines and state-of-the-art algorithms for phonetic alignment to identify cognates in multi-lingual wordlists
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1113/
J{\"a}ger, Gerhard and List, Johann-Mattis and Sofroniev, Pavel
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
1205--1216
Most current approaches in phylogenetic linguistics require as input multilingual word lists partitioned into sets of etymologically related words (cognates). Cognate identification is so far done manually by experts, which is time consuming and as of yet only available for a small number of well-studied language families. Automating this step will greatly expand the empirical scope of phylogenetic methods in linguistics, as raw wordlists (in phonetic transcription) are much easier to obtain than wordlists in which cognate words have been fully identified and annotated, even for under-studied languages. A couple of different methods have been proposed in the past, but they are either disappointing regarding their performance or not applicable to larger datasets. Here we present a new approach that uses support vector machines to unify different state-of-the-art methods for phonetic alignment and cognate detection within a single framework. Training and evaluating this method on a typologically broad collection of gold-standard data shows it to be superior to the existing state of the art.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,311
inproceedings
maharjan-etal-2017-multi
A Multi-task Approach to Predict Likability of Books
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1114/
Maharjan, Suraj and Arevalo, John and Montes, Manuel and Gonz{\'a}lez, Fabio A. and Solorio, Thamar
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
1217--1227
We investigate the value of feature engineering and neural network models for predicting successful writing. Similar to previous work, we treat this as a binary classification task and explore new strategies to automatically learn representations from book contents. We evaluate our feature set on two different corpora created from Project Gutenberg books. The first presents a novel approach for generating the gold standard labels for the task and the other is based on prior research. Using a combination of hand-crafted and recurrent neural network learned representations in a dual learning setting, we obtain the best performance of 73.50{\%} weighted F1-score.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,312
inproceedings
van-cranenburgh-bod-2017-data
A Data-Oriented Model of Literary Language
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1115/
van Cranenburgh, Andreas and Bod, Rens
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
1228--1238
We consider the task of predicting how literary a text is, with a gold standard from human ratings. Aside from a standard bigram baseline, we apply rich syntactic tree fragments, mined from the training set, and a series of hand-picked features. Our model is the first to distinguish degrees of highly and less literary novels using a variety of lexical and syntactic features, and explains 76.0{\%} of the variation in literary ratings.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,313
inproceedings
shoemark-etal-2017-aye
Aye or naw, whit dae ye hink? {S}cottish independence and linguistic identity on social media
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1116/
Shoemark, Philippa and Sur, Debnil and Shrimpton, Luke and Murray, Iain and Goldwater, Sharon
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
1239--1248
Political surveys have indicated a relationship between a sense of Scottish identity and voting decisions in the 2014 Scottish Independence Referendum. Identity is often reflected in language use, suggesting the intuitive hypothesis that individuals who support Scottish independence are more likely to use distinctively Scottish words than those who oppose it. In the first large-scale study of sociolinguistic variation on social media in the UK, we identify distinctively Scottish terms in a data-driven way, and find that these terms are indeed used at a higher rate by users of pro-independence hashtags than by users of anti-independence hashtags. However, we also find that in general people are less likely to use distinctively Scottish words in tweets with referendum-related hashtags than in their general Twitter activity. We attribute this difference to style shifting relative to audience, aligning with previous work showing that Twitter users tend to use fewer local variants when addressing a broader audience.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,314
inproceedings
kuncoro-etal-2017-recurrent
What Do Recurrent Neural Network Grammars Learn About Syntax?
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1117/
Kuncoro, Adhiguna and Ballesteros, Miguel and Kong, Lingpeng and Dyer, Chris and Neubig, Graham and Smith, Noah A.
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
1249--1258
Recurrent neural network grammars (RNNG) are a recently proposed probabilistic generative modeling family for natural language. They show state-of-the-art language modeling and parsing performance. We investigate what information they learn, from a linguistic perspective, through various ablations to the model and the data, and by augmenting the model with an attention mechanism (GA-RNNG) to enable closer inspection. We find that explicit modeling of composition is crucial for achieving the best performance. Through the attention mechanism, we find that headedness plays a central role in phrasal representation (with the model's latent attention largely agreeing with predictions made by hand-crafted head rules, albeit with some important differences). By training grammars without nonterminal labels, we find that phrasal representations depend minimally on nonterminals, providing support for the endocentricity hypothesis.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,315
inproceedings
coavoux-crabbe-2017-incremental
Incremental Discontinuous Phrase Structure Parsing with the {GAP} Transition
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1118/
Coavoux, Maximin and Crabb{\'e}, Beno{\^i}t
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
1259--1270
This article introduces a novel transition system for discontinuous lexicalized constituent parsing called SR-GAP. It is an extension of the shift-reduce algorithm with an additional gap transition. Evaluation on two German treebanks shows that SR-GAP outperforms the previous best transition-based discontinuous parser (Maier, 2015) by a large margin (it is notably twice as accurate on the prediction of discontinuous constituents), and is competitive with the state of the art (Fern{\'a}ndez-Gonz{\'a}lez and Martins, 2015). As a side contribution, we adapt span features (Hall et al., 2014) to discontinuous parsing.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,316
inproceedings
shimaoka-etal-2017-neural
Neural Architectures for Fine-grained Entity Type Classification
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1119/
Shimaoka, Sonse and Stenetorp, Pontus and Inui, Kentaro and Riedel, Sebastian
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
1271--1280
In this work, we investigate several neural network architectures for fine-grained entity type classification and make three key contributions. Despite being a natural comparison and addition, previous work on attentive neural architectures has not considered hand-crafted features; we combine these with learnt features and establish that they complement each other. Additionally, through quantitative analysis we establish that the attention mechanism learns to attend over syntactic heads and the phrase containing the mention, both of which are known to be strong hand-crafted features for our task. We introduce parameter sharing between labels through a hierarchical encoding method, that in low-dimensional projections show clear clusters for each type hierarchy. Lastly, despite using the same evaluation dataset, the literature frequently compare models trained using different data. We demonstrate that the choice of training data has a drastic impact on performance, which decreases by as much as 9.85{\%} loose micro F1 score for a previously proposed method. Despite this discrepancy, our best model achieves state-of-the-art results with 75.36{\%} loose micro F1 score on the well-established Figer (GOLD) dataset and we report the best results for models trained using publicly available data for the OntoNotes dataset with 64.93{\%} loose micro F1 score.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,317
inproceedings
kohita-etal-2017-multilingual
Multilingual Back-and-Forth Conversion between Content and Function Head for Easy Dependency Parsing
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2001/
Kohita, Ryosuke and Noji, Hiroshi and Matsumoto, Yuji
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
1--7
Universal Dependencies (UD) is becoming a standard annotation scheme cross-linguistically, but it is argued that this scheme centering on content words is harder to parse than the conventional one centering on function words. To improve the parsability of UD, we propose a back-and-forth conversion algorithm, in which we preprocess the training treebank to increase parsability, and reconvert the parser outputs to follow the UD scheme as a postprocess. We show that this technique consistently improves LAS across languages even with a state-of-the-art parser, in particular on core dependency arcs such as nominal modifier. We also provide an in-depth analysis to understand why our method increases parsability.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,319
inproceedings
littell-etal-2017-uriel
{URIEL} and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2002/
Littell, Patrick and Mortensen, David R. and Lin, Ke and Kairis, Katherine and Turner, Carlisle and Levin, Lori
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
8--14
We introduce the URIEL knowledge base for massively multilingual NLP and the lang2vec utility, which provides information-rich vector identifications of languages drawn from typological, geographical, and phylogenetic databases and normalized to have straightforward and consistent formats, naming, and semantics. The goal of URIEL and lang2vec is to enable multilingual NLP, especially on less-resourced languages and make possible types of experiments (especially but not exclusively related to NLP tasks) that are otherwise difficult or impossible due to the sparsity and incommensurability of the data sources. lang2vec vectors have been shown to reduce perplexity in multilingual language modeling, when compared to one-hot language identification vectors.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,320
inproceedings
labeau-allauzen-2017-experimental
An experimental analysis of Noise-Contrastive Estimation: the noise distribution matters
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2003/
Labeau, Matthieu and Allauzen, Alexandre
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
15--20
Noise Contrastive Estimation (NCE) is a learning procedure that is regularly used to train neural language models, since it avoids the computational bottleneck caused by the output softmax. In this paper, we attempt to explain some of the weaknesses of this objective function, and to draw directions for further developments. Experiments on a small task show the issues raised by a unigram noise distribution, and that a context dependent noise distribution, such as the bigram distribution, can solve these issues and provide stable and data-efficient learning.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,321
inproceedings
li-etal-2017-robust
Robust Training under Linguistic Adversity
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2004/
Li, Yitong and Cohn, Trevor and Baldwin, Timothy
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
21--27
Deep neural networks have achieved remarkable results across many language processing tasks, however they have been shown to be susceptible to overfitting and highly sensitive to noise, including adversarial attacks. In this work, we propose a linguistically-motivated approach for training robust models based on exposing the model to corrupted text examples at training time. We consider several flavours of linguistically plausible corruption, including lexical semantic and syntactic methods. Empirically, we evaluate our method with a convolutional neural model across a range of sentiment analysis datasets. Compared with a baseline and the dropout method, our method achieves better overall performance.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,322
inproceedings
zamani-schwartz-2017-using
Using {T}witter Language to Predict the Real Estate Market
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2005/
Zamani, Mohammadzaman and Schwartz, H. Andrew
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
28--33
We explore whether social media can provide a window into community real estate (foreclosure rates and price changes) beyond that of traditional economic and demographic variables. We find language use in Twitter not only predicts real estate outcomes as well as traditional variables across counties, but that including Twitter language in traditional models leads to a significant improvement (e.g. from Pearson r = .50 to r = .59 for price changes). We overcome the challenge of the relative sparsity and noise in Twitter language variables by showing that training on the residual error of the traditional models leads to more accurate overall assessments. Finally, we discover that it is Twitter language related to business (e.g. {\textquoteleft}company', {\textquoteleft}marketing') and technology (e.g. {\textquoteleft}technology', {\textquoteleft}internet'), among others, that yields predictive power over economics.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,323
inproceedings
paetzold-specia-2017-lexical
Lexical Simplification with Neural Ranking
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2006/
Paetzold, Gustavo and Specia, Lucia
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
34--40
We present a new Lexical Simplification approach that exploits Neural Networks to learn substitutions from the Newsela corpus - a large set of professionally produced simplifications. We extract candidate substitutions by combining the Newsela corpus with a retrofitted context-aware word embeddings model and rank them using a new neural regression model that learns rankings from annotated data. This strategy leads to the highest Accuracy, Precision and F1 scores to date in standard datasets for the task.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,324
inproceedings
schluter-2017-limits
The limits of automatic summarisation according to {ROUGE}
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2007/
Schluter, Natalie
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
41--45
This paper discusses some central caveats of summarisation, incurred in the use of the ROUGE metric for evaluation, with respect to optimal solutions. The task is NP-hard, of which we give the first proof. Still, as we show empirically for three central benchmark datasets for the task, greedy algorithms empirically seem to perform optimally according to the metric. Additionally, overall quality assurance is problematic: there is no natural upper bound on the quality of summarisation systems, and even humans are excluded from performing optimal summarisation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,325
inproceedings
ouyang-etal-2017-crowd
Crowd-Sourced Iterative Annotation for Narrative Summarization Corpora
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2008/
Ouyang, Jessica and Chang, Serina and McKeown, Kathy
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
46--51
We present an iterative annotation process for producing aligned, parallel corpora of abstractive and extractive summaries for narrative. Our approach uses a combination of trained annotators and crowd-sourcing, allowing us to elicit human-generated summaries and alignments quickly and at low cost. We use crowd-sourcing to annotate aligned phrases with the text-to-text generation techniques needed to transform each phrase into the other. We apply this process to a corpus of 476 personal narratives, which we make available on the Web.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,326
inproceedings
chu-etal-2017-broad
Broad Context Language Modeling as Reading Comprehension
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2009/
Chu, Zewei and Wang, Hai and Gimpel, Kevin and McAllester, David
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
52--57
Progress in text understanding has been driven by large datasets that test particular capabilities, like recent datasets for reading comprehension (Hermann et al., 2015). We focus here on the LAMBADA dataset (Paperno et al., 2016), a word prediction task requiring broader context than the immediate sentence. We view LAMBADA as a reading comprehension problem and apply comprehension models based on neural networks. Though these models are constrained to choose a word from the context, they improve the state of the art on LAMBADA from 7.3{\%} to 49{\%}. We analyze 100 instances, finding that neural network readers perform well in cases that involve selecting a name from the context based on dialogue or discourse cues but struggle when coreference resolution or external knowledge is needed.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,327
inproceedings
fancellu-etal-2017-detecting
Detecting negation scope is easy, except when it isn't
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2010/
Fancellu, Federico and Lopez, Adam and Webber, Bonnie and He, Hangfeng
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
58--63
Several corpora have been annotated with negation scope{---}the set of words whose meaning is negated by a cue like the word {\textquotedblleft}not{\textquotedblright}{---}leading to the development of classifiers that detect negation scope with high accuracy. We show that for nearly all of these corpora, this high accuracy can be attributed to a single fact: they frequently annotate negation scope as a single span of text delimited by punctuation. For negation scopes not of this form, detection accuracy is low and under-sampling the easy training examples does not substantially improve accuracy. We demonstrate that this is partly an artifact of annotation guidelines, and we argue that future negation scope annotation efforts should focus on these more difficult cases.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,328
inproceedings
zhang-etal-2017-mt
{MT}/{IE}: Cross-lingual Open Information Extraction with Neural Sequence-to-Sequence Models
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2011/
Zhang, Sheng and Duh, Kevin and Van Durme, Benjamin
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
64--70
Cross-lingual information extraction is the task of distilling facts from foreign language (e.g. Chinese text) into representations in another language that is preferred by the user (e.g. English tuples). Conventional pipeline solutions decompose the task as machine translation followed by information extraction (or vice versa). We propose a joint solution with a neural sequence model, and show that it outperforms the pipeline in a cross-lingual open information extraction setting by 1-4 BLEU and 0.5-0.8 F1.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,329
inproceedings
rimell-etal-2017-learning
Learning to Negate Adjectives with Bilinear Models
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2012/
Rimell, Laura and Mabona, Amandla and Bulat, Luana and Kiela, Douwe
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
71--78
We learn a mapping that negates adjectives by predicting an adjective's antonym in an arbitrary word embedding model. We show that both linear models and neural networks improve on this task when they have access to a vector representing the semantic domain of the input word, e.g. a centroid of temperature words when predicting the antonym of {\textquoteleft}cold'. We introduce a continuous class-conditional bilinear neural network which is able to negate adjectives with high precision.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,330
inproceedings
boleda-etal-2017-instances
Instances and concepts in distributional space
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2013/
Boleda, Gemma and Gupta, Abhijeet and Pad{\'o}, Sebastian
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
79--85
Instances ({\textquotedblleft}Mozart{\textquotedblright}) are ontologically distinct from concepts or classes ({\textquotedblleft}composer{\textquotedblright}). Natural language encompasses both, but instances have received comparatively little attention in distributional semantics. Our results show that instances and concepts differ in their distributional properties. We also establish that instantiation detection ({\textquotedblleft}Mozart {--} composer{\textquotedblright}) is generally easier than hypernymy detection ({\textquotedblleft}chemist {--} scientist{\textquotedblright}), and that results on the influence of input representation do not transfer from hyponymy to instantiation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,331
inproceedings
zarriess-schlangen-2017-child
Is this a Child, a Girl or a Car? Exploring the Contribution of Distributional Similarity to Learning Referential Word Meanings
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2014/
Zarrie{\ss}, Sina and Schlangen, David
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
86--91
There has recently been a lot of work trying to use images of referents of words for improving vector space meaning representations derived from text. We investigate the opposite direction, as it were, trying to improve visual word predictors that identify objects in images, by exploiting distributional similarity information during training. We show that for certain words (such as entry-level nouns or hypernyms), we can indeed learn better referential word meanings by taking into account their semantic similarity to other words. For other words, there is no or even a detrimental effect, compared to a learning setup that presents even semantically related objects as negative instances.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,332
inproceedings
cocos-callison-burch-2017-language
The Language of Place: Semantic Value from Geospatial Context
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2016/
Cocos, Anne and Callison-Burch, Chris
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
99--104
There is a relationship between what we say and where we say it. Word embeddings are usually trained assuming that semantically-similar words occur within the same textual contexts. We investigate the extent to which semantically-similar words occur within the same geospatial contexts. We enrich a corpus of geolocated Twitter posts with physical data derived from Google Places and OpenStreetMap, and train word embeddings using the resulting geospatial contexts. Intrinsic evaluation of the resulting vectors shows that geographic context alone does provide useful information about semantic relatedness.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,334
inproceedings
barbieri-etal-2017-emojis
Are Emojis Predictable?
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2017/
Barbieri, Francesco and Ballesteros, Miguel and Saggion, Horacio
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
105--111
Emojis are ideograms which are naturally combined with plain text to visually complement or condense the meaning of a message. Despite being widely used in social media, their underlying semantics have received little attention from a Natural Language Processing standpoint. In this paper, we investigate the relation between words and emojis, studying the novel task of predicting which emojis are evoked by text-based tweet messages. We train several models based on Long Short-Term Memory networks (LSTMs) in this task. Our experimental results show that our neural model outperforms a baseline as well as humans solving the same task, suggesting that computational models are able to better capture the underlying semantics of emojis.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,335
inproceedings
kirov-etal-2017-rich
A Rich Morphological Tagger for {E}nglish: Exploring the Cross-Linguistic Tradeoff Between Morphology and Syntax
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2018/
Kirov, Christo and Sylak-Glassman, John and Knowles, Rebecca and Cotterell, Ryan and Post, Matt
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
112--117
A traditional claim in linguistics is that all human languages are equally expressive{---}able to convey the same wide range of meanings. Morphologically rich languages, such as Czech, rely on overt inflectional and derivational morphology to convey many semantic distinctions. Languages with comparatively limited morphology, such as English, should be able to accomplish the same using a combination of syntactic and contextual cues. We capitalize on this idea by training a tagger for English that uses syntactic features obtained by automatic parsing to recover complex morphological tags projected from Czech. The high accuracy of the resulting model provides quantitative confirmation of the underlying linguistic hypothesis of equal expressivity, and bodes well for future improvements in downstream HLT tasks including machine translation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,336
inproceedings
vylomova-etal-2017-context
Context-Aware Prediction of Derivational Word-forms
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2019/
Vylomova, Ekaterina and Cotterell, Ryan and Baldwin, Timothy and Cohn, Trevor
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
118--124
Derivational morphology is a fundamental and complex characteristic of language. In this paper we propose a new task of predicting the derivational form of a given base-form lemma that is appropriate for a given context. We present an encoder-decoder style neural network to produce a derived form character-by-character, based on its corresponding character-level representation of the base form and the context. We demonstrate that our model is able to generate valid context-sensitive derivations from known base forms, but is less accurate in a lexicon-agnostic setting.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,337
inproceedings
le-godais-etal-2017-comparing
Comparing Character-level Neural Language Models Using a Lexical Decision Task
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2020/
Le Godais, Ga{\"e}l and Linzen, Tal and Dupoux, Emmanuel
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
125--130
What is the information captured by neural network models of language? We address this question in the case of character-level recurrent neural language models. These models do not have explicit word representations; do they acquire implicit ones? We assess the lexical capacity of a network using the lexical decision task common in psycholinguistics: the system is required to decide whether or not a string of characters forms a word. We explore how accuracy on this task is affected by the architecture of the network, focusing on cell type (LSTM vs. SRN), depth and width. We also compare these architectural properties to a simple count of the parameters of the network. The overall number of parameters in the network turns out to be the most important predictor of accuracy; in particular, there is little evidence that deeper networks are beneficial for this task.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,338
inproceedings
lemke-etal-2017-optimal
Optimal encoding! - Information Theory constrains article omission in newspaper headlines
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2021/
Lemke, Robin and Horch, Eva and Reich, Ingo
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
131--135
In this paper we pursue the hypothesis that the distribution of article omission specifically is constrained by principles of Information Theory (Shannon 1948). In particular, Information Theory predicts a stronger preference for article omission before nouns which are relatively unpredictable in context of the preceding words. We investigated article omission in German newspaper headlines with a corpus and acceptability rating study. Both support our hypothesis: Articles are inserted more often before unpredictable nouns and subjects perceive article omission before predictable nouns as more well-formed than before unpredictable ones. This suggests that information theoretic principles constrain the distribution of article omission in headlines.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,339
inproceedings
strapparava-mihalcea-2017-computational
A Computational Analysis of the Language of Drug Addiction
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2022/
Strapparava, Carlo and Mihalcea, Rada
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
136--142
We present a computational analysis of the language of drug users when talking about their drug experiences. We introduce a new dataset of over 4,000 descriptions of experiences reported by users of four main drug types, and show that we can predict with an F1-score of up to 88{\%} the drug behind a certain experience. We also perform an analysis of the dominant psycholinguistic processes and dominant emotions associated with each drug type, which sheds light on the characteristics of drug users.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,340
inproceedings
haponchyk-moschitti-2017-practical
A Practical Perspective on Latent Structured Prediction for Coreference Resolution
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2023/
Haponchyk, Iryna and Moschitti, Alessandro
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
143--149
Latent structured prediction theory proposes powerful methods such as Latent Structural SVM (LSSVM), which can potentially be very appealing for coreference resolution (CR). In contrast, little prior work is available, mainly targeting the latent structured perceptron (LSP). In this paper, we carry out a practical study comparing, for the first time, online learning with LSSVM. We analyze the intricacies that may have made initial attempts to use LSSVM fail, i.e., a huge training time and much lower accuracy produced by Kruskal's spanning tree algorithm. In this respect, we also propose a new effective feature selection approach for improving system efficiency. The results show that LSP, if correctly parameterized, produces the same performance as LSSVM, while being much more efficient.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,341
inproceedings
shi-demberg-2017-need
On the Need of Cross Validation for Discourse Relation Classification
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2024/
Shi, Wei and Demberg, Vera
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
150--156
The task of implicit discourse relation classification has received increased attention in recent years, including two CoNLL shared tasks on the topic. Existing machine learning models for the task train on sections 2-21 of the PDTB and test on section 23, which includes a total of 761 implicit discourse relations. In this paper, we make a methodological point, arguing that the standard test set is too small to draw conclusions about whether the inclusion of certain features constitutes a genuine improvement, or whether one merely got lucky with some properties of the test set, and we argue for the community's adoption of cross validation for the discourse relation classification task.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,342
inproceedings
press-wolf-2017-using
Using the Output Embedding to Improve Language Models
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2025/
Press, Ofir and Wolf, Lior
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
157--163
We study the topmost weight matrix of neural network language models. We show that this matrix constitutes a valid word embedding. When training language models, we recommend tying the input embedding and this output embedding. We analyze the resulting update rules and show that the tied embedding evolves in a more similar way to the output embedding than to the input embedding in the untied model. We also offer a new method of regularizing the output embedding. Our methods lead to a significant reduction in perplexity, as we are able to show on a variety of neural network language models. Finally, we show that weight tying can reduce the size of neural translation models to less than half of their original size without harming their performance.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,343
inproceedings
bingel-sogaard-2017-identifying
Identifying beneficial task relations for multi-task learning in deep neural networks
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2026/
Bingel, Joachim and S{\o}gaard, Anders
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
164--169
Multi-task learning (MTL) in deep neural networks for NLP has recently received increasing interest due to some compelling benefits, including its potential to efficiently regularize models and to reduce the need for labeled data. While it has brought significant improvements in a number of NLP tasks, mixed results have been reported, and little is known about the conditions under which MTL leads to gains in NLP. This paper sheds light on the specific task relations that can lead to gains from MTL models over single-task setups.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,344
inproceedings
pande-2017-effective
Effective search space reduction for spell correction using character neural embeddings
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2027/
Pande, Harshit
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
170--174
We present a novel, unsupervised, and distance measure agnostic method for search space reduction in spell correction using neural character embeddings. The embeddings are learned by skip-gram word2vec training on sequences generated from dictionary words in a phonetic information-retentive manner. We report a very high performance in terms of both success rates and reduction of search space on the Birkbeck spelling error corpus. To the best of our knowledge, this is the first application of word2vec to spell correction.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,345
inproceedings
cotterell-etal-2017-explaining
Explaining and Generalizing Skip-Gram through Exponential Family Principal Component Analysis
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2028/
Cotterell, Ryan and Poliak, Adam and Van Durme, Benjamin and Eisner, Jason
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
175--181
The popular skip-gram model induces word embeddings by exploiting the signal from word-context co-occurrence. We offer a new interpretation of skip-gram based on exponential family PCA, a form of matrix factorization, to generalize the skip-gram model to tensor factorization. In turn, this lets us train embeddings through richer higher-order co-occurrences, e.g., triples that include positional information (to incorporate syntax) or morphological information (to share parameters across related words). We experiment on 40 languages and show that our model improves upon skip-gram.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,346
inproceedings
cao-clark-2017-latent
Latent Variable Dialogue Models and their Diversity
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2029/
Cao, Kris and Clark, Stephen
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
182--187
We present a dialogue generation model that directly captures the variability in possible responses to a given input, which reduces the {\textquoteleft}boring output{\textquoteright} issue of deterministic dialogue models. Experiments show that our model generates more diverse outputs than baseline models, and also generates more consistently acceptable output than sampling from a deterministic encoder-decoder model.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,347
inproceedings
katerenchuk-2017-age
Age Group Classification with Speech and Metadata Multimodality Fusion
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2030/
Katerenchuk, Denys
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
188--193
Children comprise a significant proportion of TV viewers, and it is worthwhile to customize the experience for them. However, identifying who in the audience is a child can be a challenging task. We present initial studies of a novel method which combines utterances with user metadata. In particular, we develop an ensemble of different machine learning techniques on different subsets of data to improve child detection. Our initial results show a 9.2{\%} absolute improvement over the baseline, leading to state-of-the-art performance.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,348
inproceedings
lakomkin-etal-2017-automatically
Automatically augmenting an emotion dataset improves classification using audio
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2031/
Lakomkin, Egor and Weber, Cornelius and Wermter, Stefan
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
194--197
In this work, we tackle the problem of speech emotion classification. One of the issues in the area of affective computation is that the amount of annotated data is very limited. On the other hand, the number of ways that the same emotion can be expressed verbally is enormous due to variability between speakers. This is one of the factors that limits performance and generalization. We propose a simple method that extracts audio samples from movies using textual sentiment analysis. As a result, it is possible to automatically construct a larger dataset of audio samples with positive, negative, and neutral emotional speech. We show that pretraining a recurrent neural network on such a dataset yields better results on the challenging EmotiW corpus. This experiment shows a potential benefit of combining textual sentiment analysis with vocal information.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,349
inproceedings
chen-etal-2017-line
On-line Dialogue Policy Learning with Companion Teaching
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2032/
Chen, Lu and Yang, Runzhe and Chang, Cheng and Ye, Zihao and Zhou, Xiang and Yu, Kai
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
198--204
On-line dialogue policy learning is the key to building evolvable conversational agents in real-world scenarios. A poor initial policy can easily lead to bad user experience and consequently fail to attract sufficient users for policy training. A novel framework, companion teaching, is proposed to include a human teacher in the dialogue policy training loop to address the cold-start problem. Here, dialogue policy is trained using not only the user's reward, but also the teacher's example actions as well as estimated immediate reward at the turn level. Simulation experiments showed that, with a small number of human teaching dialogues, the proposed approach can effectively improve user experience at the beginning and smoothly lead to good performance with more user interaction data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,350
inproceedings
vodolan-etal-2017-hybrid
Hybrid Dialog State Tracker with {ASR} Features
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2033/
Vodol{\'a}n, Miroslav and Kadlec, Rudolf and Kleindienst, Jan
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
205--210
This paper presents a hybrid dialog state tracker enhanced by trainable Spoken Language Understanding (SLU) for slot-filling dialog systems. Our architecture is inspired by previously proposed neural-network-based belief-tracking systems. In addition, we extended some parts of our modular architecture with differentiable rules to allow end-to-end training. We hypothesize that these rules allow our tracker to generalize better than pure machine-learning based systems. For evaluation, we used the Dialog State Tracking Challenge (DSTC) 2 dataset - a popular belief tracking testbed with dialogs from restaurant information system. To our knowledge, our hybrid tracker sets a new state-of-the-art result in three out of four categories within the DSTC2.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,351
inproceedings
nicolai-kondrak-2017-morphological
Morphological Analysis without Expert Annotation
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2034/
Nicolai, Garrett and Kondrak, Grzegorz
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
211--216
The task of morphological analysis is to produce a complete list of lemma+tag analyses for a given word-form. We propose a discriminative string transduction approach which exploits plain inflection tables and raw text corpora, thus obviating the need for expert annotation. Experiments on four languages demonstrate that our system has much higher coverage than a hand-engineered FST analyzer, and is more accurate than a state-of-the-art morphological tagger.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,352
inproceedings
kumar-etal-2017-morphological
Morphological Analysis of the {D}ravidian Language Family
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2035/
Kumar, Arun and Cotterell, Ryan and Padr{\'o}, Llu{\'i}s and Oliver, Antoni
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
217--222
The Dravidian languages are one of the most widely spoken language families in the world, yet there are very few annotated resources available to NLP researchers. To remedy this, we create DravMorph, a corpus annotated for morphological segmentation and part-of-speech. Additionally, we exploit novel features and higher-order models to set state-of-the-art results on these corpora on both tasks, beating techniques proposed in the literature by as much as 4 points in segmentation F1.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,353
inproceedings
camacho-collados-navigli-2017-babeldomains
{B}abel{D}omains: Large-Scale Domain Labeling of Lexical Resources
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2036/
Camacho-Collados, Jose and Navigli, Roberto
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
223--228
In this paper we present BabelDomains, a unified resource which provides lexical items with information about domains of knowledge. We propose an automatic method that uses knowledge from various lexical resources, exploiting both distributional and graph-based clues, to accurately propagate domain information. We evaluate our methodology intrinsically on two lexical resources (WordNet and BabelNet), achieving a precision over 80{\%} in both cases. Finally, we show the potential of BabelDomains in a supervised learning setting, clustering training data by domain for hypernym discovery.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,354
inproceedings
napoles-etal-2017-jfleg
{JFLEG}: A Fluency Corpus and Benchmark for Grammatical Error Correction
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2037/
Napoles, Courtney and Sakaguchi, Keisuke and Tetreault, Joel
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
229--234
We present a new parallel corpus, JHU FLuency-Extended GUG corpus (JFLEG) for developing and evaluating grammatical error correction (GEC). Unlike other corpora, it represents a broad range of language proficiency levels and uses holistic fluency edits to not only correct grammatical errors but also make the original text more native sounding. We describe the types of corrections made and benchmark four leading GEC systems on this corpus, identifying specific areas in which they do well and how they can improve. JFLEG fulfills the need for a new gold standard to properly assess the current state of GEC.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,355
inproceedings
habash-etal-2017-parallel
A Parallel Corpus for Evaluating Machine Translation between {A}rabic and {E}uropean Languages
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2038/
Habash, Nizar and Zalmout, Nasser and Taji, Dima and Hoang, Hieu and Alzate, Maverick
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
235--241
We present Arab-Acquis, a large publicly available dataset for evaluating machine translation between 22 European languages and Arabic. Arab-Acquis consists of over 12,000 sentences from the JRC-Acquis (Acquis Communautaire) corpus translated twice by professional translators, once from English and once from French, and totaling over 600,000 words. The corpus follows previous data splits in the literature for tuning, development, and testing. We describe the corpus and how it was created. We also present the first benchmarking results on translating to and from Arabic for 22 European languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,356
inproceedings
abzianidze-etal-2017-parallel
The {P}arallel {M}eaning {B}ank: Towards a Multilingual Corpus of Translations Annotated with Compositional Meaning Representations
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2039/
Abzianidze, Lasha and Bjerva, Johannes and Evang, Kilian and Haagsma, Hessel and van Noord, Rik and Ludmann, Pierre and Nguyen, Duc-Duy and Bos, Johan
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
242--247
The Parallel Meaning Bank is a corpus of translations annotated with shared, formal meaning representations comprising over 11 million words divided over four languages (English, German, Italian, and Dutch). Our approach is based on cross-lingual projection: automatically produced (and manually corrected) semantic annotations for English sentences are mapped onto their word-aligned translations, assuming that the translations are meaning-preserving. The semantic annotation consists of five main steps: (i) segmentation of the text in sentences and lexical items; (ii) syntactic parsing with Combinatory Categorial Grammar; (iii) universal semantic tagging; (iv) symbolization; and (v) compositional semantic analysis based on Discourse Representation Theory. These steps are performed using statistical models trained in a semi-supervised manner. The employed annotation models are all language-neutral. Our first results are promising.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,357
inproceedings
agic-etal-2017-cross
Cross-lingual tagger evaluation without test data
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2040/
Agi{\'c}, {\v{Z}}eljko and Plank, Barbara and S{\o}gaard, Anders
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
248--253
We address the challenge of cross-lingual POS tagger evaluation in absence of manually annotated test data. We put forth and evaluate two dictionary-based metrics. On the tasks of accuracy prediction and system ranking, we reveal that these metrics are reliable enough to approximate test set-based evaluation, and at the same time lean enough to support assessment for truly low-resource languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,358
inproceedings
cardellino-etal-2017-legal
Legal {NERC} with ontologies, {W}ikipedia and curriculum learning
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-2041/
Cardellino, Cristian and Teruel, Milagro and Alonso Alemany, Laura and Villata, Serena
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
254--259
In this paper, we present a Wikipedia-based approach to develop resources for the legal domain. We establish a mapping between a legal domain ontology, LKIF (Hoekstra et al. 2007), and a Wikipedia-based ontology, YAGO (Suchanek et al. 2007), and through that we populate LKIF. Moreover, we use the mentions of those entities in Wikipedia text to train a specific Named Entity Recognizer and Classifier. We find that this classifier works well in the Wikipedia, but, as could be expected, performance decreases in a corpus of judgments of the European Court of Human Rights. However, this tool will be used as a preprocess for human annotation. We resort to a technique called {\textquotedblleft}curriculum learning{\textquotedblright} aimed to overcome problems of overfitting by learning increasingly more complex concepts. However, we find that in this particular setting, the method works best by learning from most specific to most general concepts, not the other way round.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,359
@inproceedings{sprugnoli-etal-2017-content,
    title = {The Content Types Dataset: a New Resource to Explore Semantic and Functional Characteristics of Texts},
    author = {Sprugnoli, Rachele and Caselli, Tommaso and Tonelli, Sara and Moretti, Giovanni},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2042/},
    pages = {260--266},
    abstract = {This paper presents a new resource, called Content Types Dataset, to promote the analysis of texts as a composition of units with specific semantic and functional roles. By developing this dataset, we also introduce a new NLP task for the automatic classification of Content Types. The annotation scheme and the dataset are described together with two sets of classification experiments.},
}
@inproceedings{sari-etal-2017-continuous,
    title = {Continuous N-gram Representations for Authorship Attribution},
    author = {Sari, Yunita and Vlachos, Andreas and Stevenson, Mark},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2043/},
    pages = {267--273},
    abstract = {This paper presents work on using continuous representations for authorship attribution. In contrast to previous work, which uses discrete feature representations, our model learns continuous representations for n-gram features via a neural network jointly with the classification layer. Experimental results demonstrate that the proposed model outperforms the state-of-the-art on two datasets, while producing comparable results on the remaining two.},
}
@inproceedings{bekoulis-etal-2017-reconstructing,
    title = {Reconstructing the house from the ad: Structured prediction on real estate classifieds},
    author = {Bekoulis, Giannis and Deleu, Johannes and Demeester, Thomas and Develder, Chris},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2044/},
    pages = {274--279},
    abstract = {In this paper, we address the (to the best of our knowledge) new problem of extracting a structured description of real estate properties from their natural language descriptions in classifieds. We survey and present several models to (a) identify important entities of a property (e.g., rooms) from classifieds and (b) structure them into a tree format, with the entities as nodes and edges representing a part-of relation. Experiments show that a graph-based system deriving the tree from an initially fully connected entity graph, outperforms a transition-based system starting from only the entity nodes, since it better reconstructs the tree.},
}
@inproceedings{farajian-etal-2017-neural,
    title = {Neural vs. Phrase-Based Machine Translation in a Multi-Domain Scenario},
    author = {Farajian, M. Amin and Turchi, Marco and Negri, Matteo and Bertoldi, Nicola and Federico, Marcello},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2045/},
    pages = {280--284},
    abstract = {State-of-the-art neural machine translation (NMT) systems are generally trained on specific domains by carefully selecting the training sets and applying proper domain adaptation techniques. In this paper we consider the real world scenario in which the target domain is not predefined, hence the system should be able to translate text from multiple domains. We compare the performance of a generic NMT system and phrase-based statistical machine translation (PBMT) system by training them on a generic parallel corpus composed of data from different domains. Our results on multi-domain English-French data show that, in these realistic conditions, PBMT outperforms its neural counterpart. This raises the question: is NMT ready for deployment as a generic/multi-purpose MT backbone in real-world settings?},
}
@inproceedings{martschat-markert-2017-improving,
    title = {Improving {ROUGE} for Timeline Summarization},
    author = {Martschat, Sebastian and Markert, Katja},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2046/},
    pages = {285--290},
    abstract = {Current evaluation metrics for timeline summarization either ignore the temporal aspect of the task or require strict date matching. We introduce variants of ROUGE that allow alignment of daily summaries via temporal distance or semantic similarity. We argue for the suitability of these variants in a theoretical analysis and demonstrate it in a battery of task-specific tests.},
}
@inproceedings{suzuki-nagata-2017-cutting,
    title = {Cutting-off Redundant Repeating Generations for Neural Abstractive Summarization},
    author = {Suzuki, Jun and Nagata, Masaaki},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2047/},
    pages = {291--297},
    abstract = {This paper tackles the reduction of redundant repeating generation that is often observed in RNN-based encoder-decoder models. Our basic idea is to jointly estimate the upper-bound frequency of each target vocabulary in the encoder and control the output words based on the estimation in the decoder. Our method shows significant improvement over a strong RNN-based encoder-decoder baseline and achieved its best results on an abstractive summarization benchmark.},
}
@inproceedings{gatti-etal-2017-sing,
    title = {To Sing like a Mockingbird},
    author = {Gatti, Lorenzo and {\"O}zbal, G{\"o}zde and Stock, Oliviero and Strapparava, Carlo},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2048/},
    pages = {298--304},
    abstract = {Musical parody, i.e. the act of changing the lyrics of an existing and very well-known song, is a commonly used technique for creating catchy advertising tunes and for mocking people or events. Here we describe a system for automatically producing a musical parody, starting from a corpus of songs. The system can automatically identify characterizing words and concepts related to a novel text, which are taken from the daily news. These concepts are then used as seeds to appropriately replace part of the original lyrics of a song, using metrical, rhyming and lexical constraints. Finally, the parody can be sung with a singing speech synthesizer, with no intervention from the user.},
}
@inproceedings{hayashi-nagata-2017-k,
    title = {K-best Iterative {V}iterbi Parsing},
    author = {Hayashi, Katsuhiko and Nagata, Masaaki},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2049/},
    pages = {305--310},
    abstract = {This paper presents an efficient and optimal parsing algorithm for probabilistic context-free grammars (PCFGs). To achieve faster parsing, our proposal employs a pruning technique to reduce unnecessary edges in the search space. The key is to conduct repetitively Viterbi inside and outside parsing, while gradually expanding the search space to efficiently compute heuristic bounds used for pruning. Our experimental results using the English Penn Treebank corpus show that the proposed algorithm is faster than the standard CKY parsing algorithm. In addition, we also show how to extend this algorithm to extract k-best Viterbi parse trees.},
}
@inproceedings{de-kok-etal-2017-pp,
    title = {{PP} Attachment: Where do We Stand?},
    author = {de Kok, Dani{\"e}l and Ma, Jianqiang and Dima, Corina and Hinrichs, Erhard},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2050/},
    pages = {311--317},
    abstract = {Prepositional phrase (PP) attachment is a well known challenge to parsing. In this paper, we combine the insights of different works, namely: (1) treating PP attachment as a classification task with an arbitrary number of attachment candidates; (2) using auxiliary distributions to augment the data beyond the hand-annotated training set; (3) using topological fields to get information about the distribution of PP attachment throughout clauses and (4) using state-of-the-art techniques such as word embeddings and neural networks. We show that jointly using these techniques leads to substantial improvements. We also conduct a qualitative analysis to gauge where the ceiling of the task is in a realistic setup.},
}
@inproceedings{aufrant-etal-2017-dont,
    title = {Don't Stop Me Now! Using Global Dynamic Oracles to Correct Training Biases of Transition-Based Dependency Parsers},
    author = {Aufrant, Lauriane and Wisniewski, Guillaume and Yvon, Fran{\c{c}}ois},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2051/},
    pages = {318--323},
    abstract = {This paper formalizes a sound extension of dynamic oracles to global training, in the frame of transition-based dependency parsers. By dispensing with the pre-computation of references, this extension widens the training strategies that can be entertained for such parsers; we show this by revisiting two standard training procedures, early-update and max-violation, to correct some of their search space sampling biases. Experimentally, on the SPMRL treebanks, this improvement increases the similarity between the train and test distributions and yields performance improvements up to 0.7 UAS, without any computation overhead.},
}
@inproceedings{bhat-etal-2017-joining,
    title = {Joining Hands: Exploiting Monolingual Treebanks for Parsing of Code-mixing Data},
    author = {Bhat, Irshad and Bhat, Riyaz A. and Shrivastava, Manish and Sharma, Dipti},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2052/},
    pages = {324--330},
    abstract = {In this paper, we propose efficient and less resource-intensive strategies for parsing of code-mixed data. These strategies are not constrained by in-domain annotations, rather they leverage pre-existing monolingual annotated resources for training. We show that these methods can produce significantly better results as compared to an informed baseline. Due to lack of an evaluation set for code-mixed structures, we also present a data set of 450 Hindi and English code-mixed tweets of Hindi multilingual speakers for evaluation.},
}
@inproceedings{coavoux-crabbe-2017-multilingual,
    title = {Multilingual Lexicalized Constituency Parsing with Word-Level Auxiliary Tasks},
    author = {Coavoux, Maximin and Crabb{\'e}, Beno{\^i}t},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2053/},
    pages = {331--336},
    abstract = {We introduce a constituency parser based on a bi-LSTM encoder adapted from recent work (Cross and Huang, 2016b; Kiperwasser and Goldberg, 2016), which can incorporate a lower level character biLSTM (Ballesteros et al., 2015; Plank et al., 2016). We model two important interfaces of constituency parsing with auxiliary tasks supervised at the word level: (i) part-of-speech (POS) and morphological tagging, (ii) functional label prediction. On the SPMRL dataset, our parser obtains above state-of-the-art results on constituency parsing without requiring either predicted POS or morphological tags, and outputs labelled dependency trees.},
}
@inproceedings{pezzelle-etal-2017-precise,
    title = {Be Precise or Fuzzy: Learning the Meaning of Cardinals and Quantifiers from Vision},
    author = {Pezzelle, Sandro and Marelli, Marco and Bernardi, Raffaella},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2054/},
    pages = {337--342},
    abstract = {People can refer to quantities in a visual scene by using either exact cardinals (e.g. one, two, three) or natural language quantifiers (e.g. few, most, all). In humans, these two processes underlie fairly different cognitive and neural mechanisms. Inspired by this evidence, the present study proposes two models for learning the objective meaning of cardinals and quantifiers from visual scenes containing multiple objects. We show that a model capitalizing on a {\textquoteleft}fuzzy' measure of similarity is effective for learning quantifiers, whereas the learning of exact cardinals is better accomplished when information about number is provided.},
}
@inproceedings{ficler-goldberg-2017-improving,
    title = {Improving a Strong Neural Parser with Conjunction-Specific Features},
    author = {Ficler, Jessica and Goldberg, Yoav},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2055/},
    pages = {343--348},
    abstract = {While dependency parsers reach very high overall accuracy, some dependency relations are much harder than others. In particular, dependency parsers perform poorly in coordination construction (i.e., correctly attaching the conj relation). We extend a state-of-the-art dependency parser with conjunction-specific features, focusing on the similarity between the conjuncts head words. Training the extended parser yields an improvement in conj attachment as well as in overall dependency parsing accuracy on the Stanford dependency conversion of the Penn TreeBank.},
}
@inproceedings{pal-etal-2017-neural,
    title = {Neural Automatic Post-Editing Using Prior Alignment and Reranking},
    author = {Pal, Santanu and Naskar, Sudip Kumar and Vela, Mihaela and Liu, Qun and van Genabith, Josef},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2056/},
    pages = {349--355},
    abstract = {We present a second-stage machine translation (MT) system based on a neural machine translation (NMT) approach to automatic post-editing (APE) that improves the translation quality provided by a first-stage MT system. Our APE system (APE{\_}Sym) is an extended version of an attention based NMT model with bilingual symmetry employing bidirectional models, mt{--}pe and pe{--}mt. APE translations produced by our system show statistically significant improvements over the first-stage MT, phrase-based APE and the best reported score on the WMT 2016 APE dataset by a previous neural APE system. Re-ranking (APE{\_}Rerank) of the n-best translations from the phrase-based APE and APE{\_}Sym systems provides further substantial improvements over the symmetric neural APE model. Human evaluation confirms that the APE{\_}Rerank generated PE translations improve on the previous best neural APE system at WMT 2016.},
}
@inproceedings{graham-etal-2017-improving,
    title = {Improving Evaluation of Document-level Machine Translation Quality Estimation},
    author = {Graham, Yvette and Ma, Qingsong and Baldwin, Timothy and Liu, Qun and Parra, Carla and Scarton, Carolina},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2057/},
    pages = {356--361},
    abstract = {Meaningful conclusions about the relative performance of NLP systems are only possible if the gold standard employed in a given evaluation is both valid and reliable. In this paper, we explore the validity of human annotations currently employed in the evaluation of document-level quality estimation for machine translation (MT). We demonstrate the degree to which MT system rankings are dependent on weights employed in the construction of the gold standard, before proposing direct human assessment as a valid alternative. Experiments show direct assessment (DA) scores for documents to be highly reliable, achieving a correlation of above 0.9 in a self-replication experiment, in addition to a substantial estimated cost reduction through quality controlled crowd-sourcing. The original gold standard based on post-edits incurs a 10{--}20 times greater cost than DA.},
}
@inproceedings{stahlberg-etal-2017-neural,
    title = {Neural Machine Translation by Minimising the {B}ayes-risk with Respect to Syntactic Translation Lattices},
    author = {Stahlberg, Felix and de Gispert, Adri{\`a} and Hasler, Eva and Byrne, Bill},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2058/},
    pages = {362--368},
    abstract = {We present a novel scheme to combine neural machine translation (NMT) with traditional statistical machine translation (SMT). Our approach borrows ideas from linearised lattice minimum Bayes-risk decoding for SMT. The NMT score is combined with the Bayes-risk of the translation according to the SMT lattice. This makes our approach much more flexible than n-best list or lattice rescoring as the neural decoder is not restricted to the SMT search space. We show an efficient and simple way to integrate risk estimation into the NMT decoder which is suitable for word-level as well as subword-unit-level NMT. We test our method on English-German and Japanese-English and report significant gains over lattice rescoring on several data sets for both single and ensembled NMT. The MBR decoder produces entirely new hypotheses far beyond simply rescoring the SMT search space or fixing UNKs in the NMT output.},
}
@inproceedings{huck-etal-2017-producing,
    title = {Producing Unseen Morphological Variants in Statistical Machine Translation},
    author = {Huck, Matthias and Tamchyna, Ale{\v{s}} and Bojar, Ond{\v{r}}ej and Fraser, Alexander},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2059/},
    pages = {369--375},
    abstract = {Translating into morphologically rich languages is difficult. Although the coverage of lemmas may be reasonable, many morphological variants cannot be learned from the training data. We present a statistical translation system that is able to produce these inflected word forms. Different from most previous work, we do not separate morphological prediction from lexical choice into two consecutive steps. Our approach is novel in that it is integrated in decoding and takes advantage of context information from both the source language and the target language sides.},
}
@inproceedings{sennrich-2017-grammatical,
    title = {How Grammatical is Character-level Neural Machine Translation? Assessing {MT} Quality with Contrastive Translation Pairs},
    author = {Sennrich, Rico},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2060/},
    pages = {376--382},
    abstract = {Analysing translation quality in regards to specific linguistic phenomena has historically been difficult and time-consuming. Neural machine translation has the attractive property that it can produce scores for arbitrary translations, and we propose a novel method to assess how well NMT systems model specific linguistic phenomena such as agreement over long distances, the production of novel words, and the faithful translation of polarity. The core idea is that we measure whether a reference translation is more probable under a NMT model than a contrastive translation which introduces a specific type of error. We present LingEval97, a large-scale data set of 97000 contrastive translation pairs based on the WMT English-{\ensuremath{>}}German translation task, with errors automatically created with simple rules. We report results for a number of systems, and find that recently introduced character-level NMT systems perform better at transliteration than models with byte-pair encoding (BPE) segmentation, but perform more poorly at morphosyntactic agreement, and translating discontiguous units of meaning.},
}
@inproceedings{yang-etal-2017-neural,
    title = {Neural Machine Translation with Recurrent Attention Modeling},
    author = {Yang, Zichao and Hu, Zhiting and Deng, Yuntian and Dyer, Chris and Smola, Alex},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2061/},
    pages = {383--387},
    abstract = {Knowing which words have been attended to in previous time steps while generating a translation is a rich source of information for predicting what words will be attended to in the future. We improve upon the attention model of Bahdanau et al. (2014) by explicitly modeling the relationship between previous and subsequent attention levels for each word using one recurrent network per input word. This architecture easily captures informative features, such as fertility and regularities in relative distortion. In experiments, we show our parameterization of attention improves translation quality.},
}
@inproceedings{pilehvar-collier-2017-inducing,
    title = {Inducing Embeddings for Rare and Unseen Words by Leveraging Lexical Resources},
    author = {Pilehvar, Mohammad Taher and Collier, Nigel},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2062/},
    pages = {388--393},
    abstract = {We put forward an approach that exploits the knowledge encoded in lexical resources in order to induce representations for words that were not encountered frequently during training. Our approach provides an advantage over the past work in that it enables vocabulary expansion not only for morphological variations, but also for infrequent domain specific terms. We performed evaluations in different settings, showing that the technique can provide consistent improvements on multiple benchmarks across domains.},
}
@inproceedings{lapesa-evert-2017-large,
    title = {Large-scale evaluation of dependency-based {DSM}s: Are they worth the effort?},
    author = {Lapesa, Gabriella and Evert, Stefan},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2063/},
    pages = {394--400},
    abstract = {This paper presents a large-scale evaluation study of dependency-based distributional semantic models. We evaluate dependency-filtered and dependency-structured DSMs in a number of standard semantic similarity tasks, systematically exploring their parameter space in order to give them a {\textquotedblleft}fair shot{\textquotedblright} against window-based models. Our results show that properly tuned window-based DSMs still outperform the dependency-based models in most tasks. There appears to be little need for the language-dependent resources and computational cost associated with syntactic analysis.},
}
@inproceedings{sanchez-riedel-2017-well,
    title = {How Well Can We Predict Hypernyms from Word Embeddings? A Dataset-Centric Analysis},
    author = {Sanchez, Ivan and Riedel, Sebastian},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2064/},
    pages = {401--407},
    abstract = {One key property of word embeddings currently under study is their capacity to encode hypernymy. Previous works have used supervised models to recover hypernymy structures from embeddings. However, the overall results do not clearly show how well we can recover such structures. We conduct the first dataset-centric analysis that shows how only the Baroni dataset provides consistent results. We empirically show that a possible reason for its good performance is its alignment to dimensions specific of hypernymy: generality and similarity.},
}
@inproceedings{vulic-2017-cross,
    title = {Cross-Lingual Syntactically Informed Distributed Word Representations},
    author = {Vuli{\'c}, Ivan},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2065/},
    pages = {408--414},
    abstract = {We develop a novel cross-lingual word representation model which injects syntactic information through dependency-based contexts into a shared cross-lingual word vector space. The model, termed CL-DepEmb, is based on the following assumptions: (1) dependency relations are largely language-independent, at least for related languages and prominent dependency links such as direct objects, as evidenced by the Universal Dependencies project; (2) word translation equivalents take similar grammatical roles in a sentence and are therefore substitutable within their syntactic contexts. Experiments with several language pairs on word similarity and bilingual lexicon induction, two fundamental semantic tasks emphasising semantic similarity, suggest the usefulness of the proposed syntactically informed cross-lingual word vector spaces. Improvements are observed in both tasks over standard cross-lingual {\textquotedblleft}offline mapping{\textquotedblright} baselines trained using the same setup and an equal level of bilingual supervision.},
}
@inproceedings{ferrero-etal-2017-using,
    title = {Using Word Embedding for Cross-Language Plagiarism Detection},
    author = {Ferrero, J{\'e}r{\'e}my and Besacier, Laurent and Schwab, Didier and Agn{\`e}s, Fr{\'e}d{\'e}ric},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2066/},
    pages = {415--421},
    abstract = {This paper proposes to use distributed representation of words (word embeddings) in cross-language textual similarity detection. The main contributions of this paper are the following: (a) we introduce new cross-language similarity detection methods based on distributed representation of words; (b) we combine the different methods proposed to verify their complementarity and finally obtain an overall F1 score of 89.15{\%} for English-French similarity detection at chunk level (88.5{\%} at sentence level) on a very challenging corpus.},
}
@inproceedings{avraham-goldberg-2017-interplay,
    title = {The Interplay of Semantics and Morphology in Word Embeddings},
    author = {Avraham, Oded and Goldberg, Yoav},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2067/},
    pages = {422--426},
    abstract = {We explore the ability of word embeddings to capture both semantic and morphological similarity, as affected by the different types of linguistic properties (surface form, lemma, morphological tag) used to compose the representation of each word. We train several models, where each uses a different subset of these properties to compose its representations. By evaluating the models on semantic and morphological measures, we reveal some useful insights on the relationship between semantics and morphology.},
}
@inproceedings{joulin-etal-2017-bag,
    title = {Bag of Tricks for Efficient Text Classification},
    author = {Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Mikolov, Tomas},
    editor = {Lapata, Mirella and Blunsom, Phil and Koller, Alexander},
    booktitle = {Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month = apr,
    year = {2017},
    address = {Valencia, Spain},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/E17-2068/},
    pages = {427--431},
    abstract = {This paper explores a simple and efficient baseline for text classification. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of accuracy, and many orders of magnitude faster for training and evaluation. We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute.},
}
@inproceedings{schofield-etal-2017-pulling,
    title = "Pulling Out the Stops: Rethinking Stopword Removal for Topic Models",
    author = "Schofield, Alexandra and Magnusson, M{\r{a}}ns and Mimno, David",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-2069/",
    pages = "432--436",
    abstract = "It is often assumed that topic models benefit from the use of a manually curated stopword list. Constructing this list is time-consuming and often subject to user judgments about what kinds of words are important to the model and the application. Although stopword removal clearly affects which word types appear as most probable terms in topics, we argue that this improvement is superficial, and that topic inference benefits little from the practice of removing stopwords beyond very frequent terms. Removing corpus-specific stopwords after model inference is more transparent and produces similar results to removing those words prior to inference."
}
@inproceedings{ramrakhiyani-etal-2017-measuring,
    title = "Measuring Topic Coherence through Optimal Word Buckets",
    author = "Ramrakhiyani, Nitin and Pawar, Sachin and Hingmire, Swapnil and Palshikar, Girish",
    editor = "Lapata, Mirella and Blunsom, Phil and Koller, Alexander",
    booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers",
    month = apr,
    year = "2017",
    address = "Valencia, Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/E17-2070/",
    pages = "437--442",
    abstract = "Measuring topic quality is essential for scoring the learned topics and their subsequent use in Information Retrieval and Text classification. To measure quality of Latent Dirichlet Allocation (LDA) based topics learned from text, we propose a novel approach based on grouping of topic words into buckets (TBuckets). A single large bucket signifies a single coherent theme, in turn indicating high topic coherence. TBuckets uses word embeddings of topic words and employs singular value decomposition (SVD) and Integer Linear Programming based optimization to create coherent word buckets. TBuckets outperforms the state-of-the-art techniques when evaluated using 3 publicly available datasets and on another one proposed in this paper."
}