Dataset schema (field: type, value range):
entry_type: stringclasses, 4 values
citation_key: stringlengths, 10 to 110
title: stringlengths, 6 to 276
editor: stringclasses, 723 values
month: stringclasses, 69 values
year: stringdate, 1963-01-01 00:00:00 to 2022-01-01 00:00:00
address: stringclasses, 202 values
publisher: stringclasses, 41 values
url: stringlengths, 34 to 62
author: stringlengths, 6 to 2.07k
booktitle: stringclasses, 861 values
pages: stringlengths, 1 to 12
abstract: stringlengths, 302 to 2.4k
journal: stringclasses, 5 values
volume: stringclasses, 24 values
doi: stringlengths, 20 to 39
n: stringclasses, 3 values
wer: stringclasses, 1 value
uas: null
language: stringclasses, 3 values
isbn: stringclasses, 34 values
recall: null
number: stringclasses, 8 values
a: null
b: null
c: null
k: null
f1: stringclasses, 4 values
r: stringclasses, 2 values
mci: stringclasses, 1 value
p: stringclasses, 2 values
sd: stringclasses, 1 value
female: stringclasses, 0 values
m: stringclasses, 0 values
food: stringclasses, 1 value
f: stringclasses, 1 value
note: stringclasses, 20 values
__index_level_0__: int64, 22k to 106k
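Each row below lists its values in the schema order above, with null for fields the entry does not use. A minimal sketch of how such a flat row can be zipped back into a record dict (the field list is transcribed from the schema; the helper name `row_to_record` is ours):

```python
# Field names in the order they appear in the schema above.
FIELDS = [
    "entry_type", "citation_key", "title", "editor", "month", "year",
    "address", "publisher", "url", "author", "booktitle", "pages",
    "abstract", "journal", "volume", "doi", "n", "wer", "uas", "language",
    "isbn", "recall", "number", "a", "b", "c", "k", "f1", "r", "mci", "p",
    "sd", "female", "m", "food", "f", "note", "__index_level_0__",
]

def row_to_record(values):
    """Pair a flat row of values with the schema fields, dropping nulls."""
    if len(values) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} values, got {len(values)}")
    return {k: v for k, v in zip(FIELDS, values) if v is not None}
```

Applied to the rows below, this keeps only the populated bibliographic fields of each entry.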
inproceedings
banerjee-etal-2017-nitmz
{NITMZ}-{JU} at {IJCNLP}-2017 Task 4: Customer Feedback Analysis
Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen
dec
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/I17-4030/
Banerjee, Somnath and Pakray, Partha and Manna, Riyanka and Das, Dipankar and Gelbukh, Alexander
Proceedings of the {IJCNLP} 2017, Shared Tasks
180--183
In this paper, we describe a deep learning framework for analyzing customer feedback, developed as part of our participation in the shared task on Customer Feedback Analysis at the 8th International Joint Conference on Natural Language Processing (IJCNLP 2017). A Convolutional Neural Network (CNN) based deep neural network model was employed for the customer feedback task. The proposed system was evaluated on two languages, namely, English and French.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,184
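Since the non-null fields of a record correspond directly to BibTeX fields, a record can be rendered back into a .bib entry. A sketch of that mapping (the helper name `record_to_bibtex` is ours; the demo dict uses a few fields from the first record above):

```python
def record_to_bibtex(rec):
    """Render a record dict as a BibTeX entry; entry_type and citation_key
    form the entry header, and bookkeeping fields are skipped."""
    skip = {"entry_type", "citation_key", "__index_level_0__"}
    lines = [f"@{rec['entry_type']}{{{rec['citation_key']},"]
    for key, value in rec.items():
        if key not in skip:
            lines.append(f"    {key} = {{{value}}},")
    lines.append("}")
    return "\n".join(lines)

entry = record_to_bibtex({
    "entry_type": "inproceedings",
    "citation_key": "banerjee-etal-2017-nitmz",
    "title": "{NITMZ}-{JU} at {IJCNLP}-2017 Task 4: Customer Feedback Analysis",
    "year": "2017",
    "__index_level_0__": 57184,
})
print(entry)
```

Wrapping each value in braces preserves the capitalization markers (e.g. `{IJCNLP}`) already present in the title strings.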
inproceedings
gupta-etal-2017-iitp
{IITP} at {IJCNLP}-2017 Task 4: Auto Analysis of Customer Feedback using {CNN} and {GRU} Network
Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen
dec
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/I17-4031/
Gupta, Deepak and Lenka, Pabitra and Bedi, Harsimran and Ekbal, Asif and Bhattacharyya, Pushpak
Proceedings of the {IJCNLP} 2017, Shared Tasks
184--193
Analyzing customer feedback is the best way to channel the data into new marketing strategies that benefit entrepreneurs as well as customers. Therefore, an automated system that can analyze customer behavior is in great demand. Users may write feedback in any language, and hence mining appropriate information often becomes intractable. Especially in a traditional feature-based supervised model, it is difficult to build a generic system, as one has to understand the language concerned in order to find the relevant features. To overcome this, we propose deep Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) based approaches that do not require handcrafting of features. We evaluate these techniques for analyzing customer feedback sentences in four languages, namely English, French, Japanese and Spanish. Our empirical analysis shows that our models perform well in all four languages on the setups of the IJCNLP Shared Task on Customer Feedback Analysis. Our model achieved the second rank in French, with an accuracy of 71.75{\%}, and third rank in all the other languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,185
inproceedings
wang-etal-2017-ynudlg
{YNUDLG} at {IJCNLP}-2017 Task 5: A {CNN}-{LSTM} Model with Attention for Multi-choice Question Answering in Examinations
Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen
dec
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/I17-4032/
Wang, Min and Liu, Qingxun and Ding, Peng and Li, Yongbin and Zhou, Xiaobing
Proceedings of the {IJCNLP} 2017, Shared Tasks
194--198
In this paper, we first use convolutional neural networks (CNN) to learn joint representations of question-answer pairs, then use these joint representations as inputs to a long short-term memory (LSTM) network with attention to learn the answer sequence of a question and label the matching quality of each answer. We also incorporate external knowledge by training Word2Vec on Flashcards data, obtaining more compact embeddings. Experimental results show that our method achieves better or comparable performance compared with the baseline system. The proposed approach achieves accuracies of 0.39 and 0.42 on the English validation and test sets, respectively.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,186
inproceedings
li-kong-2017-als
{ALS} at {IJCNLP}-2017 Task 5: Answer Localization System for Multi-Choice Question Answering in Exams
Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen
dec
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/I17-4033/
Li, Changliang and Kong, Cunliang
Proceedings of the {IJCNLP} 2017, Shared Tasks
199--202
Multi-choice question answering in exams is a typical QA task. To accomplish this task, we present an answer localization method that locates answers shown on web pages, considering both structural and semantic information. Using this method as a basis, we analyze sentences and paragraphs appearing on web pages to obtain predictions. With this answer localization system, we obtain effective results on both the validation and test datasets.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,187
inproceedings
hazem-2017-mappsent
{M}app{S}ent at {IJCNLP}-2017 Task 5: A Textual Similarity Approach Applied to Multi-choice Question Answering in Examinations
Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen
dec
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/I17-4034/
Hazem, Amir
Proceedings of the {IJCNLP} 2017, Shared Tasks
203--207
In this paper we present MappSent, a textual similarity approach that we applied to the multi-choice question answering in exams shared task. MappSent was initially proposed for question-to-question similarity (hazem2017). In this work, we present the results of two adaptations of MappSent to the question answering task on the English dataset.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,188
inproceedings
yuan-etal-2017-ynu
{YNU}-{HPCC} at {IJCNLP}-2017 Task 5: Multi-choice Question Answering in Exams Using an Attention-based {LSTM} Model
Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen
dec
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/I17-4035/
Yuan, Hang and Zhang, You and Wang, Jin and Zhang, Xuejie
Proceedings of the {IJCNLP} 2017, Shared Tasks
208--212
A shared task is a typical question answering task that aims to test how accurately the participants can answer the questions in exams. Typically, for each question, there are four candidate answers, and only one of the answers is correct. The existing methods for such a task usually implement a recurrent neural network (RNN) or long short-term memory (LSTM). However, both RNN and LSTM are biased models in which the words in the tail of a sentence are more dominant than the words in the header. In this paper, we propose the use of an attention-based LSTM (AT-LSTM) model for these tasks. By adding an attention mechanism to the standard LSTM, this model can more easily capture long contextual information.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,189
inproceedings
sarkar-etal-2017-ju
{JU} {NITM} at {IJCNLP}-2017 Task 5: A Classification Approach for Answer Selection in Multi-choice Question Answering System
Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen
dec
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/I17-4036/
Sarkar, Sandip and Das, Dipankar and Pakray, Partha
Proceedings of the {IJCNLP} 2017, Shared Tasks
213--216
This paper describes the participation of the JU NITM team in IJCNLP-2017 Task 5: {\textquotedblleft}Multi-choice Question Answering in Examinations{\textquotedblright}. The main aim of this shared task is to choose the correct option for each multi-choice question. Our proposed model includes vector representations as feature and machine learning for classification. At first we represent question and answer in vector space and after that find the cosine similarity between those two vectors. Finally we apply classification approach to find the correct answer. Our system was only developed for the English language, and it obtained an accuracy of 40.07{\%} for test dataset and 40.06{\%} for valid dataset.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,190
inproceedings
che-zhang-2017-deep
Deep Learning in Lexical Analysis and Parsing
Kurohashi, Sadao and Strube, Michael
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/I17-5001/
Che, Wanxiang and Zhang, Yue
Proceedings of the {IJCNLP} 2017, Tutorial Abstracts
1--2
Neural networks, also known under the fancier name of deep learning, can overcome the above {\textquotedblleft}feature engineering{\textquotedblright} problem. In theory, they can use non-linear activation functions and multiple layers to automatically find useful features. Novel network structures, such as convolutional or recurrent ones, help to reduce the difficulty further. These deep learning models have been successfully applied to lexical analysis and parsing. In this tutorial, we review each line of work, contrasting it with traditional statistical methods and organizing the approaches in a consistent order.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,192
inproceedings
chen-gao-2017-open
Open-Domain Neural Dialogue Systems
Kurohashi, Sadao and Strube, Michael
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/I17-5003/
Chen, Yun-Nung and Gao, Jianfeng
Proceedings of the {IJCNLP} 2017, Tutorial Abstracts
6--10
In the past decade, spoken dialogue systems have become the most prominent component of today's personal assistants. A lot of devices have incorporated dialogue system modules, which allow users to speak naturally in order to finish tasks more efficiently. Traditional conversational systems have rather complex and/or modular pipelines. The advance of deep learning technologies has recently given rise to applications of neural models to dialogue modeling. Nevertheless, applying deep learning technologies to building robust and scalable dialogue systems is still a challenging task and an open research area, as it requires a deeper understanding of the classic pipelines as well as detailed knowledge of the benchmarks of prior models and the recent state-of-the-art work. Therefore, this tutorial is designed to give an overview of dialogue system development while describing the most recent research on building task-oriented and chit-chat dialogue systems, and summarizing the challenges. We target an audience of students and practitioners who have some deep learning background and who want to become more familiar with conversational dialogue systems.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,194
inproceedings
cromieres-etal-2017-neural
Neural Machine Translation: Basics, Practical Aspects and Recent Trends
Kurohashi, Sadao and Strube, Michael
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/I17-5004/
Cromieres, Fabien and Nakazawa, Toshiaki and Dabre, Raj
Proceedings of the {IJCNLP} 2017, Tutorial Abstracts
11--13
Machine Translation (MT) is a sub-field of NLP which has experienced a number of paradigm shifts since its inception. Up until 2014, Phrase Based Statistical Machine Translation (PBSMT) approaches used to be the state of the art. In late 2014, Neural Machine Translation (NMT) was introduced and was proven to outperform all PBSMT approaches by a significant margin. Since then, the NMT approaches have undergone several transformations which have pushed the state of the art even further. This tutorial is primarily aimed at researchers who are either interested in or are fairly new to the world of NMT and want to obtain a deep understanding of NMT fundamentals. Because it will also cover the latest developments in NMT, it should also be useful to attendees with some experience in NMT.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,195
inproceedings
paetzold-specia-2017-ultimate
The Ultimate Presentation Makeup Tutorial: How to {P}olish your Posters, Slides and Presentations Skills
Kurohashi, Sadao and Strube, Michael
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/I17-5005/
Paetzold, Gustavo and Specia, Lucia
Proceedings of the {IJCNLP} 2017, Tutorial Abstracts
14--15
There is no question that our research community has been, and still is, producing an immense amount of interesting strategies, models and tools for a wide array of problems and challenges in diverse areas of knowledge. But for as long as interesting work has existed, we've been plagued by a great unsolved mystery: how come there is so much interesting work being published at conferences, but not as many interesting and engaging posters and presentations being featured in them? In this tutorial, we present practical step-by-step makeup solutions for posters, slides and oral presentations in order to help researchers who feel they are unable to convey the importance of their research to the community at conferences.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,196
inproceedings
tian-etal-2017-challenge
The Challenge of Composition in Distributional and Formal Semantics
Kurohashi, Sadao and Strube, Michael
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/I17-5006/
Tian, Ran and Mineshima, Koji and Mart{\'i}nez-G{\'o}mez, Pascual
Proceedings of the {IJCNLP} 2017, Tutorial Abstracts
16--17
This is a tutorial proposal. The abstract is as follows: The principle of compositionality states that the meaning of a complete sentence must be explained in terms of the meanings of its subsentential parts; in other words, each syntactic operation should have a corresponding semantic operation. In recent years, it has become increasingly evident that distributional and formal semantics are complementary in addressing composition; while the distributional/vector-based approach can naturally measure semantic similarity (Mitchell and Lapata, 2010), the formal/symbolic approach has a long tradition within logic-based semantic frameworks (Montague, 1974) and can readily be connected to theorem provers or databases to perform complicated tasks. In this tutorial, we will cover recent efforts in extending word vectors to account for composition and reasoning, the various challenging phenomena observed in composition and addressed by formal semantics, and a hybrid approach that combines the merits of the two. The outline and introductions of the instructors can be found in the submission. Ran Tian has taught a tutorial at the Annual Meeting of the Association for Natural Language Processing in Japan, 2015. The estimated audience size was about one hundred. Only a limited part of the contents of this tutorial is drawn from the previous one. Koji Mineshima has taught a one-week course at the 28th European Summer School in Logic, Language and Information (ESSLLI2016), together with Prof. Daisuke Bekki. Only a small part of the contents is the same as in this tutorial. Tutorials on {\textquotedblleft}CCG Semantic Parsing{\textquotedblright} have been given at ACL2013, EMNLP2014, and AAAI2015. A coming tutorial on {\textquotedblleft}Deep Learning for Semantic Composition{\textquotedblright} will be given at ACL2017. The contents of these tutorials are somewhat related to, but not overlapping with, our proposal.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,197
inproceedings
liu-perez-2017-gated
Gated End-to-End Memory Networks
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1001/
Liu, Fei and Perez, Julien
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
1--10
Machine reading using differentiable reasoning models has recently shown remarkable progress. In this context, End-to-End trainable Memory Networks (MemN2N) have demonstrated promising performance on simple natural language based reasoning tasks such as factual reasoning and basic deduction. However, other tasks, namely multi-fact question-answering, positional reasoning or dialog related tasks, remain challenging particularly due to the necessity of more complex interactions between the memory and controller modules composing this family of models. In this paper, we introduce a novel end-to-end memory access regulation mechanism inspired by the current progress on the connection short-cutting principle in the field of computer vision. Concretely, we develop a Gated End-to-End trainable Memory Network architecture (GMemN2N). From the machine learning perspective, this new capability is learned in an end-to-end fashion without the use of any additional supervision signal which is, as far as our knowledge goes, the first of its kind. Our experiments show significant improvements on the most challenging tasks in the 20 bAbI dataset, without the use of any domain knowledge. Then, we show improvements on the Dialog bAbI tasks including the real human-bot conversation-based Dialog State Tracking Challenge (DSTC-2) dataset. On these two datasets, our model sets the new state of the art.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,199
inproceedings
munkhdalai-yu-2017-neural
Neural Tree Indexers for Text Understanding
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1002/
Munkhdalai, Tsendsuren and Yu, Hong
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
11--21
Recurrent neural networks (RNNs) process input text sequentially and model the conditional transition between word tokens. In contrast, the advantages of recursive networks include that they explicitly model the compositionality and the recursive structure of natural language. However, the current recursive architecture is limited by its dependence on syntactic trees. In this paper, we introduce a robust syntactic parsing-independent tree structured model, Neural Tree Indexers (NTI), that provides a middle ground between the sequential RNNs and the syntactic tree-based recursive models. NTI constructs a full n-ary tree by processing the input text with its node function in a bottom-up fashion. An attention mechanism can then be applied to both structure and node function. We implemented and evaluated a binary tree model of NTI, showing the model achieved state-of-the-art performance on three different NLP tasks: natural language inference, answer sentence selection, and sentence classification, outperforming state-of-the-art recurrent and recursive neural networks.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,200
inproceedings
adel-schutze-2017-exploring
Exploring Different Dimensions of Attention for Uncertainty Detection
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1003/
Adel, Heike and Sch{\"u}tze, Hinrich
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
22--34
Neural networks with attention have proven effective for many natural language processing tasks. In this paper, we develop attention mechanisms for uncertainty detection. In particular, we generalize standardly used attention mechanisms by introducing external attention and sequence-preserving attention. These novel architectures differ from standard approaches in that they use external resources to compute attention weights and preserve sequence information. We compare them to other configurations along different dimensions of attention. Our novel architectures set the new state of the art on a Wikipedia benchmark dataset and perform similar to the state-of-the-art model on a biomedical benchmark which uses a large set of linguistic features.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,201
inproceedings
al-nabki-etal-2017-classifying
Classifying Illegal Activities on Tor Network Based on Web Textual Contents
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1004/
Al Nabki, Mhd Wesam and Fidalgo, Eduardo and Alegre, Enrique and de Paz, Ivan
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
35--43
The freedom of the Deep Web offers a safe place where people can express themselves anonymously but they also can conduct illegal activities. In this paper, we present and make publicly available a new dataset for Darknet active domains, which we call {\textquotedblleft}Darknet Usage Text Addresses{\textquotedblright} (DUTA). We built DUTA by sampling the Tor network during two months and manually labeled each address into 26 classes. Using DUTA, we conducted a comparison between two well-known text representation techniques crossed by three different supervised classifiers to categorize the Tor hidden services. We also fixed the pipeline elements and identified the aspects that have a critical influence on the classification results. We found that the combination of TFIDF words representation with Logistic Regression classifier achieves 96.6{\%} of 10 folds cross-validation accuracy and a macro F1 score of 93.7{\%} when classifying a subset of illegal activities from DUTA. The good performance of the classifier might support potential tools to help the authorities in the detection of these activities.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,202
inproceedings
martinez-alonso-plank-2017-multitask
When is multitask learning effective? Semantic sequence prediction under varying data conditions
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1005/
Mart{\'i}nez Alonso, H{\'e}ctor and Plank, Barbara
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
44--53
Multitask learning has been applied successfully to a range of tasks, mostly morphosyntactic. However, little is known on \textit{when} MTL works and whether there are data characteristics that help to determine the success of MTL. In this paper we evaluate a range of semantic sequence labeling tasks in a MTL setup. We examine different auxiliary task configurations, amongst which a novel setup, and correlate their impact to data-dependent conditions. Our results show that MTL is not always effective, because significant improvements are obtained only for 1 out of 5 tasks. When successful, auxiliary tasks with compact and more uniform label distributions are preferable.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,203
inproceedings
hartung-etal-2017-learning
Learning Compositionality Functions on Word Embeddings for Modelling Attribute Meaning in Adjective-Noun Phrases
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1006/
Hartung, Matthias and Kaupmann, Fabian and Jebbara, Soufian and Cimiano, Philipp
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
54--64
Word embeddings have been shown to be highly effective in a variety of lexical semantic tasks. They tend to capture meaningful relational similarities between individual words, at the expense of lacking the capability of making the underlying semantic relation explicit. In this paper, we investigate the attribute relation that often holds between the constituents of adjective-noun phrases. We use CBOW word embeddings to represent word meaning and learn a compositionality function that combines the individual constituents into a phrase representation, thus capturing the compositional attribute meaning. The resulting embedding model, while being fully interpretable, outperforms count-based distributional vector space models that are tailored to attribute meaning in the two tasks of attribute selection and phrase similarity prediction. Moreover, as the model captures a generalized layer of attribute meaning, it bears the potential to be used for predictions over various attribute inventories without re-training.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,204
inproceedings
shwartz-etal-2017-hypernyms
Hypernyms under Siege: Linguistically-motivated Artillery for Hypernymy Detection
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1007/
Shwartz, Vered and Santus, Enrico and Schlechtweg, Dominik
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
65--75
The fundamental role of hypernymy in NLP has motivated the development of many methods for the automatic identification of this relation, most of which rely on word distribution. We investigate an extensive number of such unsupervised measures, using several distributional semantic models that differ by context type and feature weighting. We analyze the performance of the different methods based on their linguistic motivation. Comparison to the state-of-the-art supervised methods shows that while supervised methods generally outperform the unsupervised ones, the former are sensitive to the distribution of training instances, hurting their reliability. Being based on general linguistic hypotheses and independent from training data, unsupervised measures are more robust, and therefore are still useful artillery for hypernymy detection.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,205
inproceedings
nguyen-etal-2017-distinguishing
Distinguishing Antonyms and Synonyms in a Pattern-based Neural Network
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1008/
Nguyen, Kim Anh and Schulte im Walde, Sabine and Vu, Ngoc Thang
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
76--85
Distinguishing between antonyms and synonyms is a key task to achieve high performance in NLP systems. While they are notoriously difficult to distinguish by distributional co-occurrence models, pattern-based methods have proven effective to differentiate between the relations. In this paper, we present a novel neural network model AntSynNET that exploits lexico-syntactic patterns from syntactic parse trees. In addition to the lexical and syntactic information, we successfully integrate the distance between the related words along the syntactic path as a new pattern feature. The results from classification experiments show that AntSynNET improves the performance over prior pattern-based methods.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,206
inproceedings
panchenko-etal-2017-unsupervised-mean
Unsupervised Does Not Mean Uninterpretable: The Case for Word Sense Induction and Disambiguation
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1009/
Panchenko, Alexander and Ruppert, Eugen and Faralli, Stefano and Ponzetto, Simone Paolo and Biemann, Chris
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
86--98
The current trend in NLP is the use of highly opaque models, e.g. neural networks and word embeddings. While these models yield state-of-the-art results on a range of tasks, their drawback is poor interpretability. On the example of word sense induction and disambiguation (WSID), we show that it is possible to develop an interpretable model that matches the state-of-the-art models in accuracy. Namely, we present an unsupervised, knowledge-free WSID approach, which is interpretable at three levels: word sense inventory, sense feature representations, and disambiguation procedure. Experiments show that our model performs on par with state-of-the-art word sense embeddings and other unsupervised systems while offering the possibility to justify its decisions in human-readable form.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,207
inproceedings
raganato-etal-2017-word
Word Sense Disambiguation: A Unified Evaluation Framework and Empirical Comparison
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1010/
Raganato, Alessandro and Camacho-Collados, Jose and Navigli, Roberto
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
99--110
Word Sense Disambiguation is a long-standing task in Natural Language Processing, lying at the core of human language understanding. However, the evaluation of automatic systems has been problematic, mainly due to the lack of a reliable evaluation framework. In this paper we develop a unified evaluation framework and analyze the performance of various Word Sense Disambiguation systems in a fair setup. The results show that supervised systems clearly outperform knowledge-based models. Among the supervised systems, a linear classifier trained on conventional local features still proves to be a hard baseline to beat. Nonetheless, recent approaches exploiting neural networks on unlabeled corpora achieve promising results, surpassing this hard baseline in most test sets.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,208
inproceedings
guo-etal-2017-effective
Which is the Effective Way for {G}aokao: Information Retrieval or Neural Networks?
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1011/
Guo, Shangmin and Zeng, Xiangrong and He, Shizhu and Liu, Kang and Zhao, Jun
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
111--120
As one of the most important tests in China, Gaokao is designed to be difficult enough to distinguish the excellent high school students. In this work, we detailed the Gaokao History Multiple Choice Questions (GKHMC) and proposed two different approaches to address them using various resources. One approach is based on an entity search technique (IR approach), the other on a text entailment approach where we specifically employ deep neural networks (NN approach). The result of experiments on our collected real Gaokao questions showed that they are good at different categories of questions: the IR approach performs much better on entity questions (EQs), while the NN approach shows its advantage on sentence questions (SQs). We achieve state-of-the-art performance and show that it is indispensable to apply a hybrid method when participating in the real-world tests.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,209
inproceedings
bogdanova-etal-2017-cant
If You Can't Beat Them Join Them: Handcrafted Features Complement Neural Nets for Non-Factoid Answer Reranking
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1012/
Bogdanova, Dasha and Foster, Jennifer and Dzendzik, Daria and Liu, Qun
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
121--131
We show that a neural approach to the task of non-factoid answer reranking can benefit from the inclusion of tried-and-tested handcrafted features. We present a neural network architecture based on a combination of recurrent neural networks that are used to encode questions and answers, and a multilayer perceptron. We show how this approach can be combined with additional features, in particular, the discourse features used by previous research. Our neural approach achieves state-of-the-art performance on a public dataset from Yahoo! Answers and its performance is further improved by incorporating the discourse features. Additionally, we present a new dataset of Ask Ubuntu questions where the hybrid approach also achieves good results.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,210
inproceedings
das-etal-2017-chains
Chains of Reasoning over Entities, Relations, and Text using Recurrent Neural Networks
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1013/
Das, Rajarshi and Neelakantan, Arvind and Belanger, David and McCallum, Andrew
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
132--141
Our goal is to combine the rich multi-step inference of symbolic logical reasoning with the generalization capabilities of neural networks. We are particularly interested in complex reasoning about entities and relations in text and large-scale knowledge bases (KBs). Neelakantan et al. (2015) use RNNs to compose the distributed semantics of multi-hop paths in KBs; however for multiple reasons, the approach lacks accuracy and practicality. This paper proposes three significant modeling advances: (1) we learn to jointly reason about relations, \textit{entities, and entity-types}; (2) we use neural attention modeling to incorporate \textit{multiple paths}; (3) we learn to \textit{share strength in a single RNN} that represents logical composition across all relations. On a large-scale Freebase+ClueWeb prediction task, we achieve 25{\%} error reduction, and a 53{\%} error reduction on sparse relations due to shared strength. On chains of reasoning in WordNet we reduce error in mean quantile by 84{\%} versus previous state-of-the-art.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,211
inproceedings
stanovsky-etal-2017-recognizing
Recognizing Mentions of Adverse Drug Reaction in Social Media Using Knowledge-Infused Recurrent Models
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1014/
Stanovsky, Gabriel and Gruhl, Daniel and Mendes, Pablo
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
142--151
Recognizing mentions of Adverse Drug Reactions (ADR) in social media is challenging: ADR mentions are context-dependent and include long, varied and unconventional descriptions as compared to more formal medical symptom terminology. We use the CADEC corpus to train a recurrent neural network (RNN) transducer, integrated with knowledge graph embeddings of DBpedia, and show the resulting model to be highly accurate (93.4 F1). Furthermore, even when lacking high quality expert annotations, we show that by employing an active learning technique and using purpose built annotation tools, we can train the RNN to perform well (83.9 F1).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,212
inproceedings
benton-etal-2017-multitask
Multitask Learning for Mental Health Conditions with Limited Social Media Data
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1015/
Benton, Adrian and Mitchell, Margaret and Hovy, Dirk
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
152--162
Language contains information about the author's demographic attributes as well as their mental state, and has been successfully leveraged in NLP to predict either one alone. However, demographic attributes and mental states also interact with each other, and we are the first to demonstrate how to use them jointly to improve the prediction of mental health conditions across the board. We model the different conditions as tasks in a multitask learning (MTL) framework, and establish for the first time the potential of deep learning in the prediction of mental health from online user-generated text. The framework we propose significantly improves over all baselines and single-task models for predicting mental health conditions, with particularly significant gains for conditions with limited data. In addition, our best MTL model can predict the presence of conditions (neuroatypicality) more generally, further reducing the error of the strong feed-forward baseline.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,213
inproceedings
vulic-etal-2017-evaluation
Evaluation by Association: A Systematic Study of Quantitative Word Association Evaluation
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1016/
Vuli{\'c}, Ivan and Kiela, Douwe and Korhonen, Anna
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
163--175
Recent work on evaluating representation learning architectures in NLP has established a need for evaluation protocols based on subconscious cognitive measures rather than manually tailored intrinsic similarity and relatedness tasks. In this work, we propose a novel evaluation framework that enables large-scale evaluation of such architectures in the free word association (WA) task, which is firmly grounded in cognitive theories of human semantic representation. This evaluation is facilitated by the existence of large manually constructed repositories of word association data. In this paper, we (1) present a detailed analysis of the new quantitative WA evaluation protocol, (2) suggest new evaluation metrics for the WA task inspired by its direct analogy with information retrieval problems, (3) evaluate various state-of-the-art representation models on this task, and (4) discuss the relationship between WA and prior evaluations of semantic representation with well-known similarity and relatedness evaluation sets. We have made the WA evaluation toolkit publicly available.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,214
inproceedings
wachsmuth-etal-2017-computational
Computational Argumentation Quality Assessment in Natural Language
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1017/
Wachsmuth, Henning and Naderi, Nona and Hou, Yufang and Bilu, Yonatan and Prabhakaran, Vinodkumar and Thijm, Tim Alberdingk and Hirst, Graeme and Stein, Benno
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
176--187
Research on computational argumentation faces the problem of how to automatically assess the quality of an argument or argumentation. While different quality dimensions have been approached in natural language processing, a common understanding of argumentation quality is still missing. This paper presents the first holistic work on computational argumentation quality in natural language. We comprehensively survey the diverse existing theories and approaches to assess logical, rhetorical, and dialectical quality dimensions, and we derive a systematic taxonomy from these. In addition, we provide a corpus with 320 arguments, annotated for all 15 dimensions in the taxonomy. Our results establish a common ground for research on computational argumentation quality assessment.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,215
inproceedings
tuggener-2017-method
A method for in-depth comparative evaluation: How (dis)similar are outputs of pos taggers, dependency parsers and coreference resolvers really?
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1018/
Tuggener, Don
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
188--198
This paper proposes a generic method for the comparative evaluation of system outputs. The approach is able to quantify the pairwise differences between two outputs and to unravel in detail what the differences consist of. We apply our approach to three tasks in Computational Linguistics, i.e. POS tagging, dependency parsing, and coreference resolution. We find that system outputs are more distinct than the (often) small differences in evaluation scores seem to suggest.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,216
inproceedings
kilickaya-etal-2017-evaluating
Re-evaluating Automatic Metrics for Image Captioning
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1019/
Kilickaya, Mert and Erdem, Aykut and Ikizler-Cinbis, Nazli and Erdem, Erkut
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
199--209
The task of generating natural language descriptions from images has received a lot of attention in recent years. Consequently, it is becoming increasingly important to evaluate such image captioning approaches in an automatic manner. In this paper, we provide an in-depth evaluation of the existing image captioning metrics through a series of carefully designed experiments. Moreover, we explore the utilization of the recently proposed Word Mover's Distance (WMD) document metric for the purpose of image captioning. Our findings outline the differences and/or similarities between metrics and their relative robustness by means of extensive correlation, accuracy and distraction based evaluations. Our results also demonstrate that WMD provides strong advantages over other metrics.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,217
inproceedings
baskaya-etal-2017-integrating
Integrating Meaning into Quality Evaluation of Machine Translation
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1020/
Ba{\c{s}}kaya, Osman and Yildiz, Eray and Tunao{\u{g}}lu, Doruk and Eren, Mustafa Tolga and Do{\u{g}}ru{\"o}z, A. Seza
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
210--219
Machine translation (MT) quality is evaluated through comparisons between MT outputs and the human translations (HT). Traditionally, this evaluation relies on form related features (e.g. lexicon and syntax) and ignores the transfer of meaning reflected in HT outputs. Instead, we evaluate the quality of MT outputs through meaning related features (e.g. polarity, subjectivity) with two experiments. In the first experiment, the meaning related features are compared to human rankings individually. In the second experiment, combinations of meaning related features and other quality metrics are utilized to predict the same human rankings. The results of our experiments confirm the benefit of these features in predicting human evaluation of translation quality in addition to traditional metrics which focus mainly on form.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,218
inproceedings
schlichtkrull-sogaard-2017-cross
Cross-Lingual Dependency Parsing with Late Decoding for Truly Low-Resource Languages
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1021/
Schlichtkrull, Michael and S{\o}gaard, Anders
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
220--229
In cross-lingual dependency annotation projection, information is often lost during transfer because of early decoding. We present an end-to-end graph-based neural network dependency parser that can be trained to reproduce matrices of edge scores, which can be directly projected across word alignments. We show that our approach to cross-lingual dependency parsing is not only simpler, but also achieves an absolute improvement of 2.25{\%} averaged across 10 languages compared to the previous state of the art.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,219
inproceedings
martinez-alonso-etal-2017-parsing
Parsing {U}niversal {D}ependencies without training
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1022/
Mart{\'i}nez Alonso, H{\'e}ctor and Agi{\'c}, {\v{Z}}eljko and Plank, Barbara and S{\o}gaard, Anders
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
230--240
We present UDP, the first training-free parser for Universal Dependencies (UD). Our algorithm is based on PageRank and a small set of specific dependency head rules. UDP features two-step decoding to guarantee that function words are attached as leaf nodes. The parser requires no training, and it is competitive with a delexicalized transfer system. UDP offers a linguistically sound unsupervised alternative to cross-lingual parsing for UD. The parser has very few parameters and is distinctly robust to domain change across languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,220
inproceedings
dehouck-denis-2017-delexicalized
Delexicalized Word Embeddings for Cross-lingual Dependency Parsing
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1023/
Dehouck, Mathieu and Denis, Pascal
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
241--250
This paper presents a new approach to the problem of cross-lingual dependency parsing, aiming at leveraging training data from different source languages to learn a parser in a target language. Specifically, this approach first constructs word vector representations that exploit structural (i.e., dependency-based) contexts but only considering the morpho-syntactic information associated with each word and its contexts. These delexicalized word embeddings, which can be trained on any set of languages and capture features shared across languages, are then used in combination with standard language-specific features to train a lexicalized parser in the target language. We evaluate our approach through experiments on a set of eight different languages that are part of the Universal Dependencies Project. Our main results show that using such delexicalized embeddings, either trained in a monolingual or multilingual fashion, achieves significant improvements over monolingual baselines.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,221
inproceedings
bar-haim-etal-2017-stance
Stance Classification of Context-Dependent Claims
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1024/
Bar-Haim, Roy and Bhattacharya, Indrajit and Dinuzzo, Francesco and Saha, Amrita and Slonim, Noam
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
251--261
Recent work has addressed the problem of detecting relevant claims for a given controversial topic. We introduce the complementary task of Claim Stance Classification, along with the first benchmark dataset for this task. We decompose this problem into: (a) open-domain target identification for topic and claim (b) sentiment classification for each target, and (c) open-domain contrast detection between the topic and the claim targets. Manual annotation of the dataset confirms the applicability and validity of our model. We describe an implementation of our model, focusing on a novel algorithm for contrast detection. Our approach achieves promising results, and is shown to outperform several baselines, which represent the common practice of applying a single, monolithic classifier for stance classification.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,222
inproceedings
karoui-etal-2017-exploring
Exploring the Impact of Pragmatic Phenomena on Irony Detection in Tweets: A Multilingual Corpus Study
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1025/
Karoui, Jihen and Benamara, Farah and Moriceau, V{\'e}ronique and Patti, Viviana and Bosco, Cristina and Aussenac-Gilles, Nathalie
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
262--272
This paper provides a linguistic and pragmatic analysis of the phenomenon of irony in order to represent how Twitter's users exploit irony devices within their communication strategies for generating textual contents. We aim to measure the impact of a wide-range of pragmatic phenomena in the interpretation of irony, and to investigate how these phenomena interact with contexts local to the tweet. Informed by linguistic theories, we propose for the first time a multi-layered annotation schema for irony and its application to a corpus of French, English and Italian tweets. We detail each layer, explore their interactions, and discuss our results according to a qualitative and quantitative perspective.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,223
inproceedings
nozza-etal-2017-multi
A Multi-View Sentiment Corpus
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1026/
Nozza, Debora and Fersini, Elisabetta and Messina, Enza
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
273--280
Sentiment Analysis is a broad task that involves the analysis of various aspects of the natural language text. However, most of the approaches in the state of the art usually investigate each aspect independently, i.e. Subjectivity Classification, Sentiment Polarity Classification, Emotion Recognition, Irony Detection. In this paper we present a Multi-View Sentiment Corpus (MVSC), which comprises 3000 English microblog posts related to the movie domain. Three independent annotators manually labelled MVSC, following a broad annotation schema about different aspects that can be grasped from natural language text coming from social networks. The contribution is therefore a corpus that comprises five different views for each message, i.e. subjective/objective, sentiment polarity, implicit/explicit, irony, emotion. In order to allow a more detailed investigation on the human labelling behaviour, we provide the annotations of each human annotator involved.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,224
inproceedings
rutherford-etal-2017-systematic
A Systematic Study of Neural Discourse Models for Implicit Discourse Relation
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1027/
Rutherford, Attapol and Demberg, Vera and Xue, Nianwen
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
281--291
Inferring implicit discourse relations in natural language text is the most difficult subtask in discourse parsing. Many neural network models have been proposed to tackle this problem. However, the comparison for this task is not unified, so we could hardly draw clear conclusions about the effectiveness of various architectures. Here, we propose neural network models that are based on feedforward and long-short term memory architecture and systematically study the effects of varying structures. To our surprise, the best-configured feedforward architecture outperforms the LSTM-based model in most cases despite thorough tuning. Further, we compare our best feedforward system with competitive convolutional and recurrent networks and find that feedforward can actually be more effective. For the first time for this task, we compile and publish outputs from previous neural and non-neural systems to establish the standard for further comparison.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,225
inproceedings
braud-etal-2017-cross
Cross-lingual {RST} Discourse Parsing
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1028/
Braud, Chlo{\'e} and Coavoux, Maximin and S{\o}gaard, Anders
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
292--304
Discourse parsing is an integral part of understanding information flow and argumentative structure in documents. Most previous research has focused on inducing and evaluating models from the English RST Discourse Treebank. However, discourse treebanks for other languages exist, including Spanish, German, Basque, Dutch and Brazilian Portuguese. The treebanks share the same underlying linguistic theory, but differ slightly in the way documents are annotated. In this paper, we present (a) a new discourse parser which is simpler, yet competitive with (significantly better on 2/3 metrics than) the state of the art for English, (b) a harmonization of discourse treebanks across languages, enabling us to present (c) what to the best of our knowledge are the first experiments on cross-lingual discourse parsing.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,226
inproceedings
perez-liu-2017-dialog
Dialog state tracking, a machine reading approach using Memory Network
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1029/
Perez, Julien and Liu, Fei
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
305--314
In an end-to-end dialog system, the aim of dialog state tracking is to accurately estimate a compact representation of the current dialog status from a sequence of noisy observations produced by the speech recognition and the natural language understanding modules. This paper introduces a novel method of dialog state tracking based on the general paradigm of machine reading and proposes to solve it using an End-to-End Memory Network, MemN2N, a memory-enhanced neural network architecture. We evaluate the proposed approach on the second Dialog State Tracking Challenge (DSTC-2) dataset. The corpus has been converted for the occasion in order to frame the hidden state variable inference as a question-answering task based on a sequence of utterances extracted from a dialog. We show that the proposed tracker gives encouraging results. Then, we propose to extend the DSTC-2 dataset with specific reasoning capabilities requirement like counting, list maintenance, yes-no question answering and indefinite knowledge management. Finally, we present encouraging results using our proposed MemN2N based tracking model.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,227
inproceedings
treviso-etal-2017-sentence
Sentence Segmentation in Narrative Transcripts from Neuropsychological Tests using Recurrent Convolutional Neural Networks
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1030/
Treviso, Marcos and Shulby, Christopher and Alu{\'i}sio, Sandra
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
315--325
Automated discourse analysis tools based on Natural Language Processing (NLP) aiming at the diagnosis of language-impairing dementias generally extract several textual metrics of narrative transcripts. However, the absence of sentence boundary segmentation in the transcripts prevents the direct application of NLP methods which rely on these marks in order to function properly, such as taggers and parsers. We present the first steps taken towards automatic neuropsychological evaluation based on narrative discourse analysis, presenting a new automatic sentence segmentation method for impaired speech. Our model uses recurrent convolutional neural networks with prosodic, Part of Speech (PoS) features, and word embeddings. It was evaluated intrinsically on impaired, spontaneous speech as well as normal, prepared speech and presents better results for healthy elderly (CTL) (F1 = 0.74) and Mild Cognitive Impairment (MCI) patients (F1 = 0.70) than the Conditional Random Fields method (F1 = 0.55 and 0.53, respectively) used in the same context of our study. The results suggest that our model is robust for impaired speech and can be used in automated discourse analysis tools to differentiate narratives produced by MCI and CTL.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,228
inproceedings
hough-schlangen-2017-joint
Joint, Incremental Disfluency Detection and Utterance Segmentation from Speech
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1031/
Hough, Julian and Schlangen, David
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
326--336
We present the joint task of incremental disfluency detection and utterance segmentation and a simple deep learning system which performs it on transcripts and ASR results. We show how the constraints of the two tasks interact. Our joint-task system outperforms the equivalent individual task systems, provides competitive results and is suitable for future use in conversation agents in the psychiatric domain.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,229
inproceedings
bergmanis-goldwater-2017-segmentation
From Segmentation to Analyses: a Probabilistic Model for Unsupervised Morphology Induction
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1032/
Bergmanis, Toms and Goldwater, Sharon
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
337--346
A major motivation for unsupervised morphological analysis is to reduce the sparse data problem in under-resourced languages. Most previous work focuses on segmenting surface forms into their constituent morphs (taking: tak +ing), but surface form segmentation does not solve the sparse data problem as the analyses of take and taking are not connected to each other. We present a system that adapts the MorphoChains system (Narasimhan et al., 2015) to provide morphological analyses that aim to abstract over spelling differences in functionally similar morphs. This results in analyses that are not compelled to use all the orthographic material of a word (stopping: stop +ing) or limited to only that material (acidified: acid +ify +ed). On average across six typologically varied languages our system has a similar or better F-score on EMMA (a measure of underlying morpheme accuracy) than three strong baselines; moreover, the total number of distinct morphemes identified by our system is on average 12.8{\%} lower than for Morfessor (Virpioja et al., 2013), a state-of-the-art surface segmentation system.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,230
inproceedings
mukherjee-etal-2017-creating
Creating {POS} Tagging and Dependency Parsing Experts via Topic Modeling
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1033/
Mukherjee, Atreyee and K{\"u}bler, Sandra and Scheutz, Matthias
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
347--355
Part of speech (POS) taggers and dependency parsers tend to work well on homogeneous datasets but their performance suffers on datasets containing data from different genres. In our current work, we investigate how to create POS tagging and dependency parsing experts for heterogeneous data by employing topic modeling. We create topic models (using Latent Dirichlet Allocation) to determine genres from a heterogeneous dataset and then train an expert for each of the genres. Our results show that the topic modeling experts reach substantial improvements when compared to the general versions. For dependency parsing, the improvement reaches 2 percent points over the full training baseline when we use two topics.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,231
inproceedings
vincze-etal-2017-universal
{U}niversal {D}ependencies and Morphology for {H}ungarian - and on the Price of Universality
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1034/
Vincze, Veronika and Simk{\'o}, Katalin and Sz{\'a}nt{\'o}, Zsolt and Farkas, Rich{\'a}rd
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
356--365
In this paper, we present how the principles of universal dependencies and morphology have been adapted to Hungarian. We report the most challenging grammatical phenomena and our solutions to those. On the basis of the adapted guidelines, we have converted and manually corrected 1,800 sentences from the Szeged Treebank to universal dependency format. We also introduce experiments on this manually annotated corpus for evaluating automatic conversion and the added value of language-specific, i.e. non-universal, annotations. Our results reveal that converting to universal dependencies is not necessarily trivial, moreover, using language-specific morphological features may have an impact on overall performance.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,232
inproceedings
peng-etal-2017-addressing
Addressing the Data Sparsity Issue in Neural {AMR} Parsing
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1035/
Peng, Xiaochang and Wang, Chuan and Gildea, Daniel and Xue, Nianwen
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
366--375
Neural attention models have achieved great success in different NLP tasks. However, they have not fulfilled their promise on the AMR parsing task due to the data sparsity issue. In this paper, we describe a sequence-to-sequence model for AMR parsing and present different ways to tackle the data sparsity problem. We show that our methods achieve significant improvement over a baseline neural attention model and our results are also competitive against state-of-the-art systems that do not use extra linguistic resources.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,233
inproceedings
reddy-etal-2017-generating
Generating Natural Language Question-Answer Pairs from a Knowledge Graph Using a {RNN} Based Question Generation Model
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1036/
Reddy, Sathish and Raghu, Dinesh and Khapra, Mitesh M. and Joshi, Sachindra
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
376--385
In recent years, knowledge graphs such as Freebase that capture facts about entities and relationships between them have been used actively for answering factoid questions. In this paper, we explore the problem of automatically generating question answer pairs from a given knowledge graph. The generated question answer (QA) pairs can be used in several downstream applications. For example, they could be used for training better QA systems. To generate such QA pairs, we first extract a set of keywords from entities and relationships expressed in a triple stored in the knowledge graph. From each such set, we use a subset of keywords to generate a natural language question that has a unique answer. We treat this subset of keywords as a sequence and propose a sequence to sequence model using RNN to generate a natural language question from it. Our RNN based model generates QA pairs with an accuracy of 33.61 percent and performs 110.47 percent (relative) better than a state-of-the-art template based method for generating natural language question from keywords. We also do an extrinsic evaluation by using the generated QA pairs to train a QA system and observe that the F1-score of the QA system improves by 5.5 percent (relative) when using automatically generated QA pairs in addition to manually generated QA pairs available for training.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,234
inproceedings
hirao-etal-2017-enumeration
Enumeration of Extractive Oracle Summaries
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1037/
Hirao, Tsutomu and Nishino, Masaaki and Suzuki, Jun and Nagata, Masaaki
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
386--396
To analyze the limitations and the future directions of the extractive summarization paradigm, this paper proposes an Integer Linear Programming (ILP) formulation to obtain extractive oracle summaries in terms of ROUGE-N. We also propose an algorithm that enumerates all of the oracle summaries for a set of reference summaries to exploit F-measures that evaluate which system summaries contain how many sentences that are extracted as an oracle summary. Our experimental results obtained from Document Understanding Conference (DUC) corpora demonstrated the following: (1) room still exists to improve the performance of extractive summarization; (2) the F-measures derived from the enumerated oracle summaries have significantly stronger correlations with human judgment than those derived from single oracle summaries.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,235
inproceedings
munkhdalai-yu-2017-neural-semantic
Neural Semantic Encoders
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1038/
Munkhdalai, Tsendsuren and Yu, Hong
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
397--407
We present a memory augmented neural network for natural language understanding: Neural Semantic Encoders. NSE is equipped with a novel memory update rule and has a variable sized encoding memory that evolves over time and maintains the understanding of input sequences through read, compose and write operations. NSE can also access multiple and shared memories. In this paper, we demonstrated the effectiveness and the flexibility of NSE on five different natural language tasks: natural language inference, question answering, sentence classification, document sentiment analysis and machine translation where NSE achieved state-of-the-art performance when evaluated on publicly available benchmarks. For example, our shared-memory model showed an encouraging result on neural machine translation, improving an attention-based baseline by approximately 1.0 BLEU.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,236
inproceedings
haffari-etal-2017-efficient
Efficient Benchmarking of {NLP} {API}s using Multi-armed Bandits
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1039/
Haffari, Gholamreza and Tran, Tuan Dung and Carman, Mark
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
408--416
Comparing NLP systems to select the best one for a task of interest, such as named entity recognition, is critical for practitioners and researchers. A rigorous approach involves setting up a hypothesis testing scenario using the performance of the systems on query documents. However, often the hypothesis testing approach needs to send a lot of document queries to the systems, which can be problematic. In this paper, we present an effective alternative based on the multi-armed bandit (MAB). We propose a hierarchical generative model to represent the uncertainty in the performance measures of the competing systems, to be used by Thompson Sampling to solve the resulting MAB. Experimental results on both synthetic and real data show that our approach requires significantly fewer queries compared to the standard benchmarking technique to identify the best system according to F-measure.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,237
inproceedings
verwimp-etal-2017-character
Character-Word {LSTM} Language Models
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1040/
Verwimp, Lyan and Pelemans, Joris and Van hamme, Hugo and Wambacq, Patrick
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
417--427
We present a Character-Word Long Short-Term Memory Language Model which both reduces the perplexity with respect to a baseline word-level language model and reduces the number of parameters of the model. Character information can reveal structural (dis)similarities between words and can even be used when a word is out-of-vocabulary, thus improving the modeling of infrequent and unknown words. By concatenating word and character embeddings, we achieve up to 2.77{\%} relative improvement on English compared to a baseline model with a similar amount of parameters and 4.57{\%} on Dutch. Moreover, we also outperform baseline word-level models with a larger number of parameters.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,238
inproceedings
tran-etal-2017-hierarchical
A Hierarchical Neural Model for Learning Sequences of Dialogue Acts
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1041/
Tran, Quan Hung and Zukerman, Ingrid and Haffari, Gholamreza
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
428--437
We propose a novel hierarchical Recurrent Neural Network (RNN) for learning sequences of Dialogue Acts (DAs). The input in this task is a sequence of utterances (i.e., conversational contributions) comprising a sequence of tokens, and the output is a sequence of DA labels (one label per utterance). Our model leverages the hierarchical nature of dialogue data by using two nested RNNs that capture long-range dependencies at the dialogue level and the utterance level. This model is combined with an attention mechanism that focuses on salient tokens in utterances. Our experimental results show that our model outperforms strong baselines on two popular datasets, Switchboard and MapTask; and our detailed empirical analysis highlights the impact of each aspect of our model.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,239
inproceedings
wen-etal-2017-network
A Network-based End-to-End Trainable Task-oriented Dialogue System
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1042/
Wen, Tsung-Hsien and Vandyke, David and Mrk{\v{s}}i{\'c}, Nikola and Ga{\v{s}}i{\'c}, Milica and Rojas-Barahona, Lina M. and Su, Pei-Hao and Ultes, Stefan and Young, Steve
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
438--449
Teaching machines to accomplish tasks by conversing naturally with humans is challenging. Currently, developing task-oriented dialogue systems requires creating multiple components and typically this involves either a large amount of handcrafting, or acquiring costly labelled datasets to solve a statistical learning problem for each component. In this work we introduce a neural network-based text-in, text-out end-to-end trainable goal-oriented dialogue system along with a new way of collecting dialogue data based on a novel pipe-lined Wizard-of-Oz framework. This approach allows us to develop dialogue systems easily and without making too many assumptions about the task at hand. The results show that the model can converse with human subjects naturally whilst helping them to accomplish tasks in a restaurant search domain.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,240
inproceedings
peng-etal-2017-may
May {I} take your order? A Neural Model for Extracting Structured Information from Conversations
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1043/
Peng, Baolin and Seltzer, Michael and Ju, Y.C. and Zweig, Geoffrey and Wong, Kam-Fai
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
450--459
In this paper we tackle a unique and important problem of extracting a structured order from the conversation a customer has with an order taker at a restaurant. This is motivated by an actual system under development to assist in the order taking process. We develop a sequence-to-sequence model that is able to map from unstructured conversational input to the structured form that is conveyed to the kitchen and appears on the customer receipt. This problem is critically different from other tasks like machine translation where sequence-to-sequence models have been used: the input includes two sides of a conversation; the output is highly structured; and logical manipulations must be performed, for example when the customer changes his mind while ordering. We present a novel sequence-to-sequence model that incorporates a special attention-memory gating mechanism and conversational role markers. The proposed model improves performance over both a phrase-based machine translation approach and a standard sequence-to-sequence model.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,241
inproceedings
muzny-etal-2017-two
A Two-stage Sieve Approach for Quote Attribution
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1044/
Muzny, Grace and Fang, Michael and Chang, Angel and Jurafsky, Dan
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
460--470
We present a deterministic sieve-based system for attributing quotations in literary text and a new dataset: QuoteLi3. Quote attribution, determining who said what in a given text, is important for tasks like creating dialogue systems, and in newer areas like computational literary studies, where it creates opportunities to analyze novels at scale rather than only a few at a time. We release QuoteLi3, which contains more than 6,000 annotations linking quotes to speaker mentions and quotes to speaker entities, and introduce a new algorithm for quote attribution. Our two-stage algorithm first links quotes to mentions, then mentions to entities. Using two stages encapsulates difficult sub-problems and improves system performance. The modular design allows us to tune for overall performance or higher precision, which is useful for many real-world use cases. Our system achieves an average F-score of 87.5 across three novels, outperforming previous systems, and can be tuned for precision of 90.4 at a recall of 65.1.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,242
inproceedings
hartmann-etal-2017-domain
Out-of-domain {F}rame{N}et Semantic Role Labeling
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1045/
Hartmann, Silvana and Kuznetsov, Ilia and Martin, Teresa and Gurevych, Iryna
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
471--482
Domain dependence of NLP systems is one of the major obstacles to their application in large-scale text analysis, also restricting the applicability of FrameNet semantic role labeling (SRL) systems. Yet, current FrameNet SRL systems are still only evaluated on a single in-domain test set. For the first time, we study the domain dependence of FrameNet SRL on a wide range of benchmark sets. We create a novel test set for FrameNet SRL based on user-generated web text and find that the major bottleneck for out-of-domain FrameNet SRL is the frame identification step. To address this problem, we develop a simple, yet efficient system based on distributed word representations. Our system closely approaches the state-of-the-art in-domain while outperforming the best available frame identification system out-of-domain. We publish our system and test data for research purposes.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,243
inproceedings
wang-etal-2017-tdparse
{TDP}arse: Multi-target-specific sentiment recognition on {T}witter
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1046/
Wang, Bo and Liakata, Maria and Zubiaga, Arkaitz and Procter, Rob
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
483--493
Existing target-specific sentiment recognition methods consider only a single target per tweet, and have been shown to miss nearly half of the actual targets mentioned. We present a corpus of UK election tweets, with an average of 3.09 entities per tweet and more than one type of sentiment in half of the tweets. This requires a method for multi-target specific sentiment recognition, which we develop by using the context around a target as well as syntactic dependencies involving the target. We present results of our method on both a benchmark corpus of single targets and the multi-target election corpus, showing state-of-the art performance in both corpora and outperforming previous approaches to multi-target sentiment task as well as deep learning models for single-target sentiment.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,244
inproceedings
upadhyay-chang-2017-annotating
Annotating Derivations: A New Evaluation Strategy and Dataset for Algebra Word Problems
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1047/
Upadhyay, Shyam and Chang, Ming-Wei
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
494--504
We propose a new evaluation for automatic solvers for algebra word problems, which can identify mistakes that existing evaluations overlook. Our proposal is to evaluate such solvers using derivations, which reflect how an equation system was constructed from the word problem. To accomplish this, we develop an algorithm for checking the equivalence between two derivations, and show how derivation annotations can be semi-automatically added to existing datasets. To make our experiments more comprehensive, we include the derivation annotation for DRAW-1K, a new dataset containing 1000 general algebra word problems. In our experiments, we found that the annotated derivations enable a more accurate evaluation of automatic solvers than previously used metrics. We release derivation annotations for over 2300 algebra word problems for future evaluations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,245
inproceedings
heigold-etal-2017-extensive
An Extensive Empirical Evaluation of Character-Based Morphological Tagging for 14 Languages
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1048/
Heigold, Georg and Neumann, Guenter and van Genabith, Josef
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
505--513
This paper investigates neural character-based morphological tagging for languages with complex morphology and large tag sets. Character-based approaches are attractive as they can handle rarely- and unseen words gracefully. We evaluate on 14 languages and observe consistent gains over a state-of-the-art morphological tagger across all languages except for English and French, where we match the state-of-the-art. We compare two architectures for computing character-based word vectors using recurrent (RNN) and convolutional (CNN) nets. We show that the CNN based approach performs slightly worse and less consistently than the RNN based approach. Small but systematic gains are observed when combining the two architectures by ensembling.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,246
inproceedings
kann-etal-2017-neural
Neural Multi-Source Morphological Reinflection
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1049/
Kann, Katharina and Cotterell, Ryan and Sch{\"u}tze, Hinrich
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
514--524
We explore the task of multi-source morphological reinflection, which generalizes the standard, single-source version. The input consists of (i) a target tag and (ii) multiple pairs of source form and source tag for a lemma. The motivation is that it is beneficial to have access to more than one source form since different source forms can provide complementary information, e.g., different stems. We further present a novel extension to the encoder-decoder recurrent neural architecture, consisting of multiple encoders, to better solve the task. We show that our new architecture outperforms single-source reinflection models and publish our dataset for multi-source morphological reinflection to facilitate future research.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,247
inproceedings
chatterjee-etal-2017-online
Online Automatic Post-editing for {MT} in a Multi-Domain Translation Environment
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1050/
Chatterjee, Rajen and Gebremelak, Gebremedhen and Negri, Matteo and Turchi, Marco
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
525--535
Automatic post-editing (APE) for machine translation (MT) aims to fix recurrent errors made by the MT decoder by learning from correction examples. In controlled evaluation scenarios, the representativeness of the training set with respect to the test data is a key factor to achieve good performance. Real-life scenarios, however, do not guarantee such favorable learning conditions. Ideally, to be integrated in a real professional translation workflow (e.g. to play a role in computer-assisted translation framework), APE tools should be flexible enough to cope with continuous streams of diverse data coming from different domains/genres. To cope with this problem, we propose an online APE framework that is: i) robust to data diversity (i.e. capable to learn and apply correction rules in the right contexts) and ii) able to evolve over time (by continuously extending and refining its knowledge). In a comparative evaluation, with English-German test data coming in random order from two different domains, we show the effectiveness of our approach, which outperforms a strong batch system and the state of the art in online APE.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,248
inproceedings
damonte-etal-2017-incremental
An Incremental Parser for {A}bstract {M}eaning {R}epresentation
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1051/
Damonte, Marco and Cohen, Shay B. and Satta, Giorgio
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
536--546
Abstract Meaning Representation (AMR) is a semantic representation for natural language that embeds annotations related to traditional tasks such as named entity recognition, semantic role labeling, word sense disambiguation and co-reference resolution. We describe a transition-based parser for AMR that parses sentences left-to-right, in linear time. We further propose a test-suite that assesses specific subtasks that are helpful in comparing AMR parsers, and show that our parser is competitive with the state of the art on the LDC2015E86 dataset and that it outperforms state-of-the-art parsers for recovering named entities and handling polarity.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,249
inproceedings
padmakumar-etal-2017-integrated
Integrated Learning of Dialog Strategies and Semantic Parsing
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1052/
Padmakumar, Aishwarya and Thomason, Jesse and Mooney, Raymond J.
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
547--557
Natural language understanding and dialog management are two integral components of interactive dialog systems. Previous research has used machine learning techniques to individually optimize these components, with different forms of direct and indirect supervision. We present an approach to integrate the learning of both a dialog strategy using reinforcement learning, and a semantic parser for robust natural language understanding, using only natural dialog interaction for supervision. Experimental results on a simulated task of robot instruction demonstrate that joint learning of both components improves dialog performance over learning either of these components alone.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,250
inproceedings
chen-palmer-2017-unsupervised
Unsupervised {AMR}-Dependency Parse Alignment
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1053/
Chen, Wei-Te and Palmer, Martha
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
558--567
In this paper, we introduce an Abstract Meaning Representation (AMR) to Dependency Parse aligner. Alignment is a preliminary step for AMR parsing, and our aligner improves current AMR parser performance. Our aligner involves several different features, including named entity tags and semantic role labels, and uses Expectation-Maximization training. Results show that our aligner reaches an 87.1{\%} F-Score score with the experimental data, and enhances AMR parsing.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,251
inproceedings
jin-etal-2017-improving
Improving {C}hinese Semantic Role Labeling using High-quality Surface and Deep Case Frames
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1054/
Jin, Gongye and Kawahara, Daisuke and Kurohashi, Sadao
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
568--577
This paper presents a method for applying automatically acquired knowledge to semantic role labeling (SRL). We use a large amount of automatically extracted knowledge to improve the performance of SRL. We present two varieties of knowledge, which we call surface case frames and deep case frames. Although the surface case frames are compiled from syntactic parses and can be used as rich syntactic knowledge, they have limited capability for resolving semantic ambiguity. To compensate for the deficiency of the surface case frames, we compile deep case frames from automatic semantic roles. We also consider quality management for both types of knowledge in order to get rid of the noise brought from the automatic analyses. The experimental results show that Chinese SRL can be improved using automatically acquired knowledge and the quality management shows a positive effect on this task.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,252
inproceedings
yaghoobzadeh-schutze-2017-multi
Multi-level Representations for Fine-Grained Typing of Knowledge Base Entities
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1055/
Yaghoobzadeh, Yadollah and Sch{\"u}tze, Hinrich
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
578--589
Entities are essential elements of natural language. In this paper, we present methods for learning multi-level representations of entities on three complementary levels: character (character patterns in entity names extracted, e.g., by neural networks), word (embeddings of words in entity names) and entity (entity embeddings). We investigate state-of-the-art learning methods on each level and find large differences, e.g., for deep learning models, traditional ngram features and the subword model of fasttext (Bojanowski et al., 2016) on the character level; for word2vec (Mikolov et al., 2013) on the word level; and for the order-aware model wang2vec (Ling et al., 2015a) on the entity level. We confirm experimentally that each level of representation contributes complementary information and a joint representation of all three levels improves the existing embedding based baseline for fine-grained entity typing by a large margin. Additionally, we show that adding information from entity descriptions further improves multi-level representations of entities.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,253
inproceedings
faralli-etal-2017-contrastmedium
The {C}ontrast{M}edium Algorithm: Taxonomy Induction From Noisy Knowledge Graphs With Just A Few Links
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1056/
Faralli, Stefano and Panchenko, Alexander and Biemann, Chris and Ponzetto, Simone Paolo
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
590--600
In this paper, we present ContrastMedium, an algorithm that transforms noisy semantic networks into full-fledged, clean taxonomies. ContrastMedium is able to identify the embedded taxonomy structure from a noisy knowledge graph without explicit human supervision such as, for instance, a set of manually selected input root and leaf concepts. This is achieved by leveraging structural information from a companion reference taxonomy, to which the input knowledge graph is linked (either automatically or manually). When used in conjunction with methods for hypernym acquisition and knowledge base linking, our methodology provides a complete solution for end-to-end taxonomy induction. We conduct experiments using automatically acquired knowledge graphs, as well as a SemEval benchmark, and show that our method is able to achieve high performance on the task of taxonomy induction.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,254
inproceedings
min-etal-2017-probabilistic
Probabilistic Inference for Cold Start Knowledge Base Population with Prior World Knowledge
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1057/
Min, Bonan and Freedman, Marjorie and Meltzer, Talya
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
601--612
Building knowledge bases (KB) automatically from text corpora is crucial for many applications such as question answering and web search. The problem is very challenging and has been divided into sub-problems such as mention and named entity recognition, entity linking and relation extraction. However, combining these components has shown to be under-constrained and often produces KBs with supersize entities and common-sense errors in relations (a person has multiple birthdates). The errors are difficult to resolve solely with IE tools but become obvious with world knowledge at the corpus level. By analyzing Freebase and a large text collection, we found that per-relation cardinality and the popularity of entities follow the power-law distribution favoring flat long tails with low-frequency instances. We present a probabilistic joint inference algorithm to incorporate this world knowledge during KB construction. Our approach yields state-of-the-art performance on the TAC Cold Start task, and 42{\%} and 19.4{\%} relative improvements in F1 over our baseline on Cold Start hop-1 and all-hop queries respectively.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,255
inproceedings
verga-etal-2017-generalizing
Generalizing to Unseen Entities and Entity Pairs with Row-less Universal Schema
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1058/
Verga, Patrick and Neelakantan, Arvind and McCallum, Andrew
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
613--622
Universal schema predicts the types of entities and relations in a knowledge base (KB) by jointly embedding the union of all available schema types{---}not only types from multiple structured databases (such as Freebase or Wikipedia infoboxes), but also types expressed as textual patterns from raw text. This prediction is typically modeled as a matrix completion problem, with one type per column, and either one or two entities per row (in the case of entity types or binary relation types, respectively). Factorizing this sparsely observed matrix yields a learned vector embedding for each row and each column. In this paper we explore the problem of making predictions for entities or entity-pairs unseen at training time (and hence without a pre-learned row embedding). We propose an approach having no per-row parameters at all; rather we produce a row vector on the fly using a learned aggregation function of the vectors of the observed columns for that row. We experiment with various aggregation functions, including neural network attention models. Our approach can be understood as a natural language database, in that questions about KB entities are answered by attending to textual or database evidence. In experiments predicting both relations and entity types, we demonstrate that despite having an order of magnitude fewer parameters than traditional universal schema, we can match the accuracy of the traditional model, and more importantly, we can now make predictions about unseen rows with nearly the same accuracy as rows available at training time.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,256
inproceedings
dong-etal-2017-learning-generate
Learning to Generate Product Reviews from Attributes
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1059/
Dong, Li and Huang, Shaohan and Wei, Furu and Lapata, Mirella and Zhou, Ming and Xu, Ke
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
623--632
Automatically generating product reviews is a meaningful, yet not well-studied task in sentiment analysis. Traditional natural language generation methods rely extensively on hand-crafted rules and predefined templates. This paper presents an attention-enhanced attribute-to-sequence model to generate product reviews for given attribute information, such as user, product, and rating. The attribute encoder learns to represent input attributes as vectors. Then, the sequence decoder generates reviews by conditioning its output on these vectors. We also introduce an attention mechanism to jointly generate reviews and align words with input attributes. The proposed model is trained end-to-end to maximize the likelihood of target product reviews given the attributes. We build a publicly available dataset for the review generation task by leveraging the Amazon book reviews and their metadata. Experiments on the dataset show that our approach outperforms baseline methods and the attention mechanism significantly improves the performance of our model.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,257
inproceedings
chisholm-etal-2017-learning
Learning to generate one-sentence biographies from {W}ikidata
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1060/
Chisholm, Andrew and Radford, Will and Hachey, Ben
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
633--642
We investigate the generation of one-sentence Wikipedia biographies from facts derived from Wikidata slot-value pairs. We train a recurrent neural network sequence-to-sequence model with attention to select facts and generate textual summaries. Our model incorporates a novel secondary objective that helps ensure it generates sentences that contain the input facts. The model achieves a BLEU score of 41, improving significantly upon the vanilla sequence-to-sequence model and scoring roughly twice that of a simple template baseline. Human preference evaluation suggests the model is nearly as good as the Wikipedia reference. Manual analysis explores content selection, suggesting the model can trade the ability to infer knowledge against the risk of hallucinating incorrect information.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,258
inproceedings
puduppully-etal-2017-transition
Transition-Based Deep Input Linearization
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1061/
Puduppully, Ratish and Zhang, Yue and Shrivastava, Manish
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
643--654
Traditional methods for deep NLG adopt pipeline approaches comprising stages such as constructing syntactic input, predicting function words, linearizing the syntactic input and generating the surface forms. Though easier to visualize, pipeline approaches suffer from error propagation. In addition, information available across modules cannot be leveraged by all modules. We construct a transition-based model to jointly perform linearization, function word prediction and morphological generation, which considerably improves upon the accuracy compared to a pipelined baseline system. On a standard deep input linearization shared task, our system achieves the best results reported so far.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,259
inproceedings
castro-ferreira-etal-2017-generating
Generating flexible proper name references in text: Data, models and evaluation
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1062/
Castro Ferreira, Thiago and Krahmer, Emiel and Wubben, Sander
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
655--664
This study introduces a statistical model able to generate variations of a proper name by taking into account the person to be mentioned, the discourse context and variation. The model relies on the REGnames corpus, a dataset with 53,102 proper name references to 1,000 people in different discourse contexts. We evaluate the versions of our model from the perspective of how human writers produce proper names, and also how human readers process them. The corpus and the model are publicly available.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,260
inproceedings
zhang-etal-2017-dependency
Dependency Parsing as Head Selection
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1063/
Zhang, Xingxing and Cheng, Jianpeng and Lapata, Mirella
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
665--676
Conventional graph-based dependency parsers guarantee a tree structure both during training and inference. Instead, we formalize dependency parsing as the problem of independently selecting the head of each word in a sentence. Our model which we call DENSE (as shorthand for \textbf{De}pendency \textbf{N}eural \textbf{Se}lection) produces a distribution over possible heads for each word using features obtained from a bidirectional recurrent neural network. Without enforcing structural constraints during training, DeNSe generates (at inference time) trees for the overwhelming majority of sentences, while non-tree outputs can be adjusted with a maximum spanning tree algorithm. We evaluate DeNSe on four languages (English, Chinese, Czech, and German) with varying degrees of non-projectivity. Despite the simplicity of the approach, our parsers are on par with the state of the art.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,261
inproceedings
le-fokkens-2017-tackling
Tackling Error Propagation through Reinforcement Learning: A Case of Greedy Dependency Parsing
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1064/
L{\^e}, Minh and Fokkens, Antske
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
677--687
Error propagation is a common problem in NLP. Reinforcement learning explores erroneous states during training and can therefore be more robust when mistakes are made early in a process. In this paper, we apply reinforcement learning to greedy dependency parsing which is known to suffer from error propagation. Reinforcement learning improves accuracy of both labeled and unlabeled dependencies of the Stanford Neural Dependency Parser, a high performance greedy parser, while maintaining its efficiency. We investigate the portion of errors which are the result of error propagation and confirm that reinforcement learning reduces the occurrence of error propagation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,262
inproceedings
futrell-levy-2017-noisy
Noisy-context surprisal as a human sentence processing cost model
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1065/
Futrell, Richard and Levy, Roger
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
688--698
We use the noisy-channel theory of human sentence comprehension to develop an incremental processing cost model that unifies and extends key features of expectation-based and memory-based models. In this model, which we call noisy-context surprisal, the processing cost of a word is the surprisal of the word given a noisy representation of the preceding context. We show that this model accounts for an outstanding puzzle in sentence comprehension, language-dependent structural forgetting effects (Gibson and Thomas, 1999; Vasishth et al., 2010; Frank et al., 2016), which are previously not well modeled by either expectation-based or memory-based approaches. Additionally, we show that this model derives and generalizes locality effects (Gibson, 1998; Demberg and Keller, 2008), a signature prediction of memory-based models. We give corpus-based evidence for a key assumption in this derivation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,263
inproceedings
yin-schutze-2017-task
Task-Specific Attentive Pooling of Phrase Alignments Contributes to Sentence Matching
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1066/
Yin, Wenpeng and Sch{\"u}tze, Hinrich
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
699--709
This work studies comparatively two typical sentence matching tasks: textual entailment (TE) and answer selection (AS), observing that weaker phrase alignments are more critical in TE, while stronger phrase alignments deserve more attention in AS. The key to reach this observation lies in phrase detection, phrase representation, phrase alignment, and more importantly how to connect those aligned phrases of different matching degrees with the final classifier. Prior work (i) has limitations in phrase generation and representation, or (ii) conducts alignment at word and phrase levels by handcrafted features or (iii) utilizes a single framework of alignment without considering the characteristics of specific tasks, which limits the framework's effectiveness across tasks. We propose an architecture based on Gated Recurrent Unit that supports (i) representation learning of phrases of arbitrary granularity and (ii) task-specific attentive pooling of phrase alignments between two sentences. Experimental results on TE and AS match our observation and show the effectiveness of our approach.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,264
inproceedings
martinez-gomez-etal-2017-demand
On-demand Injection of Lexical Knowledge for Recognising Textual Entailment
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1067/
Mart{\'i}nez-G{\'o}mez, Pascual and Mineshima, Koji and Miyao, Yusuke and Bekki, Daisuke
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
710--720
We approach the recognition of textual entailment using logical semantic representations and a theorem prover. In this setup, lexical divergences that preserve semantic entailment between the source and target texts need to be explicitly stated. However, recognising subsentential semantic relations is not trivial. We address this problem by monitoring the proof of the theorem and detecting unprovable sub-goals that share predicate arguments with logical premises. If a linguistic relation exists, then an appropriate axiom is constructed on-demand and the theorem proving continues. Experiments show that this approach is effective and precise, producing a system that outperforms other logic-based systems and is competitive with state-of-the-art statistical methods.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,265
inproceedings
lai-hockenmaier-2017-learning
Learning to Predict Denotational Probabilities For Modeling Entailment
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1068/
Lai, Alice and Hockenmaier, Julia
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
721--730
We propose a framework that captures the denotational probabilities of words and phrases by embedding them in a vector space, and present a method to induce such an embedding from a dataset of denotational probabilities. We show that our model successfully predicts denotational probabilities for unseen phrases, and that its predictions are useful for textual entailment datasets such as SICK and SNLI.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,266
inproceedings
maheshwari-etal-2017-societal
A Societal Sentiment Analysis: Predicting the Values and Ethics of Individuals by Analysing Social Media Content
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1069/
Maheshwari, Tushar and Reganti, Aishwarya N. and Gupta, Samiksha and Jamatia, Anupam and Kumar, Upendra and Gamb{\"a}ck, Bj{\"o}rn and Das, Amitava
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
731--741
To find out how users' social media behaviour and language are related to their ethical practices, the paper investigates applying Schwartz' psycholinguistic model of societal sentiment to social media text. The analysis is based on corpora collected from user essays as well as social media (Facebook and Twitter). Several experiments were carried out on the corpora to classify the ethical values of users, incorporating Linguistic Inquiry Word Count analysis, n-grams, topic models, psycholinguistic lexica, speech-acts, and non-linguistic information, while applying a range of machine learners (Support Vector Machines, Logistic Regression, and Random Forests) to identify the best linguistic and non-linguistic features for automatic classification of values and ethics.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,267
inproceedings
lukin-etal-2017-argument
Argument Strength is in the Eye of the Beholder: Audience Effects in Persuasion
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1070/
Lukin, Stephanie and Anand, Pranav and Walker, Marilyn and Whittaker, Steve
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
742--753
Americans spend about a third of their time online, with many participating in online conversations on social and political issues. We hypothesize that social media arguments on such issues may be more engaging and persuasive than traditional media summaries, and that particular types of people may be more or less convinced by particular styles of argument, e.g. emotional arguments may resonate with some personalities while factual arguments resonate with others. We report a set of experiments testing at large scale how audience variables interact with argument style to affect the persuasiveness of an argument, an under-researched topic within natural language processing. We show that belief change is affected by personality factors, with conscientious, open and agreeable people being more convinced by emotional arguments.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,268
inproceedings
liu-etal-2017-language
A Language-independent and Compositional Model for Personality Trait Recognition from Short Texts
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1071/
Liu, Fei and Perez, Julien and Nowson, Scott
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
754--764
There have been many attempts at automatically recognising author personality traits from text, typically incorporating linguistic features with conventional machine learning models, e.g. linear regression or Support Vector Machines. In this work, we propose to use deep-learning-based models with atomic features of text {--} the characters {--} to build hierarchical, vectorial word and sentence representations for the task of trait inference. On a corpus of tweets, this method shows state-of-the-art performance across five traits and three languages (English, Spanish and Italian) compared with prior work in author profiling. The results, supported by preliminary visualisation work, are encouraging for the ability to detect complex human traits.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,269
inproceedings
levy-etal-2017-strong
A Strong Baseline for Learning Cross-Lingual Word Embeddings from Sentence Alignments
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1072/
Levy, Omer and S{\o}gaard, Anders and Goldberg, Yoav
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
765--774
While cross-lingual word embeddings have been studied extensively in recent years, the qualitative differences between the different algorithms remain vague. We observe that whether or not an algorithm uses a particular feature set (sentence IDs) accounts for a significant performance gap among these algorithms. This feature set is also used by traditional alignment algorithms, such as IBM Model-1, which demonstrate similar performance to state-of-the-art embedding algorithms on a variety of benchmarks. Overall, we observe that different algorithmic approaches for utilizing the sentence ID feature space result in similar performance. This paper draws both empirical and theoretical parallels between the embedding and alignment literature, and suggests that adding additional sources of information, which go beyond the traditional signal of bilingual sentence-aligned corpora, may substantially improve cross-lingual word embeddings, and that future baselines should at least take such features into account.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,270
inproceedings
denis-ralaivola-2017-online
Online Learning of Task-specific Word Representations with a Joint Biconvex Passive-Aggressive Algorithm
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1073/
Denis, Pascal and Ralaivola, Liva
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
775--784
This paper presents a new, efficient method for learning task-specific word vectors using a variant of the Passive-Aggressive algorithm. Specifically, this algorithm learns a word embedding matrix in tandem with the classifier parameters in an online fashion, solving a bi-convex constrained optimization at each iteration. We provide a theoretical analysis of this new algorithm in terms of regret bounds, and evaluate it on both synthetic data and NLP classification problems, including text classification and sentiment analysis. In the latter case, we compare various pre-trained word vectors to initialize our word embedding matrix, and show that the matrix learned by our algorithm vastly outperforms the initial matrix, with performance results comparable or above the state-of-the-art on these tasks.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,271
inproceedings
schutze-2017-nonsymbolic
Nonsymbolic Text Representation
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1074/
Sch{\"u}tze, Hinrich
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
785--796
We introduce the first generic text representation model that is completely nonsymbolic, i.e., it does not require the availability of a segmentation or tokenization method that attempts to identify words or other symbolic units in text. This applies to training the parameters of the model on a training corpus as well as to applying it when computing the representation of a new text. We show that our model performs better than prior work on an information extraction and a text denoising task.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,272
inproceedings
abhishek-etal-2017-fine
Fine-Grained Entity Type Classification by Jointly Learning Representations and Label Embeddings
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1075/
Abhishek, Abhishek and Anand, Ashish and Awekar, Amit
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
797--807
Fine-grained entity type classification (FETC) is the task of classifying an entity mention into a broad set of types. The distant supervision paradigm is extensively used to generate training data for this task. However, the generated training data assigns the same set of labels to every mention of an entity without considering its local context. Existing FETC systems have two major drawbacks: assuming the training data to be noise-free, and the use of hand-crafted features. Our work overcomes both drawbacks. We propose a neural network model that jointly learns representations of entity mentions and their context to eliminate the use of hand-crafted features. Our model treats the training data as noisy and uses a non-parametric variant of the hinge loss function. Experiments show that the proposed model outperforms previous state-of-the-art methods on two publicly available datasets, namely FIGER (GOLD) and BBN, with an average relative improvement of 2.69{\%} in micro-F1 score. Knowledge learnt by our model on one dataset can be transferred to other datasets while using the same model or other FETC systems. These approaches of transferring knowledge further improve the performance of the respective models.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,273
inproceedings
zhou-etal-2017-event
Event extraction from {T}witter using Non-Parametric {B}ayesian Mixture Model with Word Embeddings
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1076/
Zhou, Deyu and Zhang, Xuan and He, Yulan
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
808--817
To extract structured representations of newsworthy events from Twitter, unsupervised models typically assume that tweets involving the same named entities and expressed using similar words are likely to belong to the same event. Hence, they group tweets into clusters based on the co-occurrence patterns of named entities and topical keywords. However, there are two main limitations. First, they require the number of events to be known beforehand, which is not realistic in practical applications. Second, they don't recognise that the same named entity might be referred to by multiple mentions, and tweets using different mentions would be wrongly assigned to different events. To overcome these limitations, we propose a non-parametric Bayesian mixture model with word embeddings for event extraction, in which the number of events can be inferred automatically and the issue of lexical variations for the same named entity can be dealt with properly. Our model has been evaluated on three datasets with sizes ranging between 2,499 and over 60 million tweets. Experimental results show that our model outperforms the baseline approach on all datasets by 5-8{\%} in F-measure.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,274
inproceedings
pawar-etal-2017-end
End-to-end Relation Extraction using Neural Networks and {M}arkov {L}ogic {N}etworks
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1077/
Pawar, Sachin and Bhattacharyya, Pushpak and Palshikar, Girish
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
818--827
End-to-end relation extraction refers to identifying boundaries of entity mentions, entity types of these mentions and appropriate semantic relation for each pair of mentions. Traditionally, separate predictive models were trained for each of these tasks and were used in a {\textquotedblleft}pipeline{\textquotedblright} fashion where output of one model is fed as input to another. But it was observed that addressing some of these tasks jointly results in better performance. We propose a single, joint neural network based model to carry out all the three tasks of boundary identification, entity type classification and relation type classification. This model is referred to as {\textquotedblleft}All Word Pairs{\textquotedblright} model (AWP-NN) as it assigns an appropriate label to each word pair in a given sentence for performing end-to-end relation extraction. We also propose to refine output of the AWP-NN model by using inference in Markov Logic Networks (MLN) so that additional domain knowledge can be effectively incorporated. We demonstrate effectiveness of our approach by achieving better end-to-end relation extraction performance than all 4 previous joint modelling approaches, on the standard dataset of ACE 2004.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,275
inproceedings
heinzerling-etal-2017-trust
Trust, but Verify! Better Entity Linking through Automatic Verification
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1078/
Heinzerling, Benjamin and Strube, Michael and Lin, Chin-Yew
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
828--838
We introduce automatic verification as a post-processing step for entity linking (EL). The proposed method trusts EL system results collectively, by assuming entity mentions are mostly linked correctly, in order to create a semantic profile of the given text using geospatial and temporal information, as well as fine-grained entity types. This profile is then used to automatically verify each linked mention individually, i.e., to predict whether it has been linked correctly or not. Verification allows leveraging a rich set of global and pairwise features that would be prohibitively expensive for EL systems employing global inference. Evaluation shows consistent improvements across datasets and systems. In particular, when applied to state-of-the-art systems, our method yields an absolute improvement in linking performance of up to 1.7 F1 on AIDA/CoNLL'03 and up to 2.4 F1 on the English TAC KBP 2015 TEDL dataset.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,276
inproceedings
jochim-deleris-2017-named
Named Entity Recognition in the Medical Domain with Constrained {CRF} Models
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1079/
Jochim, Charles and Deleris, L{\'e}a
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
839--849
This paper investigates how to improve performance on information extraction tasks by constraining and sequencing CRF-based approaches. We consider two different relation extraction tasks, both from the medical literature: dependence relations and probability statements. We explore whether adding constraints can lead to an improvement over standard CRF decoding. Results on our relation extraction tasks are promising, showing significant increases in performance from both (i) adding constraints to post-process the output of a baseline CRF, which captures {\textquotedblleft}domain knowledge{\textquotedblright}, and (ii) further allowing flexibility in the application of those constraints by leveraging a binary classifier as a pre-processing step.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,277
inproceedings
yadav-etal-2017-learning
Learning and Knowledge Transfer with Memory Networks for Machine Comprehension
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1080/
Yadav, Mohit and Vig, Lovekesh and Shroff, Gautam
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
850--859
Enabling machines to read and comprehend unstructured text remains an unfulfilled goal for NLP research. Recent research efforts on the {\textquotedblleft}machine comprehension{\textquotedblright} task have managed to achieve close to ideal performance on simulated data. However, achieving similar levels of performance on small real world datasets has proved difficult; major challenges stem from the large vocabulary size, complex grammar, and, the frequent ambiguities in linguistic structure. On the other hand, the requirement of human generated annotations for training, in order to ensure a sufficiently diverse set of questions is prohibitively expensive. Motivated by these practical issues, we propose a novel curriculum inspired training procedure for Memory Networks to improve the performance for machine comprehension with relatively small volumes of training data. Additionally, we explore various training regimes for Memory Networks to allow knowledge transfer from a closely related domain having larger volumes of labelled data. We also suggest the use of a loss function to incorporate the asymmetric nature of knowledge transfer. Our experiments demonstrate improvements on Dailymail, CNN, and MCTest datasets.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,278
inproceedings
sarabi-blanco-2017-media
If No Media Were Allowed inside the Venue, Was Anybody Allowed?
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1081/
Sarabi, Zahra and Blanco, Eduardo
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
860--869
This paper presents a framework to understand negation in positive terms. Specifically, we extract positive meaning from negation when the negation cue syntactically modifies a noun or adjective. Our approach is grounded on generating potential positive interpretations automatically, and then scoring them. Experimental results show that interpretations scored high can be reliably identified.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,279
inproceedings
abualhaija-etal-2017-metaheuristic
Metaheuristic Approaches to Lexical Substitution and Simplification
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1082/
Abualhaija, Sallam and Miller, Tristan and Eckle-Kohler, Judith and Gurevych, Iryna and Zimmermann, Karl-Heinz
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
870--880
In this paper, we propose using metaheuristics{---}in particular, simulated annealing and the new D-Bees algorithm{---}to solve word sense disambiguation as an optimization problem within a knowledge-based lexical substitution system. We are the first to perform such an extrinsic evaluation of metaheuristics, for which we use two standard lexical substitution datasets, one English and one German. We find that D-Bees has robust performance for both languages, and performs better than simulated annealing, though both achieve good results. Moreover, the D-Bees{--}based lexical substitution system outperforms state-of-the-art systems on several evaluation metrics. We also show that D-Bees achieves competitive performance in lexical simplification, a variant of lexical substitution.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,280
inproceedings
mallinson-etal-2017-paraphrasing
Paraphrasing Revisited with Neural Machine Translation
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1083/
Mallinson, Jonathan and Sennrich, Rico and Lapata, Mirella
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
881--893
Recognizing and generating paraphrases is an important component in many natural language processing applications. A well-established technique for automatically extracting paraphrases leverages bilingual corpora to find meaning-equivalent phrases in a single language by {\textquotedblleft}pivoting{\textquotedblright} over a shared translation in another language. In this paper we revisit bilingual pivoting in the context of neural machine translation and present a paraphrasing model based purely on neural networks. Our model represents paraphrases in a continuous space, estimates the degree of semantic relatedness between text segments of arbitrary length, and generates candidate paraphrases for any source input. Experimental results across tasks and datasets show that neural paraphrases outperform those obtained with conventional phrase-based pivoting approaches.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,281
inproceedings
duong-etal-2017-multilingual
Multilingual Training of Crosslingual Word Embeddings
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1084/
Duong, Long and Kanayama, Hiroshi and Ma, Tengfei and Bird, Steven and Cohn, Trevor
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
894--904
Crosslingual word embeddings represent lexical items from different languages using the same vector space, enabling crosslingual transfer. Most prior work constructs embeddings for a pair of languages, with English on one side. We investigate methods for building high quality crosslingual word embeddings for many languages in a unified vector space. In this way, we can exploit and combine strength of many languages. We obtained high performance on bilingual lexicon induction, monolingual similarity and crosslingual document classification tasks.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,282
inproceedings
silva-de-carvalho-nguyen-2017-building
Building Lexical Vector Representations from Concept Definitions
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1085/
Silva de Carvalho, Danilo and Nguyen, Minh Le
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
905--915
The use of distributional language representations have opened new paths in solving a variety of NLP problems. However, alternative approaches can take advantage of information unavailable through pure statistical means. This paper presents a method for building vector representations from meaning unit blocks called concept definitions, which are obtained by extracting information from a curated linguistic resource (Wiktionary). The representations obtained in this way can be compared through conventional cosine similarity and are also interpretable by humans. Evaluation was conducted in semantic similarity and relatedness test sets, with results indicating a performance comparable to other methods based on single linguistic resource extraction. The results also indicate noticeable performance gains when combining distributional similarity scores with the ones obtained using this approach. Additionally, a discussion on the proposed method's shortcomings is provided in the analysis of error cases.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,283
inproceedings
butnaru-etal-2017-shotgunwsd
{S}hotgun{WSD}: An unsupervised algorithm for global word sense disambiguation inspired by {DNA} sequencing
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1086/
Butnaru, Andrei and Ionescu, Radu Tudor and Hristea, Florentina
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
916--926
In this paper, we present a novel unsupervised algorithm for word sense disambiguation (WSD) at the document level. Our algorithm is inspired by a widely-used approach in the field of genetics for whole genome sequencing, known as the Shotgun sequencing technique. The proposed WSD algorithm is based on three main steps. First, a brute-force WSD algorithm is applied to short context windows (up to 10 words) selected from the document in order to generate a short list of likely sense configurations for each window. In the second step, these local sense configurations are assembled into longer composite configurations based on suffix and prefix matching. The resulted configurations are ranked by their length, and the sense of each word is chosen based on a voting scheme that considers only the top k configurations in which the word appears. We compare our algorithm with other state-of-the-art unsupervised WSD algorithms and demonstrate better performance, sometimes by a very large margin. We also show that our algorithm can yield better performance than the Most Common Sense (MCS) baseline on one data set. Moreover, our algorithm has a very small number of parameters, is robust to parameter tuning, and, unlike other bio-inspired methods, it gives a deterministic solution (it does not involve random choices).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,284
inproceedings
kocmi-bojar-2017-lanidenn
{L}anide{NN}: Multilingual Language Identification on Character Window
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1087/
Kocmi, Tom and Bojar, Ond{\v{r}}ej
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
927--936
In language identification, a common first step in natural language processing, we want to automatically determine the language of some input text. Monolingual language identification assumes that the given document is written in one language. In multilingual language identification, the document is usually in two or three languages and we just want their names. We aim one step further and propose a method for textual language identification where languages can change arbitrarily and the goal is to identify the spans of each of the languages. Our method is based on Bidirectional Recurrent Neural Networks and it performs well in monolingual and multilingual language identification tasks on six datasets covering 131 languages. The method keeps the accuracy also for short documents and across domains, so it is ideal for off-the-shelf use without preparation of training data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,285
inproceedings
adams-etal-2017-cross
Cross-Lingual Word Embeddings for Low-Resource Language Modeling
Lapata, Mirella and Blunsom, Phil and Koller, Alexander
apr
2017
Valencia, Spain
Association for Computational Linguistics
https://aclanthology.org/E17-1088/
Adams, Oliver and Makarucha, Adam and Neubig, Graham and Bird, Steven and Cohn, Trevor
Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
937--947
Most languages have no established writing system and minimal written records. However, textual data is essential for natural language processing, and particularly important for training language models to support speech recognition. Even in cases where text data is missing, there are some languages for which bilingual lexicons are available, since creating lexicons is a fundamental task of documentary linguistics. We investigate the use of such lexicons to improve language models when textual training data is limited to as few as a thousand sentences. The method involves learning cross-lingual word embeddings as a preliminary step in training monolingual language models. Results across a number of languages show that language models are improved by this pre-training. Application to Yongning Na, a threatened language, highlights challenges in deploying the approach in real low-resource environments.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
57,286