Dataset schema (column: type, cardinality/range):

entry_type: stringclasses (4 values)
citation_key: stringlengths (10 to 110)
title: stringlengths (6 to 276)
editor: stringclasses (723 values)
month: stringclasses (69 values)
year: stringdate (1963-01-01 00:00:00 to 2022-01-01 00:00:00)
address: stringclasses (202 values)
publisher: stringclasses (41 values)
url: stringlengths (34 to 62)
author: stringlengths (6 to 2.07k)
booktitle: stringclasses (861 values)
pages: stringlengths (1 to 12)
abstract: stringlengths (302 to 2.4k)
journal: stringclasses (5 values)
volume: stringclasses (24 values)
doi: stringlengths (20 to 39)
n: stringclasses (3 values)
wer: stringclasses (1 value)
uas: null
language: stringclasses (3 values)
isbn: stringclasses (34 values)
recall: null
number: stringclasses (8 values)
a: null
b: null
c: null
k: null
f1: stringclasses (4 values)
r: stringclasses (2 values)
mci: stringclasses (1 value)
p: stringclasses (2 values)
sd: stringclasses (1 value)
female: stringclasses (0 values)
m: stringclasses (0 values)
food: stringclasses (1 value)
f: stringclasses (1 value)
note: stringclasses (20 values)
__index_level_0__: int64 (22k to 106k)
@inproceedings{piasecki-etal-2017-recognition,
    title = "Recognition of Genuine {P}olish Suicide Notes",
    author = "Piasecki, Maciej and M{\l}ynarczyk, Ksenia and Koco{\'n}, Jan",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1076/",
    doi = "10.26615/978-954-452-049-6_076",
    pages = "583--591",
    abstract = "In this article we present the results of recent research on the recognition of genuine Polish suicide notes (SNs). We provide a useful method to distinguish between SNs and other types of discourse, including counterfeited SNs. The method uses a wide range of word-based and semantic features, and it was evaluated on the Polish Corpus of Suicide Notes, which contains 1244 genuine SNs, expanded with a manually prepared set of 334 counterfeited SNs and 2200 letter-like texts from the Internet. We utilized the algorithm to create class-related sense dictionaries to improve the results of SN classification. The obtained results show that there are fundamental differences between genuine and counterfeited SNs. The applied method of sense dictionary construction proved to be the best way of improving the model.",
}
% __index_level_0__: 56,352
@inproceedings{prazak-konopik-2017-cross,
    title = "Cross-Lingual {SRL} Based upon {U}niversal {D}ependencies",
    author = "Pra{\v{z}}{\'a}k, Ond{\v{r}}ej and Konop{\'i}k, Miloslav",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1077/",
    doi = "10.26615/978-954-452-049-6_077",
    pages = "592--600",
    abstract = "In this paper, we introduce a cross-lingual Semantic Role Labeling (SRL) system with language-independent features based upon Universal Dependencies. We propose two methods to convert SRL annotations from monolingual dependency trees into universal dependency trees. Our SRL system is based upon cross-lingual features derived from universal dependency trees and supervised learning that utilizes a maximum entropy classifier. We design experiments to verify whether Universal Dependencies are suitable for cross-lingual SRL. The results are very promising and open interesting new research paths for the future.",
}
% __index_level_0__: 56,353
@inproceedings{rohanian-etal-2017-using,
    title = "Using Gaze Data to Predict Multiword Expressions",
    author = "Rohanian, Omid and Taslimipoor, Shiva and Yaneva, Victoria and Ha, Le An",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1078/",
    doi = "10.26615/978-954-452-049-6_078",
    pages = "601--609",
    abstract = "In recent years, gaze data has been increasingly used to improve and evaluate NLP models, since it carries information about the cognitive processing of linguistic phenomena. In this paper we conduct a preliminary study towards the automatic identification of multiword expressions based on gaze features from native and non-native speakers of English. We report comparisons of a part-of-speech (POS) and frequency baseline to: i) a prediction model based solely on gaze data and ii) a combined model of gaze data, POS and frequency. In spite of the challenging nature of the task, the best performance was achieved by the latter. Furthermore, we explore how the type of gaze data (from native versus non-native speakers) affects the prediction, showing that data from the two groups is discriminative to an equal degree for the task. Finally, we show that late processing measures are more predictive than early ones, which is in line with previous research on idioms and other formulaic structures.",
}
% __index_level_0__: 56,354
@inproceedings{ruckle-gurevych-2017-real,
    title = "Real-Time News Summarization with Adaptation to Media Attention",
    author = "R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1079/",
    doi = "10.26615/978-954-452-049-6_079",
    pages = "610--617",
    abstract = "Real-time summarization of news events (RTS) allows people to stay up-to-date on important topics that develop over time. With the occurrence of major sub-events, media attention increases and a large number of news articles are published. We propose a summarization approach that detects such changes and selects a suitable summarization configuration at run-time. In particular, at times with high media attention, our approach exploits the redundancy in content to produce a more precise summary and avoid emitting redundant information. We find that our approach significantly outperforms a strong non-adaptive RTS baseline in terms of the emitted summary updates and achieves the best results on a recent web-scale dataset. It can successfully be applied to a different real-world dataset without requiring additional modifications.",
}
% __index_level_0__: 56,355
@inproceedings{rudrapal-das-2017-measuring,
    title = "Measuring the Limit of Semantic Divergence for {E}nglish Tweets",
    author = "Rudrapal, Dwijen and Das, Amitava",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1080/",
    doi = "10.26615/978-954-452-049-6_080",
    pages = "618--624",
    abstract = "In human language, an expression can be conveyed in many ways by different people. Even the same person may express the same sentence quite differently when addressing different audiences, using different modalities, using different syntactic variations, or using a different set of vocabulary. The possibility of such endless surface forms of text, while the meaning remains almost the same, poses many challenges for Natural Language Processing (NLP) systems such as question answering, machine translation, and text summarization. This paper is an endeavor to understand the characteristics of such endless semantic divergence. In this work we develop a corpus of 1525 semantically divergent sentences for 200 English tweets.",
}
% __index_level_0__: 56,356
@inproceedings{ruppenhofer-etal-2017-evaluating,
    title = "Evaluating the morphological compositionality of polarity",
    author = "Ruppenhofer, Josef and Steiner, Petra and Wiegand, Michael",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1081/",
    doi = "10.26615/978-954-452-049-6_081",
    pages = "625--633",
    abstract = "Unknown words are a challenge for any NLP task, including sentiment analysis. Here, we evaluate the extent to which sentiment polarity of complex words can be predicted based on their morphological make-up. We do this on German as it has very productive processes of derivation and compounding and many German hapax words, which are likely to bear sentiment, are morphologically complex. We present results of supervised classification experiments on new datasets with morphological parses and polarity annotations.",
}
% __index_level_0__: 56,357
@inproceedings{rysova-etal-2017-introducing,
    title = "Introducing {EVALD} {--} Software Applications for Automatic Evaluation of Discourse in {C}zech",
    author = "Rysov{\'a}, Kate{\v{r}}ina and Rysov{\'a}, Magdal{\'e}na and M{\'i}rovsk{\'y}, Ji{\v{r}}{\'i} and Nov{\'a}k, Michal",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1082/",
    doi = "10.26615/978-954-452-049-6_082",
    pages = "634--641",
    abstract = "In the paper, we introduce two software applications for the automatic evaluation of coherence in Czech texts, called EVALD {--} Evaluator of Discourse. The first one {--} EVALD 1.0 {--} evaluates texts written by native speakers of Czech on the five-step scale commonly used at Czech schools (grade 1 is the best, grade 5 is the worst). The second application, EVALD 1.0 for Foreigners, assesses texts by non-native speakers of Czech using the six-step scale (A1{--}C2) according to CEFR. Both applications are available online at \url{https://lindat.mff.cuni.cz/services/evald-foreign/}.",
}
% __index_level_0__: 56,358
@inproceedings{salton-etal-2017-idiom,
    title = "Idiom Type Identification with Smoothed Lexical Features and a Maximum Margin Classifier",
    author = "Salton, Giancarlo and Ross, Robert and Kelleher, John",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1083/",
    doi = "10.26615/978-954-452-049-6_083",
    pages = "642--651",
    abstract = "In our work we address limitations in the state of the art in idiom type identification. We investigate different approaches for a lexical fixedness metric, a component of the state-of-the-art model. We also show that our Machine Learning-based approach to the idiom type identification task achieves an F1-score of 0.85, an improvement of 11 points over the state of the art.",
}
% __index_level_0__: 56,359
@inproceedings{satthar-etal-2017-calibration,
    title = "A Calibration Method for Evaluation of Sentiment Analysis",
    author = "Satthar, F. Sharmila and Evans, Roger and Uchyigit, Gulden",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1084/",
    doi = "10.26615/978-954-452-049-6_084",
    pages = "652--660",
    abstract = "Sentiment analysis is the computational task of extracting sentiment from a text document {--} for example, whether it expresses a positive, negative or neutral opinion. Various approaches have been introduced in recent years, using a range of different techniques to extract sentiment information from a document. Measuring these methods against a gold-standard dataset is a useful way to evaluate such systems. However, different sentiment analysis techniques represent sentiment values in different ways, such as discrete categorical classes or continuous numerical sentiment scores. This creates a challenge for evaluating and comparing such systems; in particular, assessing numerical scores against datasets that use fixed classes is difficult, because the numerical outputs have to be mapped onto the ordered classes. This paper proposes a novel calibration technique that uses precision vs. recall curves to set class thresholds to optimize a continuous sentiment analyser's performance against a discrete gold-standard dataset. In experiments mapping a continuous score onto a three-class classification of movie reviews, we show that calibration results in a substantial increase in F-score when compared to a non-calibrated mapping.",
}
% __index_level_0__: 56,360
@inproceedings{semmar-laib-2017-building,
    title = "Building Multiword Expressions Bilingual Lexicons for Domain Adaptation of an Example-Based Machine Translation System",
    author = "Semmar, Nasredine and Laib, Mariama",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1085/",
    doi = "10.26615/978-954-452-049-6_085",
    pages = "661--670",
    abstract = "We describe in this paper a hybrid approach to automatically build bilingual lexicons of Multiword Expressions (MWEs) from parallel corpora. We more specifically investigate the impact of using a domain-specific bilingual lexicon of MWEs on the domain adaptation of an Example-Based Machine Translation (EBMT) system. We conducted experiments on the English-French language pair and two kinds of texts: in-domain texts from Europarl (European Parliament proceedings) and out-of-domain texts from Emea (European Medicines Agency documents) and Ecb (European Central Bank corpus). The obtained results indicate that integrating domain-specific bilingual lexicons of MWEs improves the translation quality of the EBMT system when the texts to translate are related to the specific domain, and induces a relatively slight deterioration of translation quality when translating general-purpose texts.",
}
% __index_level_0__: 56,361
@inproceedings{simaki-etal-2017-identifying,
    title = "Identifying the Authors' National Variety of {E}nglish in Social Media Texts",
    author = "Simaki, Vasiliki and Simakis, Panagiotis and Paradis, Carita and Kerren, Andreas",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1086/",
    doi = "10.26615/978-954-452-049-6_086",
    pages = "671--678",
    abstract = "In this paper, we present a study on the identification of authors' national variety of English in texts from social media. In data from Facebook and Twitter, information about the author's social profile is annotated, and the national English variety (US, UK, AUS, CAN, NNS) that each author uses is attributed. We tested four feature types: formal linguistic features, POS features, lexicon-based features related to the different varieties, and data-based features from each English variety. We used various machine learning algorithms for the classification experiments, and we implemented a feature selection process. The classification accuracy achieved, when the 31 highest-ranked features were used, was up to 77.32{\%}. The experimental results are evaluated, and the efficacy of the ranked features is discussed.",
}
% __index_level_0__: 56,362
@inproceedings{simov-etal-2017-towards,
    title = "Towards Lexical Chains for Knowledge-Graph-based Word Embeddings",
    author = "Simov, Kiril and Boytcheva, Svetla and Osenova, Petya",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1087/",
    doi = "10.26615/978-954-452-049-6_087",
    pages = "679--685",
    abstract = "Word vectors with varying dimensionalities and produced by different algorithms have been extensively used in NLP. The corpora that the algorithms are trained on can contain either natural language text (e.g. Wikipedia or newswire articles) or, due to natural data sparseness, artificially generated pseudo-corpora. We exploit lexical-chain-based templates over a knowledge graph for generating pseudo-corpora with controlled linguistic value. These corpora are then used for learning word embeddings. A number of experiments have been conducted over the following test sets: WordSim353 Similarity, WordSim353 Relatedness and SimLex-999. The results show that, on the one hand, the incorporation of many-relation lexical chains improves results, but on the other hand, unrestricted-length chains remain difficult to handle with respect to their huge quantity.",
}
% __index_level_0__: 56,363
@inproceedings{simova-uszkoreit-2017-word,
    title = "Word Embeddings as Features for Supervised Coreference Resolution",
    author = "Simova, Iliana and Uszkoreit, Hans",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1088/",
    doi = "10.26615/978-954-452-049-6_088",
    pages = "686--693",
    abstract = "A common reason for errors in coreference resolution is the lack of semantic information to help determine the compatibility between mentions referring to the same entity. Distributed representations, which have been shown successful in encoding relatedness between words, could potentially be a good source of such knowledge. Moreover, being obtained in an unsupervised manner, they could help address data sparsity issues in labeled training data at a small cost. In this work we investigate whether and to what extent features derived from word embeddings can be successfully used for supervised coreference resolution. We experiment with several word embedding models, and several different types of embedding-based features, including embedding cluster and cosine similarity-based features. Our evaluations show improvements in the performance of a supervised state-of-the-art coreference system.",
}
% __index_level_0__: 56,364
@inproceedings{steinberger-etal-2017-cross,
    title = "Cross-lingual Flames Detection in News Discussions",
    author = "Steinberger, Josef and Brychc{\'i}n, Tom{\'a}{\v{s}} and Hercig, Tom{\'a}{\v{s}} and Krejzl, Peter",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1089/",
    doi = "10.26615/978-954-452-049-6_089",
    pages = "694--700",
    abstract = "We introduce Flames Detector, an online system for measuring flames, i.e. strong negative feelings or emotions, insults or other verbal offences, in news commentaries across five languages. It is designed to assist journalists, public institutions or discussion moderators in detecting news topics which evoke wrangles. We propose a machine learning approach to flames detection and calculate an aggregated score for a set of comment threads. The demo application shows the most flaming topics of the current period in several language variants. The search functionality makes it possible to measure flames in any topic specified by a query. The evaluation shows that flame detection in discussions is a difficult task; however, the application can already reveal interesting information about current news discussions.",
}
% __index_level_0__: 56,365
@inproceedings{steinberger-etal-2017-pyramid,
    title = "Pyramid-based Summary Evaluation Using {A}bstract {M}eaning {R}epresentation",
    author = "Steinberger, Josef and Krejzl, Peter and Brychc{\'i}n, Tom{\'a}{\v{s}}",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1090/",
    doi = "10.26615/978-954-452-049-6_090",
    pages = "701--706",
    abstract = "We propose a novel metric for evaluating summary content coverage. The evaluation framework follows the Pyramid approach to measure how many summarization content units, considered important by human annotators, are contained in an automatic summary. Our approach automates the evaluation process, which does not need any manual intervention on the evaluated summary side. Our approach compares abstract meaning representations of each content unit mention and each summary sentence. We found that the proposed metric complements the widely-used ROUGE metrics well.",
}
% __index_level_0__: 56,366
@inproceedings{steinberger-etal-2017-large,
    title = "Large-scale news entity sentiment analysis",
    author = "Steinberger, Ralf and Hegele, Stefanie and Tanev, Hristo and Della Rocca, Leonida",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1091/",
    doi = "10.26615/978-954-452-049-6_091",
    pages = "707--715",
    abstract = "We work on detecting positive or negative sentiment towards named entities in very large volumes of news articles. The aim is to monitor changes over time, as well as to work towards media bias detection by comparing differences across news sources and countries. With a view to applying the same method to dozens of languages, we use linguistically lightweight methods: searching for positive and negative terms in bags of words around entity mentions (also considering negation). Evaluation results are good and better than a third-party baseline system, but precision is not sufficiently high to display the results publicly in our multilingual news analysis system Europe Media Monitor (EMM). In this paper, we focus on describing our effort to improve the English-language results by avoiding the biggest sources of errors. We also present new work on using a syntactic parser to identify safe opinion recognition rules, such as predicative structures in which sentiment words directly refer to an entity. The precision of this method is good, but recall is very low.",
}
% __index_level_0__: 56,367
@inproceedings{sulea-etal-2017-predicting,
    title = "Predicting the Law Area and Decisions of {F}rench {S}upreme {C}ourt Cases",
    author = "{\c{S}}ulea, Octavia-Maria and Zampieri, Marcos and Vela, Mihaela and van Genabith, Josef",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1092/",
    doi = "10.26615/978-954-452-049-6_092",
    pages = "716--722",
    abstract = "In this paper, we investigate the application of text classification methods to predict the law area and the decision of cases judged by the French Supreme Court. We also investigate the influence of the time period in which a ruling was made on the textual form of the case description, and the extent to which it is necessary to mask the judge's motivation for a ruling to emulate a real-world test scenario. We report results of a 96{\%} F1 score in predicting a case ruling, a 90{\%} F1 score in predicting the law area of a case, and a 75.9{\%} F1 score in estimating the time span when a ruling was issued, using a linear Support Vector Machine (SVM) classifier trained on lexical features.",
}
% __index_level_0__: 56,368
@inproceedings{sumalvico-2017-unsupervised,
    title = "Unsupervised Learning of Morphology with Graph Sampling",
    author = "Sumalvico, Maciej",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1093/",
    doi = "10.26615/978-954-452-049-6_093",
    pages = "723--732",
    abstract = "We introduce a language-independent, graph-based probabilistic model of morphology, which uses transformation rules operating on whole words instead of the traditional morphological segmentation. The morphological analysis of a set of words is expressed through a graph having words as vertices and structural relationships between words as edges. We define a probability distribution over such graphs and develop a sampler based on the Metropolis-Hastings algorithm. The sampling is applied in order to determine the strength of morphological relationships between words, filter out accidental similarities and reduce the set of rules necessary to explain the data. The model is evaluated on the task of finding pairs of morphologically similar words, as well as generating new words. The results are compared to a state-of-the-art segmentation-based approach.",
}
% __index_level_0__: 56,369
@inproceedings{sweeney-padmanabhan-2017-multi,
    title = "Multi-entity sentiment analysis using entity-level feature extraction and word embeddings approach",
    author = "Sweeney, Colm and Padmanabhan, Deepak",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1094/",
    doi = "10.26615/978-954-452-049-6_094",
    pages = "733--740",
    abstract = "The sentiment analysis task has traditionally been divided into lexicon or machine learning approaches, but recently the use of word embedding methods has emerged; these provide powerful algorithms that allow semantic understanding without the task of creating large amounts of annotated test data. One problem with this type of binary classification is that the sentiment output will be in the form of {\textquoteleft}1' (positive) or {\textquoteleft}0' (negative) for the string of text in the tweet, regardless of whether there are one or more entities referred to in the text. This paper plans to enhance the word embeddings approach with the deployment of a sentiment lexicon-based technique to appoint a total score that indicates the polarity of opinion in relation to a particular entity or entities. This type of sentiment classification is a way of associating a given entity with the adjectives, adverbs, and verbs describing it, and extracting the associated sentiment to try to infer whether the text is positive or negative in relation to the entity or entities.",
}
% __index_level_0__: 56,370
@inproceedings{tahmasebi-risse-2017-finding,
    title = "Finding Individual Word Sense Changes and their Delay in Appearance",
    author = "Tahmasebi, Nina and Risse, Thomas",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1095/",
    doi = "10.26615/978-954-452-049-6_095",
    pages = "741--749",
    abstract = "We present a method for detecting word sense changes by utilizing automatically induced word senses. Our method works on the level of individual senses and allows a word to have, for example, one stable sense and then add a novel sense that later experiences change. Senses are grouped based on polysemy to find linguistic concepts, and we can find broadening and narrowing as well as novel (polysemous and homonymic) senses. We evaluate on a test set, and present recall and estimates of the time between expected and found change.",
}
% __index_level_0__: 56,371
@inproceedings{thomas-etal-2017-streaming,
    title = "Streaming Text Analytics for Real-Time Event Recognition",
    author = "Thomas, Philippe and Kirschnick, Johannes and Hennig, Leonhard and Ai, Renlong and Schmeier, Sven and Hemsen, Holmer and Xu, Feiyu and Uszkoreit, Hans",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1096/",
    doi = "10.26615/978-954-452-049-6_096",
    pages = "750--757",
    abstract = "A huge body of continuously growing written knowledge is available on the web in the form of social media posts, RSS feeds, and news articles. Real-time information extraction from such high velocity, high volume text streams requires scalable, distributed natural language processing pipelines. We introduce such a system for fine-grained event recognition within the big data framework Flink, and demonstrate its capabilities for extracting and geo-locating mobility- and industry-related events from heterogeneous text sources. Performance analyses conducted on several large datasets show that our system achieves high throughput and maintains low latency, which is crucial when events need to be detected and acted upon in real-time. We also present promising experimental results for the event extraction component of our system, which recognizes a novel set of event types. The demo system is available at \url{http://dfki.de/sd4m-sta-demo/}.",
}
% __index_level_0__: 56,372
@inproceedings{tokunaga-etal-2017-eye,
    title = "An Eye-tracking Study of Named Entity Annotation",
    author = "Tokunaga, Takenobu and Nishikawa, Hitoshi and Iwakura, Tomoya",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1097/",
    doi = "10.26615/978-954-452-049-6_097",
    pages = "758--764",
    abstract = "Utilising effective features in machine learning-based natural language processing (NLP) is crucial in achieving good performance for a given NLP task. The paper describes a pilot study on the analysis of eye-tracking data during named entity (NE) annotation, aiming at obtaining insights into effective features for the NE recognition task. The eye gaze data were collected from 10 annotators and analysed regarding working time and fixation distribution. The results of the preliminary qualitative analysis showed that human annotators tend to look at broader contexts around the target NE than recent state-of-the-art automatic NE recognition systems and to use predicate argument relations to identify the NE categories.",
}
% __index_level_0__: 56,373
@inproceedings{tsekouras-etal-2017-graph,
    title = "A Graph-based Text Similarity Measure That Employs Named Entity Information",
    author = "Tsekouras, Leonidas and Varlamis, Iraklis and Giannakopoulos, George",
    editor = "Mitkov, Ruslan and Angelova, Galia",
    booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017",
    month = sep,
    year = "2017",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd.",
    url = "https://aclanthology.org/R17-1098/",
    doi = "10.26615/978-954-452-049-6_098",
    pages = "765--771",
    abstract = "Text comparison is an interesting though hard task, with many applications in Natural Language Processing. This work introduces a new text-similarity measure, which employs named-entity information extracted from the texts and the n-gram graph model for representing documents. Using OpenCalais as a named-entity recognition service and the JINSECT toolkit for constructing and managing n-gram graphs, the text similarity measure is embedded in a text clustering algorithm (k-Means). The evaluation of the produced clusters with various clustering validity metrics shows that the extraction of named entities as a first step can be profitable for the time performance of similarity measures based on the n-gram graph representation, without affecting the overall performance of the NLP task.",
}
% __index_level_0__: 56,374
inproceedings
wawer-mykowiecka-2017-detecting
Detecting Metaphorical Phrases in the {P}olish Language
Mitkov, Ruslan and Angelova, Galia
sep
2017
Varna, Bulgaria
INCOMA Ltd.
https://aclanthology.org/R17-1099/
Wawer, Aleksander and Mykowiecka, Agnieszka
Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017
772--777
In this paper we describe experiments with automated detection of metaphors in the Polish language. We focus our analysis on noun phrases composed of an adjective and a noun, and distinguish three types of expressions: with literal sense, with metaphorical sense, and expressions both literal and metaphorical (context-dependent). We propose a method of automatically recognizing expression type using word embeddings and neural networks. We evaluate multiple neural network architectures and demonstrate that the method significantly outperforms strong baselines.
null
null
10.26615/978-954-452-049-6_099
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,375
inproceedings
weegar-etal-2017-efficient
Efficient Encoding of Pathology Reports Using Natural Language Processing
Mitkov, Ruslan and Angelova, Galia
sep
2017
Varna, Bulgaria
INCOMA Ltd.
https://aclanthology.org/R17-1100/
Weegar, Rebecka and Nyg{\r{a}}rd, Jan F and Dalianis, Hercules
Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017
778--783
In this article we present a system that extracts information from pathology reports. The reports are written in Norwegian and contain free text describing prostate biopsies. Currently, these reports are manually coded for research and statistical purposes by trained experts at the Cancer Registry of Norway where the coders extract values for a set of predefined fields that are specific for prostate cancer. The presented system is rule based and achieves an average F-score of 0.91 for the fields Gleason grade, Gleason score, the number of biopsies that contain tumor tissue, and the orientation of the biopsies. The system also identifies reports that contain ambiguity or other content that should be reviewed by an expert. The system shows potential to encode the reports considerably faster, with less resources, and similar high quality to the manual encoding.
null
null
10.26615/978-954-452-049-6_100
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,376
inproceedings
yang-etal-2017-neural-reranking
Neural Reranking for Named Entity Recognition
Mitkov, Ruslan and Angelova, Galia
sep
2017
Varna, Bulgaria
INCOMA Ltd.
https://aclanthology.org/R17-1101/
Yang, Jie and Zhang, Yue and Dong, Fei
Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017
784--792
We propose a neural reranking system for named entity recognition (NER), which leverages recurrent neural network models to learn sentence-level patterns that involve named entity mentions. In particular, given an output sentence produced by a baseline NER model, we replace all entity mentions, such as \textit{Barack Obama}, with their entity types, such as \textit{PER}. The resulting sentence patterns contain direct output information, yet are less sparse without specific named entities. For example, {\textquotedblleft}PER was born in LOC{\textquotedblright} can be such a pattern. LSTM and CNN structures are utilised for learning deep representations of such sentences for reranking. Results show that our system can significantly improve the NER accuracies over two different baselines, giving the best reported results on a standard benchmark.
null
null
10.26615/978-954-452-049-6_101
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,377
inproceedings
yao-etal-2017-online
Online Deception Detection Refueled by Real World Data Collection
Mitkov, Ruslan and Angelova, Galia
sep
2017
Varna, Bulgaria
INCOMA Ltd.
https://aclanthology.org/R17-1102/
Yao, Wenlin and Dai, Zeyu and Huang, Ruihong and Caverlee, James
Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017
793--802
The lack of large realistic datasets presents a bottleneck in online deception detection studies. In this paper, we apply a data collection method based on social network analysis to quickly identify high quality deceptive and truthful online reviews from Amazon. The dataset contains more than 10,000 deceptive reviews and is diverse in product domains and reviewers. Using this dataset, we explore effective general features for online deception detection that perform well across domains. We demonstrate that with generalized features {--} advertising speak and writing complexity scores {--} deception detection performance can be further improved by adding additional deceptive reviews from assorted domains in training. Finally, reviewer level evaluation gives an interesting insight into different deceptive reviewers' writing styles.
null
null
10.26615/978-954-452-049-6_102
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,378
inproceedings
yao-etal-2017-weakly
A Weakly Supervised Approach to Train Temporal Relation Classifiers and Acquire Regular Event Pairs Simultaneously
Mitkov, Ruslan and Angelova, Galia
sep
2017
Varna, Bulgaria
INCOMA Ltd.
https://aclanthology.org/R17-1103/
Yao, Wenlin and Nettyam, Saipravallika and Huang, Ruihong
Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017
803--812
Capabilities of detecting temporal and causal relations between two events can benefit many applications. Most existing temporal relation classifiers were trained in a supervised manner. Instead, we explore the observation that regular event pairs show a consistent temporal relation despite their various contexts, and these rich contexts can be used to train a contextual temporal relation classifier, which can further recognize new temporal relation contexts and identify new regular event pairs. We focus on detecting after and before temporal relations and design a weakly supervised learning approach that extracts thousands of regular event pairs and learns a contextual temporal relation classifier simultaneously. Evaluation shows that the acquired regular event pairs are of high quality and contain rich commonsense knowledge and domain specific knowledge. In addition, the weakly supervised trained temporal relation classifier achieves comparable performance with the state-of-the-art supervised systems.
null
null
10.26615/978-954-452-049-6_103
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,379
inproceedings
yimam-etal-2017-multilingual
Multilingual and Cross-Lingual Complex Word Identification
Mitkov, Ruslan and Angelova, Galia
sep
2017
Varna, Bulgaria
INCOMA Ltd.
https://aclanthology.org/R17-1104/
Yimam, Seid Muhie and {\v{S}}tajner, Sanja and Riedl, Martin and Biemann, Chris
Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017
813--822
Complex Word Identification (CWI) is an important task in lexical simplification and text accessibility. Due to the lack of CWI datasets, previous works largely depend on Simple English Wikipedia and edit histories for obtaining {\textquoteleft}gold standard' annotations, which are of doubtable quality, and limited only to English. We collect complex words/phrases (CP) for English, German and Spanish, annotated by both native and non-native speakers, and propose language independent features that can be used to train multilingual and cross-lingual CWI models. We show that the performance of cross-lingual CWI systems (using a model trained on one language and applying it on the other languages) is comparable to the performance of monolingual CWI systems.
null
null
10.26615/978-954-452-049-6_104
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,380
inproceedings
yordanova-2017-automatic
Automatic Generation of Situation Models for Plan Recognition Problems
Mitkov, Ruslan and Angelova, Galia
sep
2017
Varna, Bulgaria
INCOMA Ltd.
https://aclanthology.org/R17-1105/
Yordanova, Kristina
Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017
823--830
Recent attempts at behaviour understanding through language grounding have shown that it is possible to automatically generate models for planning problems from textual instructions. One drawback of these approaches is that they either do not make use of the semantic structure behind the model elements identified in the text, or they manually incorporate a collection of concepts with semantic relationships between them. We call this collection of knowledge situation model. The situation model introduces additional context information to the model. It could also potentially reduce the complexity of the planning problem compared to models that do not use situation models. To address this problem, we propose an approach that automatically generates the situation model from textual instructions. The approach is able to identify various hierarchical, spatial, directional, and causal relations. We use the situation model to automatically generate planning problems in a PDDL notation and we show that the situation model reduces the complexity of the PDDL model in terms of number of operators and branching factor compared to planning models that do not make use of situation models.
null
null
10.26615/978-954-452-049-6_105
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,381
inproceedings
yordanova-2017-simple
A Simple Model for Improving the Performance of the {S}tanford Parser for Action Detection in Textual Instructions
Mitkov, Ruslan and Angelova, Galia
sep
2017
Varna, Bulgaria
INCOMA Ltd.
https://aclanthology.org/R17-1106/
Yordanova, Kristina
Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017
831--838
Different approaches for behaviour understanding rely on textual instructions to generate models of human behaviour. These approaches usually use state of the art parsers to obtain the part of speech (POS) meaning and dependencies of the words in the instructions. For them it is essential that the parser is able to correctly annotate the instructions and especially the verbs as they describe the actions of the person. State of the art parsers usually make errors when annotating textual instructions, as they have short sentence structure often in imperative form. The inability of the parser to identify the verbs results in the inability of behaviour understanding systems to identify the relevant actions. To address this problem, we propose a simple rule-based model that attempts to correct any incorrectly annotated verbs. We argue that the model is able to significantly improve the parser`s performance without the need of additional training data. We evaluate our approach by extracting the actions from 61 textual instructions annotated only with the Stanford parser and once again after applying our model. The results show a significant improvement in the recognition rate when applying the rules (75{\%} accuracy compared to 68{\%} without the rules, p-value {\ensuremath{<}} 0.001).
null
null
10.26615/978-954-452-049-6_106
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,382
inproceedings
zilio-etal-2017-using
Using {NLP} for Enhancing Second Language Acquisition
Mitkov, Ruslan and Angelova, Galia
sep
2017
Varna, Bulgaria
INCOMA Ltd.
https://aclanthology.org/R17-1107/
Zilio, Leonardo and Wilkens, Rodrigo and Fairon, C{\'e}drick
Proceedings of the International Conference Recent Advances in Natural Language Processing, {RANLP} 2017
839--846
This study presents SMILLE, a system that draws on the Noticing Hypothesis and on input enhancements, addressing the lack of salience of grammatical information in online documents chosen by a given user. By means of input enhancements, the system can draw the user`s attention to grammar, which could possibly lead to a higher intake per input ratio for metalinguistic information. The system receives as input an online document and submits it to a combined processing of parser and hand-written rules for detecting its grammatical structures. The input text can be freely chosen by the user, providing a more engaging experience and reflecting the user`s interests. The system can enhance a total of 107 fine-grained types of grammatical structures that are based on the CEFR. An evaluation of some of those structures resulted in an overall precision of 87{\%}.
null
null
10.26615/978-954-452-049-6_107
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,383
inproceedings
clairet-2017-dish
Dish Classification using Knowledge based Dietary Conflict Detection
Kovatchev, Venelin and Temnikova, Irina and Gencheva, Pepa and Kiprov, Yasen and Nikolova, Ivelina
sep
2017
Varna
INCOMA Ltd.
https://aclanthology.org/R17-2001/
Clairet, Nadia
Proceedings of the Student Research Workshop Associated with {RANLP} 2017
1--9
The present paper considers the problem of dietary conflict detection from dish titles. The proposed method explores the semantics associated with the dish title in order to discover a certain or possible incompatibility of a particular dish with a particular diet. Dish titles are parts of the elusive and metaphoric gastronomy language; their processing can be viewed as a combination of short text and domain-specific text analysis. We build our algorithm on the basis of a common knowledge lexical semantic network and show how such a network can be used for domain specific short text processing.
null
null
10.26615/issn.1314-9156.2017_001
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,385
inproceedings
daudert-2017-analysing
Analysing Market Sentiments: Utilising Deep Learning to Exploit Relationships within the Economy
Kovatchev, Venelin and Temnikova, Irina and Gencheva, Pepa and Kiprov, Yasen and Nikolova, Ivelina
sep
2017
Varna
INCOMA Ltd.
https://aclanthology.org/R17-2002/
Daudert, Tobias
Proceedings of the Student Research Workshop Associated with {RANLP} 2017
10--16
In today`s world, globalisation is not only affecting inter-culturalism but also linking markets across the globe. Given that all markets are affecting each other and are not only driven by fundamental data but also by sentiments, sentiment analysis regarding the markets becomes a tool to predict, anticipate, and mitigate future economic crises such as the one we faced in 2008. In this paper, an approach to improve sentiment analysis by exploiting relationships among different kinds of sentiment, together with supplementary information, from and across various data sources is proposed.
null
null
10.26615/issn.1314-9156.2017_002
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,386
inproceedings
jwalapuram-2017-evaluating
Evaluating Dialogs based on {G}rice`s Maxims
Kovatchev, Venelin and Temnikova, Irina and Gencheva, Pepa and Kiprov, Yasen and Nikolova, Ivelina
sep
2017
Varna
INCOMA Ltd.
https://aclanthology.org/R17-2003/
Jwalapuram, Prathyusha
Proceedings of the Student Research Workshop Associated with {RANLP} 2017
17--24
There is no agreed upon standard for the evaluation of conversational dialog systems, which are well-known to be hard to evaluate due to the difficulty in pinning down metrics that will correspond to human judgements and the subjective nature of human judgment itself. We explored the possibility of using Grice`s Maxims to evaluate effective communication in conversation. We collected some system generated dialogs from popular conversational chatbots across the spectrum and conducted a survey to see how the human judgements based on Gricean maxims correlate, and if such human judgments can be used as an effective evaluation metric for conversational dialog.
null
null
10.26615/issn.1314-9156.2017_003
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,387
inproceedings
popov-2017-word
Word Sense Disambiguation with Recurrent Neural Networks
Kovatchev, Venelin and Temnikova, Irina and Gencheva, Pepa and Kiprov, Yasen and Nikolova, Ivelina
sep
2017
Varna
INCOMA Ltd.
https://aclanthology.org/R17-2004/
Popov, Alexander
Proceedings of the Student Research Workshop Associated with {RANLP} 2017
25--34
This paper presents a neural network architecture for word sense disambiguation (WSD). The architecture employs recurrent neural layers and more specifically LSTM cells, in order to capture information about word order and to easily incorporate distributed word representations (embeddings) as features, without having to use a fixed window of text. The paper demonstrates that the architecture is able to compete with the most successful supervised systems for WSD and that there is an abundance of possible improvements to take it to the current state of the art. In addition, it explores briefly the potential of combining different types of embeddings as input features; it also discusses possible ways for generating {\textquotedblleft}artificial corpora{\textquotedblright} from knowledge bases {--} for the purpose of producing training data and in relation to possible applications of embedding lemmas and word senses in the same space.
null
null
10.26615/issn.1314-9156.2017_004
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,388
inproceedings
rohanian-2017-multi
Multi-Document Summarization of {P}ersian Text using Paragraph Vectors
Kovatchev, Venelin and Temnikova, Irina and Gencheva, Pepa and Kiprov, Yasen and Nikolova, Ivelina
sep
2017
Varna
INCOMA Ltd.
https://aclanthology.org/R17-2005/
Rohanian, Morteza
Proceedings of the Student Research Workshop Associated with {RANLP} 2017
35--40
A multi-document summarizer finds the key topics from multiple textual sources and organizes information around them. In this paper we propose a summarization method for Persian text using paragraph vectors that can represent textual units of arbitrary lengths. We use these vectors to calculate the semantic relatedness between documents, cluster them to a number of predetermined groups, weight them based on their distance to the centroids and the intra-cluster homogeneity and take out the key paragraphs. We compare the final summaries with the gold-standard summaries of 21 digital topics using the ROUGE evaluation metric. Experimental results show the advantages of using paragraph vectors over earlier attempts at developing similar methods for a low resource language like Persian.
null
null
10.26615/issn.1314-9156.2017_005
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,389
inproceedings
simeonova-2017-gradient
Gradient Emotional Analysis
Kovatchev, Venelin and Temnikova, Irina and Gencheva, Pepa and Kiprov, Yasen and Nikolova, Ivelina
sep
2017
Varna
INCOMA Ltd.
https://aclanthology.org/R17-2006/
Simeonova, Lilia
Proceedings of the Student Research Workshop Associated with {RANLP} 2017
41--45
Over the past few years a lot of research has been done on sentiment analysis; however, emotional analysis, being so subjective, is not a well examined discipline. The main focus of this proposal is to categorize a given sentence in two dimensions {--} sentiment and arousal. For this purpose two techniques will be combined {--} a Machine Learning approach and a Lexicon-based approach. The first dimension will give the sentiment value {--} positive versus negative. This will be resolved by using a Na{\"i}ve Bayes Classifier. The second and more interesting dimension will determine the level of arousal. This will be achieved by evaluating a given phrase or sentence based on a lexicon with affective ratings for 14 thousand English words.
null
null
10.26615/issn.1314-9156.2017_006
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,390
inproceedings
tran-2017-applying
Applying Deep Neural Network to Retrieve Relevant Civil Law Articles
Kovatchev, Venelin and Temnikova, Irina and Gencheva, Pepa and Kiprov, Yasen and Nikolova, Ivelina
sep
2017
Varna
INCOMA Ltd.
https://aclanthology.org/R17-2007/
Tran, Anh Hang Nga
Proceedings of the Student Research Workshop Associated with {RANLP} 2017
46--48
The paper addresses the legal question answering information retrieval (IR) task at the Competition on Legal Information Extraction/Entailment (COLIEE) 2017. Our proposed methodology for the task is to utilize a deep neural network, natural language processing and word2vec. The system was evaluated using training and testing data from the competition on legal information extraction/entailment (COLIEE). Our system mainly focuses on retrieving relevant civil law articles for given bar exams. The corpus of legal questions is drawn from Japanese Legal Bar exam queries. We implemented a combined deep neural network with additional NLP and word2vec features to obtain the corresponding civil law articles for given bar exam {\textquoteleft}Yes/No' questions. This paper focuses on clustering related words in order to acquire relevant civil law articles. All evaluation processes were done on the COLIEE 2017 training and test data set. The experimental results are very promising.
null
null
10.26615/issn.1314-9156.2017_007
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,391
article
smith-etal-2017-evaluating
Evaluating Visual Representations for Topic Understanding and Their Effects on Manually Generated Topic Labels
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1001/
Smith, Alison and Lee, Tak Yeon and Poursabzi-Sangdeh, Forough and Boyd-Graber, Jordan and Elmqvist, Niklas and Findlater, Leah
null
1--16
Probabilistic topic models are important tools for indexing, summarizing, and analyzing large document collections by their themes. However, promoting end-user understanding of topics remains an open research problem. We compare labels generated by users given four topic visualization techniques{---}word lists, word lists with bars, word clouds, and network graphs{---}against each other and against automatically generated labels. Our basis of comparison is participant ratings of how well labels describe documents from the topic. Our study has two phases: a labeling phase where participants label visualized topics and a validation phase where different participants select which labels best describe the topics' documents. Although all visualizations produce similar quality labels, simple visualizations such as word lists allow participants to quickly understand topics, while complex visualizations take longer but expose multi-word expressions that simpler visualizations obscure. Automatic labels lag behind user-created labels, but our dataset of manually labeled topics highlights linguistic patterns (e.g., hypernyms, phrases) that can be used to improve automatic topic labeling algorithms.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00042
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,393
article
anderson-etal-2017-visually
Visually Grounded and Textual Semantic Models Differentially Decode Brain Activity Associated with Concrete and Abstract Nouns
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1002/
Anderson, Andrew J. and Kiela, Douwe and Clark, Stephen and Poesio, Massimo
null
17--30
Important advances have recently been made using computational semantic models to decode brain activity patterns associated with concepts; however, this work has almost exclusively focused on concrete nouns. How well these models extend to decoding abstract nouns is largely unknown. We address this question by applying state-of-the-art computational models to decode functional Magnetic Resonance Imaging (fMRI) activity patterns, elicited by participants reading and imagining a diverse set of both concrete and abstract nouns. One of the models we use is linguistic, exploiting the recent word2vec skipgram approach trained on Wikipedia. The second is visually grounded, using deep convolutional neural networks trained on Google Images. Dual coding theory considers concrete concepts to be encoded in the brain both linguistically and visually, and abstract concepts only linguistically. Splitting the fMRI data according to human concreteness ratings, we indeed observe that both models significantly decode the most concrete nouns; however, accuracy is significantly greater using the text-based models for the most abstract nouns. More generally this confirms that current computational models are sufficiently advanced to assist in investigating the representational structure of abstract concepts in the brain.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00043
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,394
article
modi-etal-2017-modeling
Modeling Semantic Expectation: Using Script Knowledge for Referent Prediction
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1003/
Modi, Ashutosh and Titov, Ivan and Demberg, Vera and Sayeed, Asad and Pinkal, Manfred
null
31--44
Recent research in psycholinguistics has provided increasing evidence that humans predict upcoming content. Prediction also affects perception and might be a key to robustness in human language processing. In this paper, we investigate the factors that affect human prediction by building a computational model that can predict upcoming discourse referents based on linguistic knowledge alone vs. linguistic knowledge jointly with common-sense knowledge in the form of scripts. We find that script knowledge significantly improves model estimates of human predictions. In a second study, we test the highly controversial hypothesis that predictability influences referring expression type but do not find evidence for such an effect.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00044
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,395
article
liu-zhang-2017-shift
Shift-Reduce Constituent Parsing with Neural Lookahead Features
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1004/
Liu, Jiangming and Zhang, Yue
null
45--58
Transition-based models can be fast and accurate for constituent parsing. Compared with chart-based models, they leverage richer features by extracting history information from a parser stack, which consists of a sequence of non-local constituents. On the other hand, during incremental parsing, constituent information on the right hand side of the current word is not utilized, which is a relative weakness of shift-reduce parsing. To address this limitation, we leverage a fast neural model to extract lookahead features. In particular, we build a bidirectional LSTM model, which leverages full sentence information to predict the hierarchy of constituents that each word starts and ends. The results are then passed to a strong transition-based constituent parser as lookahead features. The resulting parser gives 1.3{\%} absolute improvement in WSJ and 2.3{\%} in CTB compared to the baseline, giving the highest reported accuracies for fully-supervised parsing.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00045
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,396
article
chang-collins-2017-polynomial
A Polynomial-Time Dynamic Programming Algorithm for Phrase-Based Decoding with a Fixed Distortion Limit
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1005/
Chang, Yin-Wen and Collins, Michael
null
59--71
Decoding of phrase-based translation models in the general case is known to be NP-complete, by a reduction from the traveling salesman problem (Knight, 1999). In practice, phrase-based systems often impose a hard distortion limit that limits the movement of phrases during translation. However, the impact on complexity after imposing such a constraint is not well studied. In this paper, we describe a dynamic programming algorithm for phrase-based decoding with a fixed distortion limit. The runtime of the algorithm is O(nd!lh{\ensuremath{^{d+1}}}) where n is the sentence length, d is the distortion limit, l is a bound on the number of phrases starting at any position in the sentence, and h is related to the maximum number of target language translations for any source word. The algorithm makes use of a novel representation that gives a new perspective on decoding of phrase-based models.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00046
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,397
article
futrell-etal-2017-generative
A Generative Model of Phonotactics
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1006/
Futrell, Richard and Albright, Adam and Graff, Peter and O{'}Donnell, Timothy J.
null
73--86
We present a probabilistic model of phonotactics, the set of well-formed phoneme sequences in a language. Unlike most computational models of phonotactics (Hayes and Wilson, 2008; Goldsmith and Riggle, 2012), we take a fully generative approach, modeling a process where forms are built up out of subparts by phonologically-informed structure building operations. We learn an inventory of subparts by applying stochastic memoization (Johnson et al., 2007; Goodman et al., 2008) to a generative process for phonemes structured as an and-or graph, based on concepts of feature hierarchy from generative phonology (Clements, 1985; Dresher, 2009). Subparts are combined in a way that allows tier-based feature interactions. We evaluate our models' ability to capture phonotactic distributions in the lexicons of 14 languages drawn from the WOLEX corpus (Graff, 2012). Our full model robustly assigns higher probabilities to held-out forms than a sophisticated N-gram model for all languages. We also present novel analyses that probe model behavior in more detail.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00047
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,398
article
tu-etal-2017-context
Context Gates for Neural Machine Translation
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1007/
Tu, Zhaopeng and Liu, Yang and Lu, Zhengdong and Liu, Xiaohua and Li, Hang
null
87--99
In neural machine translation (NMT), generation of a target word depends on both source and target contexts. We find that source contexts have a direct impact on the adequacy of a translation while target contexts affect the fluency. Intuitively, generation of a content word should rely more on the source context and generation of a functional word should rely more on the target context. Due to the lack of effective control over the influence from source and target contexts, conventional NMT tends to yield fluent but inadequate translations. To address this problem, we propose context gates which dynamically control the ratios at which source and target contexts contribute to the generation of target words. In this way, we can enhance both the adequacy and fluency of NMT with more careful control of the information flow from contexts. Experiments show that our approach significantly improves upon a standard attention-based NMT system by +2.3 BLEU points.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00048
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,399
article
peng-etal-2017-cross
Cross-Sentence N-ary Relation Extraction with Graph {LSTM}s
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1008/
Peng, Nanyun and Poon, Hoifung and Quirk, Chris and Toutanova, Kristina and Yih, Wen-tau
null
101--115
Past work in relation extraction has focused on binary relations in single sentences. Recent NLP inroads in high-value domains have sparked interest in the more general setting of extracting n-ary relations that span multiple sentences. In this paper, we explore a general relation extraction framework based on graph long short-term memory networks (graph LSTMs) that can be easily extended to cross-sentence n-ary relation extraction. The graph formulation provides a unified way of exploring different LSTM approaches and incorporating various intra-sentential and inter-sentential dependencies, such as sequential, syntactic, and discourse relations. A robust contextual representation is learned for the entities, which serves as input to the relation classifier. This simplifies handling of relations with arbitrary arity, and enables multi-task learning with related relations. We evaluate this framework in two important precision medicine settings, demonstrating its effectiveness with both conventional supervised learning and distant supervision. Cross-sentence extraction produced larger knowledge bases, and multi-task learning significantly improved extraction accuracy. A thorough analysis of various LSTM approaches yielded useful insight into the impact of linguistic analysis on extraction accuracy.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00049
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,400
article
dunietz-etal-2017-automatically
Automatically Tagging Constructions of Causation and Their Slot-Fillers
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1009/
Dunietz, Jesse and Levin, Lori and Carbonell, Jaime
null
117--133
This paper explores extending shallow semantic parsing beyond lexical-unit triggers, using causal relations as a test case. Semantic parsing becomes difficult in the face of the wide variety of linguistic realizations that causation can take on. We therefore base our approach on the concept of constructions from the linguistic paradigm known as Construction Grammar (CxG). In CxG, a construction is a form/function pairing that can rely on arbitrary linguistic and semantic features. Rather than codifying all aspects of each construction`s form, as some attempts to employ CxG in NLP have done, we propose methods that offload that problem to machine learning. We describe two supervised approaches for tagging causal constructions and their arguments. Both approaches combine automatically induced pattern-matching rules with statistical classifiers that learn the subtler parameters of the constructions. Our results show that these approaches are promising: they significantly outperform na{\"i}ve baselines for both construction recognition and cause and effect head matches.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00050
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,401
article
bojanowski-etal-2017-enriching
Enriching Word Vectors with Subword Information
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1010/
Bojanowski, Piotr and Grave, Edouard and Joulin, Armand and Mikolov, Tomas
null
135--146
Continuous word representations, trained on large unlabeled corpora, are useful for many natural language processing tasks. Popular models that learn such representations ignore the morphology of words, by assigning a distinct vector to each word. This is a limitation, especially for languages with large vocabularies and many rare words. In this paper, we propose a new approach based on the skipgram model, where each word is represented as a bag of character n-grams. A vector representation is associated with each character n-gram, with words represented as the sum of these representations. Our method is fast, allowing models to be trained on large corpora quickly, and allows us to compute word representations for words that did not appear in the training data. We evaluate our word representations on nine different languages, both on word similarity and analogy tasks. By comparing to recently proposed morphological word representations, we show that our vectors achieve state-of-the-art performance on these tasks.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00051
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,402
article
wang-eisner-2017-fine
Fine-Grained Prediction of Syntactic Typology: Discovering Latent Structure with Supervised Learning
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1011/
Wang, Dingquan and Eisner, Jason
null
147--161
We show how to predict the basic word-order facts of a novel language given only a corpus of part-of-speech (POS) sequences. We predict how often direct objects follow their verbs, how often adjectives follow their nouns, and in general the directionalities of all dependency relations. Such typological properties could be helpful in grammar induction. While such a problem is usually regarded as unsupervised learning, our innovation is to treat it as supervised learning, using a large collection of realistic synthetic languages as training data. The supervised learner must identify surface features of a language`s POS sequence (hand-engineered or neural features) that correlate with the language`s deeper structure (latent trees). In the experiment, we show: 1) Given a small set of real languages, it helps to add many synthetic languages to the training data. 2) Our system is robust even when the POS sequences include noise. 3) Our system on this task outperforms a grammar induction baseline by a large margin.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00052
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,403
article
teng-zhang-2017-head
Head-Lexicalized Bidirectional Tree {LSTM}s
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1012/
Teng, Zhiyang and Zhang, Yue
null
163--177
Sequential LSTMs have been extended to model tree structures, giving competitive results for a number of tasks. Existing methods model constituent trees by bottom-up combinations of constituent nodes, making direct use of input word information only for leaf nodes. This is different from sequential LSTMs, which contain references to input words for each node. In this paper, we propose a method for automatic head-lexicalization for tree-structure LSTMs, propagating head words from leaf nodes to every constituent node. In addition, enabled by head lexicalization, we build a tree LSTM in the top-down direction, which corresponds to bidirectional sequential LSTMs in structure. Experiments show that both extensions give better representations of tree structures. Our final model gives the best results on the Stanford Sentiment Treebank and highly competitive results on the TREC question type classification task.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00053
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,404
article
fujii-etal-2017-nonparametric
Nonparametric {B}ayesian Semi-supervised Word Segmentation
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1013/
Fujii, Ryo and Domoto, Ryo and Mochihashi, Daichi
null
179--189
This paper presents a novel hybrid generative/discriminative model of word segmentation based on nonparametric Bayesian methods. Unlike ordinary discriminative word segmentation which relies only on labeled data, our semi-supervised model also leverages huge amounts of unlabeled text to automatically learn new {\textquotedblleft}words{\textquotedblright}, and further constrains them by using labeled data to segment non-standard texts such as those found in social networking services. Specifically, our hybrid model combines a discriminative classifier (CRF; Lafferty et al. (2001)) and unsupervised word segmentation (NPYLM; Mochihashi et al. (2009)), with a transparent exchange of information between these two model structures within the semi-supervised framework (JESS-CM; Suzuki and Isozaki (2008)). We confirmed that it can appropriately segment non-standard texts like those in Twitter and Weibo and has nearly state-of-the-art accuracy on standard datasets in Japanese, Chinese, and Thai.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00054
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,405
article
kim-etal-2017-joint
Joint Modeling of Topics, Citations, and Topical Authority in Academic Corpora
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1014/
Kim, Jooyeon and Kim, Dongwoo and Oh, Alice
null
191--204
Much of scientific progress stems from previously published findings, but searching through the vast sea of scientific publications is difficult. We often rely on metrics of scholarly authority to find the prominent authors but these authority indices do not differentiate authority based on research topics. We present Latent Topical-Authority Indexing (LTAI) for jointly modeling the topics, citations, and topical authority in a corpus of academic papers. Compared to previous models, LTAI differs in two main aspects. First, it explicitly models the generative process of the citations, rather than treating the citations as given. Second, it models each author`s influence on citations of a paper based on the topics of the cited papers, as well as the citing papers. We fit LTAI to four academic corpora: CORA, Arxiv Physics, PNAS, and Citeseer. We compare the performance of LTAI against various baselines, starting with latent Dirichlet allocation, to the more advanced models including the author-link topic model and the dynamic author citation topic model. The results show that LTAI achieves improved accuracy over other similar models when predicting words, citations and authors of publications.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00055
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,406
article
martins-etal-2017-pushing
Pushing the Limits of Translation Quality Estimation
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1015/
Martins, Andr{\'e} F. T. and Junczys-Dowmunt, Marcin and Kepler, Fabio N. and Astudillo, Ram{\'o}n and Hokamp, Chris and Grundkiewicz, Roman
null
205--218
Translation quality estimation is a task of growing importance in NLP, due to its potential to reduce post-editing human effort in disruptive ways. However, this potential is currently limited by the relatively low accuracy of existing systems. In this paper, we achieve remarkable improvements by exploiting synergies between the related tasks of word-level quality estimation and automatic post-editing. First, we stack a new, carefully engineered, neural model into a rich feature-based word-level quality estimation system. Then, we use the output of an automatic post-editing system as an extra feature, obtaining striking results on WMT16: a word-level FMULT1 score of 57.47{\%} (an absolute gain of +7.95{\%} over the current state of the art), and a Pearson correlation score of 65.56{\%} for sentence-level HTER prediction (an absolute gain of +13.36{\%}).
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00056
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,407
article
wang-etal-2017-winning
Winning on the Merits: The Joint Effects of Content and Style on Debate Outcomes
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1016/
Wang, Lu and Beauchamp, Nick and Shugars, Sarah and Qin, Kechen
null
219--232
Debate and deliberation play essential roles in politics and government, but most models presume that debates are won mainly via superior style or agenda control. Ideally, however, debates would be won on the merits, as a function of which side has the stronger arguments. We propose a predictive model of debate that estimates the effects of linguistic features and the latent persuasive strengths of different topics, as well as the interactions between the two. Using a dataset of 118 Oxford-style debates, our model`s combination of content (as latent topics) and style (as linguistic features) allows us to predict audience-adjudicated winners with 74{\%} accuracy, significantly outperforming linguistic features alone (66{\%}). Our model finds that winning sides employ stronger arguments, and allows us to identify the linguistic features associated with strong or weak arguments.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00057
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,408
article
dalvi-mishra-etal-2017-domain
Domain-Targeted, High Precision Knowledge Extraction
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1017/
Dalvi Mishra, Bhavana and Tandon, Niket and Clark, Peter
null
233--246
Our goal is to construct a domain-targeted, high precision knowledge base (KB), containing general (subject,predicate,object) statements about the world, in support of a downstream question-answering (QA) application. Despite recent advances in information extraction (IE) techniques, no suitable resource for our task already exists; existing resources are either too noisy, too named-entity centric, or too incomplete, and typically have not been constructed with a clear scope or purpose. To address these, we have created a domain-targeted, high precision knowledge extraction pipeline, leveraging Open IE, crowdsourcing, and a novel canonical schema learning algorithm (called CASI), that produces high precision knowledge targeted to a particular domain - in our case, elementary science. To measure the KB`s coverage of the target domain`s knowledge (its {\textquotedblleft}comprehensiveness{\textquotedblright} with respect to science) we measure recall with respect to an independent corpus of domain text, and show that our pipeline produces output with over 80{\%} precision and 23{\%} recall with respect to that target, a substantially higher coverage of tuple-expressible science knowledge than other comparable resources. We have made the KB publicly available.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00058
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,409
article
berend-2017-sparse
Sparse Coding of Neural Word Embeddings for Multilingual Sequence Labeling
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1018/
Berend, G{\'a}bor
null
247--261
In this paper we propose and carefully evaluate a sequence labeling framework which solely utilizes sparse indicator features derived from dense distributed word representations. The proposed model obtains (near) state-of-the-art performance for both part-of-speech tagging and named entity recognition for a variety of languages. Our model relies only on a few thousand sparse coding-derived features, without applying any modification of the word representations employed for the different tasks. The proposed model has favorable generalization properties as it retains over 89.8{\%} of its average POS tagging accuracy when trained on 1.2{\%} of the total available training data, i.e. 150 sentences per language.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00059
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,410
article
vieira-eisner-2017-learning
Learning to Prune: Exploring the Frontier of Fast and Accurate Parsing
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1019/
Vieira, Tim and Eisner, Jason
null
263--278
Pruning hypotheses during dynamic programming is commonly used to speed up inference in settings such as parsing. Unlike prior work, we train a pruning policy under an objective that measures end-to-end performance: we search for a fast and accurate policy. This poses a difficult machine learning problem, which we tackle with the LOLS algorithm. LOLS training must continually compute the effects of changing pruning decisions: we show how to make this efficient in the constituency parsing setting, via dynamic programming and change propagation algorithms. We find that optimizing end-to-end performance in this way leads to a better Pareto frontier{---}i.e., parsers which are more accurate for a given runtime.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00060
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,411
article
rasooli-collins-2017-cross
Cross-Lingual Syntactic Transfer with Limited Resources
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1020/
Rasooli, Mohammad Sadegh and Collins, Michael
null
279--293
We describe a simple but effective method for cross-lingual syntactic transfer of dependency parsers, in the scenario where a large amount of translation data is not available. This method makes use of three steps: 1) a method for deriving cross-lingual word clusters, which can then be used in a multilingual parser; 2) a method for transferring lexical information from a target language to source language treebanks; 3) a method for integrating these steps with the density-driven annotation projection method of Rasooli and Collins (2015). Experiments show improvements over the state-of-the-art in several languages used in previous work, in a setting where the only source of translation data is the Bible, a considerably smaller corpus than the Europarl corpus used in previous work. Results using the Europarl corpus as a source of translation data show additional improvements over the results of Rasooli and Collins (2015). We conclude with results on 38 datasets from the Universal Dependencies corpora.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00061
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,412
article
yang-eisenstein-2017-overcoming
Overcoming Language Variation in Sentiment Analysis with Social Attention
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1021/
Yang, Yi and Eisenstein, Jacob
null
295--307
Variation in language is ubiquitous, particularly in newer forms of writing such as social media. Fortunately, variation is not random; it is often linked to social properties of the author. In this paper, we show how to exploit social networks to make sentiment analysis more robust to social language variation. The key idea is linguistic homophily: the tendency of socially linked individuals to use language in similar ways. We formalize this idea in a novel attention-based neural network architecture, in which attention is divided among several basis models, depending on the author`s position in the social network. This has the effect of smoothing the classification function across the social network, and makes it possible to induce personalized classifiers even for authors for whom there is no labeled data or demographic metadata. This model significantly improves the accuracies of sentiment analysis on Twitter and on review data.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00062
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,413
article
mrksic-etal-2017-semantic
Semantic Specialization of Distributional Word Vector Spaces using Monolingual and Cross-Lingual Constraints
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1022/
Mrk{\v{s}}i{\'c}, Nikola and Vuli{\'c}, Ivan and {\'O} S{\'e}aghdha, Diarmuid and Leviant, Ira and Reichart, Roi and Ga{\v{s}}i{\'c}, Milica and Korhonen, Anna and Young, Steve
null
309--324
We present Attract-Repel, an algorithm for improving the semantic quality of word vectors by injecting constraints extracted from lexical resources. Attract-Repel facilitates the use of constraints from mono- and cross-lingual resources, yielding semantically specialized cross-lingual vector spaces. Our evaluation shows that the method can make use of existing cross-lingual lexicons to construct high-quality vector spaces for a plethora of different languages, facilitating semantic transfer from high- to lower-resource ones. The effectiveness of our approach is demonstrated with state-of-the-art results on semantic similarity datasets in six languages. We next show that Attract-Repel-specialized vectors boost performance in the downstream task of dialogue state tracking (DST) across multiple languages. Finally, we show that cross-lingual vector spaces produced by our algorithm facilitate the training of multilingual DST models, which brings further performance improvements.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00063
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,414
article
monroe-etal-2017-colors
Colors in Context: A Pragmatic Neural Model for Grounded Language Understanding
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1023/
Monroe, Will and Hawkins, Robert X.D. and Goodman, Noah D. and Potts, Christopher
null
325--338
We present a model of pragmatic referring expression interpretation in a grounded communication task (identifying colors from descriptions) that draws upon predictions from two recurrent neural network classifiers, a speaker and a listener, unified by a recursive pragmatic reasoning framework. Experiments show that this combined pragmatic model interprets color descriptions more accurately than the classifiers from which it is built, and that much of this improvement results from combining the speaker and listener perspectives. We observe that pragmatic reasoning helps primarily in the hardest cases: when the model must distinguish very similar colors, or when few utterances adequately express the target color. Our findings make use of a newly-collected corpus of human utterances in color reference games, which exhibit a variety of pragmatic behaviors. We also show that the embedded speaker model reproduces many of these pragmatic behaviors.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00064
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,415
article
johnson-etal-2017-googles
{G}oogle`s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1024/
Johnson, Melvin and Schuster, Mike and Le, Quoc V. and Krikun, Maxim and Wu, Yonghui and Chen, Zhifeng and Thorat, Nikhil and Vi{\'e}gas, Fernanda and Wattenberg, Martin and Corrado, Greg and Hughes, Macduff and Dean, Jeffrey
null
339--351
We propose a simple solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages. Our solution requires no changes to the model architecture from a standard NMT system but instead introduces an artificial token at the beginning of the input sentence to specify the required target language. Using a shared wordpiece vocabulary, our approach enables Multilingual NMT systems using a single model. On the WMT`14 benchmarks, a single multilingual model achieves comparable performance for English{\textrightarrow}French and surpasses state-of-the-art results for English{\textrightarrow}German. Similarly, a single multilingual model surpasses state-of-the-art results for French{\textrightarrow}English and German{\textrightarrow}English on WMT`14 and WMT`15 benchmarks, respectively. On production corpora, multilingual models of up to twelve language pairs allow for better translation of many individual pairs. Our models can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation is possible for neural translation. Finally, we show analyses that hint at a universal interlingua representation in our models and also show some interesting examples when mixing languages.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00065
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,416
article
luo-etal-2017-unsupervised
Unsupervised Learning of Morphological Forests
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1025/
Luo, Jiaming and Narasimhan, Karthik and Barzilay, Regina
null
353--364
This paper focuses on unsupervised modeling of morphological families, collectively comprising a forest over the language vocabulary. This formulation enables us to capture edge-wise properties reflecting single-step morphological derivations, along with global distributional properties of the entire forest. These global properties constrain the size of the affix set and encourage formation of tight morphological families. The resulting objective is solved using Integer Linear Programming (ILP) paired with contrastive estimation. We train the model by alternating between optimizing the local log-linear model and the global ILP objective. We evaluate our system on three tasks: root detection, clustering of morphological families, and segmentation. Our experiments demonstrate that our model yields consistent gains in all three tasks compared with the best published results.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00066
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,417
article
lee-etal-2017-fully
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1026/
Lee, Jason and Cho, Kyunghyun and Hofmann, Thomas
null
365--378
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT`15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of the BLEU score and human judgment.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00067
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,418
article
zhang-etal-2017-ordinal
Ordinal Common-sense Inference
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1027/
Zhang, Sheng and Rudinger, Rachel and Duh, Kevin and Van Durme, Benjamin
null
379--395
Humans have the capacity to draw common-sense inferences from natural language: various things that are likely but not certain to hold based on established discourse, and are rarely stated explicitly. We propose an evaluation of automated common-sense inference based on an extension of recognizing textual entailment: predicting ordinal human responses on the subjective likelihood of an inference holding in a given context. We describe a framework for extracting common-sense knowledge from corpora, which is then used to construct a dataset for this ordinal entailment task. We train a neural sequence-to-sequence model on this dataset, which we use to score and generate possible inferences. Further, we annotate subsets of previously established datasets via our ordinal annotation protocol in order to then analyze the distinctions between these and what we have constructed.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00068
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,419
article
yamada-etal-2017-learning
Learning Distributed Representations of Texts and Entities from Knowledge Base
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1028/
Yamada, Ikuya and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu
null
397--411
We describe a neural network model that jointly learns distributed representations of texts and knowledge base (KB) entities. Given a text in the KB, we train our proposed model to predict entities that are relevant to the text. Our model is designed to be generic with the ability to address various NLP tasks with ease. We train the model using a large corpus of texts and their entity annotations extracted from Wikipedia. We evaluated the model on three important NLP tasks (i.e., sentence textual similarity, entity linking, and factoid question answering) involving both unsupervised and supervised settings. As a result, we achieved state-of-the-art results on all three of these tasks. Our code and trained models are publicly available for further academic research.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00069
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,420
article
liu-zhang-2017-order
In-Order Transition-based Constituent Parsing
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1029/
Liu, Jiangming and Zhang, Yue
null
413--424
Both bottom-up and top-down strategies have been used for neural transition-based constituent parsing. The parsing strategies differ in terms of the order in which they recognize productions in the derivation tree, where bottom-up strategies and top-down strategies take post-order and pre-order traversal over trees, respectively. Bottom-up parsers benefit from rich features from readily built partial parses, but lack lookahead guidance in the parsing process; top-down parsers benefit from non-local guidance for local decisions, but rely on a strong encoder over the input to predict a constituent hierarchy before its construction. To mitigate both issues, we propose a novel parsing system based on in-order traversal over syntactic trees, designing a set of transition actions to find a compromise between bottom-up constituent information and top-down lookahead information. Based on stack-LSTM, our psycholinguistically motivated constituent parsing system achieves 91.8 F1 on the WSJ benchmark. Furthermore, the system achieves 93.6 F1 with supervised reranking and 94.2 F1 with semi-supervised reranking, which are the best results on the WSJ benchmark.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00070
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,421
article
richter-etal-2017-evaluating
Evaluating Low-Level Speech Features Against Human Perceptual Data
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1030/
Richter, Caitlin and Feldman, Naomi H. and Salgado, Harini and Jansen, Aren
null
425--440
We introduce a method for measuring the correspondence between low-level speech features and human perception, using a cognitive model of speech perception implemented directly on speech recordings. We evaluate two speaker normalization techniques using this method and find that in both cases, speech features that are normalized across speakers predict human data better than unnormalized speech features, consistent with previous research. Results further reveal differences across normalization methods in how well each predicts human data. This work provides a new framework for evaluating low-level representations of speech on their match to human perception, and lays the groundwork for creating more ecologically valid models of speech perception.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00071
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,422
article
kummerfeld-klein-2017-parsing
Parsing with Traces: An {O}($n^4$) Algorithm and a Structural Representation
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1031/
Kummerfeld, Jonathan K. and Klein, Dan
null
441--454
General treebank analyses are graph structured, but parsers are typically restricted to tree structures for efficiency and modeling reasons. We propose a new representation and algorithm for a class of graph structures that is flexible enough to cover almost all treebank structures, while still admitting efficient learning and inference. In particular, we consider directed, acyclic, one-endpoint-crossing graph structures, which cover most long-distance dislocation, shared argumentation, and similar tree-violating linguistic phenomena. We describe how to convert phrase structure parses, including traces, to our new representation in a reversible manner. Our dynamic program uniquely decomposes structures, is sound and complete, and covers 97.3{\%} of the Penn English Treebank. We also implement a proof-of-concept parser that recovers a range of null elements and trace types.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00072
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,423
article
brooke-etal-2017-unsupervised
Unsupervised Acquisition of Comprehensive Multiword Lexicons using Competition in an n-gram Lattice
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1032/
Brooke, Julian and {\v{S}}najder, Jan and Baldwin, Timothy
null
455--470
We present a new model for acquiring comprehensive multiword lexicons from large corpora based on competition among n-gram candidates. In contrast to the standard approach of simple ranking by association measure, in our model n-grams are arranged in a lattice structure based on subsumption and overlap relationships, with nodes inhibiting other nodes in their vicinity when they are selected as a lexical item. We show how the configuration of such a lattice can be optimized tractably, and demonstrate using annotations of sampled n-grams that our method consistently outperforms alternatives by at least 0.05 F-score across several corpora and languages.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00073
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,424
article
dror-etal-2017-replicability
Replicability Analysis for Natural Language Processing: Testing Significance with Multiple Datasets
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1033/
Dror, Rotem and Baumer, Gili and Bogomolov, Marina and Reichart, Roi
null
471--486
With the ever-growing amount of textual data from a large variety of languages, domains, and genres, it has become standard to evaluate NLP algorithms on multiple datasets in order to ensure consistent performance across heterogeneous setups. However, such multiple comparisons pose significant challenges to traditional statistical analysis methods in NLP and can lead to erroneous conclusions. In this paper we propose a Replicability Analysis framework for a statistically sound analysis of multiple comparisons between algorithms for NLP tasks. We discuss the theoretical advantages of this framework over the current, statistically unjustified, practice in the NLP literature, and demonstrate its empirical value across four applications: multi-domain dependency parsing, multilingual POS tagging, cross-domain sentiment classification and word similarity prediction.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00074
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,425
article
marie-fujita-2017-phrase
Phrase Table Induction Using In-Domain Monolingual Data for Domain Adaptation in Statistical Machine Translation
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1034/
Marie, Benjamin and Fujita, Atsushi
null
487--500
We present a new framework to induce an in-domain phrase table from in-domain monolingual data that can be used to adapt a general-domain statistical machine translation system to the targeted domain. Our method first compiles sets of phrases in source and target languages separately and generates candidate phrase pairs by taking the Cartesian product of the two phrase sets. It then computes inexpensive features for each candidate phrase pair and filters them using a supervised classifier in order to induce an in-domain phrase table. We experimented on the language pair English{--}French, both translation directions, in two domains and obtained consistently better results than a strong baseline system that uses an in-domain bilingual lexicon. We also conducted an error analysis that showed the induced phrase tables proposed useful translations, especially for words and phrases unseen in the parallel data used to train the general-domain baseline system.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00075
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,426
article
mansouri-bigvand-etal-2017-joint
Joint Prediction of Word Alignment with Alignment Types
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1035/
Mansouri Bigvand, Anahita and Bu, Te and Sarkar, Anoop
null
501--514
Current word alignment models do not distinguish between different types of alignment links. In this paper, we provide a new probabilistic model for word alignment where word alignments are associated with linguistically motivated alignment types. We propose a novel task of joint prediction of word alignment and alignment types and propose novel semi-supervised learning algorithms for this task. We also solve a sub-task of predicting the alignment type given an aligned word pair. In our experimental results, the generative models we introduce to model alignment types significantly outperform the models without alignment types.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00076
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,427
article
zhang-etal-2017-aspect
Aspect-augmented Adversarial Networks for Domain Adaptation
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1036/
Zhang, Yuan and Barzilay, Regina and Jaakkola, Tommi
null
515--528
We introduce a neural method for transfer learning between two (source and target) classification tasks or aspects over the same domain. Rather than training on target labels, we use a few keywords pertaining to source and target aspects indicating sentence relevance instead of document class labels. Documents are encoded by learning to embed and softly select relevant sentences in an aspect-dependent manner. A shared classifier is trained on the source encoded documents and labels, and applied to target encoded documents. We ensure transfer through aspect-adversarial training so that encoded documents are, as sets, aspect-invariant. Experimental results demonstrate that our approach outperforms different baselines and model variants on two datasets, yielding an improvement of 27{\%} on a pathology dataset and 5{\%} on a review dataset.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00077
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,428
article
gallagher-etal-2017-anchored
Anchored Correlation Explanation: Topic Modeling with Minimal Domain Knowledge
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2017
Cambridge, MA
MIT Press
https://aclanthology.org/Q17-1037/
Gallagher, Ryan J. and Reing, Kyle and Kale, David and Ver Steeg, Greg
null
529--542
While generative models such as Latent Dirichlet Allocation (LDA) have proven fruitful in topic modeling, they often require detailed assumptions and careful specification of hyperparameters. Such model complexity issues only compound when trying to generalize generative models to incorporate human input. We introduce Correlation Explanation (CorEx), an alternative approach to topic modeling that does not assume an underlying generative model, and instead learns maximally informative topics through an information-theoretic framework. This framework naturally generalizes to hierarchical and semi-supervised extensions with no additional modeling assumptions. In particular, word-level domain knowledge can be flexibly incorporated within CorEx through anchor words, allowing topic separability and representation to be promoted with minimal human intervention. Across a variety of datasets, metrics, and experiments, we demonstrate that CorEx produces topics that are comparable in quality to those produced by unsupervised and semi-supervised variants of LDA.
Transactions of the Association for Computational Linguistics
5
10.1162/tacl_a_00078
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,429
inproceedings
liu-etal-2017-adversarial
Adversarial Multi-task Learning for Text Classification
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1001/
Liu, Pengfei and Qiu, Xipeng and Huang, Xuanjing
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1--10
Neural network models have shown promise for multi-task learning, which focuses on learning shared layers to extract common, task-invariant features. However, in most existing approaches, the extracted shared features are prone to contamination by task-specific features or by noise brought in by other tasks. In this paper, we propose an adversarial multi-task learning framework that prevents the shared and private latent feature spaces from interfering with each other. We conduct extensive experiments on 16 different text classification tasks, which demonstrate the benefits of our approach. Besides, we show that the shared knowledge learned by our proposed model can be regarded as off-the-shelf knowledge and easily transferred to new tasks. The datasets of all 16 tasks are publicly available at \url{http://nlp.fudan.edu.cn/data/}.
null
null
10.18653/v1/P17-1001
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,431
inproceedings
eger-etal-2017-neural
Neural End-to-End Learning for Computational Argumentation Mining
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1002/
Eger, Steffen and Daxenberger, Johannes and Gurevych, Iryna
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
11--22
We investigate neural techniques for end-to-end computational argumentation mining (AM). We frame AM both as a token-based dependency parsing and as a token-based sequence tagging problem, including a multi-task learning setup. Contrary to models that operate on the argument component level, we find that framing AM as dependency parsing leads to subpar performance results. In contrast, less complex (local) tagging models based on BiLSTMs perform robustly across classification scenarios, being able to catch long-range dependencies inherent to the AM problem. Moreover, we find that jointly learning {\textquoteleft}natural' subtasks, in a multi-task learning setup, improves performance.
null
null
10.18653/v1/P17-1002
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,432
inproceedings
liang-etal-2017-neural
Neural Symbolic Machines: Learning Semantic Parsers on {F}reebase with Weak Supervision
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1003/
Liang, Chen and Berant, Jonathan and Le, Quoc and Forbus, Kenneth D. and Lao, Ni
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
23--33
Harnessing the statistical power of neural networks to perform language understanding and symbolic reasoning is difficult when it requires executing efficient discrete operations against a large knowledge-base. In this work, we introduce a Neural Symbolic Machine, which contains (a) a neural {\textquotedblleft}programmer{\textquotedblright}, i.e., a sequence-to-sequence model that maps language utterances to programs and utilizes a key-variable memory to handle compositionality, and (b) a symbolic {\textquotedblleft}computer{\textquotedblright}, i.e., a Lisp interpreter that performs program execution and helps find good programs by pruning the search space. We apply REINFORCE to directly optimize the task reward of this structured prediction problem. To train with weak supervision and improve the stability of REINFORCE, we augment it with an iterative maximum-likelihood training process. NSM outperforms the state-of-the-art on the WebQuestionsSP dataset when trained from question-answer pairs only, without requiring any feature engineering or domain-specific knowledge.
null
null
10.18653/v1/P17-1003
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,433
inproceedings
lin-etal-2017-neural
Neural Relation Extraction with Multi-lingual Attention
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1004/
Lin, Yankai and Liu, Zhiyuan and Sun, Maosong
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
34--43
Relation extraction has been widely used for finding unknown relational facts from plain text. Most existing methods focus on exploiting mono-lingual data for relation extraction, ignoring massive information from the texts in various languages. To address this issue, we introduce a multi-lingual neural relation extraction framework, which employs mono-lingual attention to utilize the information within mono-lingual texts and further proposes cross-lingual attention to consider the information consistency and complementarity among cross-lingual texts. Experimental results on real-world datasets show that our model can take advantage of multi-lingual texts and consistently achieve significant improvements on relation extraction as compared with baselines.
null
null
10.18653/v1/P17-1004
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,434
inproceedings
cheng-etal-2017-learning
Learning Structured Natural Language Representations for Semantic Parsing
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1005/
Cheng, Jianpeng and Reddy, Siva and Saraswat, Vijay and Lapata, Mirella
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
44--55
We introduce a neural semantic parser which is interpretable and scalable. Our model converts natural language utterances to intermediate, domain-general natural language representations in the form of predicate-argument structures, which are induced with a transition system and subsequently mapped to target domains. The semantic parser is trained end-to-end using annotated logical forms or their denotations. We achieve the state of the art on SPADES and GRAPHQUESTIONS and obtain competitive results on GEOQUERY and WEBQUESTIONS. The induced predicate-argument structures shed light on the types of representations useful for semantic parsing and how these are different from linguistically motivated ones.
null
null
10.18653/v1/P17-1005
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,435
inproceedings
vulic-etal-2017-morph
Morph-fitting: Fine-Tuning Word Vector Spaces with Simple Language-Specific Rules
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1006/
Vuli{\'c}, Ivan and Mrk{\v{s}}i{\'c}, Nikola and Reichart, Roi and {\'O} S{\'e}aghdha, Diarmuid and Young, Steve and Korhonen, Anna
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
56--68
Morphologically rich languages accentuate two properties of distributional vector space models: 1) the difficulty of inducing accurate representations for low-frequency word forms; and 2) insensitivity to distinct lexical relations that have similar distributional signatures. These effects are detrimental for language understanding systems, which may infer that {\textquoteleft}inexpensive' is a rephrasing for {\textquoteleft}expensive' or may not associate {\textquoteleft}acquire' with {\textquoteleft}acquires'. In this work, we propose a novel morph-fitting procedure which moves past the use of curated semantic lexicons for improving distributional vector spaces. Instead, our method injects morphological constraints generated using simple language-specific rules, pulling inflectional forms of the same word close together and pushing derivational antonyms far apart. In intrinsic evaluation over four languages, we show that our approach: 1) improves low-frequency word estimates; and 2) boosts the semantic quality of the entire word vector collection. Finally, we show that morph-fitted vectors yield large gains in the downstream task of dialogue state tracking, highlighting the importance of morphology for tackling long-tail phenomena in language understanding tasks.
null
null
10.18653/v1/P17-1006
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,436
inproceedings
gittens-etal-2017-skip
Skip-Gram {\ensuremath{-}} {Z}ipf + Uniform = Vector Additivity
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1007/
Gittens, Alex and Achlioptas, Dimitris and Mahoney, Michael W.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
69--76
In recent years word-embedding models have gained great popularity due to their remarkable performance on several tasks, including word analogy questions and caption generation. An unexpected {\textquotedblleft}side-effect{\textquotedblright} of such models is that their vectors often exhibit compositionality, i.e., \textit{adding} two word-vectors results in a vector that is only a small angle away from the vector of a word representing the semantic composite of the original words, e.g., {\textquotedblleft}man{\textquotedblright} + {\textquotedblleft}royal{\textquotedblright} = {\textquotedblleft}king{\textquotedblright}. This work provides a theoretical justification for the presence of additive compositionality in word vectors learned using the Skip-Gram model. In particular, it shows that additive compositionality holds in an even stricter sense (small distance rather than small angle) under certain assumptions on the process generating the corpus. As a corollary, it explains the success of vector calculus in solving word analogies. When these assumptions do not hold, this work describes the correct non-linear composition operator. Finally, this work establishes a connection between the Skip-Gram model and the Sufficient Dimensionality Reduction (SDR) framework of Globerson and Tishby: the parameters of SDR models can be obtained from those of Skip-Gram models simply by adding information on symbol frequencies. This shows that Skip-Gram embeddings are optimal in the sense of Globerson and Tishby and, further, implies that the heuristics commonly used to approximately fit Skip-Gram models can be used to fit SDR models.
null
null
10.18653/v1/P17-1007
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,437
inproceedings
abend-rappoport-2017-state
The State of the Art in Semantic Representation
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1008/
Abend, Omri and Rappoport, Ari
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
77--89
Semantic representation is receiving growing attention in NLP in the past few years, and many proposals for semantic schemes (e.g., AMR, UCCA, GMB, UDS) have been put forth. Yet, little has been done to assess the achievements and the shortcomings of these new contenders, compare them with syntactic schemes, and clarify the general goals of research on semantic representation. We address these gaps by critically surveying the state of the art in the field.
null
null
10.18653/v1/P17-1008
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,438
inproceedings
lu-ng-2017-joint
Joint Learning for Event Coreference Resolution
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1009/
Lu, Jing and Ng, Vincent
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
90--101
While joint models have been developed for many NLP tasks, the vast majority of event coreference resolvers, including the top-performing resolvers competing in the recent TAC KBP 2016 Event Nugget Detection and Coreference task, are pipeline-based, where the propagation of errors from the trigger detection component to the event coreference component is a major performance limiting factor. To address this problem, we propose a model for jointly learning event coreference, trigger detection, and event anaphoricity. Our joint model is novel in its choice of tasks and its features for capturing cross-task interactions. To our knowledge, this is the first attempt to train a mention-ranking model and employ event anaphoricity for event coreference. Our model achieves the best results to date on the KBP 2016 English and Chinese datasets.
null
null
10.18653/v1/P17-1009
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,439
inproceedings
liu-etal-2017-generating
Generating and Exploiting Large-scale Pseudo Training Data for Zero Pronoun Resolution
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1010/
Liu, Ting and Cui, Yiming and Yin, Qingyu and Zhang, Wei-Nan and Wang, Shijin and Hu, Guoping
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
102--111
Most existing approaches for zero pronoun resolution rely heavily on annotated data, which is often released by shared task organizers. Therefore, the lack of annotated data becomes a major obstacle to progress on the zero pronoun resolution task. Also, it is expensive to spend manpower on labeling the data for better performance. To alleviate the problem above, in this paper, we propose a simple but novel approach to automatically generate large-scale pseudo training data for zero pronoun resolution. Furthermore, we successfully transfer the cloze-style reading comprehension neural network model into the zero pronoun resolution task and propose a two-step training mechanism to overcome the gap between the pseudo training data and the real one. Experimental results show that the proposed approach significantly outperforms the state-of-the-art systems with an absolute improvement of 3.1{\%} F-score on OntoNotes 5.0 data.
null
null
10.18653/v1/P17-1010
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,440
inproceedings
song-etal-2017-discourse
Discourse Mode Identification in Essays
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1011/
Song, Wei and Wang, Dong and Fu, Ruiji and Liu, Lizhen and Liu, Ting and Hu, Guoping
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
112--122
Discourse modes play an important role in writing composition and evaluation. This paper presents a study on the manual and automatic identification of narration, exposition, description, argument and emotion-expressing sentences in narrative essays. We annotate a corpus to study the characteristics of discourse modes and describe a neural sequence labeling model for identification. Evaluation results show that discourse modes can be identified automatically with an average F1-score of 0.7. We further demonstrate that discourse modes can be used as features that improve automatic essay scoring (AES). The impacts of discourse modes on AES are also discussed.
null
null
10.18653/v1/P17-1011
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,441
inproceedings
gehring-etal-2017-convolutional
A Convolutional Encoder Model for Neural Machine Translation
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1012/
Gehring, Jonas and Auli, Michael and Grangier, David and Dauphin, Yann
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
123--135
The prevalent approach to neural machine translation relies on bi-directional LSTMs to encode the source sentence. We present a faster and simpler architecture based on a succession of convolutional layers. This allows the source sentence to be encoded simultaneously, in contrast to recurrent networks, for which computation is constrained by temporal dependencies. On WMT'16 English-Romanian translation we achieve competitive accuracy to the state-of-the-art, and on WMT'15 English-German we outperform several recently published results. Our models obtain almost the same accuracy as a very deep LSTM setup on WMT'14 English-French translation. We speed up CPU decoding by more than two times at the same or higher accuracy than a strong bi-directional LSTM.
null
null
10.18653/v1/P17-1012
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,442
inproceedings
wang-etal-2017-deep-neural
Deep Neural Machine Translation with Linear Associative Unit
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1013/
Wang, Mingxuan and Lu, Zhengdong and Zhou, Jie and Liu, Qun
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
136--145
Deep Neural Networks (DNNs) have provably enhanced the state-of-the-art Neural Machine Translation (NMT) with their capability of modeling complex functions and capturing complex linguistic structures. However, NMT with a deep architecture in its encoder or decoder RNNs often suffers from severe gradient diffusion due to the non-linear recurrent activations, which often makes the optimization much more difficult. To address this problem we propose a novel linear associative unit (LAU) to reduce the gradient propagation path inside the recurrent unit. Different from conventional approaches (the LSTM unit and GRU), LAU uses linear associative connections between the input and output of the recurrent unit, which allows unimpeded information flow through both space and time. The model is quite simple, but it is surprisingly effective. Our empirical study on Chinese-English translation shows that our model with proper configuration can improve by 11.7 BLEU upon Groundhog and the best reported results in the same setting. On the WMT14 English-German task and the larger WMT14 English-French task, our model achieves comparable results with the state-of-the-art.
null
null
10.18653/v1/P17-1013
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,443
inproceedings
konstas-etal-2017-neural
Neural {AMR}: Sequence-to-Sequence Models for Parsing and Generation
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1014/
Konstas, Ioannis and Iyer, Srinivasan and Yatskar, Mark and Choi, Yejin and Zettlemoyer, Luke
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
146--157
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the non-sequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
null
null
10.18653/v1/P17-1014
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,444
inproceedings
ling-etal-2017-program
Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1015/
Ling, Wang and Yogatama, Dani and Dyer, Chris and Blunsom, Phil
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
158--167
Solving algebraic word problems requires executing a series of arithmetic operations{---}a program{---}to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.
null
null
10.18653/v1/P17-1015
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,445
inproceedings
hopkins-kiela-2017-automatically
Automatically Generating Rhythmic Verse with Neural Networks
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1016/
Hopkins, Jack and Kiela, Douwe
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
168--178
We propose two novel methodologies for the automatic generation of rhythmic poetry in a variety of forms. The first approach uses a neural language model trained on a phonetic encoding to learn an implicit representation of both the form and content of English poetry. This model can effectively learn common poetic devices such as rhyme, rhythm and alliteration. The second approach considers poetry generation as a constraint satisfaction problem where a generative neural language model is tasked with learning a representation of content, and a discriminative weighted finite state machine constrains it on the basis of form. By manipulating the constraints of the latter model, we can generate coherent poetry with arbitrary forms and themes. A large-scale extrinsic evaluation demonstrated that participants consider machine-generated poems to be written by humans 54{\%} of the time. In addition, participants rated a machine-generated poem to be the best amongst all evaluated.
null
null
10.18653/v1/P17-1016
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,446
inproceedings
gardent-etal-2017-creating
Creating Training Corpora for {NLG} Micro-Planners
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1017/
Gardent, Claire and Shimorina, Anastasia and Narayan, Shashi and Perez-Beltrachini, Laura
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
179--188
In this paper, we present a novel framework for semi-automatically creating linguistically challenging micro-planning data-to-text corpora from existing Knowledge Bases. Because our method pairs data of varying size and shape with texts ranging from simple clauses to short texts, a dataset created using this framework provides a challenging benchmark for microplanning. Another feature of this framework is that it can be applied to any large scale knowledge base and can therefore be used to train and learn KB verbalisers. We apply our framework to DBpedia data and compare the resulting dataset with Wen et al. (2016)'s. We show that while Wen et al.'s dataset is more than twice as large as ours, it is less diverse both in terms of input and in terms of text. We thus propose our corpus generation framework as a novel method for creating challenging data sets from which NLG models can be learned which are capable of handling the complex interactions occurring during micro-planning between lexicalisation, aggregation, surface realisation, referring expression generation and sentence segmentation. To encourage researchers to take up this challenge, we made available a dataset of 21,855 data/text pairs created using this framework in the context of the WebNLG shared task.
null
null
10.18653/v1/P17-1017
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,447
inproceedings
wang-etal-2017-gated
Gated Self-Matching Networks for Reading Comprehension and Question Answering
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1018/
Wang, Wenhui and Yang, Nan and Wei, Furu and Chang, Baobao and Zhou, Ming
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
189--198
In this paper, we present the gated self-matching networks for reading comprehension style question answering, which aims to answer questions from a given passage. We first match the question and passage with gated attention-based recurrent networks to obtain the question-aware passage representation. Then we propose a self-matching attention mechanism to refine the representation by matching the passage against itself, which effectively encodes information from the whole passage. We finally employ the pointer networks to locate the positions of answers from the passages. We conduct extensive experiments on the SQuAD dataset. The single model achieves 71.3{\%} on the evaluation metric of exact match on the hidden test set, while the ensemble model further boosts the results to 75.9{\%}. At the time of submission of the paper, our model holds the first place on the SQuAD leaderboard for both single and ensemble model.
null
null
10.18653/v1/P17-1018
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,448
inproceedings
he-etal-2017-generating
Generating Natural Answers by Incorporating Copying and Retrieving Mechanisms in Sequence-to-Sequence Learning
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1019/
He, Shizhu and Liu, Cao and Liu, Kang and Zhao, Jun
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
199--208
Generating answers with natural language sentences is very important in real-world question answering systems, which need to obtain a correct answer as well as a coherent natural response. In this paper, we propose an end-to-end question answering system called COREQA in sequence-to-sequence learning, which incorporates copying and retrieving mechanisms to generate natural answers within an encoder-decoder framework. Specifically, in COREQA, the semantic units (words, phrases and entities) in a natural answer are dynamically predicted from the vocabulary, copied from the given question and/or retrieved from the corresponding knowledge base jointly. Our empirical study on both synthetic and real-world datasets demonstrates the efficiency of COREQA, which is able to generate correct, coherent and natural answers for knowledge-inquired questions.
null
null
10.18653/v1/P17-1019
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,449
inproceedings
choi-etal-2017-coarse
Coarse-to-Fine Question Answering for Long Documents
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1020/
Choi, Eunsol and Hewlett, Daniel and Uszkoreit, Jakob and Polosukhin, Illia and Lacoste, Alexandre and Berant, Jonathan
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
209--220
We present a framework for question answering that can efficiently scale to longer documents while maintaining or even improving performance of state-of-the-art models. While most successful approaches for reading comprehension rely on recurrent neural networks (RNNs), running them over long documents is prohibitively slow because it is difficult to parallelize over sequences. Inspired by how people first skim the document, identify relevant parts, and carefully read these parts to produce an answer, we combine a coarse, fast model for selecting relevant sentences and a more expensive RNN for producing the answer from those sentences. We treat sentence selection as a latent variable trained jointly from the answer only using reinforcement learning. Experiments demonstrate state-of-the-art performance on a challenging subset of the WikiReading dataset and on a new dataset, while speeding up the model by 3.5x-6.7x.
null
null
10.18653/v1/P17-1020
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,450
inproceedings
hao-etal-2017-end
An End-to-End Model for Question Answering over Knowledge Base with Cross-Attention Combining Global Knowledge
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1021/
Hao, Yanchao and Zhang, Yuanzhe and Liu, Kang and He, Shizhu and Liu, Zhanyi and Wu, Hua and Zhao, Jun
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
221--231
With the rapid growth of knowledge bases (KBs) on the web, how to take full advantage of them becomes increasingly important. Question answering over knowledge base (KB-QA) is one of the promising approaches to access the substantial knowledge. Meanwhile, as the neural network-based (NN-based) methods develop, NN-based KB-QA has already achieved impressive results. However, previous work did not put much emphasis on question representation, and the question is converted into a fixed vector regardless of its candidate answers. This simple representation strategy makes it difficult to express the proper information in the question. Hence, we present an end-to-end neural network model to represent the questions and their corresponding scores dynamically according to the various candidate answer aspects via a cross-attention mechanism. In addition, we leverage the global knowledge inside the underlying KB, aiming at integrating the rich KB information into the representation of the answers. As a result, it could alleviate the out-of-vocabulary (OOV) problem, which helps the cross-attention model to represent the question more precisely. The experimental results on WebQuestions demonstrate the effectiveness of the proposed approach.
null
null
10.18653/v1/P17-1021
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,451
inproceedings
andreas-etal-2017-translating
Translating Neuralese
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1022/
Andreas, Jacob and Dragan, Anca and Klein, Dan
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
232--242
Several approaches have recently been proposed for learning decentralized deep multiagent policies that coordinate via a differentiable communication channel. While these policies are effective for many tasks, interpretation of their induced communication strategies has remained a challenge. Here we propose to interpret agents' messages by translating them. Unlike in typical machine translation problems, we have no parallel data to learn from. Instead we develop a translation model based on the insight that agent messages and natural language strings mean the same thing if they induce the same belief about the world in a listener. We present theoretical guarantees and empirical evidence that our approach preserves both the semantics and pragmatics of messages by ensuring that players communicating through a translation layer do not suffer a substantial loss in reward relative to players with a common language.
null
null
10.18653/v1/P17-1022
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,452
inproceedings
zarriess-schlangen-2017-obtaining
Obtaining referential word meanings from visual and distributional information: Experiments on object naming
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1023/
Zarrie{\ss}, Sina and Schlangen, David
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
243--254
We investigate object naming, which is an important sub-task of referring expression generation on real-world images. As opposed to mutually exclusive labels used in object recognition, object names are more flexible, subject to communicative preferences and semantically related to each other. Therefore, we investigate models of referential word meaning that link visual to lexical information which we assume to be given through distributional word embeddings. We present a model that learns individual predictors for object names that link visual and distributional aspects of word meaning during training. We show that this is particularly beneficial for zero-shot learning, as compared to projecting visual objects directly into the distributional space. In a standard object naming task, we find that different ways of combining lexical and visual information achieve very similar performance, though experiments on model combination suggest that they capture complementary aspects of referential meaning.
null
null
10.18653/v1/P17-1023
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,453
inproceedings
shekhar-etal-2017-foil
{FOIL} it! Find One mismatch between Image and Language caption
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1024/
Shekhar, Ravi and Pezzelle, Sandro and Klimovich, Yauhen and Herbelot, Aur{\'e}lie and Nabi, Moin and Sangineto, Enver and Bernardi, Raffaella
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
255--265
In this paper, we aim to understand whether current language and vision (LaVi) models truly grasp the interaction between the two modalities. To this end, we propose an extension of the MS-COCO dataset, FOIL-COCO, which associates images with both correct and {\textquoteleft}foil' captions, that is, descriptions of the image that are highly similar to the original ones, but contain one single mistake ({\textquoteleft}foil word'). We show that current LaVi models fall into the traps of this data and perform badly on three tasks: a) caption classification (correct vs. foil); b) foil word detection; c) foil word correction. Humans, in contrast, have near-perfect performance on those tasks. We demonstrate that merely utilising language cues is not enough to model FOIL-COCO and that it challenges the state-of-the-art by requiring a fine-grained understanding of the relation between text and image.
null
null
10.18653/v1/P17-1024
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,454