Dataset schema (38 columns, as reported by the dataset viewer):

    column               type            values / range
    entry_type           stringclasses   4 values
    citation_key         stringlengths   10 to 110 chars
    title                stringlengths   6 to 276 chars
    editor               stringclasses   723 values
    month                stringclasses   69 values
    year                 stringdate      1963-01-01 to 2022-01-01
    address              stringclasses   202 values
    publisher            stringclasses   41 values
    url                  stringlengths   34 to 62 chars
    author               stringlengths   6 to 2.07k chars
    booktitle            stringclasses   861 values
    pages                stringlengths   1 to 12 chars
    abstract             stringlengths   302 to 2.4k chars
    journal              stringclasses   5 values
    volume               stringclasses   24 values
    doi                  stringlengths   20 to 39 chars
    n                    stringclasses   3 values
    wer                  stringclasses   1 value
    uas                  null            (all null)
    language             stringclasses   3 values
    isbn                 stringclasses   34 values
    recall               null            (all null)
    number               stringclasses   8 values
    a                    null            (all null)
    b                    null            (all null)
    c                    null            (all null)
    k                    null            (all null)
    f1                   stringclasses   4 values
    r                    stringclasses   2 values
    mci                  stringclasses   1 value
    p                    stringclasses   2 values
    sd                   stringclasses   1 value
    female               stringclasses   0 values
    m                    stringclasses   0 values
    food                 stringclasses   1 value
    f                    stringclasses   1 value
    note                 stringclasses   20 values
    __index_level_0__    int64           22k to 106k
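The columns above map one-to-one onto standard BibTeX fields, so each row can be rendered back into a BibTeX entry by emitting the non-null bibliographic fields in order. A minimal sketch (field names taken from the schema above; the `row` dict is a hypothetical, abbreviated example, and sparse metric columns such as `wer` or `f1` are simply skipped):

```python
# Render one dataset row (a dict keyed by the schema's column names) as a
# BibTeX entry, skipping columns that are null or not standard BibTeX fields.
BIBTEX_FIELDS = [
    "title", "author", "editor", "booktitle", "month", "year",
    "address", "publisher", "url", "doi", "pages", "abstract", "note",
]

def row_to_bibtex(row: dict) -> str:
    lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
    for field in BIBTEX_FIELDS:
        value = row.get(field)
        if value is None:
            continue
        # `month` is conventionally an unquoted BibTeX macro (jul, aug, ...).
        rendered = value if field == "month" else f'"{value}"'
        lines.append(f"    {field} = {rendered},")
    lines.append("}")
    return "\n".join(lines)

# Hypothetical minimal row for illustration:
row = {
    "entry_type": "inproceedings",
    "citation_key": "zhang-etal-2017-flexible",
    "title": "Flexible and Creative {C}hinese Poetry Generation Using Neural Memory",
    "month": "jul",
    "year": "2017",
    "doi": "10.18653/v1/P17-1125",
    "journal": None,
}
print(row_to_bibtex(row))
```

The entries below follow this rendering, with each row's `__index_level_0__` kept as a trailing comment.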
@inproceedings{zhang-etal-2017-flexible,
    title = "Flexible and Creative {C}hinese Poetry Generation Using Neural Memory",
    author = "Zhang, Jiyuan and Feng, Yang and Wang, Dong and Wang, Yang and Abel, Andrew and Zhang, Shiyue and Zhang, Andi",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1125/",
    doi = "10.18653/v1/P17-1125",
    pages = "1364--1373",
    abstract = "It has been shown that Chinese poems can be successfully generated by sequence-to-sequence neural models, particularly with the attention mechanism. A potential problem of this approach, however, is that neural models can only learn abstract rules, while poem generation is a highly creative process that involves not only rules but also innovations for which pure statistical models are not appropriate in principle. This work proposes a memory augmented neural model for Chinese poem generation, where the neural model and the augmented memory work together to balance the requirements of linguistic accordance and aesthetic innovation, leading to innovative generations that are still rule-compliant. In addition, it is found that the memory mechanism provides interesting flexibility that can be used to generate poems with different styles.",
}
% __index_level_0__: 56,555
@inproceedings{murakami-etal-2017-learning,
    title = "Learning to Generate Market Comments from Stock Prices",
    author = "Murakami, Soichiro and Watanabe, Akihiko and Miyazawa, Akira and Goshima, Keiichi and Yanase, Toshihiko and Takamura, Hiroya and Miyao, Yusuke",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1126/",
    doi = "10.18653/v1/P17-1126",
    pages = "1374--1384",
    abstract = "This paper presents a novel encoder-decoder model for automatically generating market comments from stock prices. The model first encodes both short- and long-term series of stock prices so that it can mention short- and long-term changes in stock prices. In the decoding phase, our model can also generate a numerical value by selecting an appropriate arithmetic operation such as subtraction or rounding, and applying it to the input stock prices. Empirical experiments show that our best model generates market comments with fluency and informativeness approaching human-generated reference texts.",
}
% __index_level_0__: 56,556
@inproceedings{wang-etal-2017-syntax,
    title = "Can Syntax Help? Improving an {LSTM}-based Sentence Compression Model for New Domains",
    author = "Wang, Liangguo and Jiang, Jing and Chieu, Hai Leong and Ong, Chen Hui and Song, Dandan and Liao, Lejian",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1127/",
    doi = "10.18653/v1/P17-1127",
    pages = "1385--1393",
    abstract = "In this paper, we study how to improve the domain adaptability of a deletion-based Long Short-Term Memory (LSTM) neural network model for sentence compression. We hypothesize that syntactic information helps in making such models more robust across domains. We propose two major changes to the model: using explicit syntactic features and introducing syntactic constraints through Integer Linear Programming (ILP). Our evaluation shows that the proposed model works better than the original model as well as a traditional non-neural-network-based model in a cross-domain setting.",
}
% __index_level_0__: 56,557
@inproceedings{wang-etal-2017-transductive,
    title = "Transductive Non-linear Learning for {C}hinese Hypernym Prediction",
    author = "Wang, Chengyu and Yan, Junchi and Zhou, Aoying and He, Xiaofeng",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1128/",
    doi = "10.18653/v1/P17-1128",
    pages = "1394--1404",
    abstract = "Finding the correct hypernyms for entities is essential for taxonomy learning, fine-grained entity categorization, query understanding, etc. Due to the flexibility of the Chinese language, it is challenging to identify hypernyms in Chinese accurately. Rather than extracting hypernyms from texts, in this paper, we present a transductive learning approach to establish mappings from entities to hypernyms in the embedding space directly. It combines linear and non-linear embedding projection models, with the capacity of encoding arbitrary language-specific rules. Experiments on real-world datasets illustrate that our approach outperforms previous methods for Chinese hypernym prediction.",
}
% __index_level_0__: 56,558
@inproceedings{xie-xing-2017-constituent,
    title = "A Constituent-Centric Neural Architecture for Reading Comprehension",
    author = "Xie, Pengtao and Xing, Eric",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1129/",
    doi = "10.18653/v1/P17-1129",
    pages = "1405--1414",
    abstract = "Reading comprehension (RC), aiming to understand natural texts and answer questions therein, is a challenging task. In this paper, we study the RC problem on the Stanford Question Answering Dataset (SQuAD). Observing from the training set that most correct answers are centered around constituents in the parse tree, we design a constituent-centric neural architecture where the generation of candidate answers and their representation learning are both based on constituents and guided by the parse tree. Under this architecture, the search space of candidate answers can be greatly reduced without sacrificing the coverage of correct answers, and the syntactic, hierarchical and compositional structure among constituents can be well captured, which contributes to better representation learning of the candidate answers. On SQuAD, our method achieves state-of-the-art performance and the ablation study corroborates the effectiveness of individual modules.",
}
% __index_level_0__: 56,559
@inproceedings{xu-yang-2017-cross,
    title = "Cross-lingual Distillation for Text Classification",
    author = "Xu, Ruochen and Yang, Yiming",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1130/",
    doi = "10.18653/v1/P17-1130",
    pages = "1415--1425",
    abstract = "Cross-lingual text classification (CLTC) is the task of classifying documents written in different languages into the same taxonomy of categories. This paper presents a novel approach to CLTC that builds on model distillation, which adapts and extends a framework originally proposed for model compression. Using soft probabilistic predictions for the documents in a label-rich language as the (induced) supervisory labels in a parallel corpus of documents, we train classifiers successfully for new languages in which labeled training data are not available. An adversarial feature adaptation technique is also applied during the model training to reduce distribution mismatch. We conducted experiments on two benchmark CLTC datasets, treating English as the source language and German, French, Japanese and Chinese as the unlabeled target languages. The proposed approach achieved advantageous or comparable performance relative to other state-of-the-art methods.",
}
% __index_level_0__: 56,560
@inproceedings{perez-rosas-etal-2017-understanding,
    title = "Understanding and Predicting Empathic Behavior in Counseling Therapy",
    author = "P{\'e}rez-Rosas, Ver{\'o}nica and Mihalcea, Rada and Resnicow, Kenneth and Singh, Satinder and An, Lawrence",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1131/",
    doi = "10.18653/v1/P17-1131",
    pages = "1426--1435",
    abstract = "Counselor empathy is associated with better outcomes in psychology and behavioral counseling. In this paper, we explore several aspects pertaining to counseling interaction dynamics and their relation to counselor empathy during motivational interviewing encounters. Particularly, we analyze aspects such as participants' engagement, participants' verbal and nonverbal accommodation, as well as topics being discussed during the conversation, with the final goal of identifying linguistic and acoustic markers of counselor empathy. We also show how we can use these findings alongside other raw linguistic and acoustic features to build accurate counselor empathy classifiers with accuracies of up to 80{\%}.",
}
% __index_level_0__: 56,561
@inproceedings{yang-mitchell-2017-leveraging,
    title = "Leveraging Knowledge Bases in {LSTM}s for Improving Machine Reading",
    author = "Yang, Bishan and Mitchell, Tom",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1132/",
    doi = "10.18653/v1/P17-1132",
    pages = "1436--1446",
    abstract = "This paper focuses on how to take advantage of external knowledge bases (KBs) to improve recurrent neural networks for machine reading. Traditional methods that exploit knowledge from KBs encode knowledge as discrete indicator features. Not only do these features generalize poorly, but they require task-specific feature engineering to achieve good performance. We propose KBLSTM, a novel neural model that leverages continuous representations of KBs to enhance the learning of recurrent neural networks for machine reading. To effectively integrate background knowledge with information from the currently processed text, our model employs an attention mechanism with a sentinel to adaptively decide whether to attend to background knowledge and which information from KBs is useful. Experimental results show that our model achieves accuracies that surpass the previous state-of-the-art results for both entity extraction and event extraction on the widely used ACE2005 dataset.",
}
% __index_level_0__: 56,562
@inproceedings{pan-etal-2017-prerequisite,
    title = "Prerequisite Relation Learning for Concepts in {MOOC}s",
    author = "Pan, Liangming and Li, Chengjiang and Li, Juanzi and Tang, Jie",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1133/",
    doi = "10.18653/v1/P17-1133",
    pages = "1447--1456",
    abstract = "What prerequisite knowledge should students master before moving forward to learn subsequent coursewares? We study the extent to which the prerequisite relation between knowledge concepts in Massive Open Online Courses (MOOCs) can be inferred automatically. In particular, what kinds of information can be leveraged to uncover the potential prerequisite relation between knowledge concepts? We first propose a representation learning-based method for learning latent representations of course concepts, and then investigate how different features capture the prerequisite relations between concepts. Our experiments on three datasets from Coursera show that the proposed method achieves significant improvements (+5.9-48.0{\%} by F1-score) compared with existing methods.",
}
% __index_level_0__: 56,563
@inproceedings{malmasi-etal-2017-unsupervised,
    title = "Unsupervised Text Segmentation Based on Native Language Characteristics",
    author = "Malmasi, Shervin and Dras, Mark and Johnson, Mark and Du, Lan and Wolska, Magdalena",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1134/",
    doi = "10.18653/v1/P17-1134",
    pages = "1457--1469",
    abstract = "Most work on segmenting text does so on the basis of topic changes, but it can be of interest to segment by other, stylistically expressed characteristics such as change of authorship or native language. We propose a Bayesian unsupervised text segmentation approach to the latter. While baseline models achieve essentially random segmentation on our task, indicating its difficulty, a Bayesian model that incorporates appropriately compact language models and alternating asymmetric priors can achieve scores on the standard metrics around halfway to perfect segmentation.",
}
% __index_level_0__: 56,564
@inproceedings{ni-etal-2017-weakly,
    title = "Weakly Supervised Cross-Lingual Named Entity Recognition via Effective Annotation and Representation Projection",
    author = "Ni, Jian and Dinu, Georgiana and Florian, Radu",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1135/",
    doi = "10.18653/v1/P17-1135",
    pages = "1470--1480",
    abstract = "The state-of-the-art named entity recognition (NER) systems are supervised machine learning models that require large amounts of manually annotated data to achieve high accuracy. However, annotating NER data by humans is expensive and time-consuming, and can be quite difficult for a new language. In this paper, we present two weakly supervised approaches for cross-lingual NER with no human annotation in a target language. The first approach is to create automatically labeled NER data for a target language via annotation projection on comparable corpora, where we develop a heuristic scheme that effectively selects good-quality projection-labeled data from noisy data. The second approach is to project distributed representations of words (word embeddings) from a target language to a source language, so that the source-language NER system can be applied to the target language without re-training. We also design two co-decoding schemes that effectively combine the outputs of the two projection-based approaches. We evaluate the performance of the proposed approaches on both in-house and open NER data for several target languages. The results show that the combined systems outperform three other weakly supervised approaches on the CoNLL data.",
}
% __index_level_0__: 56,565
@inproceedings{chakrabarty-etal-2017-context,
    title = "Context Sensitive Lemmatization Using Two Successive Bidirectional Gated Recurrent Networks",
    author = "Chakrabarty, Abhisek and Pandit, Onkar Arun and Garain, Utpal",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1136/",
    doi = "10.18653/v1/P17-1136",
    pages = "1481--1491",
    abstract = "We introduce a composite deep neural network architecture for supervised and language independent context sensitive lemmatization. The proposed method treats the task as identifying the correct edit tree representing the transformation between a word-lemma pair. To find the lemma of a surface word, we exploit two successive bidirectional gated recurrent structures - the first one is used to extract the character level dependencies and the next one captures the contextual information of the given word. The key advantages of our model compared to the state-of-the-art lemmatizers such as Lemming and Morfette are - (i) it is independent of human decided features (ii) except the gold lemma, no other expensive morphological attribute is required for joint learning. We evaluate the lemmatizer on nine languages - Bengali, Catalan, Dutch, Hindi, Hungarian, Italian, Latin, Romanian and Spanish. It is found that except Bengali, the proposed method outperforms Lemming and Morfette on the other languages. To train the model on Bengali, we develop a gold lemma annotated dataset (having 1,702 sentences with a total of 20,257 word tokens), which is an additional contribution of this work.",
}
% __index_level_0__: 56,566
@inproceedings{kawakami-etal-2017-learning,
    title = "Learning to Create and Reuse Words in Open-Vocabulary Neural Language Modeling",
    author = "Kawakami, Kazuya and Dyer, Chris and Blunsom, Phil",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1137/",
    doi = "10.18653/v1/P17-1137",
    pages = "1492--1502",
    abstract = "Fixed-vocabulary language models fail to account for one of the most characteristic statistical facts of natural language: the frequent creation and reuse of new word types. Although character-level language models offer a partial solution in that they can create word types not attested in the training corpus, they do not capture the {\textquotedblleft}bursty{\textquotedblright} distribution of such words. In this paper, we augment a hierarchical LSTM language model that generates sequences of word tokens character by character with a caching mechanism that learns to reuse previously generated words. To validate our model we construct a new open-vocabulary language modeling corpus (the Multilingual Wikipedia Corpus; MWC) from comparable Wikipedia articles in 7 typologically diverse languages and demonstrate the effectiveness of our model across this range of languages.",
}
% __index_level_0__: 56,567
@inproceedings{kreutzer-etal-2017-bandit,
    title = "Bandit Structured Prediction for Neural Sequence-to-Sequence Learning",
    author = "Kreutzer, Julia and Sokolov, Artem and Riezler, Stefan",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1138/",
    doi = "10.18653/v1/P17-1138",
    pages = "1503--1513",
    abstract = "Bandit structured prediction describes a stochastic optimization framework where learning is performed from partial feedback. This feedback is received in the form of a task loss evaluation to a predicted output structure, without having access to gold standard structures. We advance this framework by lifting linear bandit learning to neural sequence-to-sequence learning problems using attention-based recurrent neural networks. Furthermore, we show how to incorporate control variates into our learning algorithms for variance reduction and improved generalization. We present an evaluation on a neural machine translation task that shows improvements of up to 5.89 BLEU points for domain adaptation from simulated bandit feedback.",
}
% __index_level_0__: 56,568
@inproceedings{zhang-etal-2017-prior,
    title = "Prior Knowledge Integration for Neural Machine Translation using Posterior Regularization",
    author = "Zhang, Jiacheng and Liu, Yang and Luan, Huanbo and Xu, Jingfang and Sun, Maosong",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1139/",
    doi = "10.18653/v1/P17-1139",
    pages = "1514--1523",
    abstract = "Although neural machine translation has made significant progress recently, how to integrate multiple overlapping, arbitrary prior knowledge sources remains a challenge. In this work, we propose to use posterior regularization to provide a general framework for integrating prior knowledge into neural machine translation. We represent prior knowledge sources as features in a log-linear model, which guides the learning process of the neural translation model. Experiments on a Chinese-English dataset show that our approach leads to significant improvements.",
}
% __index_level_0__: 56,569
@inproceedings{zhang-etal-2017-incorporating,
    title = "Incorporating Word Reordering Knowledge into Attention-based Neural Machine Translation",
    author = "Zhang, Jinchao and Wang, Mingxuan and Liu, Qun and Zhou, Jie",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1140/",
    doi = "10.18653/v1/P17-1140",
    pages = "1524--1534",
    abstract = "This paper proposes three distortion models to explicitly incorporate the word reordering knowledge into attention-based Neural Machine Translation (NMT) for further improving translation performance. Our proposed models enable attention mechanism to attend to source words regarding both the semantic requirement and the word reordering penalty. Experiments on Chinese-English translation show that the approaches can improve word alignment quality and achieve significant translation improvements over a basic attention-based NMT by large margins. Compared with previous works on identical corpora, our system achieves the state-of-the-art performance on translation quality.",
}
% __index_level_0__: 56,570
@inproceedings{hokamp-liu-2017-lexically,
    title = "Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search",
    author = "Hokamp, Chris and Liu, Qun",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1141/",
    doi = "10.18653/v1/P17-1141",
    pages = "1535--1546",
    abstract = "We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model which generates sequences token by token. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate auxiliary knowledge into a model's output without requiring any modification of the parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.",
}
% __index_level_0__: 56,571
@inproceedings{tong-etal-2017-combating,
    title = "Combating Human Trafficking with Multimodal Deep Models",
    author = "Tong, Edmund and Zadeh, Amir and Jones, Cara and Morency, Louis-Philippe",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1142/",
    doi = "10.18653/v1/P17-1142",
    pages = "1547--1556",
    abstract = "Human trafficking is a global epidemic affecting millions of people across the planet. Sex trafficking, the dominant form of human trafficking, has seen a significant rise mostly due to the abundance of escort websites, where human traffickers can openly advertise among at-will escort advertisements. In this paper, we take a major step in the automatic detection of advertisements suspected to pertain to human trafficking. We present a novel dataset called Trafficking-10k, with more than 10,000 advertisements annotated for this task. The dataset contains two sources of information per advertisement: text and images. For the accurate detection of trafficking advertisements, we designed and trained a deep multimodal model called the Human Trafficking Deep Network (HTDN).",
}
% __index_level_0__: 56,572
@inproceedings{lim-etal-2017-malwaretextdb,
    title = "{M}alware{T}ext{DB}: A Database for Annotated Malware Articles",
    author = "Lim, Swee Kiat and Muis, Aldrian Obaja and Lu, Wei and Ong, Chen Hui",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1143/",
    doi = "10.18653/v1/P17-1143",
    pages = "1557--1567",
    abstract = "Cybersecurity risks and malware threats are becoming increasingly dangerous and common. Despite the severity of the problem, there have been few NLP efforts focused on tackling cybersecurity. In this paper, we discuss the construction of a new database for annotated malware texts. An annotation framework is introduced based on the MAEC vocabulary for defining malware characteristics, along with a database consisting of 39 annotated APT reports with a total of 6,819 sentences. We also use the database to construct models that can potentially help cybersecurity researchers in their data collection and analytics efforts.",
}
% __index_level_0__: 56,573
@inproceedings{zhang-etal-2017-corpus,
    title = "A Corpus of Annotated Revisions for Studying Argumentative Writing",
    author = "Zhang, Fan and Hashemi, Homa B. and Hwa, Rebecca and Litman, Diane",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1144/",
    doi = "10.18653/v1/P17-1144",
    pages = "1568--1578",
    abstract = "This paper presents ArgRewrite, a corpus of between-draft revisions of argumentative essays. Drafts are manually aligned at the sentence level, and the writer's purpose for each revision is annotated with categories analogous to those used in argument mining and discourse analysis. The corpus should enable advanced research in writing comparison and revision analysis, as demonstrated via our own studies of student revision behavior and of automatic revision purpose prediction.",
}
% __index_level_0__: 56,574
@inproceedings{ustalov-etal-2017-watset,
    title = "{W}atset: Automatic Induction of Synsets from a Graph of Synonyms",
    author = "Ustalov, Dmitry and Panchenko, Alexander and Biemann, Chris",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1145/",
    doi = "10.18653/v1/P17-1145",
    pages = "1579--1590",
    abstract = "This paper presents a new graph-based approach that induces synsets using synonymy dictionaries and word embeddings. First, we build a weighted graph of synonyms extracted from commonly available resources, such as Wiktionary. Second, we apply word sense induction to deal with ambiguous words. Finally, we cluster the disambiguated version of the ambiguous input graph into synsets. Our meta-clustering approach lets us use an efficient hard clustering algorithm to perform a fuzzy clustering of the graph. Despite its simplicity, our approach shows excellent results, outperforming five competitive state-of-the-art methods in terms of F-score on three gold standard datasets for English and Russian derived from large-scale manually constructed lexical resources.",
}
% __index_level_0__: 56,575
@inproceedings{ouchi-etal-2017-neural,
    title = "Neural Modeling of Multi-Predicate Interactions for {J}apanese Predicate Argument Structure Analysis",
    author = "Ouchi, Hiroki and Shindo, Hiroyuki and Matsumoto, Yuji",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1146/",
    doi = "10.18653/v1/P17-1146",
    pages = "1591--1600",
    abstract = "The performance of Japanese predicate argument structure (PAS) analysis has improved in recent years thanks to the joint modeling of interactions between multiple predicates. However, this approach relies heavily on syntactic information predicted by parsers, and suffers from error propagation. To remedy this problem, we introduce a model that uses grid-type recurrent neural networks. The proposed model automatically induces features sensitive to multi-predicate interactions from the word sequence information of a sentence. Experiments on the NAIST Text Corpus demonstrate that without syntactic information, our model outperforms previous syntax-dependent models.",
}
% __index_level_0__: 56,576
@inproceedings{joshi-etal-2017-triviaqa,
    title = "{T}rivia{QA}: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension",
    author = "Joshi, Mandar and Choi, Eunsol and Weld, Daniel and Zettlemoyer, Luke",
    editor = "Barzilay, Regina and Kan, Min-Yen",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1147/",
    doi = "10.18653/v1/P17-1147",
    pages = "1601--1611",
    abstract = "We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23{\%} and 40{\%} vs. 80{\%}), suggesting that TriviaQA is a challenging testbed that is worth significant future study.",
}
% __index_level_0__: 56,577
inproceedings
richardson-kuhn-2017-learning
Learning Semantic Correspondences in Technical Documentation
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1148/
Richardson, Kyle and Kuhn, Jonas
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1612--1622
We consider the problem of translating high-level textual descriptions to formal representations in technical documentation as part of an effort to model the meaning of such documentation. We focus specifically on the problem of learning translational correspondences between text descriptions and grounded representations in the target documentation, such as formal representation of functions or code templates. Our approach exploits the parallel nature of such documentation, or the tight coupling between high-level text and the low-level representations we aim to learn. Data is collected by mining technical documents for such parallel text-representation pairs, which we use to train a simple semantic parsing model. We report new baseline results on sixteen novel datasets, including the standard library documentation for nine popular programming languages across seven natural languages, and a small collection of Unix utility manuals.
null
null
10.18653/v1/P17-1148
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,578
inproceedings
cao-etal-2017-bridge
Bridge Text and Knowledge by Learning Multi-Prototype Entity Mention Embedding
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1149/
Cao, Yixin and Huang, Lifu and Ji, Heng and Chen, Xu and Li, Juanzi
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1623--1633
Integrating text and knowledge into a unified semantic space has attracted significant research interests recently. However, the ambiguity in the common space remains a challenge, namely that the same mention phrase usually refers to various entities. In this paper, to deal with the ambiguity of entity mentions, we propose a novel Multi-Prototype Mention Embedding model, which learns multiple sense embeddings for each mention by jointly modeling words from textual contexts and entities derived from a knowledge base. In addition, we further design an efficient language model based approach to disambiguate each mention to a specific sense. In experiments, both qualitative and quantitative analysis demonstrate the high quality of the word, entity and multi-prototype mention embeddings. Using entity linking as a study case, we apply our disambiguation method as well as the multi-prototype mention embeddings on the benchmark dataset, and achieve state-of-the-art performance.
null
null
10.18653/v1/P17-1149
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,579
inproceedings
she-chai-2017-interactive
Interactive Learning of Grounded Verb Semantics towards Human-Robot Communication
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1150/
She, Lanbo and Chai, Joyce
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1634--1644
To enable human-robot communication and collaboration, previous works represent grounded verb semantics as the potential change of state to the physical world caused by these verbs. Grounded verb semantics are acquired mainly based on the parallel data of the use of a verb phrase and its corresponding sequences of primitive actions demonstrated by humans. The rich interaction between teachers and students that is considered important in learning new skills has not yet been explored. To address this limitation, this paper presents a new interactive learning approach that allows robots to proactively engage in interaction with human partners by asking good questions to learn models for grounded verb semantics. The proposed approach uses reinforcement learning to allow the robot to acquire an optimal policy for its question-asking behaviors by maximizing the long-term reward. Our empirical results have shown that the interactive learning approach leads to more reliable models for grounded verb semantics, especially in the noisy environment which is full of uncertainties. Compared to previous work, the models acquired from interactive learning result in a 48{\%} to 145{\%} performance gain when applied in new situations.
null
null
10.18653/v1/P17-1150
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,580
inproceedings
athiwaratkun-wilson-2017-multimodal
Multimodal Word Distributions
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1151/
Athiwaratkun, Ben and Wilson, Andrew
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1645--1656
Word embeddings provide point representations of words containing useful semantic information. We introduce multimodal word distributions formed from Gaussian mixtures, for multiple word meanings, entailment, and rich uncertainty information. To learn these distributions, we propose an energy-based max-margin objective. We show that the resulting approach captures uniquely expressive semantic information, and outperforms alternatives, such as word2vec skip-grams, and Gaussian embeddings, on benchmark datasets such as word similarity and entailment.
null
null
10.18653/v1/P17-1151
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,581
inproceedings
chen-etal-2017-enhanced
Enhanced {LSTM} for Natural Language Inference
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1152/
Chen, Qian and Zhu, Xiaodan and Ling, Zhen-Hua and Wei, Si and Jiang, Hui and Inkpen, Diana
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1657--1668
Reasoning and inference are central to human and artificial intelligence. Modeling inference in human language is very challenging. With the availability of large annotated data (Bowman et al., 2015), it has recently become feasible to train neural network based inference models, which have shown to be very effective. In this paper, we present a new state-of-the-art result, achieving the accuracy of 88.6{\%} on the Stanford Natural Language Inference Dataset. Unlike the previous top models that use very complicated network architectures, we first demonstrate that carefully designing sequential inference models based on chain LSTMs can outperform all previous models. Based on this, we further show that by explicitly considering recursive architectures in both local inference modeling and inference composition, we achieve additional improvement. Particularly, incorporating syntactic parsing information contributes to our best result{---}it further improves the performance even when added to the already very strong model.
null
null
10.18653/v1/P17-1152
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,582
inproceedings
ramakrishna-etal-2017-linguistic
Linguistic analysis of differences in portrayal of movie characters
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1153/
Ramakrishna, Anil and Mart{\'i}nez, Victor R. and Malandrakis, Nikolaos and Singla, Karan and Narayanan, Shrikanth
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1669--1678
We examine differences in portrayal of characters in movies using psycholinguistic and graph theoretic measures computed directly from screenplays. Differences are examined with respect to characters' gender, race, age and other metadata. Psycholinguistic metrics are extrapolated to dialogues in movies using a linear regression model built on a set of manually annotated seed words. Interesting patterns are revealed about relationships between genders of production team and the gender ratio of characters. Several correlations are noted between gender, race, age of characters and the linguistic metrics.
null
null
10.18653/v1/P17-1153
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,583
inproceedings
qian-etal-2017-linguistically
Linguistically Regularized {LSTM} for Sentiment Classification
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1154/
Qian, Qiao and Huang, Minlie and Lei, Jinhao and Zhu, Xiaoyan
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1679--1689
This paper deals with sentence-level sentiment classification. Though a variety of neural network models have been proposed recently, previous models either depend on expensive phrase-level annotation, most of which has remarkably degraded performance when trained with only sentence-level annotation; or do not fully employ linguistic resources (e.g., sentiment lexicons, negation words, intensity words). In this paper, we propose simple models trained with sentence-level annotation, but also attempt to model the linguistic role of sentiment lexicons, negation words, and intensity words. Results show that our models are able to capture the linguistic role of sentiment words, negation words, and intensity words in sentiment expression.
null
null
10.18653/v1/P17-1154
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,584
inproceedings
peled-reichart-2017-sarcasm
Sarcasm {SIGN}: Interpreting Sarcasm with Sentiment Based Monolingual Machine Translation
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1155/
Peled, Lotem and Reichart, Roi
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1690--1700
Sarcasm is a form of speech in which speakers say the opposite of what they truly mean in order to convey a strong sentiment. In other words, {\textquotedblleft}Sarcasm is the giant chasm between what I say, and the person who doesn't get it.{\textquotedblright} In this paper we present the novel task of sarcasm interpretation, defined as the generation of a non-sarcastic utterance conveying the same message as the original sarcastic one. We introduce a novel dataset of 3000 sarcastic tweets, each interpreted by five human judges. Addressing the task as monolingual machine translation (MT), we experiment with MT algorithms and evaluation measures. We then present SIGN: an MT based sarcasm interpretation algorithm that targets sentiment words, a defining element of textual sarcasm. We show that while the scores of n-gram based automatic measures are similar for all interpretation models, SIGN's interpretations are scored higher by humans for adequacy and sentiment polarity. We conclude with a discussion on future research directions for our new task.
null
null
10.18653/v1/P17-1155
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,585
inproceedings
wu-etal-2017-active
Active Sentiment Domain Adaptation
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1156/
Wu, Fangzhao and Huang, Yongfeng and Yan, Jun
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1701--1711
Domain adaptation is an important technology to handle domain dependence problem in sentiment analysis field. Existing methods usually rely on sentiment classifiers trained in source domains. However, their performance may heavily decline if the distributions of sentiment features in source and target domains have significant difference. In this paper, we propose an active sentiment domain adaptation approach to handle this problem. Instead of the source domain sentiment classifiers, our approach adapts the general-purpose sentiment lexicons to target domain with the help of a small number of labeled samples which are selected and annotated in an active learning mode, as well as the domain-specific sentiment similarities among words mined from unlabeled samples of target domain. A unified model is proposed to fuse different types of sentiment information and train sentiment classifier for target domain. Extensive experiments on benchmark datasets show that our approach can train accurate sentiment classifier with less labeled samples.
null
null
10.18653/v1/P17-1156
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,586
inproceedings
rekabsaz-etal-2017-volatility
Volatility Prediction using Financial Disclosures Sentiments with Word Embedding-based {IR} Models
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1157/
Rekabsaz, Navid and Lupu, Mihai and Baklanov, Artem and D{\"u}r, Alexander and Andersson, Linda and Hanbury, Allan
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1712--1721
Volatility prediction{---}an essential concept in financial markets{---}has recently been addressed using sentiment analysis methods. We investigate the sentiment of annual disclosures of companies in stock markets to forecast volatility. We specifically explore the use of recent Information Retrieval (IR) term weighting models that are effectively extended by related terms using word embeddings. In parallel to textual information, factual market data have been widely used as the mainstream approach to forecast market risk. We therefore study different fusion methods to combine text and market data resources. Our word embedding-based approach significantly outperforms state-of-the-art methods. In addition, we investigate the characteristics of the reports of the companies in different financial sectors.
null
null
10.18653/v1/P17-1157
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,587
inproceedings
tu-etal-2017-cane
{CANE}: Context-Aware Network Embedding for Relation Modeling
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1158/
Tu, Cunchao and Liu, Han and Liu, Zhiyuan and Sun, Maosong
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1722--1731
Network embedding (NE) is playing a critical role in network analysis, due to its ability to represent vertices with efficient low-dimensional embedding vectors. However, existing NE models aim to learn a fixed context-free embedding for each vertex and neglect the diverse roles when interacting with other vertices. In this paper, we assume that one vertex usually shows different aspects when interacting with different neighbor vertices, and should own different embeddings respectively. Therefore, we present Context-Aware Network Embedding (CANE), a novel NE model to address this issue. CANE learns context-aware embeddings for vertices with mutual attention mechanism and is expected to model the semantic relationships between vertices more precisely. In experiments, we compare our model with existing NE models on three real-world datasets. Experimental results show that CANE achieves significant improvement than state-of-the-art methods on link prediction and comparable performance on vertex classification. The source code and datasets can be obtained from \url{https://github.com/thunlp/CANE}.
null
null
10.18653/v1/P17-1158
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,588
inproceedings
wang-etal-2017-universal
{U}niversal {D}ependencies Parsing for Colloquial {S}ingaporean {E}nglish
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1159/
Wang, Hongmin and Zhang, Yue and Chan, GuangYong Leonard and Yang, Jie and Chieu, Hai Leong
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1732--1744
Singlish can be interesting to the ACL community both linguistically as a major creole based on English, and computationally for information extraction and sentiment analysis of regional social media. We investigate dependency parsing of Singlish by constructing a dependency treebank under the Universal Dependencies scheme, and then training a neural network model by integrating English syntactic knowledge into a state-of-the-art parser trained on the Singlish treebank. Results show that English knowledge can lead to 25{\%} relative error reduction, resulting in a parser of 84.47{\%} accuracies. To the best of our knowledge, we are the first to use neural stacking to improve cross-lingual dependency parsing on low-resource languages. We make both our annotation and parser available for further research.
null
null
10.18653/v1/P17-1159
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,589
inproceedings
yli-jyra-gomez-rodriguez-2017-generic
Generic Axiomatization of Families of Noncrossing Graphs in Dependency Parsing
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1160/
Yli-Jyr{\"a}, Anssi and G{\'o}mez-Rodr{\'i}guez, Carlos
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1745--1755
We present a simple encoding for unlabeled noncrossing graphs and show how its latent counterpart helps us to represent several families of directed and undirected graphs used in syntactic and semantic parsing of natural language as context-free languages. The families are separated purely on the basis of forbidden patterns in latent encoding, eliminating the need to differentiate the families of non-crossing graphs in inference algorithms: one algorithm works for all when the search space can be controlled in parser input.
null
null
10.18653/v1/P17-1160
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,590
inproceedings
peters-etal-2017-semi
Semi-supervised sequence tagging with bidirectional language models
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1161/
Peters, Matthew E. and Ammar, Waleed and Bhagavatula, Chandra and Power, Russell
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1756--1765
Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks. However, in most cases, the recurrent network that operates on word-level representations to produce context sensitive representations is trained on relatively little labeled data. In this paper, we demonstrate a general semi-supervised approach for adding pretrained context embeddings from bidirectional language models to NLP systems and apply it to sequence labeling tasks. We evaluate our model on two standard datasets for named entity recognition (NER) and chunking, and in both cases achieve state of the art results, surpassing previous systems that use other forms of transfer or joint learning with additional labeled data and task specific gazetteers.
null
null
10.18653/v1/P17-1161
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,591
inproceedings
he-etal-2017-learning
Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1162/
He, He and Balakrishnan, Anusha and Eric, Mihail and Liang, Percy
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1766--1776
We study a \textit{symmetric collaborative dialogue} setting in which two agents, each with private knowledge, must strategically communicate to achieve a common goal. The open-ended dialogue state in this setting poses new challenges for existing dialogue systems. We collected a dataset of 11K human-human dialogues, which exhibits interesting lexical, semantic, and strategic elements. To model both structured knowledge and unstructured language, we propose a neural model with dynamic knowledge graph embeddings that evolve as the dialogue progresses. Automatic and human evaluations show that our model is both more effective at achieving the goal and more human-like than baseline neural and rule-based models.
null
null
10.18653/v1/P17-1162
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,592
inproceedings
mrksic-etal-2017-neural
Neural Belief Tracker: Data-Driven Dialogue State Tracking
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1163/
Mrk{\v{s}}i{\'c}, Nikola and {\'O} S{\'e}aghdha, Diarmuid and Wen, Tsung-Hsien and Thomson, Blaise and Young, Steve
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1777--1788
One of the core components of modern spoken dialogue systems is the belief tracker, which estimates the user's goal at every step of the dialogue. However, most current approaches have difficulty scaling to larger, more complex dialogue domains. This is due to their dependency on either: a) Spoken Language Understanding models that require large amounts of annotated training data; or b) hand-crafted lexicons for capturing some of the linguistic variation in users' language. We propose a novel Neural Belief Tracking (NBT) framework which overcomes these problems by building on recent advances in representation learning. NBT models reason over pre-trained word vectors, learning to compose them into distributed representations of user utterances and dialogue context. Our evaluation on two datasets shows that this approach surpasses past limitations, matching the performance of state-of-the-art models which rely on hand-crafted semantic lexicons and outperforming them when such lexicons are not provided.
null
null
10.18653/v1/P17-1163
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,593
inproceedings
liu-etal-2017-exploiting
Exploiting Argument Information to Improve Event Detection via Supervised Attention Mechanisms
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1164/
Liu, Shulin and Chen, Yubo and Liu, Kang and Zhao, Jun
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1789--1798
This paper tackles the task of event detection (ED), which involves identifying and categorizing events. We argue that arguments provide significant clues to this task, but they are either completely ignored or exploited in an indirect manner in existing detection approaches. In this work, we propose to exploit argument information explicitly for ED via supervised attention mechanisms. In specific, we systematically investigate the proposed model under the supervision of different attention strategies. Experimental results show that our approach advances state-of-the-arts and achieves the best F1 score on ACE 2005 dataset.
null
null
10.18653/v1/P17-1164
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,594
inproceedings
amoualian-etal-2017-topical
Topical Coherence in {LDA}-based Models through Induced Segmentation
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1165/
Amoualian, Hesam and Lu, Wei and Gaussier, Eric and Balikas, Georgios and Amini, Massih R. and Clausel, Marianne
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1799--1809
This paper presents an LDA-based model that generates topically coherent segments within documents by jointly segmenting documents and assigning topics to their words. The coherence between topics is ensured through a copula, binding the topics associated to the words of a segment. In addition, this model relies on both document and segment specific topic distributions so as to capture fine grained differences in topic assignments. We show that the proposed model naturally encompasses other state-of-the-art LDA-based models designed for similar tasks. Furthermore, our experiments, conducted on six different publicly available datasets, show the effectiveness of our model in terms of perplexity, Normalized Pointwise Mutual Information, which captures the coherence between the generated topics, and the Micro F1 measure for text classification.
null
null
10.18653/v1/P17-1165
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,595
inproceedings
ye-etal-2017-jointly
Jointly Extracting Relations with Class Ties via Effective Deep Ranking
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1166/
Ye, Hai and Chao, Wenhan and Luo, Zhunchen and Li, Zhoujun
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1810--1820
Connections between relations in relation extraction, which we call class ties, are common. In distantly supervised scenario, one entity tuple may have multiple relation facts. Exploiting class ties between relations of one entity tuple will be promising for distantly supervised relation extraction. However, previous models are not effective or ignore to model this property. In this work, to effectively leverage class ties, we propose to make joint relation extraction with a unified model that integrates convolutional neural network (CNN) with a general pairwise ranking framework, in which three novel ranking loss functions are introduced. Additionally, an effective method is presented to relieve the severe class imbalance problem from NR (not relation) for model training. Experiments on a widely used dataset show that leveraging class ties will enhance extraction and demonstrate the effectiveness of our model to learn class ties. Our model outperforms the baselines significantly, achieving state-of-the-art performance.
null
null
10.18653/v1/P17-1166
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,596
inproceedings
iyyer-etal-2017-search
Search-based Neural Structured Learning for Sequential Question Answering
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1167/
Iyyer, Mohit and Yih, Wen-tau and Chang, Ming-Wei
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1821--1831
Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. We collect a dataset of 6,066 question sequences that inquire about semi-structured tables from Wikipedia, with 17,553 question-answer pairs in total. To solve this sequential question answering task, we propose a novel dynamic neural semantic parsing framework trained using a weakly supervised reward-guided search. Our model effectively leverages the sequential context to outperform state-of-the-art QA systems that are designed to answer highly complex questions.
null
null
10.18653/v1/P17-1167
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,597
inproceedings
dhingra-etal-2017-gated
Gated-Attention Readers for Text Comprehension
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1168/
Dhingra, Bhuwan and Liu, Hanxiao and Yang, Zhilin and Cohen, William and Salakhutdinov, Ruslan
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1832--1846
In this paper we study the problem of answering cloze-style questions over documents. Our model, the Gated-Attention (GA) Reader, integrates a multi-hop architecture with a novel attention mechanism, which is based on multiplicative interactions between the query embedding and the intermediate states of a recurrent neural network document reader. This enables the reader to build query-specific representations of tokens in the document for accurate answer selection. The GA Reader obtains state-of-the-art results on three benchmarks for this task{--}the CNN {\&} Daily Mail news stories and the Who Did What dataset. The effectiveness of multiplicative interaction is demonstrated by an ablation study, and by comparing to alternative compositional operators for implementing the gated-attention.
null
null
10.18653/v1/P17-1168
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,598
inproceedings
ye-etal-2017-determining
Determining Gains Acquired from Word Embedding Quantitatively Using Discrete Distribution Clustering
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1169/
Ye, Jianbo and Li, Yanran and Wu, Zhaohui and Wang, James Z. and Li, Wenjie and Li, Jia
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1847--1856
Word embeddings have become widely-used in document analysis. While a large number of models for mapping words to vector spaces have been developed, it remains undetermined how much net gain can be achieved over traditional approaches based on bag-of-words. In this paper, we propose a new document clustering approach by combining any word embedding with a state-of-the-art algorithm for clustering empirical distributions. By using the Wasserstein distance between distributions, the word-to-word semantic relationship is taken into account in a principled way. The new clustering method is easy to use and consistently outperforms other methods on a variety of data sets. More importantly, the method provides an effective framework for determining when and how much word embeddings contribute to document analysis. Experimental results with multiple embedding models are reported.
null
null
10.18653/v1/P17-1169
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,599
inproceedings
pilehvar-etal-2017-towards
Towards a Seamless Integration of Word Senses into Downstream {NLP} Applications
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1170/
Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Navigli, Roberto and Collier, Nigel
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1857--1869
Lexical ambiguity can impede NLP systems from accurate understanding of semantics. Despite its potential benefits, the integration of sense-level information into NLP systems has remained understudied. By incorporating a novel disambiguation algorithm into a state-of-the-art classification model, we create a pipeline to integrate sense-level information into downstream NLP applications. We show that a simple disambiguation of the input text can lead to consistent performance improvement on multiple topic categorization and polarity detection datasets, particularly when the fine granularity of the underlying sense inventory is reduced and the document is sufficiently large. Our results also point to the need for sense representation research to focus more on in vivo evaluations which target the performance in downstream NLP applications rather than artificial benchmarks.
null
null
10.18653/v1/P17-1170
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,600
inproceedings
chen-etal-2017-reading
Reading {W}ikipedia to Answer Open-Domain Questions
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1171/
Chen, Danqi and Fisch, Adam and Weston, Jason and Bordes, Antoine
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1870--1879
This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article. This task of machine reading at scale combines the challenges of document retrieval (finding the relevant articles) with that of machine comprehension of text (identifying the answer spans from those articles). Our approach combines a search component based on bigram hashing and TF-IDF matching with a multi-layer recurrent neural network model trained to detect answers in Wikipedia paragraphs. Our experiments on multiple existing QA datasets indicate that (1) both modules are highly competitive with respect to existing counterparts and (2) multitask learning using distant supervision on their combination is an effective complete system on this challenging task.
null
null
10.18653/v1/P17-1171
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,601
inproceedings
yu-etal-2017-learning
Learning to Skim Text
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1172/
Yu, Adams Wei and Lee, Hongrae and Le, Quoc
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1880--1890
Recurrent Neural Networks are showing much promise in many sub-areas of natural language processing, ranging from document classification to machine translation to automatic question answering. Despite their promise, many recurrent models have to read the whole text word by word, making it slow to handle long documents. For example, it is difficult to use a recurrent network to read a book and answer questions about it. In this paper, we present an approach of reading text while skipping irrelevant information if needed. The underlying model is a recurrent network that learns how far to jump after reading a few words of the input text. We employ a standard policy gradient method to train the model to make discrete jumping decisions. In our benchmarks on four different tasks, including number prediction, sentiment analysis, news article classification and automatic Q{\&}A, our proposed model, a modified LSTM with jumping, is up to 6 times faster than the standard sequential LSTM, while maintaining the same or even better accuracy.
null
null
10.18653/v1/P17-1172
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,602
inproceedings
srikumar-2017-algebra
An Algebra for Feature Extraction
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1173/
Srikumar, Vivek
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1891--1900
Though feature extraction is a necessary first step in statistical NLP, it is often seen as a mere preprocessing step. Yet, it can dominate computation time, both during training, and especially at deployment. In this paper, we formalize feature extraction from an algebraic perspective. Our formalization allows us to define a message passing algorithm that can restructure feature templates to be more computationally efficient. We show via experiments on text chunking and relation extraction that this restructuring does indeed speed up feature extraction in practice by reducing redundant computation.
null
null
10.18653/v1/P17-1173
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,603
inproceedings
ishiwatari-etal-2017-chunk
Chunk-based Decoder for Neural Machine Translation
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1174/
Ishiwatari, Shonosuke and Yao, Jingtao and Liu, Shujie and Li, Mu and Zhou, Ming and Yoshinaga, Naoki and Kitsuregawa, Masaru and Jia, Weijia
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1901--1912
Chunks (or phrases) once played a pivotal role in machine translation. By using a chunk rather than a word as the basic translation unit, local (intra-chunk) and global (inter-chunk) word orders and dependencies can be easily modeled. The chunk structure, despite its importance, has not been considered in the decoders used for neural machine translation (NMT). In this paper, we propose chunk-based decoders for NMT, each of which consists of a chunk-level decoder and a word-level decoder. The chunk-level decoder models global dependencies while the word-level decoder decides the local word order in a chunk. To output a target sentence, the chunk-level decoder generates a chunk representation containing global information, which the word-level decoder then uses as a basis to predict the words inside the chunk. Experimental results show that our proposed decoders can significantly improve translation performance in a WAT {\textquoteright}16 English-to-Japanese translation task.
null
null
10.18653/v1/P17-1174
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,604
inproceedings
calixto-etal-2017-doubly
Doubly-Attentive Decoder for Multi-modal Neural Machine Translation
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1175/
Calixto, Iacer and Liu, Qun and Campbell, Nick
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1913--1924
We introduce a Multi-modal Neural Machine Translation model in which a doubly-attentive decoder naturally incorporates spatial visual features obtained using pre-trained convolutional neural networks, bridging the gap between image description and translation. Our decoder learns to attend to source-language words and parts of an image independently by means of two separate attention mechanisms as it generates words in the target language. We find that our model can efficiently exploit not just back-translated in-domain multi-modal data but also large general-domain text-only MT corpora. We also report state-of-the-art results on the Multi30k data set.
null
null
10.18653/v1/P17-1175
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,605
inproceedings
chen-etal-2017-teacher
A Teacher-Student Framework for Zero-Resource Neural Machine Translation
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1176/
Chen, Yun and Liu, Yang and Cheng, Yong and Li, Victor O.K.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1925--1935
While end-to-end neural machine translation (NMT) has made remarkable progress recently, it still suffers from the data scarcity problem for low-resource language pairs and domains. In this paper, we propose a method for zero-resource NMT by assuming that parallel sentences have close probabilities of generating a sentence in a third language. Based on the assumption, our method is able to train a source-to-target NMT model ({\textquotedblleft}student{\textquotedblright}) without parallel corpora available guided by an existing pivot-to-target NMT model ({\textquotedblleft}teacher{\textquotedblright}) on a source-pivot parallel corpus. Experimental results show that the proposed method significantly improves over a baseline pivot-based model by +3.0 BLEU points across various language pairs.
null
null
10.18653/v1/P17-1176
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,606
inproceedings
chen-etal-2017-improved
Improved Neural Machine Translation with a Syntax-Aware Encoder and Decoder
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1177/
Chen, Huadong and Huang, Shujian and Chiang, David and Chen, Jiajun
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1936--1945
Most neural machine translation (NMT) models are based on the sequential encoder-decoder framework, which makes no use of syntactic information. In this paper, we improve this model by explicitly incorporating source-side syntactic trees. More specifically, we propose (1) a bidirectional tree encoder which learns both sequential and tree structured representations; (2) a tree-coverage model that lets the attention depend on the source-side syntax. Experiments on Chinese-English translation demonstrate that our proposed models outperform the sequential attentional model as well as a stronger baseline with a bottom-up tree encoder and word coverage.
null
null
10.18653/v1/P17-1177
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,607
inproceedings
pan-etal-2017-cross
Cross-lingual Name Tagging and Linking for 282 Languages
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1178/
Pan, Xiaoman and Zhang, Boliang and May, Jonathan and Nothman, Joel and Knight, Kevin and Ji, Heng
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1946--1958
The ambitious goal of this work is to develop a cross-lingual name tagging and linking framework for 282 languages that exist in Wikipedia. Given a document in any of these languages, our framework is able to identify name mentions, assign a coarse-grained or fine-grained type to each mention, and link it to an English Knowledge Base (KB) if it is linkable. We achieve this goal by performing a series of new KB mining methods: generating {\textquotedblleft}silver-standard{\textquotedblright} annotations by transferring annotations from English to other languages through cross-lingual links and KB properties, refining annotations through self-training and topic selection, deriving language-specific morphology features from anchor links, and mining word translation pairs from cross-lingual links. Both name tagging and linking results for 282 languages are promising on Wikipedia data and non-Wikipedia data.
null
null
10.18653/v1/P17-1178
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,608
inproceedings
zhang-etal-2017-adversarial
Adversarial Training for Unsupervised Bilingual Lexicon Induction
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1179/
Zhang, Meng and Liu, Yang and Luan, Huanbo and Sun, Maosong
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1959--1970
Word embeddings are well known to capture linguistic regularities of the language on which they are trained. Researchers also observe that these regularities can transfer across languages. However, previous endeavors to connect separate monolingual word embeddings typically require cross-lingual signals as supervision, either in the form of parallel corpus or seed lexicon. In this work, we show that such cross-lingual connection can actually be established without any form of supervision. We achieve this end by formulating the problem as a natural adversarial game, and investigating techniques that are crucial to successful training. We carry out evaluation on the unsupervised bilingual lexicon induction task. Even though this task appears intrinsically cross-lingual, we are able to demonstrate encouraging performance without any cross-lingual clues.
null
null
10.18653/v1/P17-1179
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,609
inproceedings
rijhwani-etal-2017-estimating
Estimating Code-Switching on {T}witter with a Novel Generalized Word-Level Language Detection Technique
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1180/
Rijhwani, Shruti and Sequiera, Royal and Choudhury, Monojit and Bali, Kalika and Maddila, Chandra Shekhar
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1971--1982
Word-level language detection is necessary for analyzing code-switched text, where multiple languages could be mixed within a sentence. Existing models are restricted to code-switching between two specific languages and fail in real-world scenarios as text input rarely has a priori information on the languages used. We present a novel unsupervised word-level language detection technique for code-switched text for an arbitrarily large number of languages, which does not require any manually annotated training data. Our experiments with tweets in seven languages show a 74{\%} relative error reduction in word-level labeling with respect to competitive baselines. We then use this system to conduct a large-scale quantitative analysis of code-switching patterns on Twitter, both global as well as region-specific, with 58M tweets.
null
null
10.18653/v1/P17-1180
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,610
inproceedings
bloodgood-strauss-2017-using
Using Global Constraints and Reranking to Improve Cognates Detection
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1181/
Bloodgood, Michael and Strauss, Benjamin
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1983--1992
Global constraints and reranking have not been used in cognates detection research to date. We propose methods for using global constraints by performing rescoring of the score matrices produced by state of the art cognates detection systems. Using global constraints to perform rescoring is complementary to state of the art methods for performing cognates detection. It yields significant improvements beyond current state of the art performance on publicly available datasets with different language pairs and various conditions, such as different levels of baseline state of the art performance and different data size conditions, including more realistic large data size conditions than have been evaluated in the past.
null
null
10.18653/v1/P17-1181
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,611
inproceedings
kann-etal-2017-one
One-Shot Neural Cross-Lingual Transfer for Paradigm Completion
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1182/
Kann, Katharina and Cotterell, Ryan and Sch{\"u}tze, Hinrich
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
1993--2003
We present a novel cross-lingual transfer method for paradigm completion, the task of mapping a lemma to its inflected forms, using a neural encoder-decoder model, the state of the art for the monolingual task. We use labeled data from a high-resource language to increase performance on a low-resource language. In experiments on 21 language pairs from four different language families, we obtain up to 58{\%} higher accuracy than without transfer and show that even zero-shot and one-shot learning are possible. We further find that the degree of language relatedness strongly influences the ability to transfer morphological knowledge.
null
null
10.18653/v1/P17-1182
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,612
inproceedings
aharoni-goldberg-2017-morphological
Morphological Inflection Generation with Hard Monotonic Attention
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1183/
Aharoni, Roee and Goldberg, Yoav
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
2004--2015
We present a neural model for morphological inflection generation which employs a hard attention mechanism, inspired by the nearly-monotonic alignment commonly found between the characters in a word and the characters in its inflection. We evaluate the model on three previously studied morphological inflection generation datasets and show that it provides state of the art results in various setups compared to previous neural and non-neural approaches. Finally we present an analysis of the continuous representations learned by both the hard and soft (Bahdanau, 2014) attention models for the task, shedding some light on the features such models extract.
null
null
10.18653/v1/P17-1183
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,613
inproceedings
vania-lopez-2017-characters
From Characters to Words to in Between: Do We Capture Morphology?
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1184/
Vania, Clara and Lopez, Adam
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
2016--2027
Words can be represented by composing the representations of subword units such as word segments, characters, and/or character n-grams. While such representations are effective and may capture the morphological regularities of words, they have not been systematically compared, and it is not understood how they interact with different morphological typologies. On a language modeling task, we present experiments that systematically vary (1) the basic unit of representation, (2) the composition of these representations, and (3) the morphological typology of the language modeled. Our results extend previous findings that character representations are effective across typologies, and we find that a previously unstudied combination of character trigram representations composed with bi-LSTMs outperforms most others. But we also find room for improvement: none of the character-level models match the predictive accuracy of a model with access to true morphological analyses, even when learned from an order of magnitude more data.
null
null
10.18653/v1/P17-1184
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,614
inproceedings
fonarev-etal-2017-riemannian
{R}iemannian Optimization for Skip-Gram Negative Sampling
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1185/
Fonarev, Alexander and Grinchuk, Oleksii and Gusev, Gleb and Serdyukov, Pavel and Oseledets, Ivan
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
2028--2036
Skip-Gram Negative Sampling (SGNS) word embedding model, well known by its implementation in {\textquotedblleft}word2vec{\textquotedblright} software, is usually optimized by stochastic gradient descent. However, the optimization of SGNS objective can be viewed as a problem of searching for a good matrix with the low-rank constraint. The most standard way to solve this type of problems is to apply Riemannian optimization framework to optimize the SGNS objective over the manifold of required low-rank matrices. In this paper, we propose an algorithm that optimizes SGNS objective using Riemannian optimization and demonstrates its superiority over popular competitors, such as the original method to train SGNS and SVD over SPPMI matrix.
null
null
10.18653/v1/P17-1185
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,615
inproceedings
peng-etal-2017-deep
Deep Multitask Learning for Semantic Dependency Parsing
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1186/
Peng, Hao and Thomson, Sam and Smith, Noah A.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
2037--2048
We present a deep neural architecture that parses sentences into three semantic dependency graph formalisms. By using efficient, nearly arc-factored inference and a bidirectional-LSTM composed with a multi-layer perceptron, our base system is able to significantly improve the state of the art for semantic dependency parsing, without using hand-engineered features or syntax. We then explore two multitask learning approaches{---}one that shares parameters across formalisms, and one that uses higher-order structures to predict the graphs jointly. We find that both approaches improve performance across formalisms on average, achieving a new state of the art. Our code is open-source and available at \url{https://github.com/Noahs-ARK/NeurboParser}.
null
null
10.18653/v1/P17-1186
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,616
inproceedings
niu-etal-2017-improved
Improved Word Representation Learning with Sememes
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1187/
Niu, Yilin and Xie, Ruobing and Liu, Zhiyuan and Sun, Maosong
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
2049--2058
Sememes are minimum semantic units of word meanings, and the meaning of each word sense is typically composed of several sememes. Since sememes are not explicit for each word, people manually annotate word sememes and form linguistic common-sense knowledge bases. In this paper, we show that word sememe information can improve word representation learning (WRL), which maps words into a low-dimensional semantic space and serves as a fundamental step for many NLP tasks. The key idea is to utilize word sememes to capture exact meanings of a word within specific contexts accurately. More specifically, we follow the framework of Skip-gram and present three sememe-encoded models to learn representations of sememes, senses and words, where we apply the attention scheme to detect word senses in various contexts. We conduct experiments on two tasks including word similarity and word analogy, and our models significantly outperform baselines. The results indicate that WRL can benefit from sememes via the attention scheme, and also confirm our models being capable of correctly modeling sememe information.
null
null
10.18653/v1/P17-1187
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,617
inproceedings
liu-etal-2017-learning
Learning Character-level Compositionality with Visual Features
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1188/
Liu, Frederick and Lu, Han and Lo, Chieh and Neubig, Graham
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
2059--2068
Previous work has modeled the compositionality of words by creating character-level models of meaning, reducing problems of sparsity for rare words. However, in many writing systems compositionality has an effect even on the character-level: the meaning of a character is derived by the sum of its parts. In this paper, we model this effect by creating embeddings for characters based on their visual characteristics, creating an image for the character and running it through a convolutional neural network to produce a visual character embedding. Experiments on a text classification task demonstrate that such a model allows for better processing of instances with rare characters in languages such as Chinese, Japanese, and Korean. Additionally, qualitative analyses demonstrate that our proposed model learns to focus on the parts of characters that carry topical content, resulting in embeddings that are coherent in visual space.
null
null
10.18653/v1/P17-1188
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,618
inproceedings
xia-etal-2017-progressive
A Progressive Learning Approach to {C}hinese {SRL} Using Heterogeneous Data
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1189/
Xia, Qiaolin and Sha, Lei and Chang, Baobao and Sui, Zhifang
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
2069--2077
Previous studies on Chinese semantic role labeling (SRL) have concentrated on a single semantically annotated corpus. But the training data of single corpus is often limited. Whereas the other existing semantically annotated corpora for Chinese SRL are scattered across different annotation frameworks. But still, Data sparsity remains a bottleneck. This situation calls for larger training datasets, or effective approaches which can take advantage of highly heterogeneous data. In this paper, we focus mainly on the latter, that is, to improve Chinese SRL by using heterogeneous corpora together. We propose a novel progressive learning model which augments the Progressive Neural Network with Gated Recurrent Adapters. The model can accommodate heterogeneous inputs and effectively transfer knowledge between them. We also release a new corpus, Chinese SemBank, for Chinese SRL. Experiments on CPB 1.0 show that our model outperforms state-of-the-art methods.
null
null
10.18653/v1/P17-1189
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,619
inproceedings
wieting-gimpel-2017-revisiting
Revisiting Recurrent Networks for Paraphrastic Sentence Embeddings
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1190/
Wieting, John and Gimpel, Kevin
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
2078--2088
We consider the problem of learning general-purpose, paraphrastic sentence embeddings, revisiting the setting of Wieting et al. (2016b). While they found LSTM recurrent networks to underperform word averaging, we present several developments that together produce the opposite conclusion. These include training on sentence pairs rather than phrase pairs, averaging states to represent sequences, and regularizing aggressively. These improve LSTMs in both transfer learning and supervised settings. We also introduce a new recurrent architecture, the Gated Recurrent Averaging Network, that is inspired by averaging and LSTMs while outperforming them both. We analyze our learned models, finding evidence of preferences for particular parts of speech and dependency relations.
null
null
10.18653/v1/P17-1190
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,620
inproceedings
dasigi-etal-2017-ontology
Ontology-Aware Token Embeddings for Prepositional Phrase Attachment
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1191/
Dasigi, Pradeep and Ammar, Waleed and Dyer, Chris and Hovy, Eduard
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
2089--2098
Type-level word embeddings use the same set of parameters to represent all instances of a word regardless of its context, ignoring the inherent lexical ambiguity in language. Instead, we embed semantic concepts (or synsets) as defined in WordNet and represent a word token in a particular context by estimating a distribution over relevant semantic concepts. We use the new, context-sensitive embeddings in a model for predicting prepositional phrase (PP) attachments and jointly learn the concept embeddings and model parameters. We show that using context-sensitive embeddings improves the accuracy of the PP attachment model by 5.4{\%} absolute points, which amounts to a 34.4{\%} relative reduction in errors.
null
null
10.18653/v1/P17-1191
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,621
inproceedings
pavlick-pasca-2017-identifying
Identifying 1950s {A}merican Jazz Musicians: Fine-Grained {I}s{A} Extraction via Modifier Composition
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1192/
Pavlick, Ellie and Pa{\c{s}}ca, Marius
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
2099--2109
We present a method for populating fine-grained classes (e.g., {\textquotedblleft}1950s American jazz musicians{\textquotedblright}) with instances (e.g., Charles Mingus). While state-of-the-art methods tend to treat class labels as single lexical units, the proposed method considers each of the individual modifiers in the class label relative to the head. An evaluation on the task of reconstructing Wikipedia category pages demonstrates a {\ensuremath{>}}10 point increase in AUC, over a strong baseline relying on widely-used Hearst patterns.
null
null
10.18653/v1/P17-1192
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,622
inproceedings
cao-etal-2017-parsing
Parsing to 1-Endpoint-Crossing, Pagenumber-2 Graphs
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1193/
Cao, Junjie and Huang, Sheng and Sun, Weiwei and Wan, Xiaojun
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
2110--2120
We study the Maximum Subgraph problem in deep dependency parsing. We consider two restrictions to deep dependency graphs: (a) 1-endpoint-crossing and (b) pagenumber-2. Our main contribution is an exact algorithm that obtains maximum subgraphs satisfying both restrictions simultaneously in time O(n{\ensuremath{^5}}). Moreover, ignoring one linguistically-rare structure decreases the complexity to O(n{\ensuremath{^4}}). We also extend our quartic-time algorithm into a practical parser with a discriminative disambiguation model and evaluate its performance on four linguistic data sets used in semantic dependency parsing.
null
null
10.18653/v1/P17-1193
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,623
inproceedings
rei-2017-semi
Semi-supervised Multitask Learning for Sequence Labeling
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1194/
Rei, Marek
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
2121--2130
We propose a sequence labeling framework with a secondary training objective, learning to predict surrounding words for every word in the dataset. This language modeling objective incentivises the system to learn general-purpose patterns of semantic and syntactic composition, which are also useful for improving accuracy on different sequence labeling tasks. The architecture was evaluated on a range of datasets, covering the tasks of error detection in learner texts, named entity recognition, chunking and POS-tagging. The novel language modeling objective provided consistent performance improvements on every benchmark, without requiring any additional annotated or unannotated data.
null
null
10.18653/v1/P17-1194
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,624
inproceedings
matsuzaki-etal-2017-semantic
Semantic Parsing of Pre-university Math Problems
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-1195/
Matsuzaki, Takuya and Ito, Takumi and Iwane, Hidenao and Anai, Hirokazu and Arai, Noriko H.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
2131--2141
We have been developing an end-to-end math problem solving system that accepts natural language input. The current paper focuses on how we analyze the problem sentences to produce logical forms. We chose a hybrid approach combining a shallow syntactic analyzer and a manually-developed lexicalized grammar. A feature of the grammar is that it is extensively typed on the basis of a formal ontology for pre-university math. These types are helpful in semantic disambiguation inside and across sentences. Experimental results show that the hybrid system produces a well-formed logical form with 88{\%} precision and 56{\%} recall.
null
null
10.18653/v1/P17-1195
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,625
inproceedings
cheng-miyao-2017-classifying
Classifying Temporal Relations by Bidirectional {LSTM} over Dependency Paths
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2001/
Cheng, Fei and Miyao, Yusuke
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
1--6
Temporal relation classification is becoming an active research field. Many methods have been proposed, but most of them focus on extracting features from external resources. Less attention has been paid to a significant advance in a closely related task: relation extraction. In this work, we borrow a state-of-the-art method in relation extraction by adopting bidirectional long short-term memory (Bi-LSTM) along dependency paths (DP). We make a {\textquotedblleft}common root{\textquotedblright} assumption to extend DP representations of cross-sentence links. In the final comparison to two state-of-the-art systems on TimeBank-Dense, our model achieves comparable performance, without using external knowledge, as well as manually annotated attributes of entities (class, tense, polarity, etc.).
null
null
10.18653/v1/P17-2001
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,627
inproceedings
song-etal-2017-amr
{AMR}-to-text Generation with Synchronous Node Replacement Grammar
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2002/
Song, Linfeng and Peng, Xiaochang and Zhang, Yue and Wang, Zhiguo and Gildea, Daniel
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
7--13
This paper addresses the task of AMR-to-text generation by leveraging synchronous node replacement grammar. During training, graph-to-string rules are learned using a heuristic extraction algorithm. At test time, a graph transducer is applied to collapse input AMRs and generate output sentences. Evaluated on a standard benchmark, our method gives the state-of-the-art result.
null
null
10.18653/v1/P17-2002
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,628
inproceedings
moosavi-strube-2017-lexical
Lexical Features in Coreference Resolution: To be Used With Caution
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2003/
Moosavi, Nafise Sadat and Strube, Michael
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
14--19
Lexical features are a major source of information in state-of-the-art coreference resolvers. Lexical features implicitly model some of the linguistic phenomena at a fine granularity level. They are especially useful for representing the context of mentions. In this paper we investigate a drawback of using many lexical features in state-of-the-art coreference resolvers. We show that if coreference resolvers mainly rely on lexical features, they can hardly generalize to unseen domains. Furthermore, we show that the current coreference resolution evaluation is clearly flawed by only evaluating on a specific split of a specific dataset in which there is a notable overlap between the training, development and test sets.
null
null
10.18653/v1/P17-2003
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,629
inproceedings
stanojevic-simaan-2017-alternative
Alternative Objective Functions for Training {MT} Evaluation Metrics
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2004/
Stanojevi{\'c}, Milo{\v{s}} and Sima{'}an, Khalil
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
20--25
MT evaluation metrics are tested for correlation with human judgments either at the sentence- or the corpus-level. Trained metrics ignore corpus-level judgments and are trained for high sentence-level correlation only. We show that training only for one objective (sentence or corpus level), can not only harm the performance on the other objective, but it can also be suboptimal for the objective being optimized. To this end we present a metric trained for corpus-level and show empirical comparison against a metric trained for sentence-level exemplifying how their performance may vary per language pair, type and level of judgment. Subsequently we propose a model trained to optimize both objectives simultaneously and show that it is far more stable than{--}and on average outperforms{--}both models on both objectives.
null
null
10.18653/v1/P17-2004
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,630
inproceedings
peyrard-eckle-kohler-2017-principled
A Principled Framework for Evaluating Summarizers: Comparing Models of Summary Quality against Human Judgments
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2005/
Peyrard, Maxime and Eckle-Kohler, Judith
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
26--31
We present a new framework for evaluating extractive summarizers, which is based on a principled representation as an optimization problem. We prove that every extractive summarizer can be decomposed into an objective function and an optimization technique. We perform a comparative analysis and evaluation of several objective functions embedded in well-known summarizers regarding their correlation with human judgments. Our comparison of these correlations across two datasets yields surprising insights into the role and performance of objective functions in the different summarizers.
null
null
10.18653/v1/P17-2005
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,631
inproceedings
prudhommeaux-etal-2017-vector
Vector space models for evaluating semantic fluency in autism
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2006/
Prud{'}hommeaux, Emily and van Santen, Jan and Gliner, Douglas
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
32--37
A common test administered during neurological examination is the semantic fluency test, in which the patient must list as many examples of a given semantic category as possible under timed conditions. Poor performance is associated with neurological conditions characterized by impairments in executive function, such as dementia, schizophrenia, and autism spectrum disorder (ASD). Methods for analyzing semantic fluency responses at the level of detail necessary to uncover these differences have typically relied on subjective manual annotation. In this paper, we explore automated approaches for scoring semantic fluency responses that leverage ontological resources and distributional semantic models to characterize the semantic fluency responses produced by young children with and without ASD. Using these methods, we find significant differences in the semantic fluency responses of children with ASD, demonstrating the utility of using objective methods for clinical language analysis.
null
null
10.18653/v1/P17-2006
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,632
inproceedings
susanto-lu-2017-neural
Neural Architectures for Multilingual Semantic Parsing
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2007/
Susanto, Raymond Hendy and Lu, Wei
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
38--44
In this paper, we address semantic parsing in a multilingual context. We train one multilingual model that is capable of parsing natural language sentences from multiple different languages into their corresponding formal semantic representations. We extend an existing sequence-to-tree model to a multi-task learning framework which shares the decoder for generating semantic representations. We report evaluation results on the multilingual GeoQuery corpus and introduce a new multilingual version of the ATIS corpus.
null
null
10.18653/v1/P17-2007
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,633
inproceedings
malinin-etal-2017-incorporating
Incorporating Uncertainty into Deep Learning for Spoken Language Assessment
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2008/
Malinin, Andrey and Ragni, Anton and Knill, Kate and Gales, Mark
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
45--50
There is a growing demand for automatic assessment of spoken English proficiency. These systems need to handle large variations in input data owing to the wide range of candidate skill levels and L1s, and errors from ASR. Some candidates will be a poor match to the training data set, undermining the validity of the predicted grade. For high-stakes tests it is essential for such systems not only to grade well, but also to provide a measure of their uncertainty in their predictions, enabling rejection to human graders. Previous work examined Gaussian Process (GP) graders which, though successful, do not scale well with large data sets. Deep Neural Networks (DNNs) may also be used to provide uncertainty using Monte-Carlo Dropout (MCD). This paper proposes a novel method to yield uncertainty and compares it to GPs and DNNs with MCD. The proposed approach explicitly teaches a DNN to have low uncertainty on training data and high uncertainty on generated artificial data. On experiments conducted on data from the Business Language Testing Service (BULATS), the proposed approach is found to outperform GPs and DNNs with MCD in uncertainty-based rejection whilst achieving comparable grading performance.
null
null
10.18653/v1/P17-2008
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,634
inproceedings
jurgens-etal-2017-incorporating
Incorporating Dialectal Variability for Socially Equitable Language Identification
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2009/
Jurgens, David and Tsvetkov, Yulia and Jurafsky, Dan
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
51--57
Language identification (LID) is a critical first step for processing multilingual text. Yet most LID systems are not designed to handle the linguistic diversity of global platforms like Twitter, where local dialects and rampant code-switching lead language classifiers to systematically miss minority dialect speakers and multilingual speakers. We propose a new dataset and a character-based sequence-to-sequence model for LID designed to support dialectal and multilingual language varieties. Our model achieves state-of-the-art performance on multiple LID benchmarks. Furthermore, in a case study using Twitter for health tracking, our method substantially increases the availability of texts written by underrepresented populations, enabling the development of {\textquotedblleft}socially inclusive{\textquotedblright} NLP tools.
null
null
10.18653/v1/P17-2009
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,635
inproceedings
jagfeld-etal-2017-evaluating
Evaluating Compound Splitters Extrinsically with Textual Entailment
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2010/
Jagfeld, Glorianna and Ziering, Patrick and van der Plas, Lonneke
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
58--63
Traditionally, compound splitters are evaluated intrinsically on gold-standard data or extrinsically on the task of statistical machine translation. We explore a novel way for the extrinsic evaluation of compound splitters, namely recognizing textual entailment. Compound splitting has great potential for this novel task that is both transparent and well-defined. Moreover, we show that it addresses certain aspects that are either ignored in intrinsic evaluations or compensated for by task-internal mechanisms in statistical machine translation. We show significant improvements using different compound splitting methods on a German textual entailment dataset.
null
null
10.18653/v1/P17-2010
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,636
inproceedings
gella-keller-2017-analysis
An Analysis of Action Recognition Datasets for Language and Vision Tasks
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2011/
Gella, Spandana and Keller, Frank
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
64--71
A large amount of recent research has focused on tasks that combine language and vision, resulting in a proliferation of datasets and methods. One such task is action recognition, whose applications include image annotation, scene understanding and image retrieval. In this survey, we categorize the existing approaches based on how they conceptualize this problem and provide a detailed review of existing datasets, highlighting their diversity as well as advantages and disadvantages. We focus on recently developed datasets which link visual information with linguistic resources and provide a fine-grained syntactic and semantic analysis of actions in images.
null
null
10.18653/v1/P17-2011
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,637
inproceedings
eriguchi-etal-2017-learning
Learning to Parse and Translate Improves Neural Machine Translation
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2012/
Eriguchi, Akiko and Tsuruoka, Yoshimasa and Cho, Kyunghyun
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
72--78
There has been relatively little attention to incorporating linguistic priors into neural machine translation. Much of the previous work was further constrained to considering linguistic priors on the source side. In this paper, we propose a hybrid model, called NMT+RNNG, that learns to parse and translate by combining the recurrent neural network grammar into the attention-based neural machine translation. Our approach encourages the neural machine translation model to incorporate linguistic priors during training, and lets it translate on its own afterward. Extensive experiments with four language pairs show the effectiveness of the proposed NMT+RNNG.
null
null
10.18653/v1/P17-2012
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,638
inproceedings
almodaresi-etal-2017-distribution
On the Distribution of Lexical Features at Multiple Levels of Analysis
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2013/
Almodaresi, Fatemeh and Ungar, Lyle and Kulkarni, Vivek and Zakeri, Mohsen and Giorgi, Salvatore and Schwartz, H. Andrew
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
79--84
Natural language processing has increasingly moved from modeling documents and words toward studying the people behind the language. This move to working with data at the user or community level has presented the field with different characteristics of linguistic data. In this paper, we empirically characterize various lexical distributions at different levels of analysis, showing that, while most features are decidedly sparse and non-normal at the message-level (as with traditional NLP), they follow the central limit theorem to become much more Log-normal or even Normal at the user- and county-levels. Finally, we demonstrate that modeling lexical features for the correct level of analysis leads to marked improvements in common social scientific prediction tasks.
null
null
10.18653/v1/P17-2013
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,639
inproceedings
nisioi-etal-2017-exploring
Exploring Neural Text Simplification Models
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2014/
Nisioi, Sergiu and {\v{S}}tajner, Sanja and Ponzetto, Simone Paolo and Dinu, Liviu P.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
85--91
We present the first attempt at using sequence-to-sequence neural networks to model text simplification (TS). Unlike the previously proposed automated TS systems, our neural text simplification (NTS) systems are able to simultaneously perform lexical simplification and content reduction. An extensive human evaluation of the output has shown that NTS systems achieve almost perfect grammaticality and meaning preservation of output sentences and a higher level of simplification than the state-of-the-art automated TS systems.
null
null
10.18653/v1/P17-2014
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,640
inproceedings
dahlmeier-2017-challenges
On the Challenges of Translating {NLP} Research into Commercial Products
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2015/
Dahlmeier, Daniel
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
92--96
This paper highlights challenges in industrial research related to translating research in natural language processing into commercial products. While the interest in natural language processing from industry is significant, the transfer of research to commercial products is non-trivial and its challenges are often unknown to or underestimated by many researchers. I discuss current obstacles and provide suggestions for increasing the chances for translating research to commercial success based on my experience in industrial research.
null
null
10.18653/v1/P17-2015
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,641
inproceedings
stajner-etal-2017-sentence
Sentence Alignment Methods for Improving Text Simplification Systems
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2016/
{\v{S}}tajner, Sanja and Franco-Salvador, Marc and Ponzetto, Simone Paolo and Rosso, Paolo and Stuckenschmidt, Heiner
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
97--102
We provide several methods for sentence-alignment of texts with different complexity levels. Using the best of them, we sentence-align the Newsela corpora, thus providing large training materials for automatic text simplification (ATS) systems. We show that using this dataset, even the standard phrase-based statistical machine translation models for ATS can outperform the state-of-the-art ATS systems.
null
null
10.18653/v1/P17-2016
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,642
inproceedings
jiang-etal-2017-understanding
Understanding Task Design Trade-offs in Crowdsourced Paraphrase Collection
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2017/
Jiang, Youxuan and Kummerfeld, Jonathan K. and Lasecki, Walter S.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
103--109
Linguistically diverse datasets are critical for training and evaluating robust machine learning systems, but data collection is a costly process that often requires experts. Crowdsourcing the process of paraphrase generation is an effective means of expanding natural language datasets, but there has been limited analysis of the trade-offs that arise when designing tasks. In this paper, we present the first systematic study of the key factors in crowdsourcing paraphrase collection. We consider variations in instructions, incentives, data domains, and workflows. We manually analyzed paraphrases for correctness, grammaticality, and linguistic diversity. Our observations provide new insight into the trade-offs between accuracy and diversity in crowd responses that arise as a result of task design, providing guidance for future paraphrase generation procedures.
null
null
10.18653/v1/P17-2017
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,643
inproceedings
qi-manning-2017-arc
Arc-swift: A Novel Transition System for Dependency Parsing
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2018/
Qi, Peng and Manning, Christopher D.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
110--117
Transition-based dependency parsers often need sequences of local shift and reduce operations to produce certain attachments. Correct individual decisions hence require global information about the sentence context and mistakes cause error propagation. This paper proposes a novel transition system, arc-swift, that enables direct attachments between tokens farther apart with a single transition. This allows the parser to leverage lexical information more directly in transition decisions. Hence, arc-swift can achieve significantly better performance with a very small beam size. Our parsers reduce error by 3.7{--}7.6{\%} relative to those using existing transition systems on the Penn Treebank dependency parsing task and English Universal Dependencies.
null
null
10.18653/v1/P17-2018
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,644
inproceedings
cheng-etal-2017-generative
A Generative Parser with a Discriminative Recognition Algorithm
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2019/
Cheng, Jianpeng and Lopez, Adam and Lapata, Mirella
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
118--124
Generative models defining joint distributions over parse trees and sentences are useful for parsing and language modeling, but impose restrictions on the scope of features and are often outperformed by discriminative models. We propose a framework for parsing and language modeling which marries a generative model with a discriminative recognition model in an encoder-decoder setting. We provide interpretations of the framework based on expectation maximization and variational inference, and show that it enables parsing and language modeling within a single implementation. On the English Penn Treebank, our framework obtains competitive performance on constituency parsing while matching the state-of-the-art single-model language modeling score.
null
null
10.18653/v1/P17-2019
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,645
inproceedings
wang-etal-2017-hybrid
Hybrid Neural Network Alignment and Lexicon Model in Direct {HMM} for Statistical Machine Translation
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2020/
Wang, Weiyue and Alkhouli, Tamer and Zhu, Derui and Ney, Hermann
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
125--131
Recently, neural machine translation systems have shown promising performance and surpassed phrase-based systems on most translation tasks. Retreating into conventional concepts of machine translation while utilizing effective neural models is vital for comprehending the leap accomplished by neural machine translation over phrase-based methods. This work proposes a direct HMM with neural network-based lexicon and alignment models, which are trained jointly using the Baum-Welch algorithm. The direct HMM is applied to rerank the n-best list created by a state-of-the-art phrase-based translation system, and it provides improvements of up to 1.0{\%} BLEU on two different translation tasks.
null
null
10.18653/v1/P17-2020
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,646
inproceedings
aharoni-goldberg-2017-towards
Towards String-To-Tree Neural Machine Translation
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2021/
Aharoni, Roee and Goldberg, Yoav
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
132--140
We present a simple method to incorporate syntactic information about the target language in a neural machine translation system by translating into linearized, lexicalized constituency trees. An experiment on the WMT16 German-English news translation task resulted in an improved BLEU score when compared to a syntax-agnostic NMT baseline trained on the same dataset. An analysis of the translations from the syntax-aware system shows that it performs more reordering during translation in comparison to the baseline. A small-scale human evaluation also showed an advantage to the syntax-aware system.
null
null
10.18653/v1/P17-2021
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,647
inproceedings
reed-etal-2017-learning
Learning Lexico-Functional Patterns for First-Person Affect
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2022/
Reed, Lena and Wu, Jiaqi and Oraby, Shereen and Anand, Pranav and Walker, Marilyn
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
141--147
Informal first-person narratives are a unique resource for computational models of everyday events and people's affective reactions to them. People blogging about their day tend not to explicitly say I am happy. Instead they describe situations from which other humans can readily infer their affective reactions. However current sentiment dictionaries are missing much of the information needed to make similar inferences. We build on recent work that models affect in terms of lexical predicate functions and affect on the predicate's arguments. We present a method to learn proxies for these functions from first-person narratives. We construct a novel fine-grained test set, and show that the patterns we learn improve our ability to predict first-person affective reactions to everyday events, from a Stanford sentiment baseline of .67F to .75F.
null
null
10.18653/v1/P17-2022
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,648
inproceedings
shu-etal-2017-lifelong
Lifelong Learning {CRF} for Supervised Aspect Extraction
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2023/
Shu, Lei and Xu, Hu and Liu, Bing
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
148--154
This paper makes a focused contribution to supervised aspect extraction. It shows that if the system has performed aspect extraction from many past domains and retained their results as knowledge, Conditional Random Fields (CRF) can leverage this knowledge in a lifelong learning manner to extract in a new domain markedly better than the traditional CRF without using this prior knowledge. The key innovation is that even after CRF training, the model can still improve its extraction with experiences in its applications.
null
null
10.18653/v1/P17-2023
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,649
inproceedings
zhang-etal-2017-exploiting
Exploiting Domain Knowledge via Grouped Weight Sharing with Application to Text Categorization
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2024/
Zhang, Ye and Lease, Matthew and Wallace, Byron C.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
155--160
A fundamental advantage of neural models for NLP is their ability to learn representations from scratch. However, in practice this often means ignoring existing external linguistic resources, e.g., WordNet or domain specific ontologies such as the Unified Medical Language System (UMLS). We propose a general, novel method for exploiting such resources via weight sharing. Prior work on weight sharing in neural networks has considered it largely as a means of model compression. In contrast, we treat weight sharing as a flexible mechanism for incorporating prior knowledge into neural models. We show that this approach consistently yields improved performance on classification tasks compared to baseline strategies that do not exploit weight sharing.
null
null
10.18653/v1/P17-2024
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,650
inproceedings
fried-etal-2017-improving
Improving Neural Parsing by Disentangling Model Combination and Reranking Effects
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2025/
Fried, Daniel and Stern, Mitchell and Klein, Dan
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
161--166
Recent work has proposed several generative neural models for constituency parsing that achieve state-of-the-art results. Since direct search in these generative models is difficult, they have primarily been used to rescore candidate outputs from base parsers in which decoding is more straightforward. We first present an algorithm for direct search in these generative models. We then demonstrate that the rescoring results are at least partly due to implicit model combination rather than reranking effects. Finally, we show that explicit model combination can improve performance even further, resulting in new state-of-the-art numbers on the PTB of 94.25 F1 when training only on gold data and 94.66 F1 when using external data.
null
null
10.18653/v1/P17-2025
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,651
inproceedings
melamud-goldberger-2017-information
Information-Theory Interpretation of the Skip-Gram Negative-Sampling Objective Function
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2026/
Melamud, Oren and Goldberger, Jacob
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
167--171
In this paper we define a measure of dependency between two random variables, based on the Jensen-Shannon (JS) divergence between their joint distribution and the product of their marginal distributions. Then, we show that word2vec's skip-gram with negative sampling embedding algorithm finds the optimal low-dimensional approximation of this JS dependency measure between the words and their contexts. The gap between the optimal score and the low-dimensional approximation is demonstrated on a standard text corpus.
null
null
10.18653/v1/P17-2026
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,652
inproceedings
kazi-thompson-2017-implicitly
Implicitly-Defined Neural Networks for Sequence Labeling
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2027/
Kazi, Michaeel and Thompson, Brian
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
172--177
In this work, we propose a novel, implicitly-defined neural network architecture and describe a method to compute its components. The proposed architecture forgoes the causality assumption used to formulate recurrent neural networks and instead couples the hidden states of the network, allowing improvement on problems with complex, long-distance dependencies. Initial experiments demonstrate the new architecture outperforms both the Stanford Parser and baseline bidirectional networks on the Penn Treebank Part-of-Speech tagging task and a baseline bidirectional network on an additional artificial random biased walk task.
null
null
10.18653/v1/P17-2027
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,653
inproceedings
ludusan-etal-2017-role
The Role of Prosody and Speech Register in Word Segmentation: A Computational Modelling Perspective
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2028/
Ludusan, Bogdan and Mazuka, Reiko and Bernard, Mathieu and Cristia, Alejandrina and Dupoux, Emmanuel
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
178--183
This study explores the role of speech register and prosody for the task of word segmentation. Since these two factors are thought to play an important role in early language acquisition, we aim to quantify their contribution for this task. We study a Japanese corpus containing both infant- and adult-directed speech and we apply four different word segmentation models, with and without knowledge of prosodic boundaries. The results showed that the difference between registers is smaller than previously reported and that prosodic boundary information helps more adult- than infant-directed speech.
null
null
10.18653/v1/P17-2028
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,654
inproceedings
wang-etal-2017-two
A Two-Stage Parsing Method for Text-Level Discourse Analysis
Barzilay, Regina and Kan, Min-Yen
jul
2017
Vancouver, Canada
Association for Computational Linguistics
https://aclanthology.org/P17-2029/
Wang, Yizhong and Li, Sujian and Wang, Houfeng
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
184--188
Previous work introduced transition-based algorithms to form a unified architecture for parsing rhetorical structures (including span, nuclearity and relation), but did not achieve satisfactory performance. In this paper, we propose that a transition-based model is more appropriate for parsing the naked discourse tree (i.e., identifying span and nuclearity) due to data sparsity. At the same time, we argue that relation labeling can benefit from the naked tree structure and should be treated elaborately with consideration of three kinds of relations: within-sentence, across-sentence and across-paragraph relations. Thus, we design a pipelined two-stage parsing method for generating an RST tree from text. Experimental results show that our method achieves state-of-the-art performance, especially on span and nuclearity identification.
null
null
10.18653/v1/P17-2029
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,655