entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | forbes-choi-2017-verb | Verb Physics: Relative Physical Knowledge of Actions and Objects | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1025/ | Forbes, Maxwell and Choi, Yejin | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 266--276 | Learning commonsense knowledge from natural language text is nontrivial due to reporting bias: people rarely state the obvious, e.g., “My house is bigger than me.” However, while rarely stated explicitly, this trivial everyday knowledge does influence the way people talk about the world, which provides indirect clues to reason about the world. For example, a statement like, “Tyler entered his house” implies that his house is bigger than Tyler. In this paper, we present an approach to infer relative physical knowledge of actions and objects along five dimensions (e.g., size, weight, and strength) from unstructured natural language text. We frame knowledge acquisition as joint inference over two closely related problems: learning (1) relative physical knowledge of object pairs and (2) physical implications of actions when applied to those object pairs. Empirical results demonstrate that it is possible to extract knowledge of actions and objects from language and that joint inference over different types of knowledge improves performance. | null | null | 10.18653/v1/P17-1025 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,455 |
inproceedings | yoshikawa-etal-2017-ccg | A* CCG Parsing with a Supertag and Dependency Factored Model | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1026/ | Yoshikawa, Masashi and Noji, Hiroshi and Matsumoto, Yuji | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 277--287 | We propose a new A* CCG parsing model in which the probability of a tree is decomposed into factors of CCG categories and its syntactic dependencies both defined on bi-directional LSTMs. Our factored model allows the precomputation of all probabilities and runs very efficiently, while modeling sentence structures explicitly via dependencies. Our model achieves the state-of-the-art results on English and Japanese CCG parsing. | null | null | 10.18653/v1/P17-1026 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,456 |
inproceedings | fernandez-gonzalez-gomez-rodriguez-2017-full | A Full Non-Monotonic Transition System for Unrestricted Non-Projective Parsing | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1027/ | Fernández-González, Daniel and Gómez-Rodríguez, Carlos | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 288--298 | Restricted non-monotonicity has been shown beneficial for the projective arc-eager dependency parser in previous research, as posterior decisions can repair mistakes made in previous states due to the lack of information. In this paper, we propose a novel, fully non-monotonic transition system based on the non-projective Covington algorithm. As a non-monotonic system requires exploration of erroneous actions during the training process, we develop several non-monotonic variants of the recently defined dynamic oracle for the Covington parser, based on tight approximations of the loss. Experiments on datasets from the CoNLL-X and CoNLL-XI shared tasks show that a non-monotonic dynamic oracle outperforms the monotonic version in the majority of languages. | null | null | 10.18653/v1/P17-1027 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,457 |
inproceedings | nguyen-etal-2017-aggregating | Aggregating and Predicting Sequence Labels from Crowd Annotations | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1028/ | Nguyen, An Thanh and Wallace, Byron and Li, Junyi Jessy and Nenkova, Ani and Lease, Matthew | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 299--309 | Despite sequences being core to NLP, scant work has considered how to handle noisy sequence labels from multiple annotators for the same text. Given such annotations, we consider two complementary tasks: (1) aggregating sequential crowd labels to infer a best single set of consensus annotations; and (2) using crowd annotations as training data for a model that can predict sequences in unannotated text. For aggregation, we propose a novel Hidden Markov Model variant. To predict sequences in unannotated text, we propose a neural approach using Long Short Term Memory. We evaluate a suite of methods across two different applications and text genres: Named-Entity Recognition in news articles and Information Extraction from biomedical abstracts. Results show improvement over strong baselines. Our source code and data are available online. | null | null | 10.18653/v1/P17-1028 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,458 |
inproceedings | zhou-neubig-2017-multi | Multi-space Variational Encoder-Decoders for Semi-supervised Labeled Sequence Transduction | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1029/ | Zhou, Chunting and Neubig, Graham | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 310--320 | Labeled sequence transduction is a task of transforming one sequence into another sequence that satisfies desiderata specified by a set of labels. In this paper we propose multi-space variational encoder-decoders, a new model for labeled sequence transduction with semi-supervised learning. The generative model can use neural networks to handle both discrete and continuous latent variables to exploit various features of data. Experiments show that our model not only provides a powerful supervised framework but can also effectively take advantage of the unlabeled data. On the SIGMORPHON morphological inflection benchmark, our model outperforms single-model state-of-the-art results by a large margin for the majority of languages. | null | null | 10.18653/v1/P17-1029 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,459 |
inproceedings | gan-etal-2017-scalable | Scalable Bayesian Learning of Recurrent Neural Networks for Language Modeling | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1030/ | Gan, Zhe and Li, Chunyuan and Chen, Changyou and Pu, Yunchen and Su, Qinliang and Carin, Lawrence | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 321--331 | Recurrent neural networks (RNNs) have shown promising performance for language modeling. However, traditional training of RNNs using back-propagation through time often suffers from overfitting. One reason for this is that stochastic optimization (used for large training sets) does not provide good estimates of model uncertainty. This paper leverages recent advances in stochastic gradient Markov Chain Monte Carlo (also appropriate for large training sets) to learn weight uncertainty in RNNs. It yields a principled Bayesian learning algorithm, adding gradient noise during training (enhancing exploration of the model-parameter space) and model averaging when testing. Extensive experiments on various RNN models and across a broad range of applications demonstrate the superiority of the proposed approach relative to stochastic optimization. | null | null | 10.18653/v1/P17-1030 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,460 |
inproceedings | bollmann-etal-2017-learning | Learning attention for historical text normalization by learning to pronounce | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1031/ | Bollmann, Marcel and Bingel, Joachim and Søgaard, Anders | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 332--344 | Automated processing of historical texts often relies on pre-normalization to modern word forms. Training encoder-decoder architectures to solve such problems typically requires a lot of training data, which is not available for the named task. We address this problem by using several novel encoder-decoder architectures, including a multi-task learning (MTL) architecture using a grapheme-to-phoneme dictionary as auxiliary data, pushing the state-of-the-art by an absolute 2% increase in performance. We analyze the induced models across 44 different texts from Early New High German. Interestingly, we observe that, as previously conjectured, multi-task learning can learn to focus attention during decoding, in ways remarkably similar to recently proposed attention mechanisms. This, we believe, is an important step toward understanding how MTL works. | null | null | 10.18653/v1/P17-1031 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,461 |
inproceedings | croce-etal-2017-deep | Deep Learning in Semantic Kernel Spaces | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1032/ | Croce, Danilo and Filice, Simone and Castellucci, Giuseppe and Basili, Roberto | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 345--354 | Kernel methods enable the direct usage of structured representations of textual data during language learning and inference tasks. Expressive kernels, such as Tree Kernels, achieve excellent performance in NLP. On the other hand, deep neural networks have been demonstrated to be effective in automatically learning feature representations during training. However, their input is tensor data, i.e., they cannot manage rich structured information. In this paper, we show that expressive kernels and deep neural networks can be combined in a common framework in order to (i) explicitly model structured information and (ii) learn non-linear decision functions. We show that the input layer of a deep architecture can be pre-trained through the application of the Nyström low-rank approximation of kernel spaces. The resulting “kernelized” neural network achieves state-of-the-art accuracy in three different tasks. | null | null | 10.18653/v1/P17-1032 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,462 |
inproceedings | lau-etal-2017-topically | Topically Driven Neural Language Model | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1033/ | Lau, Jey Han and Baldwin, Timothy and Cohn, Trevor | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 355--365 | Language models are typically applied at the sentence level, without access to the broader document context. We present a neural language model that incorporates document context in the form of a topic model-like architecture, thus providing a succinct representation of the broader document context outside of the current sentence. Experiments over a range of datasets demonstrate that our model outperforms a pure sentence-based model in terms of language model perplexity, and leads to topics that are potentially more coherent than those produced by a standard LDA topic model. Our model also has the ability to generate related sentences for a topic, providing another way to interpret topics. | null | null | 10.18653/v1/P17-1033 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,463 |
inproceedings | wang-etal-2017-handling | Handling Cold-Start Problem in Review Spam Detection by Jointly Embedding Texts and Behaviors | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1034/ | Wang, Xuepeng and Liu, Kang and Zhao, Jun | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 366--376 | Solving the cold-start problem in review spam detection is an urgent and significant task. It can help on-line review websites relieve the damage of spammers in time, but it has never been investigated by previous work. This paper proposes a novel neural network model to detect review spam for the cold-start problem, by learning to represent the new reviewers' reviews with jointly embedded textual and behavioral information. Experimental results show that the proposed model achieves effective performance and possesses preferable domain-adaptability. It is also applicable to a large-scale dataset in an unsupervised way. | null | null | 10.18653/v1/P17-1034 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,464 |
inproceedings | mishra-etal-2017-learning | Learning Cognitive Features from Gaze Data for Sentiment and Sarcasm Classification using Convolutional Neural Network | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1035/ | Mishra, Abhijit and Dey, Kuntal and Bhattacharyya, Pushpak | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 377--387 | Cognitive NLP systems, i.e., NLP systems that make use of behavioral data, augment traditional text-based features with cognitive features extracted from eye-movement patterns, EEG signals, brain-imaging, etc. Such extraction of features is typically manual. We contend that manual extraction of features may not be the best way to tackle text subtleties that characteristically prevail in complex classification tasks like Sentiment Analysis and Sarcasm Detection, and that even the extraction and choice of features should be delegated to the learning system. We introduce a framework to automatically extract cognitive features from the eye-movement/gaze data of human readers reading the text and use them as features along with textual features for the tasks of sentiment polarity and sarcasm detection. Our proposed framework is based on Convolutional Neural Network (CNN). The CNN learns features from both gaze and text and uses them to classify the input text. We test our technique on published sentiment and sarcasm labeled datasets, enriched with gaze information, to show that using a combination of automatically learned text and gaze features often yields better classification performance over (i) CNN based systems that rely on text input alone and (ii) existing systems that rely on handcrafted gaze and textual features. | null | null | 10.18653/v1/P17-1035 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,465 |
inproceedings | he-etal-2017-unsupervised | An Unsupervised Neural Attention Model for Aspect Extraction | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1036/ | He, Ruidan and Lee, Wee Sun and Ng, Hwee Tou and Dahlmeier, Daniel | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 388--397 | Aspect extraction is an important and challenging task in aspect-based sentiment analysis. Existing works tend to apply variants of topic models on this task. While fairly successful, these methods usually do not produce highly coherent aspects. In this paper, we present a novel neural approach with the aim of discovering coherent aspects. The model improves coherence by exploiting the distribution of word co-occurrences through the use of neural word embeddings. Unlike topic models which typically assume independently generated words, word embedding models encourage words that appear in similar contexts to be located close to each other in the embedding space. In addition, we use an attention mechanism to de-emphasize irrelevant words during training, further improving the coherence of aspects. Experimental results on real-life datasets demonstrate that our approach discovers more meaningful and coherent aspects, and substantially outperforms baseline methods on several evaluation tasks. | null | null | 10.18653/v1/P17-1036 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,466 |
inproceedings | sasaki-etal-2017-topics | Other Topics You May Also Agree or Disagree: Modeling Inter-Topic Preferences using Tweets and Matrix Factorization | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1037/ | Sasaki, Akira and Hanawa, Kazuaki and Okazaki, Naoaki and Inui, Kentaro | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 398--408 | We present in this paper our approach for modeling inter-topic preferences of Twitter users: for example, “those who agree with the Trans-Pacific Partnership (TPP) also agree with free trade”. This kind of knowledge is useful not only for stance detection across multiple topics but also for various real-world applications including public opinion survey, electoral prediction, electoral campaigns, and online debates. In order to extract users' preferences on Twitter, we design linguistic patterns in which people agree and disagree about specific topics (e.g., “A is completely wrong”). By applying these linguistic patterns to a collection of tweets, we extract statements agreeing and disagreeing with various topics. Inspired by previous work on item recommendation, we formalize the task of modeling inter-topic preferences as matrix factorization: representing users' preference as a user-topic matrix and mapping both users and topics onto a latent feature space that abstracts the preferences. Our experimental results demonstrate both that our presented approach is useful in predicting missing preferences of users and that the latent vector representations of topics successfully encode inter-topic preferences. | null | null | 10.18653/v1/P17-1037 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,467 |
inproceedings | chen-etal-2017-automatically | Automatically Labeled Data Generation for Large Scale Event Extraction | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1038/ | Chen, Yubo and Liu, Shulin and Zhang, Xiang and Liu, Kang and Zhao, Jun | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 409--419 | Modern models of event extraction for tasks like ACE are based on supervised learning of events from small hand-labeled data. However, hand-labeled training data is expensive to produce, low in coverage of event types, and limited in size, which makes it hard for supervised methods to extract events at large scale for knowledge base population. To solve the data labeling problem, we propose to automatically label training data for event extraction via world knowledge and linguistic knowledge, which can detect key arguments and trigger words for each event type and employ them to label events in texts automatically. The experimental results show that the quality of our large-scale automatically labeled data is competitive with elaborately human-labeled data. Moreover, our automatically labeled data can be combined with human-labeled data to improve the performance of models learned from these data. | null | null | 10.18653/v1/P17-1038 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,468 |
inproceedings | zhong-etal-2017-time | Time Expression Analysis and Recognition Using Syntactic Token Types and General Heuristic Rules | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1039/ | Zhong, Xiaoshi and Sun, Aixin and Cambria, Erik | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 420--429 | Extracting time expressions from free text is a fundamental task for many applications. We analyze the time expressions from four datasets and find that only a small group of words are used to express time information, and the words in time expressions demonstrate similar syntactic behaviour. Based on the findings, we propose a type-based approach, named SynTime, to recognize time expressions. Specifically, we define three main syntactic token types, namely time token, modifier, and numeral, to group time-related regular expressions over tokens. On the types we design general heuristic rules to recognize time expressions. In recognition, SynTime first identifies the time tokens from raw text, then searches their surroundings for modifiers and numerals to form time segments, and finally merges the time segments to time expressions. As a light-weight rule-based tagger, SynTime runs in real time, and can be easily expanded by simply adding keywords for the text of different types and of different domains. Experiment on benchmark datasets and tweets data shows that SynTime outperforms state-of-the-art methods. | null | null | 10.18653/v1/P17-1039 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,469 |
inproceedings | luo-etal-2017-learning-noise | Learning with Noise: Enhance Distantly Supervised Relation Extraction with Dynamic Transition Matrix | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1040/ | Luo, Bingfeng and Feng, Yansong and Wang, Zheng and Zhu, Zhanxing and Huang, Songfang and Yan, Rui and Zhao, Dongyan | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 430--439 | Distant supervision significantly reduces human efforts in building training data for many classification tasks. While promising, this technique often introduces noise to the generated training data, which can severely affect the model performance. In this paper, we take a deep look at the application of distant supervision in relation extraction. We show that the dynamic transition matrix can effectively characterize the noise in the training data built by distant supervision. The transition matrix can be effectively trained using a novel curriculum learning based method without any direct supervision about the noise. We thoroughly evaluate our approach under a wide range of extraction scenarios. Experimental results show that our approach consistently improves the extraction results and outperforms the state-of-the-art in various evaluation scenarios. | null | null | 10.18653/v1/P17-1040 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,470 |
inproceedings | yin-neubig-2017-syntactic | A Syntactic Neural Model for General-Purpose Code Generation | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1041/ | Yin, Pengcheng and Neubig, Graham | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 440--450 | We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches. | null | null | 10.18653/v1/P17-1041 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,471 |
inproceedings | artetxe-etal-2017-learning | Learning bilingual word embeddings with (almost) no bilingual data | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1042/ | Artetxe, Mikel and Labaka, Gorka and Agirre, Eneko | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 451--462 | Most methods to learn bilingual word embeddings rely on large parallel corpora, which are difficult to obtain for most language pairs. This has motivated an active research line to relax this requirement, with methods that use document-aligned corpora or bilingual dictionaries of a few thousand words instead. In this work, we further reduce the need for bilingual resources using a very simple self-learning approach that can be combined with any dictionary-based mapping technique. Our method exploits the structural similarity of embedding spaces, and works with as little bilingual evidence as a 25 word dictionary or even an automatically generated list of numerals, obtaining results comparable to those of systems that use richer resources. | null | null | 10.18653/v1/P17-1042 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,472 |
inproceedings | foland-martin-2017-abstract | Abstract Meaning Representation Parsing using LSTM Recurrent Neural Networks | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1043/ | Foland, William and Martin, James H. | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 463--472 | We present a system which parses sentences into Abstract Meaning Representations, improving state-of-the-art results for this task by more than 5%. AMR graphs represent semantic content using linguistic properties such as semantic roles, coreference, negation, and more. The AMR parser does not rely on a syntactic pre-parse, or heavily engineered features, and uses five recurrent neural networks as the key architectural components for inferring AMR graphs. | null | null | 10.18653/v1/P17-1043 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,473 |
inproceedings | he-etal-2017-deep | Deep Semantic Role Labeling: What Works and What's Next | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1044/ | He, Luheng and Lee, Kenton and Lewis, Mike and Zettlemoyer, Luke | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 473--483 | We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding, while observing a number of recent best practices for initialization and regularization. Our 8-layer ensemble model achieves 83.2 F1 on the CoNLL 2005 test set and 83.4 F1 on CoNLL 2012, roughly a 10% relative error reduction over the previous state of the art. Extensive empirical analysis of these gains shows that (1) deep models excel at recovering long-distance dependencies but can still make surprisingly obvious errors, and (2) that there is still room for syntactic parsers to improve these results. | null | null | 10.18653/v1/P17-1044 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,474 |
inproceedings | dhingra-etal-2017-towards | Towards End-to-End Reinforcement Learning of Dialogue Agents for Information Access | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1045/ | Dhingra, Bhuwan and Li, Lihong and Li, Xiujun and Gao, Jianfeng and Chen, Yun-Nung and Ahmed, Faisal and Deng, Li | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 484--495 | This paper proposes KB-InfoBot, a multi-turn dialogue agent which helps users search Knowledge Bases (KBs) without composing complicated queries. Such goal-oriented dialogue agents typically need to interact with an external database to access real-world knowledge. Previous systems achieved this by issuing a symbolic query to the KB to retrieve entries based on their attributes. However, such symbolic operations break the differentiability of the system and prevent end-to-end training of neural dialogue agents. In this paper, we address this limitation by replacing symbolic queries with an induced “soft” posterior distribution over the KB that indicates which entities the user is interested in. Integrating the soft retrieval process with a reinforcement learner leads to higher task success rate and reward in both simulations and against real users. We also present a fully neural end-to-end agent, trained entirely from user feedback, and discuss its application towards personalized dialogue agents. | null | null | 10.18653/v1/P17-1045 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,475 |
inproceedings | wu-etal-2017-sequential | Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-Based Chatbots | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1046/ | Wu, Yu and Wu, Wei and Xing, Chen and Zhou, Ming and Li, Zhoujun | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 496--505 | We study response selection for multi-turn conversation in retrieval based chatbots. Existing work either concatenates utterances in context or matches a response with a highly abstract context vector finally, which may lose relationships among the utterances or important information in the context. We propose a sequential matching network (SMN) to address both problems. SMN first matches a response with each utterance in the context on multiple levels of granularity, and distills important matching information from each pair as a vector with convolution and pooling operations. The vectors are then accumulated in a chronological order through a recurrent neural network (RNN) which models relationships among the utterances. The final matching score is calculated with the hidden states of the RNN. Empirical study on two public data sets shows that SMN can significantly outperform state-of-the-art methods for response selection in multi-turn conversation. | null | null | 10.18653/v1/P17-1046 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,476 |
inproceedings | harwath-glass-2017-learning | Learning Word-Like Units from Joint Audio-Visual Analysis | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1047/ | Harwath, David and Glass, James | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 506--517 | Given a collection of images and spoken audio captions, we present a method for discovering word-like acoustic units in the continuous speech signal and grounding them to semantically relevant image regions. For example, our model is able to detect spoken instances of the word ‘lighthouse’ within an utterance and associate them with image regions containing lighthouses. We do not use any form of conventional automatic speech recognition, nor do we use any text transcriptions or conventional linguistic annotations. Our model effectively implements a form of spoken language acquisition, in which the computer learns not only to recognize word categories by sound, but also to enrich the words it learns with semantics by grounding them in images. | null | null | 10.18653/v1/P17-1047 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,477 |
inproceedings | hori-etal-2017-joint | Joint CTC/attention decoding for end-to-end speech recognition | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1048/ | Hori, Takaaki and Watanabe, Shinji and Hershey, John | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 518--529 | End-to-end automatic speech recognition (ASR) has become a popular alternative to conventional DNN/HMM systems because it avoids the need for linguistic resources such as a pronunciation dictionary, tokenization, and context-dependency trees, leading to a greatly simplified model-building process. There are two major types of end-to-end architectures for ASR: attention-based methods use an attention mechanism to perform alignment between acoustic frames and recognized symbols, while connectionist temporal classification (CTC) uses Markov assumptions to efficiently solve sequential problems by dynamic programming. This paper proposes a joint decoding algorithm for end-to-end ASR with a hybrid CTC/attention architecture, which effectively utilizes the advantages of both in decoding. We have applied the proposed method to two ASR benchmarks (spontaneous Japanese and Mandarin Chinese), showing performance comparable to conventional state-of-the-art DNN/HMM ASR systems without linguistic resources. | null | null | 10.18653/v1/P17-1048 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,478 |
inproceedings | rabinovich-etal-2017-found | Found in Translation: Reconstructing Phylogenetic Language Trees from Translations | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1049/ | Rabinovich, Ella and Ordan, Noam and Wintner, Shuly | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 530--540 | Translation has played an important role in trade, law, commerce, politics, and literature for thousands of years. Translators have always tried to be invisible; ideal translations should look as if they were written originally in the target language. We show that traces of the source language remain in the translation product to the extent that it is possible to uncover the history of the source language by looking only at the translation. Specifically, we automatically reconstruct phylogenetic language trees from monolingual texts (translated from several source languages). The signal of the source language is so powerful that it is retained even after two phases of translation. This strongly indicates that source language interference is the most dominant characteristic of translated texts, overshadowing the more subtle signals of universal properties of translation. | null | null | 10.18653/v1/P17-1049 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,479 |
inproceedings | berzak-etal-2017-predicting | Predicting Native Language from Gaze | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1050/ | Berzak, Yevgeni and Nakamura, Chie and Flynn, Suzanne and Katz, Boris | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 541--551 | A fundamental question in language learning concerns the role of a speaker's first language in second language acquisition. We present a novel methodology for studying this question: analysis of eye-movement patterns in second language reading of free-form text. Using this methodology, we demonstrate for the first time that the native language of English learners can be predicted from their gaze fixations when reading English. We provide analysis of classifier uncertainty and learned features, which indicates that differences in English reading are likely to be rooted in linguistic divergences across native languages. The presented framework complements production studies and offers new ground for advancing research on multilingualism. | null | null | 10.18653/v1/P17-1050 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,480 |
inproceedings | sakakini-etal-2017-morse | MORSE: Semantic-ally Drive-n MORpheme SEgment-er | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1051/ | Sakakini, Tarek and Bhat, Suma and Viswanath, Pramod | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 552--561 | We present in this paper a novel framework for morpheme segmentation which uses the morpho-syntactic regularities preserved by word representations, in addition to orthographic features, to segment words into morphemes. This framework is the first to consider vocabulary-wide syntactico-semantic information for this task. We also analyze the deficiencies of available benchmarking datasets and introduce our own dataset that was created on the basis of compositionality. We validate our algorithm across datasets and present state-of-the-art results. | null | null | 10.18653/v1/P17-1051 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,481 |
inproceedings | johnson-zhang-2017-deep | Deep Pyramid Convolutional Neural Networks for Text Categorization | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1052/ | Johnson, Rie and Zhang, Tong | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 562--570 | This paper proposes a low-complexity word-level deep convolutional neural network (CNN) architecture for text categorization that can efficiently represent long-range associations in text. In the literature, several deep and complex neural networks have been proposed for this task, assuming availability of relatively large amounts of training data. However, the associated computational complexity increases as the networks go deeper, which poses serious challenges in practical applications. Moreover, it was shown recently that shallow word-level CNNs are more accurate and much faster than the state-of-the-art very deep nets such as character-level CNNs even in the setting of large training data. Motivated by these findings, we carefully studied deepening of word-level CNNs to capture global representations of text, and found a simple network architecture with which the best accuracy can be obtained by increasing the network depth without increasing computational cost by much. We call it deep pyramid CNN. The proposed model with 15 weight layers outperforms the previous best models on six benchmark datasets for sentiment classification and topic categorization. | null | null | 10.18653/v1/P17-1052 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,482 |
inproceedings | yu-etal-2017-improved | Improved Neural Relation Detection for Knowledge Base Question Answering | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1053/ | Yu, Mo and Yin, Wenpeng and Hasan, Kazi Saidul and dos Santos, Cicero and Xiang, Bing and Zhou, Bowen | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 571--581 | Relation detection is a core component of many NLP applications including Knowledge Base Question Answering (KBQA). In this paper, we propose a hierarchical recurrent neural network enhanced by residual learning which detects KB relations given an input question. Our method uses deep residual bidirectional LSTMs to compare questions and relation names via different levels of abstraction. Additionally, we propose a simple KBQA system that integrates entity linking and our proposed relation detector to make the two components enhance each other. Our experimental results show that our approach not only achieves outstanding relation detection performance, but more importantly, it helps our KBQA system achieve state-of-the-art accuracy for both single-relation (SimpleQuestions) and multi-relation (WebQSP) QA benchmarks. | null | null | 10.18653/v1/P17-1053 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,483 |
inproceedings | meng-etal-2017-deep | Deep Keyphrase Generation | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1054/ | Meng, Rui and Zhao, Sanqiang and Han, Shuguang and He, Daqing and Brusilovsky, Peter and Chi, Yu | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 582--592 | Keyphrase provides highly-summative information that can be effectively used for understanding, organizing and retrieving text content. Though previous studies have provided many workable solutions for automated keyphrase extraction, they commonly divided the to-be-summarized content into multiple text chunks, then ranked and selected the most meaningful ones. These approaches could neither identify keyphrases that do not appear in the text, nor capture the real semantic meaning behind the text. We propose a generative model for keyphrase prediction with an encoder-decoder framework, which can effectively overcome the above drawbacks. We name it as *deep keyphrase generation* since it attempts to capture the deep semantic meaning of the content with a deep learning method. Empirical analysis on six datasets demonstrates that our proposed model not only achieves a significant performance boost on extracting keyphrases that appear in the source text, but also can generate absent keyphrases based on the semantic meaning of the text. Code and dataset are available at https://github.com/memray/seq2seq-keyphrase. | null | null | 10.18653/v1/P17-1054 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,484 |
inproceedings | cui-etal-2017-attention | Attention-over-Attention Neural Networks for Reading Comprehension | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1055/ | Cui, Yiming and Chen, Zhipeng and Wei, Si and Wang, Shijin and Liu, Ting and Hu, Guoping | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 593--602 | Cloze-style reading comprehension is a representative problem in mining relationship between document and query. In this paper, we present a simple but novel model called attention-over-attention reader for better solving cloze-style reading comprehension task. The proposed model aims to place another attention mechanism over the document-level attention and induces “attended attention” for final answer predictions. One advantage of our model is that it is simpler than related works while giving excellent performance. In addition to the primary model, we also propose an N-best re-ranking strategy to double check the validity of the candidates and further improve the performance. Experimental results show that the proposed methods significantly outperform various state-of-the-art systems by a large margin in public datasets, such as CNN and Children's Book Test. | null | null | 10.18653/v1/P17-1055 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,485 |
inproceedings | doyle-etal-2017-alignment | Alignment at Work: Using Language to Distinguish the Internalization and Self-Regulation Components of Cultural Fit in Organizations | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1056/ | Doyle, Gabriel and Goldberg, Amir and Srivastava, Sameer and Frank, Michael | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 603--612 | Cultural fit is widely believed to affect the success of individuals and the groups to which they belong. Yet it remains an elusive, poorly measured construct. Recent research draws on computational linguistics to measure cultural fit but overlooks asymmetries in cultural adaptation. By contrast, we develop a directed, dynamic measure of cultural fit based on linguistic alignment, which estimates the influence of one person's word use on another's and distinguishes between two enculturation mechanisms: internalization and self-regulation. We use this measure to trace employees' enculturation trajectories over a large, multi-year corpus of corporate emails and find that patterns of alignment in the first six months of employment are predictive of individuals' downstream outcomes, especially involuntary exit. Further predictive analyses suggest referential alignment plays an overlooked role in linguistic alignment. | null | null | 10.18653/v1/P17-1056 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,486 |
inproceedings | chrupala-etal-2017-representations | Representations of language in a model of visually grounded speech signal | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1057/ | Chrupała, Grzegorz and Gelderloos, Lieke and Alishahi, Afra | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 613--622 | We present a visually grounded model of speech perception which projects spoken utterances and images to a joint semantic space. We use a multi-layer recurrent highway network to model the temporal nature of spoken speech, and show that it learns to extract both form and meaning-based linguistic knowledge from the input signal. We carry out an in-depth analysis of the representations used by different components of the trained model and show that encoding of semantic aspects tends to become richer as we go up the hierarchy of layers, whereas encoding of form-related aspects of the language input tends to initially increase and then plateau or decrease. | null | null | 10.18653/v1/P17-1057 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,487 |
inproceedings | xu-reitter-2017-spectral | Spectral Analysis of Information Density in Dialogue Predicts Collaborative Task Performance | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1058/ | Xu, Yang and Reitter, David | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 623--633 | We propose a perspective on dialogue that focuses on relative information contributions of conversation partners as a key to successful communication. We predict the success of collaborative tasks in English and Danish corpora of task-oriented dialogue. Two features are extracted from the frequency domain representations of the lexical entropy series of each interlocutor, power spectrum overlap (PSO) and relative phase (RP). We find that PSO is a negative predictor of task success, while RP is a positive one. An SVM with these features significantly improved on previous task success prediction models. Our findings suggest that the strategic distribution of information density between interlocutors is relevant to task success. | null | null | 10.18653/v1/P17-1058 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,488 |
inproceedings | ghosh-etal-2017-affect | Affect-LM: A Neural Language Model for Customizable Affective Text Generation | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1059/ | Ghosh, Sayan and Chollet, Mathieu and Laksana, Eugene and Morency, Louis-Philippe and Scherer, Stefan | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 634--642 | Human verbal communication includes affective messages which are conveyed through use of emotionally colored words. There has been a lot of research effort in this direction but the problem of integrating state-of-the-art neural language models with affective information remains an area ripe for exploration. In this paper, we propose an extension to an LSTM (Long Short-Term Memory) language model for generation of conversational text, conditioned on affect categories. Our proposed model, Affect-LM enables us to customize the degree of emotional content in generated sentences through an additional design parameter. Perception studies conducted using Amazon Mechanical Turk show that Affect-LM can generate naturally looking emotional sentences without sacrificing grammatical correctness. Affect-LM also learns affect-discriminative word representations, and perplexity experiments show that additional affective information in conversational text can improve language model prediction. | null | null | 10.18653/v1/P17-1059 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,489 |
inproceedings | kim-etal-2017-domain | Domain Attention with an Ensemble of Experts | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1060/ | Kim, Young-Bum and Stratos, Karl and Kim, Dongchan | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 643--653 | An important problem in domain adaptation is to quickly generalize to a new domain with limited supervision given K existing domains. One approach is to retrain a global model across all K + 1 domains using standard techniques, for instance Daumé III (2009). However, it is desirable to adapt without having to re-estimate a global model from scratch each time a new domain with potentially new intents and slots is added. We describe a solution based on attending an ensemble of domain experts. We assume K domain specific intent and slot models trained on respective domains. When given domain K + 1, our model uses a weighted combination of the K domain experts' feedback along with its own opinion to make predictions on the new domain. In experiments, the model significantly outperforms baselines that do not use domain adaptation and also performs better than the full retraining approach. | null | null | 10.18653/v1/P17-1060 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,490 |
inproceedings | zhao-etal-2017-learning | Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1061/ | Zhao, Tiancheng and Zhao, Ran and Eskenazi, Maxine | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 654--664 | While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder from word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that capture the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved through introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence of discourse-level decision-making. | null | null | 10.18653/v1/P17-1061 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,491 |
inproceedings | williams-etal-2017-hybrid | Hybrid Code Networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1062/ | Williams, Jason D. and Asadi, Kavosh and Zweig, Geoffrey | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 665--677 | End-to-end learning of recurrent neural networks (RNNs) is an attractive solution for dialog systems; however, current techniques are data-intensive and require thousands of dialogs to learn simple behaviors. We introduce Hybrid Code Networks (HCNs), which combine an RNN with domain-specific knowledge encoded as software and system action templates. Compared to existing end-to-end approaches, HCNs considerably reduce the amount of training data required, while retaining the key benefit of inferring a latent representation of dialog state. In addition, HCNs can be optimized with supervised learning, reinforcement learning, or a mixture of both. HCNs attain state-of-the-art performance on the bAbI dialog dataset (Bordes and Weston, 2016), and outperform two commercially deployed customer-facing dialog systems at our company. | null | null | 10.18653/v1/P17-1062 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,492 |
inproceedings | villalba-etal-2017-generating | Generating Contrastive Referring Expressions | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1063/ | Villalba, Mart{\'i}n and Teichmann, Christoph and Koller, Alexander | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 678--687 | The referring expressions (REs) produced by a natural language generation (NLG) system can be misunderstood by the hearer, even when they are semantically correct. In an interactive setting, the NLG system can try to recognize such misunderstandings and correct them. We present an algorithm for generating corrective REs that use contrastive focus ({\textquotedblleft}no, the BLUE button{\textquotedblright}) to emphasize the information the hearer most likely misunderstood. We show empirically that these contrastive REs are preferred over REs without contrast marking. | null | null | 10.18653/v1/P17-1063 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,493 |
inproceedings | li-etal-2017-modeling | Modeling Source Syntax for Neural Machine Translation | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1064/ | Li, Junhui and Xiong, Deyi and Tu, Zhaopeng and Zhu, Muhua and Zhang, Min and Zhou, Guodong | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 688--697 | Even though a linguistics-free sequence to sequence model in neural machine translation (NMT) has a certain capability of implicitly learning syntactic information of source sentences, this paper shows that source syntax can be explicitly incorporated into NMT effectively to provide further improvements. Specifically, we linearize parse trees of source sentences to obtain structural label sequences. On this basis, we propose three different sorts of encoders to incorporate source syntax into NMT: 1) a Parallel RNN encoder that learns word and label annotation vectors in parallel; 2) a Hierarchical RNN encoder that learns word and label annotation vectors in a two-level hierarchy; and 3) a Mixed RNN encoder that stitchingly learns word and label annotation vectors over sequences where words and labels are mixed. Experimentation on Chinese-to-English translation demonstrates that all three proposed syntactic encoders are able to improve translation accuracy. It is interesting to note that the simplest one, i.e., the Mixed RNN encoder, yields the best performance with a significant improvement of 1.4 BLEU points. Moreover, an in-depth analysis from several perspectives is provided to reveal how source syntax benefits NMT. | null | null | 10.18653/v1/P17-1064 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,494
inproceedings | wu-etal-2017-sequence | Sequence-to-Dependency Neural Machine Translation | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1065/ | Wu, Shuangzhi and Zhang, Dongdong and Yang, Nan and Li, Mu and Zhou, Ming | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 698--707 | Nowadays a typical Neural Machine Translation (NMT) model generates translations from left to right as a linear sequence, in which the latent syntactic structures of the target sentences are not explicitly considered. Inspired by the success of using syntactic knowledge of the target language to improve statistical machine translation, in this paper we propose a novel Sequence-to-Dependency Neural Machine Translation (SD-NMT) method, in which the target word sequence and its corresponding dependency structure are jointly constructed and modeled, and this structure is used as context to facilitate word generation. Experimental results show that the proposed method significantly outperforms state-of-the-art baselines on Chinese-English and Japanese-English translation tasks. | null | null | 10.18653/v1/P17-1065 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,495
inproceedings | ma-etal-2017-detect | Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1066/ | Ma, Jing and Gao, Wei and Wong, Kam-Fai | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 708--717 | How does fake news go viral via social media? How does its propagation pattern differ from that of real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, in microblog posts based on their propagation structure. We first model microblog post diffusion with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-of-the-art rumor detection models. | null | null | 10.18653/v1/P17-1066 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,496
inproceedings | abdul-mageed-ungar-2017-emonet | {E}mo{N}et: Fine-Grained Emotion Detection with Gated Recurrent Neural Networks | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1067/ | Abdul-Mageed, Muhammad and Ungar, Lyle | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 718--728 | Accurate detection of emotion from natural language has applications ranging from building emotional chatbots to better understanding individuals and their lives. However, progress on emotion detection has been hampered by the absence of large labeled datasets. In this work, we build a very large dataset for fine-grained emotions and develop deep learning models on it. We achieve a new state-of-the-art on 24 fine-grained types of emotions (with an average accuracy of 87.58{\%}). We also extend the task beyond emotion types to model Robert Plutchik`s 8 primary emotion dimensions, achieving a superior accuracy of 95.68{\%}. | null | null | 10.18653/v1/P17-1067 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,497
inproceedings | preotiuc-pietro-etal-2017-beyond | Beyond Binary Labels: Political Ideology Prediction of {T}witter Users | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1068/ | Preo{\c{t}}iuc-Pietro, Daniel and Liu, Ye and Hopkins, Daniel and Ungar, Lyle | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 729--740 | Automatic political orientation prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US. This study examines users' political ideology using a seven-point scale which enables us to identify politically moderate and neutral users {--} groups which are of particular interest to political scientists and pollsters. Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the groups of politically engaged users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users. Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords. Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups. | null | null | 10.18653/v1/P17-1068 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,498 |
inproceedings | johnson-etal-2017-leveraging | Leveraging Behavioral and Social Information for Weakly Supervised Collective Classification of Political Discourse on {T}witter | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1069/ | Johnson, Kristen and Jin, Di and Goldwasser, Dan | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 741--752 | Framing is a political strategy in which politicians carefully word their statements in order to control public perception of issues. Previous work exploring political framing typically analyzes frame usage in longer texts, such as congressional speeches. We present a collection of weakly supervised models which harness collective classification to predict the frames used in political discourse on the microblogging platform, Twitter. Our global probabilistic models show that by combining both lexical features of tweets and network-based behavioral features of Twitter, we are able to increase the average unsupervised F1 score by 21.52 points over a lexical baseline alone. | null | null | 10.18653/v1/P17-1069 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,499
inproceedings | ji-etal-2017-nested | A Nested Attention Neural Hybrid Model for Grammatical Error Correction | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1070/ | Ji, Jianshu and Wang, Qinlong and Toutanova, Kristina and Gong, Yongen and Truong, Steven and Gao, Jianfeng | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 753--762 | Grammatical error correction (GEC) systems strive to correct both global errors in word order and usage, and local errors in spelling and inflection. Further developing upon recent work on neural machine translation, we propose a new hybrid neural model with nested attention layers for GEC. Experiments show that the new model can effectively correct errors of both types by incorporating word and character-level information, and that the model significantly outperforms previous neural models for GEC as measured on the standard CoNLL-14 benchmark dataset. Further analysis also shows that the superiority of the proposed model can be largely attributed to the use of the nested attention mechanism, which has proven particularly effective in correcting local errors that involve small edits in orthography. | null | null | 10.18653/v1/P17-1070 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,500
inproceedings | mrabet-etal-2017-textflow | {T}ext{F}low: A Text Similarity Measure based on Continuous Sequences | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1071/ | Mrabet, Yassine and Kilicoglu, Halil and Demner-Fushman, Dina | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 763--772 | Text similarity measures are used in multiple tasks such as plagiarism detection, information ranking and recognition of paraphrases and textual entailment. While recent advances in deep learning highlighted the relevance of sequential models in natural language generation, existing similarity measures do not fully exploit the sequential nature of language. Examples of such similarity measures include n-gram and skip-gram overlap, which rely on distinct slices of the input texts. In this paper we present a novel text similarity measure inspired by a common representation in DNA sequence alignment algorithms. The new measure, called TextFlow, represents input text pairs as continuous curves and uses both the actual position of the words and sequence matching to compute the similarity value. Our experiments on 8 different datasets show very encouraging results in paraphrase detection, textual entailment recognition and ranking relevance. | null | null | 10.18653/v1/P17-1071 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,501
inproceedings | tan-etal-2017-friendships | Friendships, Rivalries, and Trysts: Characterizing Relations between Ideas in Texts | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1072/ | Tan, Chenhao and Card, Dallas and Smith, Noah A. | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 773--783 | Understanding how ideas relate to each other is a fundamental question in many domains, ranging from intellectual history to public communication. Because ideas are naturally embedded in texts, we propose the first framework to systematically characterize the relations between ideas based on their occurrence in a corpus of documents, independent of how these ideas are represented. Combining two statistics{---}cooccurrence within documents and prevalence correlation over time{---}our approach reveals a number of different ways in which ideas can cooperate and compete. For instance, two ideas can closely track each other`s prevalence over time, and yet rarely cooccur, almost like a {\textquotedblleft}cold war{\textquotedblright} scenario. We observe that pairwise cooccurrence and prevalence correlation exhibit different distributions. We further demonstrate that our approach is able to uncover intriguing relations between ideas through in-depth case studies on news articles and research papers. | null | null | 10.18653/v1/P17-1072 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,502 |
inproceedings | wroblewska-krasnowska-kieras-2017-polish | {P}olish evaluation dataset for compositional distributional semantics models | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1073/ | Wr{\'o}blewska, Alina and Krasnowska-Kiera{\'s}, Katarzyna | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 784--792 | The paper presents a procedure for building an evaluation dataset for the validation of compositional distributional semantics models estimated for languages other than English. The procedure generally builds on steps designed to assemble the SICK corpus, which contains pairs of English sentences annotated for semantic relatedness and entailment, because we aim at building a comparable dataset. However, the implementation of particular building steps significantly differs from the original SICK design assumptions, which is caused by both the lack of necessary extraneous resources for the investigated language and the need for language-specific transformation rules. The designed procedure is verified on Polish, a fusional language with a relatively free word order, and contributes to building a Polish evaluation dataset. The resource consists of 10K sentence pairs which are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish. | null | null | 10.18653/v1/P17-1073 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,503
inproceedings | bryant-etal-2017-automatic | Automatic Annotation and Evaluation of Error Types for Grammatical Error Correction | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1074/ | Bryant, Christopher and Felice, Mariano and Briscoe, Ted | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 793--805 | Until now, error type performance for Grammatical Error Correction (GEC) systems could only be measured in terms of recall because system output is not annotated. To overcome this problem, we introduce ERRANT, a grammatical ERRor ANnotation Toolkit designed to automatically extract edits from parallel original and corrected sentences and classify them according to a new, dataset-agnostic, rule-based framework. This not only facilitates error type evaluation at different levels of granularity, but can also be used to reduce annotator workload and standardise existing GEC datasets. Human experts rated the automatic edits as {\textquotedblleft}Good{\textquotedblright} or {\textquotedblleft}Acceptable{\textquotedblright} in at least 95{\%} of cases, so we applied ERRANT to the system output of the CoNLL-2014 shared task to carry out a detailed error type analysis for the first time. | null | null | 10.18653/v1/P17-1074 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,504 |
inproceedings | sugawara-etal-2017-evaluation | Evaluation Metrics for Machine Reading Comprehension: Prerequisite Skills and Readability | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1075/ | Sugawara, Saku and Kido, Yusuke and Yokono, Hikaru and Aizawa, Akiko | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 806--817 | Knowing the quality of reading comprehension (RC) datasets is important for the development of natural-language understanding systems. In this study, two classes of metrics were adopted for evaluating RC datasets: prerequisite skills and readability. We applied these classes to six existing datasets, including MCTest and SQuAD, and highlighted the characteristics of the datasets according to each metric and the correlation between the two classes. Our dataset analysis suggests that the readability of RC datasets does not directly affect the question difficulty and that it is possible to create an RC dataset that is easy to read but difficult to answer. | null | null | 10.18653/v1/P17-1075 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,505 |
inproceedings | stern-etal-2017-minimal | A Minimal Span-Based Neural Constituency Parser | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1076/ | Stern, Mitchell and Andreas, Jacob and Klein, Dan | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 818--827 | In this work, we present a minimal neural model for constituency parsing based on independent scoring of labels and spans. We show that this model is not only compatible with classical dynamic programming techniques, but also admits a novel greedy top-down inference algorithm based on recursive partitioning of the input. We demonstrate empirically that both prediction schemes are competitive with recent work, and when combined with basic extensions to the scoring model are capable of achieving state-of-the-art single-model performance on the Penn Treebank (91.79 F1) and strong performance on the French Treebank (82.23 F1). | null | null | 10.18653/v1/P17-1076 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,506 |
inproceedings | sun-etal-2017-semantic | Semantic Dependency Parsing via Book Embedding | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1077/ | Sun, Weiwei and Cao, Junjie and Wan, Xiaojun | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 828--838 | We model a dependency graph as a book, a particular kind of topological space, for semantic dependency parsing. The spine of the book is made up of a sequence of words, and each page contains a subset of noncrossing arcs. To build a semantic graph for a given sentence, we design new Maximum Subgraph algorithms to generate noncrossing graphs on each page, and a Lagrangian Relaxation-based algorithm to combine pages into a book. Experiments demonstrate the effectiveness of the book embedding framework across a wide range of conditions. Our parser obtains comparable results with a state-of-the-art transition-based parser. | null | null | 10.18653/v1/P17-1077 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,507
inproceedings | yang-etal-2017-neural-word | Neural Word Segmentation with Rich Pretraining | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1078/ | Yang, Jie and Zhang, Yue and Dong, Fei | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 839--849 | Neural word segmentation research has benefited from large-scale raw texts by leveraging them for pretraining character and word embeddings. On the other hand, statistical segmentation research has exploited richer sources of external information, such as punctuation, automatic segmentation and POS. We investigate the effectiveness of a range of external training sources for neural word segmentation by building a modular segmentation model, pretraining the most important submodule using rich external sources. Results show that such pretraining significantly improves the model, leading to accuracies competitive to the best methods on six benchmarks. | null | null | 10.18653/v1/P17-1078 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,508 |
inproceedings | oda-etal-2017-neural | Neural Machine Translation via Binary Code Prediction | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1079/ | Oda, Yusuke and Arthur, Philip and Neubig, Graham and Yoshino, Koichiro and Nakamura, Satoshi | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 850--860 | In this paper, we propose a new method for calculating the output layer in neural machine translation systems. The method is based on predicting a binary code for each word and can reduce computation time/memory requirements of the output layer to be logarithmic in vocabulary size in the best case. In addition, we also introduce two advanced approaches to improve the robustness of the proposed model: using error-correcting codes and combining softmax and binary codes. Experiments on two English-Japanese bidirectional translation tasks show that the proposed models achieve BLEU scores approaching those of the softmax, while reducing memory usage to less than 1/10 and improving decoding speed on CPUs by 5x to 10x. | null | null | 10.18653/v1/P17-1079 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,509
inproceedings | belinkov-etal-2017-neural | What do Neural Machine Translation Models Learn about Morphology? | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1080/ | Belinkov, Yonatan and Durrani, Nadir and Dalvi, Fahim and Sajjad, Hassan and Glass, James | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 861--872 | Neural machine translation (MT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture. However, little is known about what these models learn about source and target languages during the training process. In this work, we analyze the representations learned by neural MT models at various levels of granularity and empirically evaluate the quality of the representations for learning morphology through extrinsic part-of-speech and morphological tagging tasks. We conduct a thorough investigation along several parameters: word-based vs. character-based representations, depth of the encoding layer, the identity of the target language, and encoder vs. decoder representations. Our data-driven, quantitative evaluation sheds light on important aspects in the neural MT system and its ability to capture word structure. | null | null | 10.18653/v1/P17-1080 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,510 |
inproceedings | poria-etal-2017-context | Context-Dependent Sentiment Analysis in User-Generated Videos | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1081/ | Poria, Soujanya and Cambria, Erik and Hazarika, Devamanyu and Majumder, Navonil and Zadeh, Amir and Morency, Louis-Philippe | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 873--883 | Multimodal sentiment analysis is a developing area of research, which involves the identification of sentiments in videos. Current research considers utterances as independent entities, i.e., ignores the interdependencies and relations among the utterances of a video. In this paper, we propose a LSTM-based model that enables utterances to capture contextual information from their surroundings in the same video, thus aiding the classification process. Our method shows 5-10{\%} performance improvement over the state of the art and high robustness to generalizability. | null | null | 10.18653/v1/P17-1081 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,511 |
inproceedings | pavalanathan-etal-2017-multidimensional | A Multidimensional Lexicon for Interpersonal Stancetaking | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1082/ | Pavalanathan, Umashanthi and Fitzpatrick, Jim and Kiesling, Scott and Eisenstein, Jacob | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 884--895 | The sociolinguistic construct of stancetaking describes the activities through which discourse participants create and signal relationships to their interlocutors, to the topic of discussion, and to the talk itself. Stancetaking underlies a wide range of interactional phenomena, relating to formality, politeness, affect, and subjectivity. We present a computational approach to stancetaking, in which we build a theoretically-motivated lexicon of stance markers, and then use multidimensional analysis to identify a set of underlying stance dimensions. We validate these dimensions intrinsically and extrinsically, showing that they are internally coherent, match pre-registered hypotheses, and correlate with social phenomena. | null | null | 10.18653/v1/P17-1082 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,512
inproceedings | lund-etal-2017-tandem | Tandem Anchoring: a Multiword Anchor Approach for Interactive Topic Modeling | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1083/ | Lund, Jeffrey and Cook, Connor and Seppi, Kevin and Boyd-Graber, Jordan | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 896--905 | Interactive topic models are powerful tools for those seeking to understand large collections of text. However, existing sampling-based interactive topic modeling approaches scale poorly to large data sets. Anchor methods, which use a single word to uniquely identify a topic, offer the speed needed for interactive work but lack both a mechanism to inject prior knowledge and the intuitive semantics needed for user-facing applications. We propose combinations of words as anchors, going beyond existing single word anchor algorithms{---}an approach we call {\textquotedblleft}Tandem Anchors{\textquotedblright}. We begin with a synthetic investigation of this approach, then apply it to interactive topic modeling in a user study and compare it to interactive and non-interactive approaches. Tandem anchors are faster and more intuitive than existing interactive approaches. | null | null | 10.18653/v1/P17-1083 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,513
inproceedings | bakhshandeh-allen-2017-apples | Apples to Apples: Learning Semantics of Common Entities Through a Novel Comprehension Task | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1084/ | Bakhshandeh, Omid and Allen, James | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 906--916 | Understanding common entities and their attributes is a primary requirement for any system that comprehends natural language. In order to enable learning about common entities, we introduce a novel machine comprehension task, GuessTwo: given a short paragraph comparing different aspects of two real-world semantically-similar entities, a system should guess what those entities are. Accomplishing this task requires deep language understanding which enables inference, connecting each comparison paragraph to different levels of knowledge about world entities and their attributes. So far we have crowdsourced a dataset of more than 14K comparison paragraphs comparing entities from a variety of categories such as fruits and animals. We have designed two schemes for evaluation: open-ended, and binary-choice prediction. For benchmarking further progress in the task, we have collected a set of paragraphs as the test set on which humans can accomplish the task with an accuracy of 94.2{\%} on open-ended prediction. We have implemented various models for tackling the task, ranging from semantic-driven to neural models. The semantic-driven approach outperforms the neural models; however, the results indicate that the task is very challenging across the models. | null | null | 10.18653/v1/P17-1084 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,514
inproceedings | katiyar-cardie-2017-going | Going out on a limb: Joint Extraction of Entity Mentions and Relations without Dependency Trees | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1085/ | Katiyar, Arzoo and Cardie, Claire | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 917--928 | We present a novel attention-based recurrent neural network for joint extraction of entity mentions and relations. We show that attention along with a long short-term memory (LSTM) network can extract semantic relations between entity mentions without having access to dependency trees. Experiments on Automatic Content Extraction (ACE) corpora show that our model significantly outperforms the feature-based joint model by Li and Ji (2014). We also compare our model with an end-to-end tree-based LSTM model (SPTree) by Miwa and Bansal (2016) and show that our model performs within 1{\%} on entity mentions and 2{\%} on relations. Our fine-grained analysis also shows that our model performs significantly better on Agent-Artifact relations, while SPTree performs better on Physical and Part-Whole relations. | null | null | 10.18653/v1/P17-1085 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,515
inproceedings | wang-etal-2017-naturalizing | Naturalizing a Programming Language via Interactive Learning | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1086/ | Wang, Sida I. and Ginn, Samuel and Liang, Percy and Manning, Christopher D. | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 929--938 | Our goal is to create a convenient natural language interface for performing well-specified but complex actions such as analyzing data, manipulating text, and querying databases. However, existing natural language interfaces for such tasks are quite primitive compared to the power one wields with a programming language. To bridge this gap, we start with a core programming language and allow users to {\textquotedblleft}naturalize{\textquotedblright} the core language incrementally by defining alternative, more natural syntax and increasingly complex concepts in terms of compositions of simpler ones. In a voxel world, we show that a community of users can simultaneously teach a common system a diverse language and use it to build hundreds of complex voxel structures. Over the course of three days, these users went from using only the core language to using the naturalized language in 85.9{\%} of the last 10K utterances. | null | null | 10.18653/v1/P17-1086 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,516 |
inproceedings | sedoc-etal-2017-semantic | Semantic Word Clusters Using Signed Spectral Clustering | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1087/ | Sedoc, Jo{\~a}o and Gallier, Jean and Foster, Dean and Ungar, Lyle | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 939--949 | Vector space representations of words capture many aspects of word similarity, but such methods tend to produce vector spaces in which antonyms (as well as synonyms) are close to each other. For spectral clustering using such word embeddings, words are points in a vector space where synonyms are linked with positive weights, while antonyms are linked with negative weights. We present a new signed spectral normalized graph cut algorithm, \textit{signed clustering}, that overlays existing thesauri upon distributionally derived vector representations of words, so that antonym relationships between word pairs are represented by negative weights. Our signed clustering algorithm produces clusters of words that simultaneously capture distributional and synonym relations. By using randomized spectral decomposition (Halko et al., 2011) and sparse matrices, our method is both fast and scalable. We validate our clusters using datasets containing human judgments of word pair similarities and show the benefit of using our word clusters for sentiment prediction. | null | null | 10.18653/v1/P17-1087 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,517 |
inproceedings | xie-etal-2017-interpretable | An Interpretable Knowledge Transfer Model for Knowledge Base Completion | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1088/ | Xie, Qizhe and Ma, Xuezhe and Dai, Zihang and Hovy, Eduard | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 950--962 | Knowledge bases are important resources for a variety of natural language processing tasks but suffer from incompleteness. We propose a novel embedding model, ITransF, to perform knowledge base completion. Equipped with a sparse attention mechanism, ITransF discovers hidden concepts of relations and transfers statistical strength through the sharing of concepts. Moreover, the learned associations between relations and concepts, which are represented by sparse attention vectors, can be interpreted easily. We evaluate ITransF on two benchmark datasets{---}WN18 and FB15k{---}for knowledge base completion and obtain improvements on both the mean rank and Hits@10 metrics over all baselines that do not use additional information. | null | null | 10.18653/v1/P17-1088 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,518
inproceedings | iyer-etal-2017-learning | Learning a Neural Semantic Parser from User Feedback | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1089/ | Iyer, Srinivasan and Konstas, Ioannis and Cheung, Alvin and Krishnamurthy, Jayant and Zettlemoyer, Luke | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 963--973 | We present an approach to rapidly and easily build natural language interfaces to databases for new domains, whose performance improves over time based on user feedback, and requires minimal intervention. To achieve this, we adapt neural sequence models to map utterances directly to SQL with its full expressivity, bypassing any intermediate meaning representations. These models are immediately deployed online to solicit feedback from real users to flag incorrect queries. Finally, the popularity of SQL facilitates gathering annotations for incorrect predictions using the crowd, which is directly used to improve our models. This complete feedback loop, without intermediate representations or database specific engineering, opens up new ways of building high quality semantic parsers. Experiments suggest that this approach can be deployed quickly for any new target domain, as we show by learning a semantic parser for an online academic database from scratch. | null | null | 10.18653/v1/P17-1089 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,519 |
inproceedings | qin-etal-2017-joint | Joint Modeling of Content and Discourse Relations in Dialogues | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1090/ | Qin, Kechen and Wang, Lu and Kim, Joseph | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 974--984 | We present a joint modeling approach to identify salient discussion points in spoken meetings as well as to label the discourse relations between speaker turns. A variation of our model is also discussed for the case where discourse relations are treated as latent variables. Experimental results on two popular meeting corpora show that our joint model can outperform state-of-the-art approaches for both phrase-based content selection and discourse relation prediction tasks. We also evaluate our model on predicting the consistency among team members' understanding of their group decisions. Classifiers trained with features constructed from our model achieve significantly better predictive performance than the state of the art. | null | null | 10.18653/v1/P17-1090 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,520
inproceedings | niculae-etal-2017-argument | Argument Mining with Structured {SVM}s and {RNN}s | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1091/ | Niculae, Vlad and Park, Joonsuk and Cardie, Claire | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 985--995 | We propose a novel factor graph model for argument mining, designed for settings in which the argumentative relations in a document do not necessarily form a tree structure. (This is the case in over 20{\%} of the web comments dataset we release.) Our model jointly learns elementary unit type classification and argumentative relation prediction. Moreover, our model supports SVM and RNN parametrizations, can enforce structure constraints (e.g., transitivity), and can express dependencies between adjacent relations and propositions. Our approaches outperform unstructured baselines in both web comments and argumentative essay datasets. | null | null | 10.18653/v1/P17-1091 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,521 |
inproceedings | ji-smith-2017-neural | Neural Discourse Structure for Text Categorization | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1092/ | Ji, Yangfeng and Smith, Noah A. | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 996--1005 | We show that discourse structure, as defined by Rhetorical Structure Theory and provided by an existing discourse parser, benefits text categorization. Our approach uses a recursive neural network and a newly proposed attention mechanism to compute a representation of the text that focuses on salient content, from the perspective of both RST and the task. Experiments consider variants of the approach and illustrate its strengths and weaknesses. | null | null | 10.18653/v1/P17-1092 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,522 |
inproceedings | qin-etal-2017-adversarial | Adversarial Connective-exploiting Networks for Implicit Discourse Relation Classification | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1093/ | Qin, Lianhui and Zhang, Zhisong and Zhao, Hai and Hu, Zhiting and Xing, Eric | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1006--1017 | Implicit discourse relation classification is of great challenge due to the lack of connectives as strong linguistic cues, which motivates the use of annotated implicit connectives to improve the recognition. We propose a feature imitation framework in which an implicit relation network is driven to learn from another neural network with access to connectives, and thus encouraged to extract similarly salient features for accurate classification. We develop an adversarial model to enable an adaptive imitation scheme through competition between the implicit network and a rival feature discriminator. Our method effectively transfers discriminability of connectives to the implicit features, and achieves state-of-the-art performance on the PDTB benchmark. | null | null | 10.18653/v1/P17-1093 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,523 |
inproceedings | haponchyk-moschitti-2017-dont | Don`t understand a measure? Learn it: Structured Prediction for Coreference Resolution optimizing its measures | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1094/ | Haponchyk, Iryna and Moschitti, Alessandro | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1018--1028 | An interesting aspect of structured prediction is the evaluation of an output structure against the gold standard. Especially in the loss-augmented setting, the need of finding the max-violating constraint has severely limited the expressivity of effective loss functions. In this paper, we trade off exact computation for enabling the use and study of more complex loss functions for coreference resolution. Most interestingly, we show that such functions can be (i) automatically learned also from controversial but commonly accepted coreference measures, e.g., MELA, and (ii) successfully used in learning algorithms. The accurate model comparison on the standard CoNLL-2012 setting shows the benefit of more expressive loss functions. | null | null | 10.18653/v1/P17-1094 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,524 |
inproceedings | andrews-etal-2017-bayesian | {B}ayesian Modeling of Lexical Resources for Low-Resource Settings | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1095/ | Andrews, Nicholas and Dredze, Mark and Van Durme, Benjamin and Eisner, Jason | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1029--1039 | Lexical resources such as dictionaries and gazetteers are often used as auxiliary data for tasks such as part-of-speech induction and named-entity recognition. However, discriminative training with lexical features requires annotated data to reliably estimate the lexical feature weights and may result in overfitting the lexical features at the expense of features which generalize better. In this paper, we investigate a more robust approach: we stipulate that the lexicon is the result of an assumed generative process. Practically, this means that we may treat the lexical resources as observations under the proposed generative model. The lexical resources provide training data for the generative model without requiring separate data to estimate lexical feature weights. We evaluate the proposed approach in two settings: part-of-speech induction and low-resource named-entity recognition. | null | null | 10.18653/v1/P17-1095 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,525 |
inproceedings | yang-etal-2017-semi | Semi-Supervised {QA} with Generative Domain-Adaptive Nets | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1096/ | Yang, Zhilin and Hu, Junjie and Salakhutdinov, Ruslan and Cohen, William | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1040--1050 | We study the problem of semi-supervised question answering{---}utilizing unlabeled text to boost the performance of question answering models. We propose a novel training framework, the \textit{Generative Domain-Adaptive Nets}. In this framework, we train a generative model to generate questions based on the unlabeled text, and combine model-generated questions with human-generated questions for training question answering models. We develop novel domain adaptation algorithms, based on reinforcement learning, to alleviate the discrepancy between the model-generated data distribution and the human-generated data distribution. Experiments show that our proposed framework obtains substantial improvement from unlabeled text. | null | null | 10.18653/v1/P17-1096 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,526 |
inproceedings | guu-etal-2017-language | From Language to Programs: Bridging Reinforcement Learning and Maximum Marginal Likelihood | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1097/ | Guu, Kelvin and Pasupat, Panupong and Liu, Evan and Liang, Percy | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1051--1062 | Our goal is to learn a semantic parser that maps natural language utterances into executable programs when only indirect supervision is available: examples are labeled with the correct execution result, but not the program itself. Consequently, we must search the space of programs for those that output the correct result, while not being misled by \textit{spurious programs}: incorrect programs that coincidentally output the correct result. We connect two common learning paradigms, reinforcement learning (RL) and maximum marginal likelihood (MML), and then present a new learning algorithm that combines the strengths of both. The new algorithm guards against spurious programs by combining the systematic search traditionally employed in MML with the randomized exploration of RL, and by updating parameters such that probability is spread more evenly across consistent programs. We apply our learning algorithm to a new neural semantic parser and show significant gains over existing state-of-the-art results on a recent context-dependent semantic parsing task. | null | null | 10.18653/v1/P17-1097 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,527 |
inproceedings | nema-etal-2017-diversity | Diversity driven attention model for query-based abstractive summarization | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1098/ | Nema, Preksha and Khapra, Mitesh M. and Laha, Anirban and Ravindran, Balaraman | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1063--1072 | Abstractive summarization aims to generate a shorter version of the document covering all the salient points in a compact and coherent fashion. On the other hand, query-based summarization highlights those points that are relevant in the context of a given query. The encode-attend-decode paradigm has achieved notable success in machine translation, extractive summarization, dialog systems, etc., but it suffers from the drawback of generating repeated phrases. In this work we propose a model for the query-based summarization task based on the encode-attend-decode paradigm with two key additions: (i) a query attention model (in addition to the document attention model) which learns to focus on different portions of the query at different time steps (instead of using a static representation for the query) and (ii) a new diversity based attention model which aims to alleviate the problem of repeating phrases in the summary. In order to enable the testing of this model we introduce a new query-based summarization dataset building on Debatepedia. Our experiments show that with these two additions the proposed model clearly outperforms vanilla encode-attend-decode models, with a gain of 28{\%} (absolute) in ROUGE-L scores. | null | null | 10.18653/v1/P17-1098 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,528
inproceedings | see-etal-2017-get | Get To The Point: Summarization with Pointer-Generator Networks | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1099/ | See, Abigail and Liu, Peter J. and Manning, Christopher D. | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1073--1083 | Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points. | null | null | 10.18653/v1/P17-1099 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,529 |
inproceedings | peyrard-eckle-kohler-2017-supervised | Supervised Learning of Automatic Pyramid for Optimization-Based Multi-Document Summarization | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1100/ | Peyrard, Maxime and Eckle-Kohler, Judith | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1084--1094 | We present a new supervised framework that learns to estimate automatic Pyramid scores and uses them for optimization-based extractive multi-document summarization. For learning automatic Pyramid scores, we developed a method for automatic training data generation which is based on a genetic algorithm using automatic Pyramid as the fitness function. Our experimental evaluation shows that our new framework significantly outperforms strong baselines regarding automatic Pyramid, and that there is much room for improvement in comparison with the upper-bound for automatic Pyramid. | null | null | 10.18653/v1/P17-1100 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,530 |
inproceedings | zhou-etal-2017-selective | Selective Encoding for Abstractive Sentence Summarization | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1101/ | Zhou, Qingyu and Yang, Nan and Wei, Furu and Zhou, Ming | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1095--1104 | We propose a selective encoding model to extend the sequence-to-sequence framework for abstractive sentence summarization. It consists of a sentence encoder, a selective gate network, and an attention-equipped decoder. The sentence encoder and decoder are built with recurrent neural networks. The selective gate network constructs a second-level sentence representation by controlling the information flow from encoder to decoder. This second-level representation is tailored for the sentence summarization task, which leads to better performance. We evaluate our model on the English Gigaword, DUC 2004 and MSR abstractive sentence summarization datasets. The experimental results show that the proposed selective encoding model outperforms the state-of-the-art baseline models. | null | null | 10.18653/v1/P17-1101 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,531
inproceedings | florescu-caragea-2017-positionrank | {P}osition{R}ank: An Unsupervised Approach to Keyphrase Extraction from Scholarly Documents | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1102/ | Florescu, Corina and Caragea, Cornelia | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1105--1115 | The large and growing amounts of online scholarly data present both challenges and opportunities to enhance knowledge discovery. One such challenge is to automatically extract a small set of keyphrases from a document that can accurately describe the document's content and can facilitate fast information processing. In this paper, we propose PositionRank, an unsupervised model for keyphrase extraction from scholarly documents that incorporates information from all positions of a word's occurrences into a biased PageRank. Our model obtains remarkable improvements in performance over PageRank models that do not take into account word positions as well as over strong baselines for this task. Specifically, on several datasets of research papers, PositionRank achieves improvements as high as 29.09{\%}. | null | null | 10.18653/v1/P17-1102 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,532
inproceedings | lowe-etal-2017-towards | Towards an Automatic {T}uring Test: Learning to Evaluate Dialogue Responses | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1103/ | Lowe, Ryan and Noseworthy, Michael and Serban, Iulian Vlad and Angelard-Gontier, Nicolas and Bengio, Yoshua and Pineau, Joelle | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1116--1126 | Automatically evaluating the quality of dialogue responses for unstructured domains is a challenging problem. Unfortunately, existing automatic evaluation metrics are biased and correlate very poorly with human judgements of response quality (Liu et al., 2016). Yet having an accurate automatic evaluation procedure is crucial for dialogue research, as it allows rapid prototyping and testing of new models with fewer expensive human evaluations. In response to this challenge, we formulate automatic dialogue evaluation as a learning problem. We present an evaluation model (ADEM) that learns to predict human-like scores for input responses, using a new dataset of human response scores. We show that the ADEM model's predictions correlate significantly, and at a level much higher than word-overlap metrics such as BLEU, with human judgements at both the utterance and system level. We also show that ADEM can generalize to evaluating dialogue models unseen during training, an important step for automatic dialogue evaluation. | null | null | 10.18653/v1/P17-1103 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,533
inproceedings | hershcovich-etal-2017-transition | A Transition-Based Directed Acyclic Graph Parser for {UCCA} | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1104/ | Hershcovich, Daniel and Abend, Omri and Rappoport, Ari | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1127--1138 | We present the first parser for UCCA, a cross-linguistically applicable framework for semantic representation, which builds on extensive typological work and supports rapid annotation. UCCA poses a challenge for existing parsing techniques, as it exhibits reentrancy (resulting in DAG structures), discontinuous structures and non-terminal nodes corresponding to complex semantic units. To our knowledge, the conjunction of these formal properties is not supported by any existing parser. Our transition-based parser, which uses a novel transition set and features based on bidirectional LSTMs, has value not just for UCCA parsing: its ability to handle more general graph structures can inform the development of parsers for other semantic DAG structures, and in languages that frequently use discontinuous structures. | null | null | 10.18653/v1/P17-1104 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,534 |
inproceedings | rabinovich-etal-2017-abstract | Abstract Syntax Networks for Code Generation and Semantic Parsing | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1105/ | Rabinovich, Maxim and Stern, Mitchell and Klein, Dan | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1139--1149 | Tasks like code generation and semantic parsing require mapping unstructured (or partially structured) inputs to well-formed, executable outputs. We introduce abstract syntax networks, a modeling framework for these problems. The outputs are represented as abstract syntax trees (ASTs) and constructed by a decoder with a dynamically-determined modular structure paralleling the structure of the output tree. On the benchmark Hearthstone dataset for code generation, our model obtains 79.2 BLEU and 22.7{\%} exact match accuracy, compared to previous state-of-the-art values of 67.1 and 6.1{\%}. Furthermore, we perform competitively on the Atis, Jobs, and Geo semantic parsing datasets with no task-specific engineering. | null | null | 10.18653/v1/P17-1105 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,535 |
inproceedings | ding-etal-2017-visualizing | Visualizing and Understanding Neural Machine Translation | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1106/ | Ding, Yanzhuo and Liu, Yang and Luan, Huanbo and Sun, Maosong | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1150--1159 | While neural machine translation (NMT) has made remarkable progress in recent years, it is hard to interpret its internal workings due to the continuous representations and non-linearity of neural networks. In this work, we propose to use layer-wise relevance propagation (LRP) to compute the contribution of each contextual word to arbitrary hidden states in the attention-based encoder-decoder framework. We show that visualization with LRP helps to interpret the internal workings of NMT and analyze translation errors. | null | null | 10.18653/v1/P17-1106 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,536 |
inproceedings | rehbein-ruppenhofer-2017-detecting | Detecting annotation noise in automatically labelled data | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1107/ | Rehbein, Ines and Ruppenhofer, Josef | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1160--1170 | We introduce a method for error detection in automatically annotated text, aimed at supporting the creation of high-quality language resources at affordable cost. Our method combines an unsupervised generative model with human supervision from active learning. We test our approach on in-domain and out-of-domain data in two languages, in AL simulations and in a real-world setting. For all settings, the results show that our method is able to detect annotation errors with high precision and high recall. | null | null | 10.18653/v1/P17-1107 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,537
inproceedings | tan-etal-2017-abstractive | Abstractive Document Summarization with a Graph-Based Attentional Neural Model | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1108/ | Tan, Jiwei and Wan, Xiaojun and Xiao, Jianguo | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1171--1181 | Abstractive summarization is the ultimate goal of document summarization research, but it has previously been less investigated due to the immaturity of text generation techniques. Recently, impressive progress has been made on abstractive sentence summarization using neural models. Unfortunately, attempts at abstractive document summarization are still at a primitive stage, and the evaluation results are worse than extractive methods on benchmark datasets. In this paper, we review the difficulties of neural abstractive document summarization, and propose a novel graph-based attention mechanism in the sequence-to-sequence framework. The intuition is to address the saliency factor of summarization, which has been overlooked by prior work. Experimental results demonstrate that our model is able to achieve considerable improvement over previous neural abstractive models. The data-driven neural abstractive method is also competitive with state-of-the-art extractive methods. | null | null | 10.18653/v1/P17-1108 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,538
inproceedings | cotterell-eisner-2017-probabilistic | Probabilistic Typology: Deep Generative Models of Vowel Inventories | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1109/ | Cotterell, Ryan and Eisner, Jason | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1182--1192 | Linguistic typology studies the range of structures present in human language. The main goal of the field is to discover which sets of possible phenomena are universal, and which are merely frequent. For example, all languages have vowels, while most{---}but not all{---}languages have an /u/ sound. In this paper we present the first probabilistic treatment of a basic question in phonological typology: What makes a natural vowel inventory? We introduce a series of deep stochastic point processes, and contrast them with previous computational, simulation-based approaches. We provide a comprehensive suite of experiments on over 200 distinct languages. | null | null | 10.18653/v1/P17-1109 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,539 |
inproceedings | chen-etal-2017-adversarial | Adversarial Multi-Criteria Learning for {C}hinese Word Segmentation | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1110/ | Chen, Xinchi and Shi, Zhan and Qiu, Xipeng and Huang, Xuanjing | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1193--1203 | Different linguistic perspectives give rise to many diverse segmentation criteria for Chinese word segmentation (CWS). Most existing methods focus on improving the performance for each single criterion. However, it is interesting to exploit these different criteria and mine their common underlying knowledge. In this paper, we propose adversarial multi-criteria learning for CWS by integrating shared knowledge from multiple heterogeneous segmentation criteria. Experiments on eight corpora with heterogeneous segmentation criteria show that performance on each corpus improves significantly compared to single-criterion learning. The source code for this paper is available on GitHub. | null | null | 10.18653/v1/P17-1110 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,540
inproceedings | kurita-etal-2017-neural | Neural Joint Model for Transition-based {C}hinese Syntactic Analysis | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1111/ | Kurita, Shuhei and Kawahara, Daisuke and Kurohashi, Sadao | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1204--1214 | We present neural network-based joint models for Chinese word segmentation, POS tagging and dependency parsing. Our models are the first neural approaches to fully joint Chinese analysis, which is known to prevent the error propagation problem of pipeline models. Although word embeddings play a key role in dependency parsing, they could not be applied directly to the joint task in previous work. To address this problem, we propose embeddings of character strings, in addition to words. Experiments show that our models outperform existing systems in Chinese word segmentation and POS tagging, and achieve favorable accuracies in dependency parsing. We also explore bi-LSTM models with fewer features. | null | null | 10.18653/v1/P17-1111 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,541
inproceedings | buys-blunsom-2017-robust | Robust Incremental Neural Semantic Graph Parsing | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1112/ | Buys, Jan and Blunsom, Phil | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1215--1226 | Parsing sentences to linguistically-expressive semantic representations is a key goal of Natural Language Processing. Yet statistical parsing has focussed almost exclusively on bilexical dependencies or domain-specific logical forms. We propose a neural encoder-decoder transition-based parser which is the first full-coverage semantic graph parser for Minimal Recursion Semantics (MRS). The model architecture uses stack-based embedding features, predicting graphs jointly with unlexicalized predicates and their token alignments. Our parser is more accurate than attention-based baselines on MRS, and on an additional Abstract Meaning Representation (AMR) benchmark, and GPU batch processing makes it an order of magnitude faster than a high-precision grammar-based parser. Further, the 86.69{\%} Smatch score of our MRS parser is higher than the upper-bound on AMR parsing, making MRS an attractive choice as a semantic representation. | null | null | 10.18653/v1/P17-1112 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,542 |
inproceedings | zheng-etal-2017-joint | Joint Extraction of Entities and Relations Based on a Novel Tagging Scheme | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1113/ | Zheng, Suncong and Wang, Feng and Bao, Hongyun and Hao, Yuexing and Zhou, Peng and Xu, Bo | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1227--1236 | Joint extraction of entities and relations is an important task in information extraction. To tackle this problem, we first propose a novel tagging scheme that converts the joint extraction task into a tagging problem. Then, based on our tagging scheme, we study different end-to-end models to extract entities and their relations directly, without identifying entities and relations separately. We conduct experiments on a public dataset produced by the distant supervision method and the experimental results show that the tagging-based methods are better than most of the existing pipelined and joint learning methods. Moreover, the end-to-end model proposed in this paper achieves the best results on the public dataset. | null | null | 10.18653/v1/P17-1113 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,543
inproceedings | xu-etal-2017-local | A Local Detection Approach for Named Entity Recognition and Mention Detection | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1114/ | Xu, Mingbin and Jiang, Hui and Watcharawittayakul, Sedtawut | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1237--1247 | In this paper, we study a novel approach for named entity recognition (NER) and mention detection (MD) in natural language processing. Instead of treating NER as a sequence labeling problem, we propose a new local detection approach, which relies on the recent fixed-size ordinally forgetting encoding (FOFE) method to fully encode each sentence fragment and its left/right contexts into a fixed-size representation. Subsequently, a simple feedforward neural network (FFNN) is learned to either reject or predict an entity label for each individual text fragment. The proposed method has been evaluated on several popular NER and MD tasks, including the CoNLL 2003 NER task and the TAC-KBP2015 and TAC-KBP2016 Tri-lingual Entity Discovery and Linking (EDL) tasks. Our method has yielded strong performance in all of the examined tasks. This local detection approach has shown many advantages over traditional sequence labeling methods. | null | null | 10.18653/v1/P17-1114 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,544
inproceedings | gritta-etal-2017-vancouver | {V}ancouver Welcomes You! Minimalist Location Metonymy Resolution | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1115/ | Gritta, Milan and Pilehvar, Mohammad Taher and Limsopatham, Nut and Collier, Nigel | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1248--1259 | Named entities are frequently used in a metonymic manner. They serve as references to related entities such as people and organisations. Accurate identification and interpretation of metonymy can be directly beneficial to various NLP applications, such as Named Entity Recognition and Geographical Parsing. Until now, metonymy resolution (MR) methods mainly relied on parsers, taggers, dictionaries, external word lists and other handcrafted lexical resources. We show how a minimalist neural approach combined with a novel predicate window method can achieve competitive results on the SemEval 2007 task on Metonymy Resolution. Additionally, we contribute a new Wikipedia-based MR dataset called RelocaR, which is tailored towards locations and addresses deficiencies in previous annotation guidelines. | null | null | 10.18653/v1/P17-1115 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,545
inproceedings | miura-etal-2017-unifying | Unifying Text, Metadata, and User Network Representations with a Neural Network for Geolocation Prediction | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1116/ | Miura, Yasuhide and Taniguchi, Motoki and Taniguchi, Tomoki and Ohkuma, Tomoko | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1260--1272 | We propose a novel geolocation prediction model using a complex neural network. Geolocation prediction in social media has attracted many researchers to use information of various types. Our model unifies text, metadata, and user network representations with an attention mechanism to overcome previous ensemble approaches. In an evaluation using two open datasets, the proposed model exhibited up to a 3.8{\%} increase in accuracy and up to a 6.6{\%} increase in accuracy@161 over previous models. We further analyzed several intermediate layers of our model, which revealed that their states capture some statistical characteristics of the datasets. | null | null | 10.18653/v1/P17-1116 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,546
inproceedings | pasunuru-bansal-2017-multi | Multi-Task Video Captioning with Video and Entailment Generation | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1117/ | Pasunuru, Ramakanth and Bansal, Mohit | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1273--1283 | Video captioning, the task of describing the content of a video, has seen some promising improvements in recent years with sequence-to-sequence models, but accurately learning the temporal and logical dynamics involved in the task still remains a challenge, especially given the lack of sufficient annotated data. We improve video captioning by sharing knowledge with two related directed-generation tasks: a temporally-directed unsupervised video prediction task to learn richer context-aware video encoder representations, and a logically-directed language entailment generation task to learn better video-entailing caption decoder representations. For this, we present a many-to-many multi-task learning model that shares parameters across the encoders and decoders of the three tasks. We achieve significant improvements and the new state-of-the-art on several standard video captioning datasets using diverse automatic and human evaluations. We also show mutual multi-task improvements on the entailment generation task. | null | null | 10.18653/v1/P17-1117 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,547 |
inproceedings | santos-etal-2017-enriching | Enriching Complex Networks with Word Embeddings for Detecting Mild Cognitive Impairment from Speech Transcripts | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1118/ | Santos, Leandro and Corr{\^e}a J{\'u}nior, Edilson Anselmo and Oliveira Jr, Osvaldo and Amancio, Diego and Mansur, Let{\'i}cia and Alu{\'i}sio, Sandra | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1284--1296 | Mild Cognitive Impairment (MCI) is a mental disorder difficult to diagnose. Linguistic features, mainly from parsers, have been used to detect MCI, but this is not suitable for large-scale assessments. MCI disfluencies produce non-grammatical speech that requires manual or high precision automatic correction of transcripts. In this paper, we modeled transcripts as complex networks and enriched them with word embeddings (CNE) to better represent short texts produced in neuropsychological assessments. The network measurements were applied with well-known classifiers to automatically identify MCI in transcripts, in a binary classification task. A comparison was made with the performance of traditional approaches using Bag of Words (BoW) and linguistic features for three datasets: DementiaBank in English, and Cinderella and Arizona-Battery in Portuguese. Overall, CNE provided higher accuracy than using only complex networks, while Support Vector Machine was superior to other classifiers. CNE provided the highest accuracies for DementiaBank and Cinderella, but BoW was more efficient for the Arizona-Battery dataset probably owing to its short narratives. The approach using linguistic features yielded higher accuracy if the transcriptions of the Cinderella dataset were manually revised. Taken together, the results indicate that complex networks enriched with embeddings are promising for detecting MCI in large-scale assessments. | null | null | 10.18653/v1/P17-1118 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,548
inproceedings | kim-etal-2017-adversarial | Adversarial Adaptation of Synthetic or Stale Data | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1119/ | Kim, Young-Bum and Stratos, Karl and Kim, Dongchan | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1297--1307 | Two types of data shift common in practice are (1) transferring from synthetic data to live user data (a deployment shift), and (2) transferring from stale data to current data (a temporal shift). Both cause a distribution mismatch between training and evaluation, leading to a model that overfits the flawed training data and performs poorly on the test data. We propose a solution to this mismatch problem by framing it as domain adaptation, treating the flawed training dataset as a source domain and the evaluation dataset as a target domain. To this end, we use and build on several recent advances in neural domain adaptation such as adversarial training (Ganin et al., 2016) and domain separation network (Bousmalis et al., 2016), proposing a new effective adversarial training scheme. In both supervised and unsupervised adaptation scenarios, our approach yields clear improvement over strong baselines. | null | null | 10.18653/v1/P17-1119 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,549
inproceedings | akasaki-kaji-2017-chat | Chat Detection in an Intelligent Assistant: Combining Task-oriented and Non-task-oriented Spoken Dialogue Systems | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1120/ | Akasaki, Satoshi and Kaji, Nobuhiro | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1308--1319 | Recently emerged intelligent assistants on smartphones and home electronics (e.g., Siri and Alexa) can be seen as novel hybrids of domain-specific task-oriented spoken dialogue systems and open-domain non-task-oriented ones. To realize such hybrid dialogue systems, this paper investigates determining whether or not a user is going to have a chat with the system. To address the lack of benchmark datasets for this task, we construct a new dataset consisting of 15,160 utterances collected from the real log data of a commercial intelligent assistant (and will release the dataset to facilitate future research activity). In addition, we investigate using tweets and Web search queries for handling open-domain user utterances, which characterize the task of chat detection. Experiments demonstrated that, while simple supervised methods are effective, the use of the tweets and search queries further improves the F$_1$-score from 86.21 to 87.53. | null | null | 10.18653/v1/P17-1120 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,550
inproceedings | tien-nguyen-joty-2017-neural | A Neural Local Coherence Model | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1121/ | Tien Nguyen, Dat and Joty, Shafiq | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1320--1330 | We propose a local coherence model based on a convolutional neural network that operates over the entity grid representation of a text. The model captures long-range entity transitions along with entity-specific features without losing generalization, thanks to the power of distributed representation. We present a pairwise ranking method to train the model in an end-to-end fashion on a task and learn task-specific high-level features. Our evaluation on three different coherence assessment tasks demonstrates that our model achieves state-of-the-art results, outperforming existing models by a good margin. | null | null | 10.18653/v1/P17-1121 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,551
inproceedings | cagan-etal-2017-data | Data-Driven Broad-Coverage Grammars for Opinionated Natural Language Generation ({ONLG}) | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1122/ | Cagan, Tomer and Frank, Stefan L. and Tsarfaty, Reut | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1331--1341 | Opinionated Natural Language Generation (ONLG) is a new, challenging, task that aims to automatically generate human-like, subjective, responses to opinionated articles online. We present a data-driven architecture for ONLG that generates subjective responses triggered by users' agendas, consisting of topics and sentiments, and based on wide-coverage automatically-acquired generative grammars. We compare three types of grammatical representations that we design for ONLG, which interleave different layers of linguistic information and are induced from a new, enriched dataset we developed. Our evaluation shows that generation with Relational-Realizational (Tsarfaty and Sima'an, 2008) inspired grammar gets better language model scores than lexicalized grammars {\`a} la Collins (2003), and that the latter gets better human-evaluation scores. We also show that conditioning the generation on topic models makes generated responses more relevant to the document content. | null | null | 10.18653/v1/P17-1122 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,552
inproceedings | du-etal-2017-learning | Learning to Ask: Neural Question Generation for Reading Comprehension | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1123/ | Du, Xinya and Shao, Junru and Cardie, Claire | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1342--1352 | We study automatic question generation for sentences from text passages in reading comprehension. We introduce an attention-based sequence learning model for the task and investigate the effect of encoding sentence- vs. paragraph-level information. In contrast to all previous work, our model does not rely on hand-crafted rules or a sophisticated NLP pipeline; it is instead trainable end-to-end via sequence-to-sequence learning. Automatic evaluation results show that our system significantly outperforms the state-of-the-art rule-based system. In human evaluations, questions generated by our system are also rated as being more natural (\textit{i.e.}, grammaticality, fluency) and as more difficult to answer (in terms of syntactic and lexical divergence from the original text and reasoning needed to answer). | null | null | 10.18653/v1/P17-1123 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,553
inproceedings | p-v-s-meyer-2017-joint | Joint Optimization of User-desired Content in Multi-document Summaries by Learning from User Feedback | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-1124/ | P.V.S, Avinesh and Meyer, Christian M. | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 1353--1363 | In this paper, we propose an extractive multi-document summarization (MDS) system using joint optimization and active learning for content selection grounded in user feedback. Our method interactively obtains user feedback to gradually improve the results of a state-of-the-art integer linear programming (ILP) framework for MDS. Our methods complement fully automatic methods in producing high-quality summaries with a minimum number of iterations and rounds of feedback. We conduct multiple simulation-based experiments and analyze the effect of feedback-based concept selection in the ILP setup in order to maximize the user-desired content in the summary. | null | null | 10.18653/v1/P17-1124 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,554
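Of the systems catalogued above, PositionRank (florescu-caragea-2017-positionrank) describes its core algorithm concretely enough to sketch: a PageRank over a word co-occurrence graph whose restart distribution is biased toward words that occur early and often in the document. The Python sketch below illustrates that idea only; the window size, damping factor, and iteration count are illustrative defaults rather than the paper's settings, and the paper's candidate filtering by part of speech and its phrase-level scoring are omitted, so this is not the authors' implementation.

```python
from collections import defaultdict

def position_rank(words, window=2, damping=0.85, iters=50):
    """Sketch of a position-biased PageRank over a word co-occurrence
    graph, after the idea described in the PositionRank abstract above.
    `words` is a tokenized document; candidate filtering by part of
    speech (as in the paper) is intentionally omitted."""
    if not words:
        return []

    # Restart distribution: weight each word by the sum of inverse
    # positions (1-indexed) of all its occurrences, then normalize,
    # so early and frequent words attract more random-walk mass.
    bias = defaultdict(float)
    for pos, w in enumerate(words, start=1):
        bias[w] += 1.0 / pos
    z = sum(bias.values())
    bias = {w: b / z for w, b in bias.items()}

    # Undirected, weighted co-occurrence graph over a sliding window.
    adj = defaultdict(lambda: defaultdict(float))
    for i, u in enumerate(words):
        for v in words[i + 1 : i + 1 + window]:
            if u != v:
                adj[u][v] += 1.0
                adj[v][u] += 1.0
    deg = {u: sum(nbrs.values()) for u, nbrs in adj.items()}

    # Power iteration of the biased PageRank.
    score = dict(bias)
    for _ in range(iters):
        new = {}
        for w in bias:
            rank = sum(score[u] * nbrs[w] / deg[u]
                       for u, nbrs in adj.items() if w in nbrs)
            new[w] = (1 - damping) * bias[w] + damping * rank
        score = new
    return sorted(score.items(), key=lambda kv: -kv[1])

if __name__ == "__main__":
    toks = ("unsupervised keyphrase extraction ranks words with a "
            "biased pagerank over a word cooccurrence graph").split()
    for word, s in position_rank(toks)[:3]:
        print(f"{word}\t{s:.4f}")
```

In the full method, multiword keyphrases are scored from their component words; this sketch ranks single words only.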