entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | sakaguchi-etal-2017-error | Error-repair Dependency Parsing for Ungrammatical Texts | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2030/ | Sakaguchi, Keisuke and Post, Matt and Van Durme, Benjamin | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 189--195 | We propose a new dependency parsing scheme which jointly parses a sentence and repairs grammatical errors by extending the non-directional transition-based formalism of Goldberg and Elhadad (2010) with three additional actions: SUBSTITUTE, DELETE, INSERT. Because these actions may cause an infinite loop in derivation, we also introduce simple constraints that ensure the parser termination. We evaluate our model with respect to dependency accuracy and grammaticality improvements for ungrammatical sentences, demonstrating the robustness and applicability of our scheme. | null | null | 10.18653/v1/P17-2030 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,656 |
inproceedings | libovicky-helcl-2017-attention | Attention Strategies for Multi-Source Sequence-to-Sequence Learning | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2031/ | Libovick{\'y}, Jind{\v{r}}ich and Helcl, Jind{\v{r}}ich | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 196--202 | Modeling attention in neural multi-source sequence-to-sequence learning remains a relatively unexplored area, despite its usefulness in tasks that incorporate multiple source languages or modalities. We propose two novel approaches to combine the outputs of attention mechanisms over each source sequence, flat and hierarchical. We compare the proposed methods with existing techniques and present results of systematic evaluation of those methods on the WMT16 Multimodal Translation and Automatic Post-editing tasks. We show that the proposed methods achieve competitive results on both tasks. | null | null | 10.18653/v1/P17-2031 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,657 |
inproceedings | hua-wang-2017-understanding | Understanding and Detecting Supporting Arguments of Diverse Types | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2032/ | Hua, Xinyu and Wang, Lu | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 203--208 | We investigate the problem of sentence-level supporting argument detection from relevant documents for user-specified claims. A dataset containing claims and associated citation articles is collected from the online debate website idebate.org. We then manually label sentence-level supporting arguments from the documents along with their types as study, factual, opinion, or reasoning. We further characterize arguments of different types, and explore whether leveraging type information can facilitate the supporting argument detection task. Experimental results show that a LambdaMART (Burges, 2010) ranker that uses features informed by argument types yields better performance than the same ranker trained without type information. | null | null | 10.18653/v1/P17-2032 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,658 |
inproceedings | rahimi-etal-2017-neural | A Neural Model for User Geolocation and Lexical Dialectology | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2033/ | Rahimi, Afshin and Cohn, Trevor and Baldwin, Timothy | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 209--216 | We propose a simple yet effective text-based user geolocation model based on a neural network with one hidden layer, which achieves state of the art performance over three Twitter benchmark geolocation datasets, in addition to producing word and phrase embeddings in the hidden layer that we show to be useful for detecting dialectal terms. As part of our analysis of dialectal terms, we release DAREDS, a dataset for evaluating dialect term detection methods. | null | null | 10.18653/v1/P17-2033 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,659 |
inproceedings | suhr-etal-2017-corpus | A Corpus of Natural Language for Visual Reasoning | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2034/ | Suhr, Alane and Lewis, Mike and Yeh, James and Artzi, Yoav | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 217--223 | We present a new visual reasoning language dataset, containing 92,244 pairs of examples of natural statements grounded in synthetic images with 3,962 unique sentences. We describe a method of crowdsourcing linguistically-diverse data, and present an analysis of our data. The data demonstrates a broad set of linguistic phenomena, requiring visual and set-theoretic reasoning. We experiment with various models, and show the data presents a strong challenge for future research. | null | null | 10.18653/v1/P17-2034 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,660 |
inproceedings | tourille-etal-2017-neural | Neural Architecture for Temporal Relation Extraction: A {B}i-{LSTM} Approach for Detecting Narrative Containers | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2035/ | Tourille, Julien and Ferret, Olivier and N{\'e}v{\'e}ol, Aur{\'e}lie and Tannier, Xavier | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 224--230 | We present a neural architecture for containment relation identification between medical events and/or temporal expressions. We experiment on a corpus of de-identified clinical notes in English from the Mayo Clinic, namely the THYME corpus. Our model achieves an F-measure of 0.613 and outperforms the best result reported on this corpus to date. | null | null | 10.18653/v1/P17-2035 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,661 |
inproceedings | tian-etal-2017-make | How to Make Context More Useful? An Empirical Study on Context-Aware Neural Conversational Models | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2036/ | Tian, Zhiliang and Yan, Rui and Mou, Lili and Song, Yiping and Feng, Yansong and Zhao, Dongyan | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 231--236 | Generative conversational systems are attracting increasing attention in natural language processing (NLP). Recently, researchers have noticed the importance of context information in dialog processing, and built various models to utilize context. However, there is no systematic comparison to analyze how to use context effectively. In this paper, we conduct an empirical study to compare various models and investigate the effect of context information in dialog systems. We also propose a variant that explicitly weights context vectors by context-query relevance, outperforming the other baselines. | null | null | 10.18653/v1/P17-2036 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,662 |
inproceedings | braud-etal-2017-cross-lingual | Cross-lingual and cross-domain discourse segmentation of entire documents | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2037/ | Braud, Chlo{\'e} and Lacroix, Oph{\'e}lie and S{\o}gaard, Anders | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 237--243 | Discourse segmentation is a crucial step in building end-to-end discourse parsers. However, discourse segmenters only exist for a few languages and domains. Typically they only detect intra-sentential segment boundaries, assuming gold standard sentence and token segmentation, and relying on high-quality syntactic parses and rich heuristics that are not generally available across languages and domains. In this paper, we propose statistical discourse segmenters for five languages and three domains that do not rely on gold pre-annotations. We also consider the problem of learning discourse segmenters when no labeled data is available for a language. Our fully supervised system obtains 89.5{\%} F1 for English newswire, with slight drops in performance on other domains, and we report supervised and unsupervised (cross-lingual) results for five languages in total. | null | null | 10.18653/v1/P17-2037 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,663 |
inproceedings | beigman-klebanov-etal-2017-detecting | Detecting Good Arguments in a Non-Topic-Specific Way: An Oxymoron? | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2038/ | Beigman Klebanov, Beata and Gyawali, Binod and Song, Yi | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 244--249 | Automatic identification of good arguments on a controversial topic has applications in civics and education, to name a few. While in the civics context it might be acceptable to create separate models for each topic, in the context of scoring of students' writing there is a preference for a single model that applies to all responses. Given that good arguments for one topic are likely to be irrelevant for another, is a single model for detecting good arguments a contradiction in terms? We investigate the extent to which it is possible to close the performance gap between topic-specific and across-topics models for identification of good arguments. | null | null | 10.18653/v1/P17-2038 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,664 |
inproceedings | wachsmuth-etal-2017-argumentation | Argumentation Quality Assessment: Theory vs. Practice | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2039/ | Wachsmuth, Henning and Naderi, Nona and Habernal, Ivan and Hou, Yufang and Hirst, Graeme and Gurevych, Iryna and Stein, Benno | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 250--255 | Argumentation quality is viewed differently in argumentation theory and in practical assessment approaches. This paper studies to what extent the views match empirically. We find that most observations on quality phrased spontaneously are in fact adequately represented by theory. Even more, relative comparisons of arguments in practice correlate with absolute quality ratings based on theory. Our results clarify how the two views can learn from each other. | null | null | 10.18653/v1/P17-2039 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,665 |
inproceedings | ronnqvist-etal-2017-recurrent | A Recurrent Neural Model with Attention for the Recognition of {C}hinese Implicit Discourse Relations | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2040/ | R{\"o}nnqvist, Samuel and Schenk, Niko and Chiarcos, Christian | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 256--262 | We introduce an attention-based Bi-LSTM for Chinese implicit discourse relations and demonstrate that modeling argument pairs as a joint sequence can outperform word order-agnostic approaches. Our model benefits from a partial sampling scheme and is conceptually simple, yet achieves state-of-the-art performance on the Chinese Discourse Treebank. We also visualize its attention activity to illustrate the model's ability to selectively focus on the relevant parts of an input sequence. | null | null | 10.18653/v1/P17-2040 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,666 |
inproceedings | wang-etal-2017-discourse | Discourse Annotation of Non-native Spontaneous Spoken Responses Using the {R}hetorical {S}tructure {T}heory Framework | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2041/ | Wang, Xinhao and Bruno, James and Molloy, Hillary and Evanini, Keelan and Zechner, Klaus | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 263--268 | The availability of the Rhetorical Structure Theory (RST) Discourse Treebank has spurred substantial research into discourse analysis of written texts; however, limited research has been conducted to date on RST annotation and parsing of spoken language, in particular, non-native spontaneous speech. Considering that the measurement of discourse coherence is typically a key metric in human scoring rubrics for assessments of spoken language, we initiated a research effort to obtain RST annotations of a large number of non-native spoken responses from a standardized assessment of academic English proficiency. The resulting inter-annotator kappa agreements on the three different levels of Span, Nuclearity, and Relation are 0.848, 0.766, and 0.653, respectively. Furthermore, a set of features was explored to evaluate the discourse structure of non-native spontaneous speech based on these annotations; the highest performing feature resulted in a correlation of 0.612 with scores of discourse coherence provided by expert human raters. | null | null | 10.18653/v1/P17-2041 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,667 |
inproceedings | wu-etal-2017-improving | Improving Implicit Discourse Relation Recognition with Discourse-specific Word Embeddings | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2042/ | Wu, Changxing and Shi, Xiaodong and Chen, Yidong and Su, Jinsong and Wang, Boli | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 269--274 | We introduce a simple and effective method to learn discourse-specific word embeddings (DSWE) for implicit discourse relation recognition. Specifically, DSWE is learned by performing connective classification on massive explicit discourse data, and capable of capturing discourse relationships between words. On the PDTB data set, using DSWE as features achieves significant improvements over baselines. | null | null | 10.18653/v1/P17-2042 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,668 |
inproceedings | hirao-etal-2017-oracle | Oracle Summaries of Compressive Summarization | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2043/ | Hirao, Tsutomu and Nishino, Masaaki and Nagata, Masaaki | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 275--280 | This paper derives an Integer Linear Programming (ILP) formulation to obtain an oracle summary of the compressive summarization paradigm in terms of ROUGE. The oracle summary is essential to reveal the upper bound performance of the paradigm. Experimental results on the DUC dataset showed that ROUGE scores of compressive oracles are significantly higher than those of extractive oracles and state-of-the-art summarization systems. These results reveal that compressive summarization is a promising paradigm and encourage us to continue with the research to produce informative summaries. | null | null | 10.18653/v1/P17-2043 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,669 |
inproceedings | hasegawa-etal-2017-japanese | {J}apanese Sentence Compression with a Large Training Dataset | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2044/ | Hasegawa, Shun and Kikuchi, Yuta and Takamura, Hiroya and Okumura, Manabu | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 281--286 | In English, high-quality sentence compression models that delete words have been trained on automatically created large training datasets. We work on Japanese sentence compression using a similar approach. To create a large Japanese training dataset, a method for creating an English training dataset is modified based on the characteristics of the Japanese language. The created dataset is used to train Japanese sentence compression models based on recurrent neural networks. | null | null | 10.18653/v1/P17-2044 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,670 |
inproceedings | loyola-etal-2017-neural | A Neural Architecture for Generating Natural Language Descriptions from Source Code Changes | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2045/ | Loyola, Pablo and Marrese-Taylor, Edison and Matsuo, Yutaka | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 287--292 | We propose a model to automatically describe changes introduced in the source code of a program using natural language. Our method receives as input a set of code commits, which contains both the modifications and the message introduced by a user. These two modalities are used to train an encoder-decoder architecture. We evaluated our approach on twelve real-world open-source projects from four different programming languages. Quantitative and qualitative results showed that the proposed approach can generate feasible and semantically sound descriptions not only in standard in-project settings, but also in a cross-project setting. | null | null | 10.18653/v1/P17-2045 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,671 |
inproceedings | wei-etal-2017-english | {E}nglish Event Detection With Translated Language Features | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2046/ | Wei, Sam and Korostil, Igor and Nothman, Joel and Hachey, Ben | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 293--298 | We propose novel radical features from automatic translation for event extraction. Event detection is a complex language processing task for which it is expensive to collect training data, making generalisation challenging. We derive meaningful subword features from automatic translations into a target language. Results suggest this method is particularly useful when using languages with writing systems that facilitate easy decomposition into subword features, e.g., logograms and Cangjie. The best result combines logogram features from Chinese and Japanese with syllable features from Korean, providing an additional 3.0 points of f-score when added to state-of-the-art generalisation features on the TAC KBP 2015 Event Nugget task. | null | null | 10.18653/v1/P17-2046 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,672 |
inproceedings | savenkov-agichtein-2017-evinets | {E}vi{N}ets: Neural Networks for Combining Evidence Signals for Factoid Question Answering | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2047/ | Savenkov, Denis and Agichtein, Eugene | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 299--304 | A critical task for question answering is the final answer selection stage, which has to combine multiple signals available about each answer candidate. This paper proposes EviNets: a novel neural network architecture for factoid question answering. EviNets scores candidate answer entities by combining the available supporting evidence, e.g., structured knowledge bases and unstructured text documents. EviNets represents each piece of evidence with a dense embedding vector, scores its relevance to the question, and aggregates the support for each candidate to predict their final scores. Each of the components is generic and allows plugging in a variety of models for semantic similarity scoring and information aggregation. We demonstrate the effectiveness of EviNets in experiments on the existing TREC QA and WikiMovies benchmarks, and on the new Yahoo! Answers dataset introduced in this paper. EviNets can be extended to other information types and could facilitate future work on combining evidence signals for joint reasoning in question answering. | null | null | 10.18653/v1/P17-2047 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,673 |
inproceedings | wolfe-etal-2017-pocket | Pocket Knowledge Base Population | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2048/ | Wolfe, Travis and Dredze, Mark and Van Durme, Benjamin | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 305--310 | Existing Knowledge Base Population methods extract relations from a closed relational schema with limited coverage leading to sparse KBs. We propose Pocket Knowledge Base Population (PKBP), the task of dynamically constructing a KB of entities related to a query and finding the best characterization of relationships between entities. We describe novel Open Information Extraction methods which leverage the PKB to find informative trigger words. We evaluate using existing KBP shared-task data as well as new annotations collected for this work. Our methods produce a high-quality KB from just text, with many more entities and relationships than existing KBP systems. | null | null | 10.18653/v1/P17-2048 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,674 |
inproceedings | khot-etal-2017-answering | Answering Complex Questions Using Open Information Extraction | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2049/ | Khot, Tushar and Sabharwal, Ashish and Clark, Peter | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 311--316 | While there has been substantial progress in factoid question-answering (QA), answering complex questions remains challenging, typically requiring both a large body of knowledge and inference techniques. Open Information Extraction (Open IE) provides a way to generate semi-structured knowledge for QA, but to date such knowledge has only been used to answer simple questions with retrieval-based methods. We overcome this limitation by presenting a method for reasoning with Open IE knowledge, allowing more complex questions to be handled. Using a recently proposed support graph optimization framework for QA, we develop a new inference model for Open IE, in particular one that can work effectively with multiple short facts, noise, and the relational structure of tuples. Our model significantly outperforms a state-of-the-art structured solver on complex questions of varying difficulty, while also removing the reliance on manually curated knowledge. | null | null | 10.18653/v1/P17-2049 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,675 |
inproceedings | saha-etal-2017-bootstrapping | Bootstrapping for Numerical Open {IE} | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2050/ | Saha, Swarnadeep and Pal, Harinder and Mausam | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 317--323 | We design and release BONIE, the first open numerical relation extractor, for extracting Open IE tuples where one of the arguments is a number or a quantity-unit phrase. BONIE uses bootstrapping to learn the specific dependency patterns that express numerical relations in a sentence. BONIE's novelty lies in task-specific customizations, such as inferring implicit relations, which are clear due to context such as units (e.g., {\textquoteleft}square kilometers' suggests area, even if the word {\textquoteleft}area' is missing in the sentence). BONIE obtains 1.5x yield and a 15-point precision gain on numerical facts over a state-of-the-art Open IE system. | null | null | 10.18653/v1/P17-2050 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,676 |
inproceedings | komninos-manandhar-2017-feature | Feature-Rich Networks for Knowledge Base Completion | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2051/ | Komninos, Alexandros and Manandhar, Suresh | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 324--329 | We propose jointly modelling Knowledge Bases and aligned text with Feature-Rich Networks. Our models perform Knowledge Base Completion by learning to represent and compose diverse feature types from partially aligned and noisy resources. We perform experiments on Freebase utilizing additional entity type information and syntactic textual relations. Our evaluation suggests that the proposed models can better incorporate side information than previously proposed combinations of bilinear models with convolutional neural networks, showing large improvements when scoring the plausibility of unobserved facts with associated textual mentions. | null | null | 10.18653/v1/P17-2051 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,677 |
inproceedings | rabinovich-klein-2017-fine | Fine-Grained Entity Typing with High-Multiplicity Assignments | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2052/ | Rabinovich, Maxim and Klein, Dan | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 330--334 | As entity type systems become richer and more fine-grained, we expect the number of types assigned to a given entity to increase. However, most fine-grained typing work has focused on datasets that exhibit a low degree of type multiplicity. In this paper, we consider the high-multiplicity regime inherent in data sources such as Wikipedia that have semi-open type systems. We introduce a set-prediction approach to this problem and show that our model outperforms unstructured baselines on a new Wikipedia-based fine-grained typing corpus. | null | null | 10.18653/v1/P17-2052 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,678 |
inproceedings | ma-etal-2017-group | Group Sparse {CNN}s for Question Classification with Answer Sets | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2053/ | Ma, Mingbo and Huang, Liang and Xiang, Bing and Zhou, Bowen | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 335--340 | Question classification is an important task with wide applications. However, traditional techniques treat questions as general sentences, ignoring the corresponding answer data. In order to incorporate answer information into question modeling, we first introduce novel group sparse autoencoders which refine question representation by utilizing group information in the answer set. We then propose novel group sparse CNNs which naturally learn question representation with respect to their answers by implanting group sparse autoencoders into traditional CNNs. The proposed model significantly outperforms strong baselines on four datasets. | null | null | 10.18653/v1/P17-2053 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,679 |
inproceedings | augenstein-sogaard-2017-multi | Multi-Task Learning of Keyphrase Boundary Classification | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2054/ | Augenstein, Isabelle and S{\o}gaard, Anders | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 341--346 | Keyphrase boundary classification (KBC) is the task of detecting keyphrases in scientific articles and labelling them with respect to predefined types. Although important in practice, this task is so far underexplored, partly due to the lack of labelled data. To overcome this, we explore several auxiliary tasks, including semantic super-sense tagging and identification of multi-word expressions, and cast the task as a multi-task learning problem with deep recurrent neural networks. Our multi-task models perform significantly better than previous state of the art approaches on two scientific KBC datasets, particularly for long keyphrases. | null | null | 10.18653/v1/P17-2054 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,680 |
inproceedings | mirza-etal-2017-cardinal | Cardinal Virtues: Extracting Relation Cardinalities from Text | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2055/ | Mirza, Paramita and Razniewski, Simon and Darari, Fariz and Weikum, Gerhard | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 347--351 | Information extraction (IE) from text has largely focused on relations between individual entities, such as who has won which award. However, some facts are never fully mentioned, and no IE method has perfect recall. Thus, it is beneficial to also tap contents about the cardinalities of these relations, for example, how many awards someone has won. We introduce this novel problem of extracting cardinalities and discuss the specific challenges that set it apart from standard IE. We present a distant supervision method using conditional random fields. A preliminary evaluation results in precision between 3{\%} and 55{\%}, depending on the difficulty of relations. | null | null | 10.18653/v1/P17-2055 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,681 |
inproceedings | stanovsky-etal-2017-integrating | Integrating Deep Linguistic Features in Factuality Prediction over Unified Datasets | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2056/ | Stanovsky, Gabriel and Eckle-Kohler, Judith and Puzikov, Yevgeniy and Dagan, Ido and Gurevych, Iryna | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 352--357 | Previous models for the assessment of commitment towards a predicate in a sentence (also known as factuality prediction) were trained and tested against a specific annotated dataset, subsequently limiting the generality of their results. In this work we propose an intuitive method for mapping three previously annotated corpora onto a single factuality scale, thereby enabling models to be tested across these corpora. In addition, we design a novel model for factuality prediction by first extending a previous rule-based factuality prediction system and applying it over an abstraction of dependency trees, and then using the output of this system in a supervised classifier. We show that this model outperforms previous methods on all three datasets. We make both the unified factuality corpus and our new model publicly available. | null | null | 10.18653/v1/P17-2056 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,682 |
inproceedings | das-etal-2017-question | Question Answering on Knowledge Bases and Text using Universal Schema and Memory Networks | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2057/ | Das, Rajarshi and Zaheer, Manzil and Reddy, Siva and McCallum, Andrew | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 358--365 | Existing question answering methods infer answers either from a knowledge base or from raw text. While knowledge base (KB) methods are good at answering compositional questions, their performance is often affected by the incompleteness of the KB. Au contraire, web text contains millions of facts that are absent in the KB, however in an unstructured form. Universal schema can support reasoning on the union of both structured KBs and unstructured text by aligning them in a common embedded space. In this paper we extend universal schema to natural language question answering, employing Memory networks to attend to the large body of facts in the combination of text and KB. Our models can be trained in an end-to-end fashion on question-answer pairs. Evaluation results on Spades fill-in-the-blank question answering dataset show that exploiting universal schema for question answering is better than using either a KB or text alone. This model also outperforms the current state-of-the-art by 8.5 F1 points. | null | null | 10.18653/v1/P17-2057 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,683 |
inproceedings | goyal-etal-2017-differentiable | Differentiable Scheduled Sampling for Credit Assignment | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2058/ | Goyal, Kartik and Dyer, Chris and Berg-Kirkpatrick, Taylor | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 366--371 | We demonstrate that a continuous relaxation of the argmax operation can be used to create a differentiable approximation to greedy decoding in sequence-to-sequence (seq2seq) models. By incorporating this approximation into the scheduled sampling training procedure{--}a well-known technique for correcting exposure bias{--}we introduce a new training objective that is continuous and differentiable everywhere and can provide informative gradients near points where previous decoding decisions change their value. By using a related approximation, we also demonstrate a similar approach to sampled-based training. We show that our approach outperforms both standard cross-entropy training and scheduled sampling procedures in two sequence prediction tasks: named entity recognition and machine translation. | null | null | 10.18653/v1/P17-2058 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,684 |
inproceedings | guo-2017-deep | A Deep Network with Visual Text Composition Behavior | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2059/ | Guo, Hongyu | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 372--377 | While natural languages are compositional, how state-of-the-art neural models achieve compositionality is still unclear. We propose a deep network, which not only achieves competitive accuracy for text classification, but also exhibits compositional behavior. That is, while creating hierarchical representations of a piece of text, such as a sentence, the lower layers of the network distribute their layer-specific attention weights to individual words. In contrast, the higher layers compose meaningful phrases and clauses, whose lengths increase as the networks get deeper until fully composing the sentence. | null | null | 10.18653/v1/P17-2059 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,685 |
inproceedings | zhou-etal-2017-neural | Neural System Combination for Machine Translation | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2060/ | Zhou, Long and Hu, Wenpeng and Zhang, Jiajun and Zong, Chengqing | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 378--384 | Neural machine translation (NMT) has emerged as a new approach to machine translation and generates much more fluent results compared to statistical machine translation (SMT). However, SMT is usually better than NMT in translation adequacy. It is therefore a promising direction to combine the advantages of both NMT and SMT. In this paper, we propose a neural system combination framework leveraging multi-source NMT, which takes as input the outputs of NMT and SMT systems and produces the final translation. Extensive experiments on the Chinese-to-English translation task show that our model achieves a significant improvement of 5.3 BLEU points over the best single system output and 3.4 BLEU points over the state-of-the-art traditional system combination methods. | null | null | 10.18653/v1/P17-2060 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,686 |
inproceedings | chu-etal-2017-empirical | An Empirical Comparison of Domain Adaptation Methods for Neural Machine Translation | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2061/ | Chu, Chenhui and Dabre, Raj and Kurohashi, Sadao | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 385--391 | In this paper, we propose a novel domain adaptation method named {\textquotedblleft}mixed fine tuning{\textquotedblright} for neural machine translation (NMT). We combine two existing approaches namely fine tuning and multi domain NMT. We first train an NMT model on an out-of-domain parallel corpus, and then fine tune it on a parallel corpus which is a mix of the in-domain and out-of-domain corpora. All corpora are augmented with artificial tags to indicate specific domains. We empirically compare our proposed method against fine tuning and multi domain methods and discuss its benefits and shortcomings. | null | null | 10.18653/v1/P17-2061 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,687 |
inproceedings | marie-fujita-2017-efficient | Efficient Extraction of Pseudo-Parallel Sentences from Raw Monolingual Data Using Word Embeddings | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2062/ | Marie, Benjamin and Fujita, Atsushi | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 392--398 | We propose a new method for extracting pseudo-parallel sentences from a pair of large monolingual corpora, without relying on any document-level information. Our method first exploits word embeddings in order to efficiently evaluate trillions of candidate sentence pairs and then a classifier to find the most reliable ones. We report significant improvements in domain adaptation for statistical machine translation when using a translation model trained on the sentence pairs extracted from in-domain monolingual corpora. | null | null | 10.18653/v1/P17-2062 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,688 |
inproceedings | malmasi-dras-2017-feature | Feature Hashing for Language and Dialect Identification | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2063/ | Malmasi, Shervin and Dras, Mark | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 399--403 | We evaluate feature hashing for language identification (LID), a method not previously used for this task. Using a standard dataset, we first show that while feature performance is high, LID data is highly dimensional and mostly sparse ({\ensuremath{>}}99.5{\%}) as it includes large vocabularies for many languages; memory requirements grow as languages are added. Next we apply hashing using various hash sizes, demonstrating that there is no performance loss with dimensionality reductions of up to 86{\%}. We also show that using an ensemble of low-dimension hash-based classifiers further boosts performance. Feature hashing is highly useful for LID and holds great promise for future work in this area. | null | null | 10.18653/v1/P17-2063 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,689 |
inproceedings | shiue-etal-2017-detection | Detection of {C}hinese Word Usage Errors for Non-Native {C}hinese Learners with Bidirectional {LSTM} | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2064/ | Shiue, Yow-Ting and Huang, Hen-Hsen and Chen, Hsin-Hsi | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 404--410 | Selecting appropriate words to compose a sentence is one common problem faced by non-native Chinese learners. In this paper, we propose (bidirectional) LSTM sequence labeling models and explore various features to detect word usage errors in Chinese sentences. By combining CWINDOW word embedding features and POS information, the best bidirectional LSTM model achieves accuracy 0.5138 and MRR 0.6789 on the HSK dataset. For 80.79{\%} of the test data, the model ranks the ground-truth within the top two at position level. | null | null | 10.18653/v1/P17-2064 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,690 |
inproceedings | ryskina-etal-2017-automatic | Automatic Compositor Attribution in the First Folio of Shakespeare | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2065/ | Ryskina, Maria and Alpert-Abrams, Hannah and Garrette, Dan and Berg-Kirkpatrick, Taylor | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 411--416 | Compositor attribution, the clustering of pages in a historical printed document by the individual who set the type, is a bibliographic task that relies on analysis of orthographic variation and inspection of visual details of the printed page. In this paper, we introduce a novel unsupervised model that jointly describes the textual and visual features needed to distinguish compositors. Applied to images of Shakespeare's First Folio, our model predicts attributions that agree with the manual judgements of bibliographers with an accuracy of 87{\%}, even on text that is the output of OCR. | null | null | 10.18653/v1/P17-2065 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,691 |
inproceedings | yoshikawa-etal-2017-stair | {STAIR} Captions: Constructing a Large-Scale {J}apanese Image Caption Dataset | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2066/ | Yoshikawa, Yuya and Shigeto, Yutaro and Takeuchi, Akikazu | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 417--421 | In recent years, automatic generation of image descriptions (captions), that is, image captioning, has attracted a great deal of attention. In this paper, we particularly consider generating Japanese captions for images. Since most available caption datasets have been constructed for English language, there are few datasets for Japanese. To tackle this problem, we construct a large-scale Japanese image caption dataset based on images from MS-COCO, which is called STAIR Captions. STAIR Captions consists of 820,310 Japanese captions for 164,062 images. In the experiment, we show that a neural network trained using STAIR Captions can generate more natural and better Japanese captions, compared to those generated using English-Japanese machine translation after generating English captions. | null | null | 10.18653/v1/P17-2066 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,692 |
inproceedings | wang-2017-liar | {\textquotedblleft}Liar, Liar Pants on Fire{\textquotedblright}: A New Benchmark Dataset for Fake News Detection | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2067/ | Wang, William Yang | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 422--426 | Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news have been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present LIAR: a new, publicly available dataset for fake news detection. We collected a decade-long set of 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides a detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than the previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model. | null | null | 10.18653/v1/P17-2067 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,693 |
inproceedings | kato-etal-2017-english | {E}nglish Multiword Expression-aware Dependency Parsing Including Named Entities | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2068/ | Kato, Akihiko and Shindo, Hiroyuki and Matsumoto, Yuji | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 427--432 | Because syntactic structures and spans of multiword expressions (MWEs) are independently annotated in many English syntactic corpora, they are generally inconsistent with respect to one another, which is harmful to the implementation of an aggregate system. In this work, we construct a corpus that ensures consistency between dependency structures and MWEs, including named entities. Further, we explore models that predict both MWE-spans and an MWE-aware dependency structure. Experimental results show that our joint model using additional MWE-span features achieves an MWE recognition improvement of 1.35 points over a pipeline model. | null | null | 10.18653/v1/P17-2068 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,694 |
inproceedings | kober-etal-2017-improving | Improving Semantic Composition with Offset Inference | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2069/ | Kober, Thomas and Weeds, Julie and Reffin, Jeremy and Weir, David | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 433--440 | Count-based distributional semantic models suffer from sparsity due to unobserved but plausible co-occurrences in any text collection. This problem is amplified for models like Anchored Packed Trees (APTs), that take the grammatical type of a co-occurrence into account. We therefore introduce a novel form of distributional inference that exploits the rich type structure in APTs and infers missing data by the same mechanism that is used for semantic composition. | null | null | 10.18653/v1/P17-2069 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,695 |
inproceedings | fadaee-etal-2017-learning | Learning Topic-Sensitive Word Representations | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2070/ | Fadaee, Marzieh and Bisazza, Arianna and Monz, Christof | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 441--447 | Distributed word representations are widely used for modeling words in NLP tasks. Most of the existing models generate one representation per word and do not consider different meanings of a word. We present two approaches to learn multiple topic-sensitive representations per word by using Hierarchical Dirichlet Process. We observe that by modeling topics and integrating topic distributions for each document we obtain representations that are able to distinguish between different meanings of a given word. Our models yield statistically significant improvements for the lexical substitution task indicating that commonly used single word representations, even when combined with contextual information, are insufficient for this task. | null | null | 10.18653/v1/P17-2070 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,696 |
inproceedings | szymanski-2017-temporal | Temporal Word Analogies: Identifying Lexical Replacement with Diachronic Word Embeddings | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2071/ | Szymanski, Terrence | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 448--453 | This paper introduces the concept of temporal word analogies: pairs of words which occupy the same semantic space at different points in time. One well-known property of word embeddings is that they are able to effectively model traditional word analogies ({\textquotedblleft}word $w_1$ is to word $w_2$ as word $w_3$ is to word $w_4${\textquotedblright}) through vector addition. Here, I show that temporal word analogies ({\textquotedblleft}word $w_1$ at time $t_\alpha$ is like word $w_2$ at time $t_\beta${\textquotedblright}) can effectively be modeled with diachronic word embeddings, provided that the independent embedding spaces from each time period are appropriately transformed into a common vector space. When applied to a diachronic corpus of news articles, this method is able to identify temporal word analogies such as {\textquotedblleft}Ronald Reagan in 1987 is like Bill Clinton in 1997{\textquotedblright}, or {\textquotedblleft}Walkman in 1987 is like iPod in 2007{\textquotedblright}. | null | null | 10.18653/v1/P17-2071 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,697 |
inproceedings | elrazzaz-etal-2017-methodical | Methodical Evaluation of {A}rabic Word Embeddings | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2072/ | Elrazzaz, Mohammed and Elbassuoni, Shady and Shaban, Khaled and Helwe, Chadi | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 454--458 | Many unsupervised learning techniques have been proposed to obtain meaningful representations of words from text. In this study, we evaluate these various techniques when used to generate Arabic word embeddings. We first build a benchmark for the Arabic language that can be utilized to perform intrinsic evaluation of different word embeddings. We then perform additional extrinsic evaluations of the embeddings based on two NLP tasks. | null | null | 10.18653/v1/P17-2072 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,698 |
inproceedings | rashkin-etal-2017-multilingual | Multilingual Connotation Frames: A Case Study on Social Media for Targeted Sentiment Analysis and Forecast | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2073/ | Rashkin, Hannah and Bell, Eric and Choi, Yejin and Volkova, Svitlana | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 459--464 | People around the globe respond to major real world events through social media. To study targeted public sentiments across many languages and geographic locations, we introduce multilingual connotation frames: an extension from English connotation frames of Rashkin et al. (2016) with 10 additional European languages, focusing on the implied sentiments among event participants engaged in a frame. As a case study, we present large scale analysis on targeted public sentiments toward salient events and entities using 1.2 million multilingual connotation frames extracted from Twitter. | null | null | 10.18653/v1/P17-2073 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,699 |
inproceedings | kiritchenko-mohammad-2017-best | Best-Worst Scaling More Reliable than Rating Scales: A Case Study on Sentiment Intensity Annotation | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2074/ | Kiritchenko, Svetlana and Mohammad, Saif | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 465--470 | Rating scales are a widely used method for data annotation; however, they present several challenges, such as difficulty in maintaining inter- and intra-annotator consistency. Best{--}worst scaling (BWS) is an alternative method of annotation that is claimed to produce high-quality annotations while keeping the required number of annotations similar to that of rating scales. However, the veracity of this claim has never been systematically established. Here for the first time, we set up an experiment that directly compares the rating scale method with BWS. We show that with the same total number of annotations, BWS produces significantly more reliable results than the rating scale. | null | null | 10.18653/v1/P17-2074 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,700 |
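For readers unfamiliar with BWS, item scores are conventionally recovered from 4-tuple annotations by simple counting: an item's score is the fraction of times it was chosen best minus the fraction of times it was chosen worst. A small sketch with made-up annotations:

```python
from collections import Counter

def bws_scores(annotations):
    """annotations: list of (tuple_of_items, best_item, worst_item)."""
    appear, best, worst = Counter(), Counter(), Counter()
    for items, b, w in annotations:
        appear.update(items)
        best[b] += 1
        worst[w] += 1
    # score = %best - %worst, per item
    return {it: (best[it] - worst[it]) / appear[it] for it in appear}

anns = [
    (("great", "good", "ok", "awful"), "great", "awful"),
    (("good", "ok", "bad", "awful"), "good", "awful"),
    (("great", "ok", "bad", "awful"), "great", "bad"),
]
print(bws_scores(anns))  # e.g. great: 1.0, awful: -0.67, ...
```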
inproceedings | kim-etal-2017-demographic | Demographic Inference on {T}witter using Recursive Neural Networks | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2075/ | Kim, Sunghwan Mac and Xu, Qiongkai and Qu, Lizhen and Wan, Stephen and Paris, C{\'e}cile | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 471--477 | In social media, demographic inference is a critical task in order to gain a better understanding of a cohort and to facilitate interacting with one's audience. Most previous work has made independence assumptions over topological, textual and label information on social networks. In this work, we employ recursive neural networks to break down these independence assumptions to obtain inference about demographic characteristics on Twitter. We show that our model performs better than existing models including the state-of-the-art. | null | null | 10.18653/v1/P17-2075 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,701
inproceedings | vijayaraghavan-etal-2017-twitter | {T}witter Demographic Classification Using Deep Multi-modal Multi-task Learning | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2076/ | Vijayaraghavan, Prashanth and Vosoughi, Soroush and Roy, Deb | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 478--483 | Twitter should be an ideal place to get a fresh read on how different issues are playing with the public, one that's potentially more reflective of democracy in this new media age than traditional polls. Pollsters typically ask people a fixed set of questions, while in social media people use their own voices to speak about whatever is on their minds. However, the demographic distribution of users on Twitter is not representative of the general population. In this paper, we present a demographic classifier for gender, age, political orientation and location on Twitter. We collected and curated a robust Twitter demographic dataset for this task. Our classifier uses a deep multi-modal multi-task learning architecture to reach a state-of-the-art performance, achieving an F1-score of 0.89, 0.82, 0.86, and 0.68 for gender, age, political orientation, and location respectively. | null | null | 10.18653/v1/P17-2076 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,702
inproceedings | zhan-etal-2017-network | A Network Framework for Noisy Label Aggregation in Social Media | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2077/ | Zhan, Xueying and Wang, Yaowei and Rao, Yanghui and Xie, Haoran and Li, Qing and Wang, Fu Lee and Wong, Tak-Lam | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 484--490 | This paper focuses on the task of noisy label aggregation in social media, where users with different social or cultural backgrounds may annotate invalid or malicious tags for documents. To aggregate noisy labels at a small cost, a network framework is proposed by calculating the matching degree of a document's topics and the annotators' meta-data. Unlike using the back-propagation algorithm, a probabilistic inference approach is adopted to estimate network parameters. Finally, a new simulation method is designed for validating the effectiveness of the proposed framework in aggregating noisy labels. | null | null | 10.18653/v1/P17-2077 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,703
inproceedings | van-der-goot-van-noord-2017-parser | Parser Adaptation for Social Media by Integrating Normalization | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2078/ | van der Goot, Rob and van Noord, Gertjan | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 491--497 | This work explores different approaches to using normalization for parser adaptation. Traditionally, normalization is used as a separate pre-processing step. We show that integrating the normalization model into the parsing algorithm is more beneficial. This way, multiple normalization candidates can be leveraged, which improves parsing performance on social media. We test this hypothesis by modifying the Berkeley parser; out-of-the-box it achieves an F1 score of 66.52. Our integrated approach reaches a significant improvement with an F1 score of 67.36, while using the best normalization sequence results in an F1 score of only 66.94. | null | null | 10.18653/v1/P17-2078 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,704
inproceedings | qiu-etal-2017-alime | {A}li{M}e Chat: A Sequence to Sequence and Rerank based Chatbot Engine | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2079/ | Qiu, Minghui and Li, Feng-Lin and Wang, Siyu and Gao, Xing and Chen, Yan and Zhao, Weipeng and Chen, Haiqing and Huang, Jun and Chu, Wei | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 498--503 | We propose AliMe Chat, an open-domain chatbot engine that integrates the joint results of Information Retrieval (IR) and Sequence to Sequence (Seq2Seq) based generation models. AliMe Chat uses an attentive Seq2Seq based rerank model to optimize the joint results. Extensive experiments show our engine outperforms both IR and generation based models. We launch AliMe Chat for a real-world industrial application and observe better results than another public chatbot. | null | null | 10.18653/v1/P17-2079 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,705 |
inproceedings | shen-etal-2017-conditional | A Conditional Variational Framework for Dialog Generation | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2080/ | Shen, Xiaoyu and Su, Hui and Li, Yanran and Li, Wenjie and Niu, Shuzi and Zhao, Yang and Aizawa, Akiko and Long, Guoping | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 504--509 | Deep latent variable models have been shown to facilitate response generation for open-domain dialog systems. However, these latent variables are highly randomized, leading to uncontrollable generated responses. In this paper, we propose a framework allowing conditional response generation based on specific attributes. These attributes can be either manually assigned or automatically detected. Moreover, the dialog states for both speakers are modeled separately in order to reflect personal features. We validate this framework on two different scenarios, where the attribute refers to genericness and sentiment states respectively. The experimental results demonstrate the potential of our model: meaningful responses can be generated in accordance with the specified attributes. | null | null | 10.18653/v1/P17-2080 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,706
inproceedings | min-etal-2017-question | Question Answering through Transfer Learning from Large Fine-grained Supervision Data | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2081/ | Min, Sewon and Seo, Minjoon and Hajishirzi, Hannaneh | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 510--517 | We show that the task of question answering (QA) can significantly benefit from the transfer learning of models trained on a different large, fine-grained QA dataset. We achieve the state of the art in two well-studied QA datasets, WikiQA and SemEval-2016 (Task 3A), through a basic transfer learning technique from SQuAD. For WikiQA, our model outperforms the previous best model by more than 8{\%}. We demonstrate that finer supervision provides better guidance for learning lexical and syntactic information than coarser supervision, through quantitative results and visual analysis. We also show that a similar transfer learning procedure achieves the state of the art on an entailment task. | null | null | 10.18653/v1/P17-2081 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,707 |
inproceedings | tran-etal-2017-generative | A Generative Attentional Neural Network Model for Dialogue Act Classification | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2083/ | Tran, Quan Hung and Haffari, Gholamreza and Zukerman, Ingrid | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 524--529 | We propose a novel generative neural network architecture for Dialogue Act classification. Building upon the Recurrent Neural Network framework, our model incorporates a novel attentional technique and a label to label connection for sequence learning, akin to Hidden Markov Models. The experiments show that both of these innovations lead our model to outperform strong baselines for dialogue act classification on MapTask and Switchboard corpora. We further empirically analyse the effectiveness of each of the new innovations. | null | null | 10.18653/v1/P17-2083 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,709 |
inproceedings | teneva-cheng-2017-salience | Salience Rank: Efficient Keyphrase Extraction with Topic Modeling | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2084/ | Teneva, Nedelina and Cheng, Weiwei | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 530--535 | Topical PageRank (TPR) uses the latent topic distribution inferred by Latent Dirichlet Allocation (LDA) to perform ranking of noun phrases extracted from documents. The ranking procedure consists of running PageRank K times, where K is the number of topics used in the LDA model. In this paper, we propose a modification of TPR, called Salience Rank. Salience Rank only needs to run PageRank once and extracts comparable or better keyphrases on benchmark datasets. In addition to these quality and efficiency benefits, our method has the flexibility to extract keyphrases with varying tradeoffs between topic specificity and corpus specificity. | null | null | 10.18653/v1/P17-2084 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,710
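The single biased PageRank run can be pictured with networkx: instead of K topic-specific runs, one run uses a teleport (personalization) distribution proportional to word salience. The tiny graph and salience values below are invented, and the paper's exact salience definition differs from this toy:

```python
import networkx as nx

cooccurrence_edges = [("neural", "network"), ("network", "model"),
                      ("model", "training"), ("neural", "model")]
salience = {"neural": 0.9, "network": 0.7, "model": 0.4, "training": 0.2}

G = nx.Graph(cooccurrence_edges)
total = sum(salience.values())
teleport = {w: s / total for w, s in salience.items()}

# one PageRank run, biased toward salient words
scores = nx.pagerank(G, alpha=0.85, personalization=teleport)
print(sorted(scores, key=scores.get, reverse=True))  # ranked candidates
```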
inproceedings | lin-etal-2017-list | List-only Entity Linking | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2085/ | Lin, Ying and Lin, Chin-Yew and Ji, Heng | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 536--541 | Traditional Entity Linking (EL) technologies rely on rich structures and properties in the target knowledge base (KB). However, in many applications, the KB may be as simple and sparse as lists of names of the same type (e.g., lists of products). We call this the List-only Entity Linking problem. Fortunately, some mentions may have more cues for linking, which can be used as seed mentions to bridge other mentions and the uninformative entities. In this work, we select the most linkable mentions as seed mentions and disambiguate other mentions by comparing them with the seed mentions rather than directly with the entities. Our experiments on linking mentions to seven automatically mined lists show promising results and demonstrate the effectiveness of our approach. | null | null | 10.18653/v1/P17-2085 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,711
inproceedings | chen-etal-2017-improving | Improving Native Language Identification by Using Spelling Errors | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2086/ | Chen, Lingzhen and Strapparava, Carlo and Nastase, Vivi | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 542--546 | In this paper, we explore spelling errors as a source of information for detecting the native language of a writer, a previously under-explored area. We note that character n-grams from misspelled words are very indicative of the native language of the author. In combination with other lexical features, spelling error features lead to a 1.2{\%} improvement in accuracy on classifying texts in the TOEFL11 corpus by the author's native language, compared to systems participating in the NLI shared task. | null | null | 10.18653/v1/P17-2086 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,712
inproceedings | jamshid-lou-johnson-2017-disfluency | Disfluency Detection using a Noisy Channel Model and a Deep Neural Language Model | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2087/ | Jamshid Lou, Paria and Johnson, Mark | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 547--553 | This paper presents a model for disfluency detection in spontaneous speech transcripts called LSTM Noisy Channel Model. The model uses a Noisy Channel Model (NCM) to generate n-best candidate disfluency analyses and a Long Short-Term Memory (LSTM) language model to score the underlying fluent sentences of each analysis. The LSTM language model scores, along with other features, are used in a MaxEnt reranker to identify the most plausible analysis. We show that using an LSTM language model in the reranking process of noisy channel disfluency model improves the state-of-the-art in disfluency detection. | null | null | 10.18653/v1/P17-2087 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,713 |
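Schematically, the reranking stage combines two log-scores per candidate analysis, as in this toy sketch (a simple weighted combination stands in for the paper's MaxEnt reranker with richer features; the scoring functions are placeholders):

```python
def rerank(candidates, lm_logprob, channel_weight=1.0, lm_weight=1.0):
    """candidates: list of (fluent_sentence, channel_logprob).
    Returns the analysis with the best combined score."""
    def combined(cand):
        fluent, channel_lp = cand
        return channel_weight * channel_lp + lm_weight * lm_logprob(fluent)
    return max(candidates, key=combined)

# stand-in LM preferring shorter sentences (a real system uses an LSTM LM)
toy_lm = lambda s: -0.5 * len(s.split())
nbest = [("I want a flight to Boston", -3.2),
         ("I want a flight I mean to Boston", -2.9)]
print(rerank(nbest, toy_lm))  # picks the shorter, fluent analysis
```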
inproceedings | hayashi-shimbo-2017-equivalence | On the Equivalence of Holographic and Complex Embeddings for Link Prediction | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2088/ | Hayashi, Katsuhiko and Shimbo, Masashi | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 554--559 | We show the equivalence of two state-of-the-art models for link prediction/knowledge graph completion: Nickel et al.'s holographic embeddings and Trouillon et al.'s complex embeddings. We first consider a spectral version of the holographic embeddings, exploiting the frequency domain in the Fourier transform for efficient computation. The analysis of the resulting model reveals that it can be viewed as an instance of the complex embeddings with a certain constraint imposed on the initial vectors upon training. Conversely, any set of complex embeddings can be converted to a set of equivalent holographic embeddings. | null | null | 10.18653/v1/P17-2088 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,714
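The connection can be checked numerically: the HolE score r . (s * o), with * denoting circular correlation, equals a ComplEx-style trilinear form over the discrete Fourier transforms of the real vectors. A small numpy verification with random vectors standing in for trained embeddings:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
s, r, o = (rng.normal(size=d) for _ in range(3))

# circular correlation, computed naively ...
corr = np.array([sum(s[i] * o[(i + k) % d] for i in range(d))
                 for k in range(d)])
hole_score = r @ corr

# ... and the same score as a complex trilinear form in frequency space
S, R, O = np.fft.fft(s), np.fft.fft(r), np.fft.fft(o)
complex_score = np.real(np.sum(R * S * np.conj(O))) / d

print(np.isclose(hole_score, complex_score))  # True
```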
inproceedings | wang-etal-2017-sentence | Sentence Embedding for Neural Machine Translation Domain Adaptation | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2089/ | Wang, Rui and Finch, Andrew and Utiyama, Masao and Sumita, Eiichiro | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 560--566 | Although new corpora are becoming increasingly available for machine translation, only those that belong to the same or similar domains are typically able to improve translation performance. Recently, Neural Machine Translation (NMT) has become prominent in the field. However, most of the existing domain adaptation methods only focus on phrase-based machine translation. In this paper, we exploit the NMT's internal embedding of the source sentence and use the sentence embedding similarity to select the sentences which are close to in-domain data. The empirical adaptation results on the IWSLT English-French and NIST Chinese-English tasks show that the proposed methods can substantially improve NMT performance by 2.4-9.0 BLEU points, outperforming the existing state-of-the-art baseline by 2.3-4.5 BLEU points. | null | null | 10.18653/v1/P17-2089 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,715
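As a rough sketch of embedding-based data selection (simplified here to similarity against a single in-domain centroid, which is an assumption rather than the paper's exact scoring), with random vectors standing in for NMT encoder states:

```python
import numpy as np

def select_in_domain(general_embs, in_domain_embs, top_k):
    """Return indices of general-domain sentences closest to the
    in-domain centroid by cosine similarity."""
    centroid = in_domain_embs.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    norms = np.linalg.norm(general_embs, axis=1) + 1e-9
    sims = (general_embs @ centroid) / norms
    return np.argsort(-sims)[:top_k]

rng = np.random.default_rng(2)
general = rng.normal(size=(1000, 64))
in_domain = rng.normal(size=(50, 64)) + 0.5   # shifted cluster
print(select_in_domain(general, in_domain, top_k=10))
```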
inproceedings | fadaee-etal-2017-data | Data Augmentation for Low-Resource Neural Machine Translation | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2090/ | Fadaee, Marzieh and Bisazza, Arianna and Monz, Christof | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 567--573 | The quality of a Neural Machine Translation system depends substantially on the availability of sizable parallel corpora. For low-resource language pairs this is not the case, resulting in poor translation quality. Inspired by work in computer vision, we propose a novel data augmentation approach that targets low-frequency words by generating new sentence pairs containing rare words in new, synthetically created contexts. Experimental results on simulated low-resource settings show that our method improves translation quality by up to 2.9 BLEU points over the baseline and up to 3.2 BLEU over back-translation. | null | null | 10.18653/v1/P17-2090 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,716 |
inproceedings | shi-knight-2017-speeding | Speeding Up Neural Machine Translation Decoding by Shrinking Run-time Vocabulary | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2091/ | Shi, Xing and Knight, Kevin | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 574--579 | We speed up Neural Machine Translation (NMT) decoding by shrinking run-time target vocabulary. We experiment with two shrinking approaches: Locality Sensitive Hashing (LSH) and word alignments. Using the latter method, we get a 2x overall speed-up over a highly-optimized GPU implementation, without hurting BLEU. On certain low-resource language pairs, the same methods improve BLEU by 0.5 points. We also report a negative result for LSH on GPUs, due to relatively large overhead, though it was successful on CPUs. Compared with Locality Sensitive Hashing (LSH), decoding with word alignments is GPU-friendly, orthogonal to existing speedup methods and more robust across language pairs. | null | null | 10.18653/v1/P17-2091 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,717 |
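The alignment-based shrinking amounts to a table lookup at decode time, along these lines (toy candidate table; a real system would keep the top-k translations of each source word from an alignment model, plus a few always-on tokens):

```python
candidates = {
    "house": ["maison", "domicile"],
    "green": ["vert", "verte"],
    "the":   ["le", "la", "les"],
}
ALWAYS_ON = ["<s>", "</s>", "<unk>"]

def runtime_vocab(source_sentence):
    """Union of per-word candidate lists for one source sentence."""
    vocab = set(ALWAYS_ON)
    for word in source_sentence.split():
        vocab.update(candidates.get(word.lower(), []))
    return sorted(vocab)

print(runtime_vocab("the green house"))
# the target softmax is then restricted to this small vocabulary
```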
inproceedings | zhou-etal-2017-chunk | Chunk-Based Bi-Scale Decoder for Neural Machine Translation | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2092/ | Zhou, Hao and Tu, Zhaopeng and Huang, Shujian and Liu, Xiaohua and Li, Hang and Chen, Jiajun | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 580--586 | In typical neural machine translation (NMT), the decoder generates a sentence word by word, packing all linguistic granularities into the same RNN time-scale. In this paper, we propose a new type of decoder for NMT, which splits the decoder state into two parts and updates them at two different time-scales. Specifically, we first predict a chunk time-scale state for phrasal modeling, on top of which multiple word time-scale states are generated. In this way, the target sentence is translated hierarchically from chunks to words, with information in different granularities being leveraged. Experiments show that our proposed model significantly improves the translation performance over the state-of-the-art NMT model. | null | null | 10.18653/v1/P17-2092 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,718
inproceedings | fang-cohn-2017-model | Model Transfer for Tagging Low-resource Languages using a Bilingual Dictionary | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2093/ | Fang, Meng and Cohn, Trevor | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 587--593 | Cross-lingual model transfer is a compelling and popular method for predicting annotations in a low-resource language, whereby parallel corpora provide a bridge to a high-resource language, and its associated annotated corpora. However, parallel data is not readily available for many languages, limiting the applicability of these approaches. We address these drawbacks in our framework which takes advantage of cross-lingual word embeddings trained solely on a high coverage dictionary. We propose a novel neural network model for joint training from both sources of data based on cross-lingual word embeddings, and show substantial empirical improvements over baseline techniques. We also propose several active learning heuristics, which result in improvements over competitive benchmark methods. | null | null | 10.18653/v1/P17-2093 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,719 |
inproceedings | delli-bovi-etal-2017-eurosense | {E}uro{S}ense: Automatic Harvesting of Multilingual Sense Annotations from Parallel Text | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2094/ | Delli Bovi, Claudio and Camacho-Collados, Jose and Raganato, Alessandro and Navigli, Roberto | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 594--600 | Parallel corpora are widely used in a variety of Natural Language Processing tasks, from Machine Translation to cross-lingual Word Sense Disambiguation, where parallel sentences can be exploited to automatically generate high-quality sense annotations on a large scale. In this paper we present EuroSense, a multilingual sense-annotated resource based on the joint disambiguation of the Europarl parallel corpus, with almost 123 million sense annotations for over 155 thousand distinct concepts and entities from a language-independent unified sense inventory. We evaluate the quality of our sense annotations intrinsically and extrinsically, showing their effectiveness as training data for Word Sense Disambiguation. | null | null | 10.18653/v1/P17-2094 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,720 |
inproceedings | sajjad-etal-2017-challenging | Challenging Language-Dependent Segmentation for {A}rabic: An Application to Machine Translation and Part-of-Speech Tagging | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2095/ | Sajjad, Hassan and Dalvi, Fahim and Durrani, Nadir and Abdelali, Ahmed and Belinkov, Yonatan and Vogel, Stephan | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 601--607 | Word segmentation plays a pivotal role in improving any Arabic NLP application. Therefore, considerable research effort has been spent on improving its accuracy. Off-the-shelf tools, however, are: i) complicated to use and ii) domain/dialect dependent. We explore three language-independent alternatives to morphological segmentation using: i) data-driven sub-word units, ii) characters as a unit of learning, and iii) word embeddings learned using a character CNN (Convolution Neural Network). On the tasks of Machine Translation and POS tagging, we found these methods to achieve close to, and occasionally surpass, state-of-the-art performance. In our analysis, we show that a neural machine translation system is sensitive to the ratio of source and target tokens, and a ratio close to 1 or greater gives optimal performance. | null | null | 10.18653/v1/P17-2095 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,721
inproceedings | cai-etal-2017-fast | Fast and Accurate Neural Word Segmentation for {C}hinese | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2096/ | Cai, Deng and Zhao, Hai and Zhang, Zhisong and Xin, Yuan and Wu, Yongjian and Huang, Feiyue | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 608--615 | Neural models with minimal feature engineering have achieved competitive performance against traditional methods for the task of Chinese word segmentation. However, both training and working procedures of the current neural models are computationally inefficient. In this paper, we propose a greedy neural word segmenter with balanced word and character embedding inputs to alleviate the existing drawbacks. Our segmenter is truly end-to-end, capable of performing segmentation much faster and even more accurately than state-of-the-art neural models on Chinese benchmark datasets. | null | null | 10.18653/v1/P17-2096 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,722
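Greedy segmentation itself is easy to picture: scan left to right and commit to the best-scoring candidate word at each position. In this toy sketch a dictionary score replaces the paper's learned word/character scoring network:

```python
def greedy_segment(sentence, score, max_len=4):
    """Left-to-right greedy decoding over candidate words."""
    words, i = [], 0
    while i < len(sentence):
        best = max((sentence[i:i + L] for L in range(1, max_len + 1)
                    if i + L <= len(sentence)), key=score)
        words.append(best)
        i += len(best)
    return words

# invented lexicon scores for illustration
lexicon = {"中国": 2.0, "人民": 2.0, "中": 0.1, "国": 0.1, "人": 0.1, "民": 0.1}
score = lambda w: lexicon.get(w, -1.0)
print(greedy_segment("中国人民", score))  # ['中国', '人民']
```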
inproceedings | cai-etal-2017-pay | Pay Attention to the Ending: Strong Neural Baselines for the {ROC} Story Cloze Task | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2097/ | Cai, Zheng and Tu, Lifu and Gimpel, Kevin | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 616--622 | We consider the ROC story cloze task (Mostafazadeh et al., 2016) and present several findings. We develop a model that uses hierarchical recurrent networks with attention to encode the sentences in the story and score candidate endings. By discarding the large training set and only training on the validation set, we achieve an accuracy of 74.7{\%}. Even when we discard the story plots (sentences before the ending) and only train to choose the better of two endings, we can still reach 72.5{\%}. We then analyze this {\textquotedblleft}ending-only{\textquotedblright} task setting. We estimate human accuracy to be 78{\%} and find several types of clues that lead to this high accuracy, including those related to sentiment, negation, and general ending likelihood regardless of the story context. | null | null | 10.18653/v1/P17-2097 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,723
inproceedings | herzig-berant-2017-neural | Neural Semantic Parsing over Multiple Knowledge-bases | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2098/ | Herzig, Jonathan and Berant, Jonathan | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 623--628 | A fundamental challenge in developing semantic parsers is the paucity of strong supervision in the form of language utterances annotated with logical form. In this paper, we propose to exploit structural regularities in language in different domains, and train semantic parsers over multiple knowledge-bases (KBs), while sharing information across datasets. We find that we can substantially improve parsing accuracy by training a single sequence-to-sequence model over multiple KBs, when providing an encoding of the domain at decoding time. Our model achieves state-of-the-art performance on the Overnight dataset (containing eight domains), improves performance over a single KB baseline from 75.6{\%} to 79.6{\%}, while obtaining a 7x reduction in the number of model parameters. | null | null | 10.18653/v1/P17-2098 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,724 |
inproceedings | mu-etal-2017-representing | Representing Sentences as Low-Rank Subspaces | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2099/ | Mu, Jiaqi and Bhat, Suma and Viswanath, Pramod | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 629--634 | Sentences are important semantic units of natural language. A generic, distributional representation of sentences that can capture the latent semantics is beneficial to multiple downstream applications. We observe a simple geometry of sentences {--} the word representations of a given sentence (on average 10.23 words in all SemEval datasets with a standard deviation of 4.84) roughly lie in a low-rank subspace (roughly, rank 4). Motivated by this observation, we represent a sentence by the low-rank subspace spanned by its word vectors. Such an unsupervised representation is empirically validated via semantic textual similarity tasks on 19 different datasets, where it outperforms the sophisticated neural network models, including skip-thought vectors, by 15{\%} on average. | null | null | 10.18653/v1/P17-2099 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,725
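A compact sketch of this representation: take the top few left singular vectors of the stacked word vectors as the sentence's subspace, and compare two sentences through the principal angles between their subspaces (averaged cosines here; the paper's exact similarity measure may differ). Random vectors stand in for trained word embeddings:

```python
import numpy as np

def sentence_subspace(word_vecs, rank=4):
    """word_vecs: (dim, n_words). Returns a (dim, rank) orthonormal basis."""
    U, _, _ = np.linalg.svd(word_vecs, full_matrices=False)
    return U[:, :rank]

def subspace_similarity(U1, U2):
    # singular values of U1^T U2 are cosines of the principal angles
    return np.linalg.svd(U1.T @ U2, compute_uv=False).mean()

rng = np.random.default_rng(3)
dim = 100
sent_a = rng.normal(size=(dim, 9))    # 9 word vectors
sent_b = rng.normal(size=(dim, 12))
print(subspace_similarity(sentence_subspace(sent_a),
                          sentence_subspace(sent_b)))
```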
inproceedings | ma-etal-2017-improving | Improving Semantic Relevance for Sequence-to-Sequence Learning of {C}hinese Social Media Text Summarization | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2100/ | Ma, Shuming and Sun, Xu and Xu, Jingjing and Wang, Houfeng and Li, Wenjie and Su, Qi | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 635--640 | Current Chinese social media text summarization models are based on an encoder-decoder framework. Although their generated summaries are literally similar to the source texts, they have low semantic relevance. In this work, our goal is to improve semantic relevance between source texts and summaries for Chinese social media summarization. We introduce a Semantic Relevance Based neural model to encourage high semantic similarity between texts and summaries. In our model, the source text is represented by a gated attention encoder, while the summary representation is produced by a decoder. In addition, the similarity score between the representations is maximized during training. Our experiments show that the proposed model outperforms baseline systems on a social media corpus. | null | null | 10.18653/v1/P17-2100 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,726
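The training objective can be pictured as cross-entropy minus a weighted cosine term between the two representations; the weight `lam` and the numpy stand-ins below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def semantic_relevance_loss(ce_loss, src_repr, sum_repr, lam=0.5):
    """Lower the loss as source and summary representations align."""
    cos = src_repr @ sum_repr / (
        np.linalg.norm(src_repr) * np.linalg.norm(sum_repr) + 1e-9)
    return ce_loss - lam * cos   # maximizing similarity lowers the loss

print(semantic_relevance_loss(2.3, np.ones(8), np.ones(8)))  # ~1.8
```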
inproceedings | sanagavarapu-etal-2017-determining | Determining Whether and When People Participate in the Events They Tweet About | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2101/ | Sanagavarapu, Krishna Chaitanya and Vempala, Alakananda and Blanco, Eduardo | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 641--646 | This paper describes an approach to determine whether people participate in the events they tweet about. Specifically, we determine whether people are participants in events with respect to the tweet timestamp. We target all events expressed by verbs in tweets, including past and present events as well as events that may occur in the future. We present new annotations using 1,096 event mentions, and experimental results showing that the task is challenging. | null | null | 10.18653/v1/P17-2101 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,727
inproceedings | volkova-etal-2017-separating | Separating Facts from Fiction: Linguistic Models to Classify Suspicious and Trusted News Posts on {T}witter | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2102/ | Volkova, Svitlana and Shaffer, Kyle and Jang, Jin Yea and Hodas, Nathan | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 647--653 | Pew research polls report 62 percent of U.S. adults get news on social media (Gottfried and Shearer, 2016). In a December poll, 64 percent of U.S. adults said that {\textquotedblleft}made-up news{\textquotedblright} has caused a {\textquotedblleft}great deal of confusion{\textquotedblright} about the facts of current events (Barthel et al., 2016). Fabricated stories in social media, ranging from deliberate propaganda to hoaxes and satire, contribute to this confusion in addition to having serious effects on global stability. In this work we build predictive models to classify 130 thousand news posts as suspicious or verified, and predict four sub-types of suspicious news {--} satire, hoaxes, clickbait and propaganda. We show that neural network models trained on tweet content and social network interactions outperform lexical models. Unlike previous work on deception detection, we find that adding syntax and grammar features to our models does not improve performance. Incorporating linguistic features improves classification results; however, social interaction features are most informative for finer-grained separation between four types of suspicious news posts. | null | null | 10.18653/v1/P17-2102 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,728
inproceedings | son-etal-2017-recognizing | Recognizing Counterfactual Thinking in Social Media Texts | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2103/ | Son, Youngseo and Buffone, Anneke and Raso, Joe and Larche, Allegra and Janocko, Anthony and Zembroski, Kevin and Schwartz, H Andrew and Ungar, Lyle | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 654--658 | Counterfactual statements, describing events that did not occur and their consequents, have been studied in areas including problem-solving, affect management, and behavior regulation. People with more counterfactual thinking tend to perceive life events as more personally meaningful. Nevertheless, counterfactuals have not been studied in computational linguistics. We create a counterfactual tweet dataset and explore approaches for detecting counterfactuals using rule-based and supervised statistical approaches. A combined rule-based and statistical approach yielded the best results (F1 = 0.77) outperforming either approach used alone. | null | null | 10.18653/v1/P17-2103 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,729 |
inproceedings | hasanuzzaman-etal-2017-temporal | Temporal Orientation of Tweets for Predicting Income of Users | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2104/ | Hasanuzzaman, Mohammed and Kamila, Sabyasachi and Kaur, Mandeep and Saha, Sriparna and Ekbal, Asif | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 659--665 | Automatically estimating a user's socio-economic profile from their language use in social media can significantly help social science research and various downstream applications ranging from business to politics. The current paper presents the first study where user cognitive structure is used to build a predictive model of income. In particular, we first develop a classifier using a weakly supervised learning framework to automatically time-tag tweets as past, present, or future. We quantify a user's overall temporal orientation based on their distribution of tweets, and use it to build a predictive model of income. Our analysis uncovers a correlation between future temporal orientation and income. Finally, we measure the predictive power of future temporal orientation on income by performing regression. | null | null | 10.18653/v1/P17-2104 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,730
inproceedings | toleu-etal-2017-character | Character-Aware Neural Morphological Disambiguation | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2105/ | Toleu, Alymzhan and Tolegen, Gulmira and Makazhanov, Aibek | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 666--671 | We develop a language-independent, deep learning-based approach to the task of morphological disambiguation. Guided by the intuition that the correct analysis should be {\textquotedblleft}most similar{\textquotedblright} to the context, we propose dense representations for morphological analyses and surface context and a simple yet effective way of combining the two to perform disambiguation. Our approach improves on the language-dependent state of the art for two agglutinative languages (Turkish and Kazakh) and can be potentially applied to other morphologically complex languages. | null | null | 10.18653/v1/P17-2105 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,731 |
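The disambiguation rule reduces to an argmax over cosine similarities, sketched below with random vectors standing in for the learned analysis and context encoders (the Turkish-style analysis strings are invented examples):

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def disambiguate(analysis_vecs, context_vec):
    """analysis_vecs: dict mapping analysis string -> dense vector.
    Picks the analysis most similar to the context."""
    return max(analysis_vecs,
               key=lambda a: cosine(analysis_vecs[a], context_vec))

rng = np.random.default_rng(4)
context = rng.normal(size=32)
analyses = {"kitap+Noun+A3sg+P3sg": rng.normal(size=32),
            "kitap+Noun+A3sg+Pnon+Acc": context + 0.1 * rng.normal(size=32)}
print(disambiguate(analyses, context))  # picks the context-like analysis
```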
inproceedings | yu-vu-2017-character | Character Composition Model with Convolutional Neural Networks for Dependency Parsing on Morphologically Rich Languages | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2106/ | Yu, Xiang and Vu, Ngoc Thang | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 672--678 | We present a transition-based dependency parser that uses a convolutional neural network to compose word representations from characters. The character composition model shows great improvement over the word-lookup model, especially for parsing agglutinative languages. These improvements are even better than using pre-trained word embeddings from extra data. On the SPMRL data sets, our system outperforms the previous best greedy parser (Ballesteros et al., 2015) by a margin of 3{\%} on average. | null | null | 10.18653/v1/P17-2106 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,732
inproceedings | agic-schluter-2017-train | How (not) to train a dependency parser: The curious case of jackknifing part-of-speech taggers | Barzilay, Regina and Kan, Min-Yen | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-2107/ | Agi{\'c}, {\v{Z}}eljko and Schluter, Natalie | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) | 679--684 | In dependency parsing, jackknifing taggers is indiscriminately used as a simple adaptation strategy. Here, we empirically evaluate when and how (not) to use jackknifing in parsing. On 26 languages, we reveal a preference that conflicts with, and surpasses, the ubiquitous ten-folding. We find no clear benefits of tagging the training data in cross-lingual parsing. | null | null | 10.18653/v1/P17-2107 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,733
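For reference, the jackknifing procedure being evaluated looks like this in outline; `train_tagger` is a placeholder for any trainable tagger exposing a `.tag` method, not a specific library API:

```python
def jackknife_tags(sentences, train_tagger, k=10):
    """Tag each fold with a tagger trained on the other k-1 folds, so the
    parser trains on realistic predicted tags. Output is in fold order,
    not corpus order."""
    folds = [sentences[i::k] for i in range(k)]
    tagged = []
    for i, held_out in enumerate(folds):
        train = [s for j, fold in enumerate(folds) if j != i for s in fold]
        tagger = train_tagger(train)
        tagged.extend(tagger.tag(s) for s in held_out)
    return tagged
```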
inproceedings | poon-etal-2017-nlp | {NLP} for Precision Medicine | Popovi{\'c}, Maja and Boyd-Graber, Jordan | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-5001/ | Poon, Hoifung and Quirk, Chris and Toutanova, Kristina and Yih, Wen-tau | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts | 1--2 | We will introduce precision medicine and showcase the vast opportunities for NLP in this burgeoning field with great societal impact. We will review pressing NLP problems, state-of-the-art methods, and important applications, as well as datasets, medical resources, and practical issues. The tutorial will provide an accessible overview of biomedicine, and does not presume knowledge in biology or healthcare. The ultimate goal is to reduce the entry barrier for NLP researchers to contribute to this exciting domain. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,781
inproceedings | morency-baltrusaitis-2017-multimodal | Multimodal Machine Learning: Integrating Language, Vision and Speech | Popovi{\'c}, Maja and Boyd-Graber, Jordan | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-5002/ | Morency, Louis-Philippe and Baltru{\v{s}}aitis, Tadas | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts | 3--5 | Multimodal machine learning is a vibrant multi-disciplinary research field which addresses some of the original goals of artificial intelligence by integrating and modeling multiple communicative modalities, including linguistic, acoustic and visual messages. With the initial research on audio-visual speech recognition and more recently with image and video captioning projects, this research field brings some unique challenges for multimodal researchers given the heterogeneity of the data and the contingency often found between modalities. This tutorial builds upon a recent course taught at Carnegie Mellon University during the Spring 2016 semester (CMU course 11-777) and two tutorials presented at CVPR 2016 and ICMI 2016. The present tutorial will review fundamental concepts of machine learning and deep neural networks before describing the five main challenges in multimodal machine learning: (1) multimodal representation learning, (2) translation {\&} mapping, (3) modality alignment, (4) multimodal fusion and (5) co-learning. The tutorial will also present state-of-the-art algorithms that were recently proposed to solve multimodal applications such as image captioning, video description and visual question answering. We will also discuss the current and upcoming challenges. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,782
inproceedings | zhu-grefenstette-2017-deep | Deep Learning for Semantic Composition | Popovi{\'c}, Maja and Boyd-Graber, Jordan | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-5003/ | Zhu, Xiaodan and Grefenstette, Edward | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts | 6--7 | Learning representations to model the meaning of text has been a core problem in NLP. The last several years have seen extensive interest in distributional approaches, in which text spans of different granularities are encoded as vectors of numerical values. If properly learned, such representations have been shown to achieve state-of-the-art performance on a wide range of NLP problems. In this tutorial, we will cover the fundamentals and the state-of-the-art research on neural network-based modeling for semantic composition, which aims to learn distributed representations for different granularities of text, e.g., phrases, sentences, or even documents, from their sub-component meaning representations, e.g., word embeddings. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,783
inproceedings | chen-etal-2017-deep | Deep Learning for Dialogue Systems | Popovi{\'c}, Maja and Boyd-Graber, Jordan | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-5004/ | Chen, Yun-Nung and Celikyilmaz, Asli and Hakkani-T{\"u}r, Dilek | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts | 8--14 | In the past decade, goal-oriented spoken dialogue systems have been the most prominent component in today's virtual personal assistants. The classic dialogue systems have rather complex and/or modular pipelines. The advance of deep learning technologies has recently given rise to applications of neural models to dialogue modeling. However, how to successfully apply deep learning based approaches to a dialogue system is still challenging. Hence, this tutorial is designed to focus on an overview of dialogue system development while describing the most recent research for building dialogue systems and summarizing the challenges, in order to allow researchers to study the potential improvements of the state-of-the-art dialogue systems. The tutorial material is available at \url{http://deepdialogue.miulab.tw}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,784
inproceedings | kordoni-2017-beyond | Beyond Words: Deep Learning for Multiword Expressions and Collocations | Popovi{\'c}, Maja and Boyd-Graber, Jordan | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-5005/ | Kordoni, Valia | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts | 15--16 | Deep learning has recently shown much promise for NLP applications. Traditionally, in most NLP approaches, documents or sentences are represented by a sparse bag-of-words representation. There is now a lot of work which goes beyond this by adopting a distributed representation of words, by constructing a so-called {\textquotedblleft}neural embedding{\textquotedblright} or vector space representation of each word or document. The aim of this tutorial is to go beyond the learning of word vectors and present methods for learning vector representations for Multiword Expressions and bilingual phrase pairs, all of which are useful for various NLP applications. This tutorial aims to provide attendees with a clear notion of the linguistic and distributional characteristics of Multiword Expressions (MWEs), their relevance for the intersection of deep learning and natural language processing, what methods and resources are available to support their use, and what more could be done in the future. Our target audience are researchers and practitioners in machine learning, parsing (syntactic and semantic) and language technology, not necessarily experts in MWEs, who are interested in tasks that involve or could benefit from considering MWEs as a pervasive phenomenon in human language and communication. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,785
inproceedings | vaughan-2017-tutorial | {T}utorial: Making Better Use of the Crowd | Popovi{\'c}, Maja and Boyd-Graber, Jordan | jul | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/P17-5006/ | Vaughan, Jennifer Wortman | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts | 17--18 | Over the last decade, crowdsourcing has been used to harness the power of human computation to solve tasks that are notoriously difficult to solve with computers alone, such as determining whether or not an image contains a tree, rating the relevance of a website, or verifying the phone number of a business. The natural language processing community was early to embrace crowdsourcing as a tool for quickly and inexpensively obtaining annotated data to train NLP systems. Once this data is collected, it can be handed off to algorithms that learn to perform basic NLP tasks such as translation or parsing. Usually this handoff is where interaction with the crowd ends. The crowd provides the data, but the ultimate goal is to eventually take humans out of the loop. Are there better ways to make use of the crowd? In this tutorial, I will begin with a showcase of innovative uses of crowdsourcing that go beyond data collection and annotation. I will discuss applications to natural language processing and machine learning, hybrid intelligence or {\textquotedblleft}human in the loop{\textquotedblright} AI systems that leverage the complementary strengths of humans and machines in order to achieve more than either could achieve alone, and large scale studies of human behavior online. I will then spend the majority of the tutorial diving into recent research aimed at understanding who crowdworkers are, how they behave, and what this should teach us about best practices for interacting with the crowd. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,786
inproceedings | dyer-2017-neural | Should Neural Network Architecture Reflect Linguistic Structure? | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1001/ | Dyer, Chris | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 1 | I explore the hypothesis that conventional neural network models (e.g., recurrent neural networks) are incorrectly biased for making linguistically sensible generalizations when learning, and that a better class of models is based on architectures that reflect hierarchical structures for which considerable behavioral evidence exists. I focus on the problem of modeling and representing the meanings of sentences. On the generation front, I introduce recurrent neural network grammars (RNNGs), a joint, generative model of phrase-structure trees and sentences. RNNGs operate via a recursive syntactic process reminiscent of probabilistic context-free grammar generation, but decisions are parameterized using RNNs that condition on the entire (top-down, left-to-right) syntactic derivation history, thus relaxing context-free independence assumptions, while retaining a bias toward explaining decisions via {\textquotedblleft}syntactically local{\textquotedblright} conditioning contexts. Experiments show that RNNGs obtain better results in generating language than models that don't exploit linguistic structure. On the representation front, I explore unsupervised learning of syntactic structures based on distant semantic supervision using a reinforcement-learning algorithm. The learner seeks a syntactic structure that provides a compositional architecture that produces a good representation for a downstream semantic task. Although the inferred structures are quite different from traditional syntactic analyses, the performance on the downstream tasks surpasses that of systems that use sequential RNNs and tree-structured RNNs based on treebank dependencies. This is joint work with Adhi Kuncoro, Dani Yogatama, Miguel Ballesteros, Phil Blunsom, Ed Grefenstette, Wang Ling, and Noah A. Smith. | null | null | 10.18653/v1/K17-1001 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,834
inproceedings | feldman-2017-rational | Rational Distortions of Learners' Linguistic Input | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1002/ | Feldman, Naomi | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 2 | Language acquisition can be modeled as a statistical inference problem: children use sentences and sounds in their input to infer linguistic structure. However, in many cases, children learn from data whose statistical structure is distorted relative to the language they are learning. Such distortions can arise either in the input itself, or as a result of children's immature strategies for encoding their input. This work examines several cases in which the statistical structure of children's input differs from the language being learned. Analyses show that these distortions of the input can be accounted for with a statistical learning framework by carefully considering the inference problems that learners solve during language acquisition. | null | null | 10.18653/v1/K17-1002 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,835
inproceedings | enguehard-etal-2017-exploring | Exploring the Syntactic Abilities of {RNN}s with Multi-task Learning | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1003/ | Enguehard, {\'E}mile and Goldberg, Yoav and Linzen, Tal | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 3--14 | Recent work has explored the syntactic abilities of RNNs using the subject-verb agreement task, which diagnoses sensitivity to sentence structure. RNNs performed this task well in common cases, but faltered in complex sentences (Linzen et al., 2016). We test whether these errors are due to inherent limitations of the architecture or to the relatively indirect supervision provided by most agreement dependencies in a corpus. We trained a single RNN to perform both the agreement task and an additional task, either CCG supertagging or language modeling. Multi-task training led to significantly lower error rates, in particular on complex sentences, suggesting that RNNs have the ability to evolve more sophisticated syntactic representations than shown before. We also show that easily available agreement training data can improve performance on other syntactic tasks, in particular when only a limited amount of training data is available for those tasks. The multi-task paradigm can also be leveraged to inject grammatical knowledge into language models. | null | null | 10.18653/v1/K17-1003 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,836 |
inproceedings | schwartz-etal-2017-effect | The Effect of Different Writing Tasks on Linguistic Style: A Case Study of the {ROC} Story Cloze Task | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1004/ | Schwartz, Roy and Sap, Maarten and Konstas, Ioannis and Zilles, Leila and Choi, Yejin and Smith, Noah A. | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 15--25 | A writer's style depends not just on personal traits but also on her intent and mental state. In this paper, we show how variants of the same writing task can lead to measurable differences in writing style. We present a case study based on the story cloze task (Mostafazadeh et al., 2016a), where annotators were assigned similar writing tasks with different constraints: (1) writing an entire story, (2) adding a story ending for a given story context, and (3) adding an incoherent ending to a story. We show that a simple linear classifier informed by stylistic features is able to successfully distinguish among the three cases, without even looking at the story context. In addition, combining our stylistic features with language model predictions reaches state-of-the-art performance on the story cloze challenge. Our results demonstrate that different task framings can dramatically affect the way people write. | null | null | 10.18653/v1/K17-1004 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,837
inproceedings | sun-etal-2017-parsing | Parsing for Grammatical Relations via Graph Merging | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1005/ | Sun, Weiwei and Du, Yantao and Wan, Xiaojun | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 26--35 | This paper is concerned with building deep grammatical relation (GR) analysis using a data-driven approach. To deal with this problem, we propose graph merging, a new perspective for building flexible dependency graphs: constructing complex graphs via constructing simple subgraphs. We discuss two key problems in this perspective: (1) how to decompose a complex graph into simple subgraphs, and (2) how to combine subgraphs into a coherent complex graph. Experiments demonstrate the effectiveness of graph merging. Our parser reaches state-of-the-art performance and is significantly better than two transition-based parsers. | null | null | 10.18653/v1/K17-1005 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,838
inproceedings | chen-etal-2017-leveraging | Leveraging Eventive Information for Better Metaphor Detection and Classification | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1006/ | Chen, I-Hsuan and Long, Yunfei and Lu, Qin and Huang, Chu-Ren | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 36--46 | Metaphor detection has been both challenging and rewarding in natural language processing applications. This study offers a new approach based on eventive information for detecting metaphors by leveraging the Chinese writing system, which is a culturally bound ontological system organized according to the basic concepts represented by radicals. As such, the information represented is available in all Chinese text without pre-processing. Since metaphor detection is another culturally based conceptual representation, we hypothesize that sub-textual information can facilitate the identification and classification of the types of metaphoric events denoted in Chinese text. We propose a set of syntactic conditions crucial to event structures to improve the model based on the classification of radical groups. With the proposed syntactic conditions, the model achieves an F-score of 0.8859, a 1.7{\%} improvement over the same classifier with only bag-of-words features. Results show that eventive information can improve the effectiveness of metaphor detection. Event information is rooted in every language, and thus this approach has a high potential to be applied to metaphor detection in other languages. | null | null | 10.18653/v1/K17-1006 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,839
inproceedings | uryupina-moschitti-2017-collaborative | Collaborative Partitioning for Coreference Resolution | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1007/ | Uryupina, Olga and Moschitti, Alessandro | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 47--57 | This paper presents a collaborative partitioning algorithm{---}a novel ensemble-based approach to coreference resolution. Starting from the all-singleton partition, we search for a solution close to the ensemble's outputs in terms of a task-specific similarity measure. Our approach assumes a loose integration of individual components of the ensemble and can therefore combine arbitrary coreference resolvers, regardless of their models. Our experiments on the CoNLL dataset show that collaborative partitioning yields results superior to those attained by the individual components, for ensembles of both strong and weak systems. Moreover, by applying the collaborative partitioning algorithm on top of three state-of-the-art resolvers, we obtain the best coreference performance reported so far in the literature (MELA v08 score of 64.47). | null | null | 10.18653/v1/K17-1007 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,840
inproceedings | eshel-etal-2017-named | Named Entity Disambiguation for Noisy Text | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1008/ | Eshel, Yotam and Cohen, Noam and Radinsky, Kira and Markovitch, Shaul and Yamada, Ikuya and Levy, Omer | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 58--68 | We address the task of Named Entity Disambiguation (NED) for noisy text. We present WikilinksNED, a large-scale NED dataset of text fragments from the web, which is significantly noisier and more challenging than existing news-based datasets. To capture the limited and noisy local context surrounding each mention, we design a neural model and train it with a novel method for sampling informative negative examples. We also describe a new way of initializing word and entity embeddings that significantly improves performance. Our model significantly outperforms existing state-of-the-art methods on WikilinksNED while achieving comparable performance on a smaller newswire dataset. | null | null | 10.18653/v1/K17-1008 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,841 |
inproceedings | sharp-etal-2017-tell | Tell Me Why: Using Question Answering as Distant Supervision for Answer Justification | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1009/ | Sharp, Rebecca and Surdeanu, Mihai and Jansen, Peter and Valenzuela-Esc{\'a}rcega, Marco A. and Clark, Peter and Hammond, Michael | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 69--79 | For many applications of question answering (QA), being able to explain why a given model chose an answer is critical. However, the lack of labeled data for answer justifications makes learning this difficult and expensive. Here we propose an approach that uses answer ranking as distant supervision for learning how to select informative justifications, where justifications serve as inferential connections between the question and the correct answer while often containing little lexical overlap with either. We propose a neural network architecture for QA that reranks answer justifications as an intermediate (and human-interpretable) step in answer selection. Our approach is informed by a set of features designed to combine both learned representations and explicit features to capture the connection between questions, answers, and answer justifications. We show that with this end-to-end approach we are able to significantly improve upon a strong IR baseline in both justification ranking (+9{\%} rated highly relevant) and answer selection (+6{\%} P@1). | null | null | 10.18653/v1/K17-1009 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,842 |
inproceedings | khashabi-etal-2017-learning | Learning What is Essential in Questions | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1010/ | Khashabi, Daniel and Khot, Tushar and Sabharwal, Ashish and Roth, Dan | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 80--89 | Question answering (QA) systems are easily distracted by irrelevant or redundant words in questions, especially when faced with long or multi-sentence questions in difficult domains. This paper introduces and studies the notion of essential question terms with the goal of improving such QA solvers. We illustrate the importance of essential question terms by showing that humans' ability to answer questions drops significantly when essential terms are eliminated from questions. We then develop a classifier that reliably (90{\%} mean average precision) identifies and ranks essential terms in questions. Finally, we use the classifier to demonstrate that the notion of question term essentiality allows a state-of-the-art QA solver for elementary-level science questions to make better and more informed decisions, improving performance by up to 5{\%}. We also introduce a new dataset of over 2,200 crowd-sourced science questions annotated with essential terms. | null | null | 10.18653/v1/K17-1010 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,843
inproceedings | chen-etal-2017-top | Top-Rank Enhanced Listwise Optimization for Statistical Machine Translation | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1011/ | Chen, Huadong and Huang, Shujian and Chiang, David and Dai, Xinyu and Chen, Jiajun | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 90--99 | Pairwise ranking methods are the most widely used discriminative training approaches for structure prediction problems in natural language processing (NLP). Decomposing the problem of ranking hypotheses into pairwise comparisons enables simple and efficient solutions. However, neglecting the global ordering of the hypothesis list may hinder learning. We propose a listwise learning framework for structure prediction problems such as machine translation. Our framework directly models the entire translation list's ordering to learn parameters which may better fit the given listwise samples. Furthermore, we propose top-rank enhanced loss functions, which are more sensitive to ranking errors at higher positions. Experiments on a large-scale Chinese-English translation task show that both our listwise learning framework and top-rank enhanced listwise losses lead to significant improvements in translation quality. | null | null | 10.18653/v1/K17-1011 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,844
inproceedings | mancini-etal-2017-embedding | Embedding Words and Senses Together via Joint Knowledge-Enhanced Training | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1012/ | Mancini, Massimiliano and Camacho-Collados, Jose and Iacobacci, Ignacio and Navigli, Roberto | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 100--111 | Word embeddings are widely used in Natural Language Processing, mainly due to their success in capturing semantic information from massive corpora. However, their creation process does not allow the different meanings of a word to be automatically separated, as it conflates them into a single vector. We address this issue by proposing a new model which learns word and sense embeddings jointly. Our model exploits large corpora and knowledge from semantic networks in order to produce a unified vector space of word and sense embeddings. We evaluate the main features of our approach both qualitatively and quantitatively in a variety of tasks, highlighting the advantages of the proposed method in comparison to state-of-the-art word- and sense-based models. | null | null | 10.18653/v1/K17-1012 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,845 |
inproceedings | vulic-etal-2017-automatic | Automatic Selection of Context Configurations for Improved Class-Specific Word Representations | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1013/ | Vuli{\'c}, Ivan and Schwartz, Roy and Rappoport, Ari and Reichart, Roi and Korhonen, Anna | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 112--122 | This paper is concerned with identifying contexts useful for training word representation models for different word classes such as adjectives (A), verbs (V), and nouns (N). We introduce a simple yet effective framework for an automatic selection of class-specific context configurations. We construct a context configuration space based on universal dependency relations between words, and efficiently search this space with an adapted beam search algorithm. In word similarity tasks for each word class, we show that our framework is both effective and efficient. Particularly, it improves the Spearman's rho correlation with human scores on SimLex-999 over the best previously proposed class-specific contexts by 6 (A), 6 (V) and 5 (N) rho points. With our selected context configurations, we train on only 14{\%} (A), 26.2{\%} (V), and 33.6{\%} (N) of all dependency-based contexts, resulting in a reduced training time. Our results generalise: we show that the configurations our algorithm learns for one English training setup outperform previously proposed context types in another training setup for English. Moreover, basing the configuration space on universal dependencies, it is possible to transfer the learned configurations to German and Italian. We also demonstrate improved per-class results over other context types in these two languages. | null | null | 10.18653/v1/K17-1013 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,846
inproceedings | jameel-schockaert-2017-modeling | Modeling Context Words as Regions: An Ordinal Regression Approach to Word Embedding | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1014/ | Jameel, Shoaib and Schockaert, Steven | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 123--133 | Vector representations of word meaning have found many applications in the field of natural language processing. Word vectors intuitively represent the average context in which a given word tends to occur, but they cannot explicitly model the diversity of these contexts. Although region representations of word meaning offer a natural alternative to word vectors, only a few methods have been proposed that can effectively learn word regions. In this paper, we propose a new word embedding model which is based on SVM regression. We show that the underlying ranking interpretation of word contexts is sufficient to match, and sometimes outperform, the performance of popular methods such as Skip-gram. Furthermore, we show that by using a quadratic kernel, we can effectively learn word regions, which outperform existing unsupervised models for the task of hypernym detection. | null | null | 10.18653/v1/K17-1014 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,847
inproceedings | torabi-asr-jones-2017-artificial | An Artificial Language Evaluation of Distributional Semantic Models | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1015/ | Torabi Asr, Fatemeh and Jones, Michael | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 134--142 | Recent studies of distributional semantic models have set up a competition between word embeddings obtained from predictive neural networks and word vectors obtained from abstractive count-based models. This paper is an attempt to reveal the underlying contribution of additional training data and post-processing steps on each type of model in word similarity and relatedness inference tasks. We do so by designing an artificial language framework, training a predictive and a count-based model on data sampled from this grammar, and evaluating the resulting word vectors in paradigmatic and syntagmatic tasks defined with respect to the grammar. | null | null | 10.18653/v1/K17-1015 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,848 |
inproceedings | song-etal-2017-learning | Learning Word Representations with Regularization from Prior Knowledge | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1016/ | Song, Yan and Lee, Chia-Jung and Xia, Fei | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 143--152 | Conventional word embeddings are trained with specific criteria (e.g., based on language modeling or co-occurrence) inside a single information source, disregarding the opportunity for further calibration using external knowledge. This paper presents a unified framework that leverages pre-learned or external priors, in the form of a regularizer, for enhancing conventional language model-based embedding learning. We consider two types of regularizers. The first type is derived from topic distribution by running LDA on unlabeled data. The second type is based on dictionaries that are created with human annotation efforts. To effectively learn with the regularizers, we propose a novel data structure, trajectory softmax, in this paper. The resulting embeddings are evaluated by word similarity and sentiment classification. Experimental results show that our learning framework with regularization from prior knowledge improves embedding quality across multiple datasets, compared to a diverse collection of baseline methods. | null | null | 10.18653/v1/K17-1016 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,849 |
inproceedings | dong-etal-2017-attention | Attention-based Recurrent Convolutional Neural Network for Automatic Essay Scoring | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1017/ | Dong, Fei and Zhang, Yue and Yang, Jie | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 153--162 | Neural network models have recently been applied to the task of automatic essay scoring, giving promising results. Existing work used recurrent neural networks and convolutional neural networks to model input essays, giving grades based on a single vector representation of the essay. On the other hand, the relative advantages of RNNs and CNNs have not been compared. In addition, different parts of the essay can contribute differently for scoring, which is not captured by existing models. We address these issues by building a hierarchical sentence-document model to represent essays, using the attention mechanism to automatically decide the relative weights of words and sentences. Results show that our model outperforms the previous state-of-the-art methods, demonstrating the effectiveness of the attention mechanism. | null | null | 10.18653/v1/K17-1017 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,850 |