entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
article | wang-etal-2015-sense | A Sense-Topic Model for Word Sense Induction with Unsupervised Data Enrichment | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1005/ | Wang, Jing and Bansal, Mohit and Gimpel, Kevin and Ziebart, Brian D. and Yu, Clement T. | null | 59--71 | Word sense induction (WSI) seeks to automatically discover the senses of a word in a corpus via unsupervised methods. We propose a sense-topic model for WSI, which treats sense and topic as two separate latent variables to be inferred jointly. Topics are informed by the entire document, while senses are informed by the local context surrounding the ambiguous word. We also discuss unsupervised ways of enriching the original corpus in order to improve model performance, including using neural word embeddings and external corpora to expand the context of each data instance. We demonstrate significant improvements over the previous state-of-the-art, achieving the best results reported to date on the SemEval-2013 WSI task. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00122 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,799 |
article | louis-lapata-2015-step | Which Step Do I Take First? Troubleshooting with Bayesian Models | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1006/ | Louis, Annie and Lapata, Mirella | null | 73--85 | Online discussion forums and community question-answering websites provide one of the primary avenues for online users to share information. In this paper, we propose text mining techniques which help users navigate troubleshooting-oriented data such as questions asked on forums and their suggested solutions. We introduce Bayesian generative models of the troubleshooting data and apply them to two interrelated tasks: (a) predicting the complexity of the solutions (e.g., plugging a keyboard in the computer is easier compared to installing a special driver) and (b) presenting them in a ranked order from least to most complex. Experimental results show that our models are on par with human performance on these tasks, while outperforming baselines based on solution length or readability. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00123 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,800 |
article | he-etal-2015-gappy | Gappy Pattern Matching on GPUs for On-Demand Extraction of Hierarchical Translation Grammars | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1007/ | He, Hua and Lin, Jimmy and Lopez, Adam | null | 87--100 | Grammars for machine translation can be materialized on demand by finding source phrases in an indexed parallel corpus and extracting their translations. This approach is limited in practical applications by the computational expense of online lookup and extraction. For phrase-based models, recent work has shown that on-demand grammar extraction can be greatly accelerated by parallelization on general purpose graphics processing units (GPUs), but these algorithms do not work for hierarchical models, which require matching patterns that contain gaps. We address this limitation by presenting a novel GPU algorithm for on-demand hierarchical grammar extraction that is at least an order of magnitude faster than a comparable CPU algorithm when processing large batches of sentences. In terms of end-to-end translation, with decoding on the CPU, we increase throughput by roughly two thirds on a standard MT evaluation dataset. The GPU necessary to achieve these improvements increases the cost of a server by about a third. We believe that GPU-based extraction of hierarchical grammars is an attractive proposition, particularly for MT applications that demand high throughput. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00124 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,801 |
article | mcmahan-stone-2015-bayesian | A Bayesian Model of Grounded Color Semantics | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1008/ | McMahan, Brian and Stone, Matthew | null | 103--115 | Natural language meanings allow speakers to encode important real-world distinctions, but corpora of grounded language use also reveal that speakers categorize the world in different ways and describe situations with different terminology. To learn meanings from data, we therefore need to link underlying representations of meaning to models of speaker judgment and speaker choice. This paper describes a new approach to this problem: we model variability through uncertainty in categorization boundaries and distributions over preferred vocabulary. We apply the approach to a large data set of color descriptions, where statistical evaluation documents its accuracy. The results are available as a Lexicon of Uncertain Color Standards (LUX), which supports future efforts in grounded language understanding and generation by probabilistically mapping 829 English color descriptions to potentially context-sensitive regions in HSV color space. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00126 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,802 |
article | zhang-etal-2015-exploiting | Exploiting Parallel News Streams for Unsupervised Event Extraction | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1009/ | Zhang, Congle and Soderland, Stephen and Weld, Daniel S. | null | 117--129 | Most approaches to relation extraction, the task of extracting ground facts from natural language text, are based on machine learning and thus starved by scarce training data. Manual annotation is too expensive to scale to a comprehensive set of relations. Distant supervision, which automatically creates training data, only works with relations that already populate a knowledge base (KB). Unfortunately, KBs such as FreeBase rarely cover event relations (e.g. "person travels to location"). Thus, the problem of extracting a wide range of events (e.g., from news streams) is an important, open challenge. This paper introduces NewsSpike-RE, a novel, unsupervised algorithm that discovers event relations and then learns to extract them. NewsSpike-RE uses a novel probabilistic graphical model to cluster sentences describing similar events from parallel news streams. These clusters then comprise training data for the extractor. Our evaluation shows that NewsSpike-RE generates high quality training sentences and learns extractors that perform much better than rival approaches, more than doubling the area under a precision-recall curve compared to Universal Schemas. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00127 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,803 |
article | guo-etal-2015-unsupervised | Unsupervised Declarative Knowledge Induction for Constraint-Based Learning of Information Structure in Scientific Documents | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1010/ | Guo, Yufan and Reichart, Roi and Korhonen, Anna | null | 131--143 | Inferring the information structure of scientific documents is useful for many NLP applications. Existing approaches to this task require substantial human effort. We propose a framework for constraint learning that reduces human involvement considerably. Our model uses topic models to identify latent topics and their key linguistic features in input documents, induces constraints from this information and maps sentences to their dominant information structure categories through a constrained unsupervised model. When the induced constraints are combined with a fully unsupervised model, the resulting model challenges existing lightly supervised feature-based models as well as unsupervised models that use manually constructed declarative knowledge. Our results demonstrate that useful declarative knowledge can be learned from data with very limited human involvement. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00128 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,804 |
article | chisholm-hachey-2015-entity | Entity Disambiguation with Web Links | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1011/ | Chisholm, Andrew and Hachey, Ben | null | 145--156 | Entity disambiguation with Wikipedia relies on structured information from redirect pages, article text, inter-article links, and categories. We explore whether web links can replace a curated encyclopaedia, obtaining entity prior, name, context, and coherence models from a corpus of web pages with links to Wikipedia. Experiments compare web link models to Wikipedia models on well-known CoNLL and TAC data sets. Results show that using 34 million web links approaches Wikipedia performance. Combining web link and Wikipedia models produces the best-known disambiguation accuracy of 88.7 on standard newswire test data. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00129 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,805 |
article | narasimhan-etal-2015-unsupervised | An Unsupervised Method for Uncovering Morphological Chains | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1012/ | Narasimhan, Karthik and Barzilay, Regina and Jaakkola, Tommi | null | 157--167 | Most state-of-the-art systems today produce morphological analysis based only on orthographic patterns. In contrast, we propose a model for unsupervised morphological analysis that integrates orthographic and semantic views of words. We model word formation in terms of morphological chains, from base words to the observed words, breaking the chains into parent-child relations. We use log-linear models with morpheme and word-level features to predict possible parents, including their modifications, for each word. The limited set of candidate parents for each word renders contrastive estimation feasible. Our model consistently matches or outperforms five state-of-the-art systems on Arabic, English and Turkish. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00130 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,806 |
article | sennrich-2015-modelling | Modelling and Optimizing on Syntactic N-Grams for Statistical Machine Translation | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1013/ | Sennrich, Rico | null | 169--182 | The role of language models in SMT is to promote fluent translation output, but traditional n-gram language models are unable to capture fluency phenomena between distant words, such as some morphological agreement phenomena, subcategorisation, and syntactic collocations with string-level gaps. Syntactic language models have the potential to fill this modelling gap. We propose a language model for dependency structures that is relational rather than configurational and thus particularly suited for languages with a (relatively) free word order. It is trainable with Neural Networks, and not only improves over standard n-gram language models, but also outperforms related syntactic language models. We empirically demonstrate its effectiveness in terms of perplexity and as a feature function in string-to-tree SMT from English to German and Russian. We also show that using a syntactic evaluation metric to tune the log-linear parameters of an SMT system further increases translation quality when coupled with a syntactic language model. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00131 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,807 |
article | lazaridou-etal-2015-visual | From Visual Attributes to Adjectives through Decompositional Distributional Semantics | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1014/ | Lazaridou, Angeliki and Dinu, Georgiana and Liska, Adam and Baroni, Marco | null | 183--196 | As automated image analysis progresses, there is increasing interest in richer linguistic annotation of pictures, with attributes of objects (e.g., furry, brown, ...) attracting most attention. By building on the recent "zero-shot learning" approach, and paying attention to the linguistic nature of attributes as noun modifiers, and specifically adjectives, we show that it is possible to tag images with attribute-denoting adjectives even when no training data containing the relevant annotation are available. Our approach relies on two key observations. First, objects can be seen as bundles of attributes, typically expressed as adjectival modifiers (a dog is something furry, brown, etc.), and thus a function trained to map visual representations of objects to nominal labels can implicitly learn to map attributes to adjectives. Second, objects and attributes come together in pictures (the same thing is a dog and it is brown). We can thus achieve better attribute (and object) label retrieval by treating images as "visual phrases", and decomposing their linguistic representation into an attribute-denoting adjective and an object-denoting noun. Our approach performs comparably to a method exploiting manual attribute annotation, it outperforms various competitive alternatives in both attribute and object annotation, and it automatically constructs attribute-centric representations that significantly improve performance in supervised object recognition. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00132 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,808 |
article | fried-etal-2015-higher | Higher-order Lexical Semantic Models for Non-factoid Answer Reranking | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1015/ | Fried, Daniel and Jansen, Peter and Hahn-Powell, Gustave and Surdeanu, Mihai and Clark, Peter | null | 197--210 | Lexical semantic models provide robust performance for question answering, but, in general, can only capitalize on direct evidence seen during training. For example, monolingual alignment models acquire term alignment probabilities from semi-structured data such as question-answer pairs; neural network language models learn term embeddings from unstructured text. All this knowledge is then used to estimate the semantic similarity between question and answer candidates. We introduce a higher-order formalism that allows all these lexical semantic models to chain direct evidence to construct indirect associations between question and answer texts, by casting the task as the traversal of graphs that encode direct term associations. Using a corpus of 10,000 questions from Yahoo! Answers, we experimentally demonstrate that higher-order methods are broadly applicable to alignment and language models, across both word and syntactic representations. We show that an important criterion for success is controlling for the semantic drift that accumulates during graph traversal. All in all, the proposed higher-order approach improves five out of the six lexical semantic models investigated, with relative gains of up to +13% over their first-order variants. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00133 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,809 |
article | levy-etal-2015-improving | Improving Distributional Similarity with Lessons Learned from Word Embeddings | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1016/ | Levy, Omer and Goldberg, Yoav and Dagan, Ido | null | 211--225 | Recent trends suggest that neural-network-inspired word embedding models outperform traditional count-based distributional models on word similarity and analogy detection tasks. We reveal that much of the performance gains of word embeddings are due to certain system design choices and hyperparameter optimizations, rather than the embedding algorithms themselves. Furthermore, we show that these modifications can be transferred to traditional distributional models, yielding similar gains. In contrast to prior reports, we observe mostly local or insignificant performance differences between the methods, with no global advantage to any single approach over the others. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00134 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,810 |
article | yu-dredze-2015-learning | Learning Composition Models for Phrase Embeddings | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1017/ | Yu, Mo and Dredze, Mark | null | 227--242 | Lexical embeddings can serve as useful representations for words for a variety of NLP tasks, but learning embeddings for phrases can be challenging. While separate embeddings are learned for each word, this is infeasible for every phrase. We construct phrase embeddings by learning how to compose word embeddings using features that capture phrase structure and context. We propose efficient unsupervised and task-specific learning objectives that scale our model to large datasets. We demonstrate improvements on both language modeling and several phrase semantic similarity tasks with various phrase lengths. We make the implementation of our model and the datasets available for general use. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00135 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,811 |
article | althobaiti-etal-2015-combining | Combining Minimally-supervised Methods for Arabic Named Entity Recognition | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1018/ | Althobaiti, Maha and Kruschwitz, Udo and Poesio, Massimo | null | 243--255 | Supervised methods can achieve high performance on NLP tasks, such as Named Entity Recognition (NER), but new annotations are required for every new domain and/or genre change. This has motivated research in minimally supervised methods such as semi-supervised learning and distant learning, but neither technique has yet achieved performance levels comparable to those of supervised methods. Semi-supervised methods tend to have very high precision but comparatively low recall, whereas distant learning tends to achieve higher recall but lower precision. This complementarity suggests that better results may be obtained by combining the two types of minimally supervised methods. In this paper we present a novel approach to Arabic NER using a combination of semi-supervised and distant learning techniques. We trained a semi-supervised NER classifier and another one using distant learning techniques, and then combined them using a variety of classifier combination schemes, including the Bayesian Classifier Combination (BCC) procedure recently proposed for sentiment analysis. According to our results, the BCC model leads to an increase in performance of 8 percentage points over the best base classifiers. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00136 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,812 |
article | krishnamurthy-mitchell-2015-learning | Learning a Compositional Semantics for Freebase with an Open Predicate Vocabulary | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1019/ | Krishnamurthy, Jayant and Mitchell, Tom M. | null | 257--270 | We present an approach to learning a model-theoretic semantics for natural language tied to Freebase. Crucially, our approach uses an open predicate vocabulary, enabling it to produce denotations for phrases such as "Republican front-runner from Texas" whose semantics cannot be represented using the Freebase schema. Our approach directly converts a sentence's syntactic CCG parse into a logical form containing predicates derived from the words in the sentence, assigning each word a consistent semantics across sentences. This logical form is evaluated against a learned probabilistic database that defines a distribution over denotations for each textual predicate. A training phase produces this probabilistic database using a corpus of entity-linked text and probabilistic matrix factorization with a novel ranking objective function. We evaluate our approach on a compositional question answering task where it outperforms several competitive baselines. We also compare our approach against manually annotated Freebase queries, finding that our open predicate vocabulary enables us to answer many questions that Freebase cannot. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00137 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,813 |
article | yang-etal-2015-domain | Domain Adaptation for Syntactic and Semantic Dependency Parsing Using Deep Belief Networks | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1020/ | Yang, Haitong and Zhuang, Tao and Zong, Chengqing | null | 271--282 | In current systems for syntactic and semantic dependency parsing, people usually define a very high-dimensional feature space to achieve good performance. But these systems often suffer severe performance drops on out-of-domain test data due to the diversity of features of different domains. This paper focuses on how to relieve this domain adaptation problem with the help of unlabeled target domain data. We propose a deep learning method to adapt both syntactic and semantic parsers. With additional unlabeled target domain data, our method can learn a latent feature representation (LFR) that is beneficial to both domains. Experiments on English data in the CoNLL 2009 shared task show that our method largely reduced the performance drop on out-of-domain test data. Moreover, we get a Macro F1 score that is 2.32 points higher than the best system in the CoNLL 2009 shared task in out-of-domain tests. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00138 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,814 |
article | xu-etal-2015-problems | Problems in Current Text Simplification Research: New Data Can Help | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1021/ | Xu, Wei and Callison-Burch, Chris and Napoles, Courtney | null | 283--297 | Simple Wikipedia has dominated simplification research in the past 5 years. In this opinion paper, we argue that focusing on Wikipedia limits simplification research. We back up our arguments with corpus analysis and by highlighting statements that other researchers have made in the simplification literature. We introduce a new simplification dataset that is a significant improvement over Simple Wikipedia, and present a novel quantitative-comparative approach to study the quality of simplification data resources. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00139 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,815 |
article | nguyen-etal-2015-improving-topic | Improving Topic Models with Latent Feature Word Representations | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1022/ | Nguyen, Dat Quoc and Billingsley, Richard and Du, Lan and Johnson, Mark | null | 299--313 | Probabilistic topic models are widely used to discover latent topics in document collections, while latent feature vector representations of words have been used to obtain high performance in many NLP tasks. In this paper, we extend two different Dirichlet multinomial topic models by incorporating latent feature vector representations of words trained on very large corpora to improve the word-topic mapping learnt on a smaller corpus. Experimental results show that by using information from the external corpora, our new models produce significant improvements on topic coherence, document clustering and document classification tasks, especially on datasets with few or short documents. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00140 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,816 |
article | ling-etal-2015-design | Design Challenges for Entity Linking | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1023/ | Ling, Xiao and Singh, Sameer and Weld, Daniel S. | null | 315--328 | Recent research on entity linking (EL) has introduced a plethora of promising techniques, ranging from deep neural networks to joint inference. But despite numerous papers there is surprisingly little understanding of the state of the art in EL. We attack this confusion by analyzing differences between several versions of the EL problem and presenting a simple yet effective, modular, unsupervised system, called Vinculum, for entity linking. We conduct an extensive evaluation on nine data sets, comparing Vinculum with two state-of-the-art systems, and elucidate key aspects of the system that include mention extraction, candidate generation, entity type prediction, entity coreference, and coherence. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00141 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,817 |
article | ji-eisenstein-2015-one | One Vector is Not Enough: Entity-Augmented Distributed Semantics for Discourse Relations | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1024/ | Ji, Yangfeng and Eisenstein, Jacob | null | 329--344 | Discourse relations bind smaller linguistic units into coherent texts. Automatically identifying discourse relations is difficult, because it requires understanding the semantics of the linked arguments. A more subtle challenge is that it is not enough to represent the meaning of each argument of a discourse relation, because the relation may depend on links between lower-level components, such as entity mentions. Our solution computes distributed meaning representations for each discourse argument by composition up the syntactic parse tree. We also perform a downward compositional pass to capture the meaning of coreferent entity mentions. Implicit discourse relations are then predicted from these two representations, obtaining substantial improvements on the Penn Discourse Treebank. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00142 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,818 |
article | wieting-etal-2015-paraphrase | From Paraphrase Database to Compositional Paraphrase Model and Back | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1025/ | Wieting, John and Bansal, Mohit and Gimpel, Kevin and Livescu, Karen | null | 345--358 | The Paraphrase Database (PPDB; Ganitkevitch et al., 2013) is an extensive semantic resource, consisting of a list of phrase pairs with (heuristic) confidence estimates. However, it is still unclear how it can best be used, due to the heuristic nature of the confidences and its necessarily incomplete coverage. We propose models to leverage the phrase pairs from the PPDB to build parametric paraphrase models that score paraphrase pairs more accurately than the PPDB's internal scores while simultaneously improving its coverage. They allow for learning phrase embeddings as well as improved word embeddings. Moreover, we introduce two new, manually annotated datasets to evaluate short-phrase paraphrasing models. Using our paraphrase model trained using PPDB, we achieve state-of-the-art results on standard word and bigram similarity tasks and beat strong baselines on our new short phrase paraphrase tasks. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00143 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,819 |
article | seeker-cetinoglu-2015-graph | A Graph-based Lattice Dependency Parser for Joint Morphological Segmentation and Syntactic Analysis | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1026/ | Seeker, Wolfgang and Çetinoğlu, Özlem | null | 359--373 | Space-delimited words in Turkish and Hebrew text can be further segmented into meaningful units, but syntactic and semantic context is necessary to predict segmentation. At the same time, predicting correct syntactic structures relies on correct segmentation. We present a graph-based lattice dependency parser that operates on morphological lattices to represent different segmentations and morphological analyses for a given input sentence. The lattice parser predicts a dependency tree over a path in the lattice and thus solves the joint task of segmentation, morphological analysis, and syntactic parsing. We conduct experiments on the Turkish and the Hebrew treebank and show that the joint model outperforms three state-of-the-art pipeline systems on both data sets. Our work corroborates findings from constituency lattice parsing for Hebrew and presents the first results for full lattice parsing on Turkish. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00144 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,820 |
article | kruszewski-etal-2015-deriving | Deriving {B}oolean structures from distributional vectors | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1027/ | Kruszewski, German and Paperno, Denis and Baroni, Marco | null | 375--388 | Corpus-based distributional semantic models capture degrees of semantic relatedness among the words of very large vocabularies, but have problems with logical phenomena such as entailment, that are instead elegantly handled by model-theoretic approaches, which, in turn, do not scale up. We combine the advantages of the two views by inducing a mapping from distributional vectors of words (or sentences) into a Boolean structure of the kind in which natural language terms are assumed to denote. We evaluate this Boolean Distributional Semantic Model (BDSM) on recognizing entailment between words and sentences. The method achieves results comparable to a state-of-the-art SVM, degrades more gracefully when less training data are available and displays interesting qualitative properties. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00145 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,821 |
article | lee-etal-2015-unsupervised | Unsupervised Lexicon Discovery from Acoustic Input | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1028/ | Lee, Chia-ying and O{'}Donnell, Timothy J. and Glass, James | null | 389--403 | We present a model of unsupervised phonological lexicon discovery{---}the problem of simultaneously learning phoneme-like and word-like units from acoustic input. Our model builds on earlier models of unsupervised phone-like unit discovery from acoustic data (Lee and Glass, 2012), and unsupervised symbolic lexicon discovery using the Adaptor Grammar framework (Johnson et al., 2006), integrating these earlier approaches using a probabilistic model of phonological variation. We show that the model is competitive with state-of-the-art spoken term discovery systems, and present analyses exploring the model`s behavior and the kinds of linguistic structures it learns. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00146 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,822 |
article | martschat-strube-2015-latent | Latent Structures for Coreference Resolution | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1029/ | Martschat, Sebastian and Strube, Michael | null | 405--418 | Machine learning approaches to coreference resolution vary greatly in the modeling of the problem: while early approaches operated on the mention pair level, current research focuses on ranking architectures and antecedent trees. We propose a unified representation of different approaches to coreference resolution in terms of the structure they operate on. We represent several coreference resolution approaches proposed in the literature in our framework and evaluate their performance. Finally, we conduct a systematic analysis of the output of these approaches, highlighting differences and similarities. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00147 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,823 |
article | rabinovich-wintner-2015-unsupervised | Unsupervised Identification of Translationese | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1030/ | Rabinovich, Ella and Wintner, Shuly | null | 419--432 | Translated texts are distinctively different from original ones, to the extent that supervised text classification methods can distinguish between them with high accuracy. These differences were proven useful for statistical machine translation. However, it has been suggested that the accuracy of translation detection deteriorates when the classifier is evaluated outside the domain it was trained on. We show that this is indeed the case, in a variety of evaluation scenarios. We then show that unsupervised classification is highly accurate on this task. We suggest a method for determining the correct labels of the clustering outcomes, and then use the labels for voting, improving the accuracy even further. Moreover, we suggest a simple method for clustering in the challenging case of mixed-domain datasets, in spite of the dominance of domain-related features over translation-related ones. The result is an effective, fully-unsupervised method for distinguishing between original and translated texts that can be applied to new domains with reasonable accuracy. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00148 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,824 |
article | cotterell-etal-2015-modeling | Modeling Word Forms Using Latent Underlying Morphs and Phonology | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1031/ | Cotterell, Ryan and Peng, Nanyun and Eisner, Jason | null | 433--447 | The observed pronunciations or spellings of words are often explained as arising from the {\textquotedblleft}underlying forms{\textquotedblright} of their morphemes. These forms are latent strings that linguists try to reconstruct by hand. We propose to reconstruct them automatically at scale, enabling generalization to new words. Given some surface word types of a concatenative language along with the abstract morpheme sequences that they express, we show how to recover consistent underlying forms for these morphemes, together with the (stochastic) phonology that maps each concatenation of underlying forms to a surface form. Our technique involves loopy belief propagation in a natural directed graphical model whose variables are unknown strings and whose conditional distributions are encoded as finite-state machines with trainable weights. We define training and evaluation paradigms for the task of surface word prediction, and report results on subsets of 7 languages. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00149 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,825 |
article | roth-lapata-2015-context | Context-aware Frame-Semantic Role Labeling | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1032/ | Roth, Michael and Lapata, Mirella | null | 449--460 | Frame semantic representations have been useful in several applications ranging from text-to-scene generation, to question answering and social network analysis. Predicting such representations from raw text is, however, a challenging task and corresponding models are typically only trained on a small set of sentence-level annotations. In this paper, we present a semantic role labeling system that takes into account sentence and discourse context. We introduce several new features which we motivate based on linguistic insights and experimentally demonstrate that they lead to significant improvements over the current state-of-the-art in FrameNet-based semantic role labeling. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00150 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,826 |
article | beck-etal-2015-learning | Learning Structural Kernels for Natural Language Processing | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1033/ | Beck, Daniel and Cohn, Trevor and Hardmeier, Christian and Specia, Lucia | null | 461--473 | Structural kernels are a flexible learning paradigm that has been widely used in Natural Language Processing. However, the problem of model selection in kernel-based methods is usually overlooked. Previous approaches mostly rely on setting default values for kernel hyperparameters or using grid search, which is slow and coarse-grained. In contrast, Bayesian methods allow efficient model selection by maximizing the evidence on the training data through gradient-based methods. In this paper we show how to perform this in the context of structural kernels by using Gaussian Processes. Experimental results on tree kernels show that this procedure results in better prediction performance compared to hyperparameter optimization via grid search. The framework proposed in this paper can be adapted to other structures besides trees, e.g., strings and graphs, thereby extending the utility of kernel-based methods. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00151 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,827 |
article | gormley-etal-2015-approximation | Approximation-Aware Dependency Parsing by Belief Propagation | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1035/ | Gormley, Matthew R. and Dredze, Mark and Eisner, Jason | null | 489--501 | We show how to train the fast dependency parser of Smith and Eisner (2008) for improved accuracy. This parser can consider higher-order interactions among edges while retaining O(n{\ensuremath{^3}}) runtime. It outputs the parse with maximum expected recall{---}but for speed, this expectation is taken under a posterior distribution that is constructed only approximately, using loopy belief propagation through structured factors. We show how to adjust the model parameters to compensate for the errors introduced by this approximation, by following the gradient of the actual loss on training data. We find this gradient by back-propagation. That is, we treat the entire parser (approximations and all) as a differentiable circuit, as others have done for loopy CRFs (Domke, 2010; Stoyanov et al., 2011; Domke, 2011; Stoyanov and Eisner, 2012). The resulting parser obtains higher accuracy with fewer iterations of belief propagation than one trained by conditional log-likelihood. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00153 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,829
article | lazic-etal-2015-plato | {P}lato: A Selective Context Model for Entity Resolution | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1036/ | Lazic, Nevena and Subramanya, Amarnag and Ringgaard, Michael and Pereira, Fernando | null | 503--515 | We present Plato, a probabilistic model for entity resolution that includes a novel approach for handling noisy or uninformative features, and supplements labeled training data derived from Wikipedia with a very large unlabeled text corpus. Training and inference in the proposed model can easily be distributed across many servers, allowing it to scale to over 10{\ensuremath{^7}} entities. We evaluate Plato on three standard datasets for entity resolution. Our approach achieves the best results to-date on TAC KBP 2011 and is highly competitive on both the CoNLL 2003 and TAC KBP 2012 datasets. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00154 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,830
article | yang-etal-2015-hierarchical | A Hierarchical Distance-dependent {B}ayesian Model for Event Coreference Resolution | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1037/ | Yang, Bishan and Cardie, Claire and Frazier, Peter | null | 517--528 | We present a novel hierarchical distance-dependent Bayesian model for event coreference resolution. While existing generative models for event coreference resolution are completely unsupervised, our model allows for the incorporation of pairwise distances between event mentions {---} information that is widely used in supervised coreference models to guide the generative clustering processing for better event clustering both within and across documents. We model the distances between event mentions using a feature-rich learnable distance function and encode them as Bayesian priors for nonparametric clustering. Experiments on the ECB+ corpus show that our model outperforms state-of-the-art methods for both within- and cross-document event coreference resolution. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00155 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,831 |
article | delli-bovi-etal-2015-large | Large-Scale Information Extraction from Textual Definitions through Deep Syntactic and Semantic Analysis | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1038/ | Delli Bovi, Claudio and Telesca, Luca and Navigli, Roberto | null | 529--543 | We present DefIE, an approach to large-scale Information Extraction (IE) based on a syntactic-semantic analysis of textual definitions. Given a large corpus of definitions we leverage syntactic dependencies to reduce data sparsity, then disambiguate the arguments and content words of the relation strings, and finally exploit the resulting information to organize the acquired relations hierarchically. The output of DefIE is a high-quality knowledge base consisting of several million automatically acquired semantic relations. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00156 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,832 |
article | berant-liang-2015-imitation | Imitation Learning of Agenda-based Semantic Parsers | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1039/ | Berant, Jonathan and Liang, Percy | null | 545--558 | Semantic parsers conventionally construct logical forms bottom-up in a fixed order, resulting in the generation of many extraneous partial logical forms. In this paper, we combine ideas from imitation learning and agenda-based parsing to train a semantic parser that searches partial logical forms in a more strategic order. Empirically, our parser reduces the number of constructed partial logical forms by an order of magnitude, and obtains a 6x-9x speedup over fixed-order parsing, while maintaining comparable accuracy. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00157 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,833 |
article | kuhlmann-jonsson-2015-parsing | Parsing to Noncrossing Dependency Graphs | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1040/ | Kuhlmann, Marco and Jonsson, Peter | null | 559--570 | We study the generalization of maximum spanning tree dependency parsing to maximum acyclic subgraphs. Because the underlying optimization problem is intractable even under an arc-factored model, we consider the restriction to noncrossing dependency graphs. Our main contribution is a cubic-time exact inference algorithm for this class. We extend this algorithm into a practical parser and evaluate its performance on four linguistic data sets used in semantic dependency parsing. We also explore a generalization of our parsing framework to dependency graphs with pagenumber at most k and show that the resulting optimization problem is NP-hard for k {\ensuremath{\geq}} 2. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00158 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,834 |
article | arthur-etal-2015-semantic | Semantic Parsing of Ambiguous Input through Paraphrasing and Verification | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1041/ | Arthur, Philip and Neubig, Graham and Sakti, Sakriani and Toda, Tomoki and Nakamura, Satoshi | null | 571--584 | We propose a new method for semantic parsing of ambiguous and ungrammatical input, such as search queries. We do so by building on an existing semantic parsing framework that uses synchronous context free grammars (SCFG) to jointly model the input sentence and output meaning representation. We generalize this SCFG framework to allow not one, but multiple outputs. Using this formalism, we construct a grammar that takes an ambiguous input string and jointly maps it into both a meaning representation and a natural language paraphrase that is less ambiguous than the original input. This paraphrase can be used to disambiguate the meaning representation via verification using a language model that calculates the probability of each paraphrase. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00159 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,835 |
article | koncel-kedziorski-etal-2015-parsing | Parsing Algebraic Word Problems into Equations | Collins, Michael and Lee, Lillian | null | 2015 | Cambridge, MA | MIT Press | https://aclanthology.org/Q15-1042/ | Koncel-Kedziorski, Rik and Hajishirzi, Hannaneh and Sabharwal, Ashish and Etzioni, Oren and Ang, Siena Dumas | null | 585--597 | This paper formalizes the problem of solving multi-sentence algebraic word problems as that of generating and scoring equation trees. We use integer linear programming to generate equation trees and score their likelihood by learning local and global discriminative models. These models are trained on a small set of word problems and their answers, without any manual annotation, in order to choose the equation that best matches the problem text. We refer to the overall system as Alges. We compare Alges with previous work and show that it covers the full gamut of arithmetic operations whereas Hosseini et al. (2014) only handle addition and subtraction. In addition, Alges overcomes the brittleness of the Kushman et al. (2014) approach on single-equation problems, yielding a 15{\%} to 50{\%} reduction in error. | Transactions of the Association for Computational Linguistics | 3 | 10.1162/tacl_a_00160 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 63,836 |
inproceedings | jurgens-pilehvar-2015-semantic | Semantic Similarity Frontiers: From Concepts to Documents | Li, Wenjie and Sima'an, Khalil | sep | 2015 | Lisbon, Portugal | Association for Computational Linguistics | https://aclanthology.org/D15-2001/ | Jurgens, David and Pilehvar, Mohammad Taher | Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts | null | Semantic similarity forms a central component in many NLP systems, from lexical semantics, to part of speech tagging, to social media analysis. Recent years have seen a renewed interest in developing new similarity techniques, buoyed in part by work on embeddings and by SemEval tasks in Semantic Textual Similarity and Cross-Level Semantic Similarity. The increased interest has led to hundreds of techniques for measuring semantic similarity, which makes it difficult for practitioners to identify which state-of-the-art techniques are applicable and easily integrated into projects and for researchers to identify which aspects of the problem require future research. This tutorial synthesizes the current state of the art for measuring semantic similarity for all types of conceptual or textual pairs and presents a broad overview of current techniques, what resources they use, and the particular inputs or domains to which the methods are most applicable. We survey methods ranging from corpus-based approaches operating on massive or domain-specific corpora to those leveraging structural information from expert-based or collaboratively-constructed lexical resources. Furthermore, we review work on multiple similarity tasks from sense-based comparisons to word, sentence, and document-sized comparisons and highlight general-purpose methods capable of comparing multiple types of inputs. Where possible, we also identify techniques that have been demonstrated to successfully operate in multilingual or cross-lingual settings. Our tutorial provides a clear overview of currently-available tools and their strengths for practitioners who need out-of-the-box solutions and provides researchers with an understanding of the limitations of the current state of the art and what open problems remain in the field. Given the breadth of available approaches, participants will also receive a detailed bibliography of approaches (including those not directly covered in the tutorial), annotated according to the approaches' abilities, and pointers to where open-source implementations of the algorithms may be obtained. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 64,891
inproceedings | neuman-2015-personality | Personality Research for {NLP} | Li, Wenjie and Sima'an, Khalil | sep | 2015 | Lisbon, Portugal | Association for Computational Linguistics | https://aclanthology.org/D15-2002/ | Neuman, Yair | Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts | null | {\textquotedblleft}Personality{\textquotedblright} is a psychological concept describing the individual`s characteristic patterns of thought, emotion, and behavior. In the context of Big Data and granular analytics, it is highly important to measure the individual`s personality dimensions as these may be used for various practical applications. However, personality has been traditionally studied by questionnaires and other forms of low tech methodologies. The availability of textual data and the development of powerful NLP technologies, invite the challenge of automatically measuring personality dimensions for various applications from granular analytics of customers to the forensic identification of potential offenders. While there are emerging attempts to address this challenge, these attempts almost exclusively focus on one theoretical model of personality and on classification tasks limited when tagged data are not available.The major aim of the tutorial is to provide NLP researchers with an introduction to personality theories that may empower their scope of research. In addition, two secondary aims are to survey some recent directions in computational personality and to point to future directions in which the field may be developed (e.g. Textual Entailment for Personality Analytics). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 64,892 |
inproceedings | chiticariu-etal-2015-transparent | Transparent Machine Learning for Information Extraction: State-of-the-Art and the Future | Li, Wenjie and Sima'an, Khalil | sep | 2015 | Lisbon, Portugal | Association for Computational Linguistics | https://aclanthology.org/D15-2003/ | Chiticariu, Laura and Li, Yunyao and Reiss, Frederick | Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts | null | The rise of Big Data analytics over unstructured text has led to renewed interest in information extraction (IE). These applications need effective IE as a first step towards solving end-to-end real world problems (e.g. biology, medicine, finance, media and entertainment, etc). Much recent NLP research has focused on addressing specific IE problems using a pipeline of multiple machine learning techniques. This approach requires an analyst with the expertise to answer questions such as: {\textquotedblleft}What ML techniques should I combine to solve this problem?{\textquotedblright}; {\textquotedblleft}What features will be useful for the composite pipeline?{\textquotedblright}; and {\textquotedblleft}Why is my model giving the wrong answer on this document?{\textquotedblright}. The need for this expertise creates problems in real world applications. It is very difficult in practice to find an analyst who both understands the real world problem and has deep knowledge of applied machine learning. As a result, the real impact of current IE research does not match up to the abundant opportunities available. In this tutorial, we introduce the concept of transparent machine learning. A transparent ML technique is one that: produces models that a typical real world user can read and understand; uses algorithms that a typical real world user can understand; and allows a real world user to adapt models to new domains. The tutorial is aimed at IE researchers in both the academic and industry communities who are interested in developing and applying transparent ML. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 64,893
inproceedings | pasca-2015-knowledge | Knowledge Acquisition for Web Search | Li, Wenjie and Sima'an, Khalil | sep | 2015 | Lisbon, Portugal | Association for Computational Linguistics | https://aclanthology.org/D15-2004/ | Pasca, Marius | Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts | null | The identification of textual items, or documents, that best match a user`s information need, as expressed in search queries, forms the core functionality of information retrieval systems. Well-known challenges are associated with understanding the intent behind user queries; and, more importantly, with matching inherently-ambiguous queries to documents that may employ lexically different phrases to convey the same meaning. The conversion of semi-structured content from Wikipedia and other resources into structured data produces knowledge potentially more suitable to database-style queries and, ideally, to use in information retrieval. In parallel, the availability of textual documents on the Web enables an aggressive push towards the automatic acquisition of various types of knowledge from text. Methods developed under the umbrella of open-domain information extraction acquire open-domain classes of instances and relations from Web text. The methods operate over unstructured or semi-structured text available within collections of Web documents, or over relatively more intriguing streams of anonymized search queries. Some of the methods import the automatically-extracted data into human-generated resources, or otherwise exploit existing human-generated resources. In both cases, the goal is to expand the coverage of the initial resources, thus providing information about more of the topics that people in general, and Web search users in particular, may be interested in. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 64,894
inproceedings | nakov-etal-2015-learning | Learning Semantic Relations from Text | Li, Wenjie and Sima'an, Khalil | sep | 2015 | Lisbon, Portugal | Association for Computational Linguistics | https://aclanthology.org/D15-2005/ | Nakov, Preslav and Nastase, Vivi and {\'O} S{\'e}aghdha, Diarmuid and Szpakowicz, Stan | Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts | null | Every non-trivial text describes interactions and relations between people, institutions, activities, events and so on. What we know about the world consists in large part of such relations, and that knowledge contributes to the understanding of what texts refer to. Newly found relations can in turn become part of this knowledge that is stored for future use. To grasp a text`s semantic content, an automatic system must be able to recognize relations in texts and reason about them. This may be done by applying and updating previously acquired knowledge. We focus here in particular on semantic relations which describe the interactions among nouns and compact noun phrases, and we present such relations from both a theoretical and a practical perspective. The theoretical exploration sketches the historical path which has brought us to the contemporary view and interpretation of semantic relations. We discuss a wide range of relation inventories proposed by linguists and by language processing people. Such inventories vary by domain, granularity and suitability for downstream applications. On the practical side, we investigate the recognition and acquisition of relations from texts. In a look at supervised learning methods, we present available datasets, the variety of features which can describe relation instances, and learning algorithms found appropriate for the task. Next, we present weakly supervised and unsupervised learning methods of acquiring relations from large corpora with little or no previously annotated data. We show how enduring the bootstrapping algorithm based on seed examples or patterns has proved to be, and how it has been adapted to tackle Web-scale text collections. We also show a few machine learning techniques which can perform fast and reliable relation extraction by taking advantage of data redundancy and variability. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 64,895
inproceedings | farzindar-inkpen-2015-applications | Applications of Social Media Text Analysis | Li, Wenjie and Sima'an, Khalil | sep | 2015 | Lisbon, Portugal | Association for Computational Linguistics | https://aclanthology.org/D15-2006/ | Farzindar, Atefeh and Inkpen, Diana | Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts | null | Analyzing social media texts is a complex problem that becomes difficult to address using traditional Natural Language Processing (NLP) methods. Our tutorial focuses on presenting new methods for NLP tasks and applications that work on noisy and informal texts, such as the ones from social media. Automatic processing of large collections of social media texts is important because they contain a lot of useful information, due to the increasing popularity of all types of social media. Use of social media and messaging apps grew 203 percent year-on-year in 2013, with overall app use rising 115 percent over the same period, as reported by Statista, citing data from Flurry Analytics. This growth means that 1.61 billion people are now active in social media around the world and this is expected to advance to 2 billion users in 2016, led by India. The research shows that consumers are now spending 5.6 hours daily on digital media, including social media and mobile internet usage. At the heart of this interest is the ability for users to create and share content via a variety of platforms such as blogs, micro-blogs, collaborative wikis, multimedia sharing sites, and social networking sites. The unprecedented volume and variety of user-generated content, as well as the user interaction network, constitute new opportunities for understanding social behavior and building socially intelligent systems. Therefore it is important to investigate methods for knowledge extraction from social media data. Furthermore, we can use this information to detect and retrieve more related content about events, such as photos and video clips that have caption texts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 64,896
article | jentson-2015-record | How to Record the Meaning of Figurative Language | null | null | 2015 | null | CSLI Publications | https://aclanthology.org/2015.lilt-10.3/ | Jentson, Indrek | null | null | This paper focuses on the question of what kind of data needs to be recorded about figurative language, in order to capture the essential meaning of the text and to enable us to re-create a synonymous text, based on that data. A short review of the best known systems of semantic annotation will be presented and their suitability for the task will be analyzed. Also, a method that could be used for representing the meaning of the idioms, metaphors and metonymy in the data model will be considered. | Linguistic Issues in Language Technology | 10 | null | null | null | null | null | null | null | 3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 64,992 |
article | cooper-etal-2015-probabilistic | Probabilistic Type Theory and Natural Language Semantics | null | null | 2015 | null | CSLI Publications | https://aclanthology.org/2015.lilt-10.4/ | Cooper, Robin and Dobnik, Simon and Lappin, Shalom and Larsson, Staffan | null | null | Type theory has played an important role in specifying the formal connection between syntactic structure and semantic interpretation within the history of formal semantics. In recent years rich type theories developed for the semantics of programming languages have become influential in the semantics of natural language. The use of probabilistic reasoning to model human learning and cognition has become an increasingly important part of cognitive science. In this paper we offer a probabilistic formulation of a rich type theory, Type Theory with Records (TTR), and we illustrate how this framework can be used to approach the problem of semantic learning. Our probabilistic version of TTR is intended to provide an interface between the cognitive process of classifying situations according to the types that they instantiate, and the compositional semantics of natural language. | Linguistic Issues in Language Technology | 10 | null | null | null | null | null | null | null | 4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 64,993 |
article | brooke-etal-2015-distinguishing | Distinguishing Voices in The Waste Land using Computational Stylistics | null | oct | 2015 | null | CSLI Publications | https://aclanthology.org/2015.lilt-12.2/ | Brooke, Julian and Hammond, Adam and Hirst, Graeme | null | null | T. S. Eliot's poem The Waste Land is a notoriously challenging example of modernist poetry, mixing the independent viewpoints of over ten distinct characters without any clear demarcation of which voice is speaking when. In this work, we apply unsupervised techniques in computational stylistics to distinguish the particular styles of these voices, offering a computer's perspective on longstanding debates in literary analysis. Our work includes a model for stylistic segmentation that looks for points of maximum stylistic variation, a k-means clustering model for detecting non-contiguous speech from the same voice, and a stylistic profiling approach which makes use of lexical resources built from a much larger collection of literary texts. Evaluating using an expert interpretation, we show clear progress in distinguishing the voices of The Waste Land as compared to appropriate baselines, and we also offer quantitative evidence both for and against that particular interpretation. | Linguistic Issues in Language Technology | 12 | null | null | null | null | null | null | null | 2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 64,995
article | kao-jurafsky-2015-computational | A computational analysis of poetic style: Imagism and its influence on modern professional and amateur poetry | null | oct | 2015 | null | CSLI Publications | https://aclanthology.org/2015.lilt-12.3/ | Kao, Justine T. and Jurafsky, Dan | null | null | How do standards of poetic beauty change as a function of time and expertise? Here we use computational methods to compare the stylistic features of 359 English poems written by 19th century professional poets, Imagist poets, contemporary professional poets, and contemporary amateur poets. Building upon techniques designed to analyze style and sentiment in texts, we examine elements of poetic craft such as imagery, sound devices, emotive language, and diction. We find that contemporary professional poets use significantly more concrete words than 19th century poets, fewer emotional words, and more complex sound devices. These changes are consistent with the tenets of Imagism, an early 20th-century literary movement. Further analyses show that contemporary amateur poems resemble 19th century professional poems more than contemporary professional poems on several dimensions. The stylistic similarities between contemporary amateur poems and 19th century professional poems suggest that elite standards of poetic beauty in the past {\textquotedblleft}trickled down{\textquotedblright} to influence amateur works in the present. Our results highlight the influence of Imagism on the modern aesthetic and reveal the dynamics between {\textquotedblleft}high{\textquotedblright} and {\textquotedblleft}low{\textquotedblright} art. We suggest that computational linguistics may shed light on the forces and trends that shape poetic style. | Linguistic Issues in Language Technology | 12 | null | null | null | null | null | null | null | 3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 64,996
article | coll-adanay-sporleder-2015-clustering | Clustering of Novels Represented as Social Networks | null | oct | 2015 | null | CSLI Publications | https://aclanthology.org/2015.lilt-12.4/ | Coll Adanay, Mariona and Sporleder, Caroline | null | null | Within the field of literary analysis, there are few branches as confusing as that of genre theory. Literary criticism has failed so far to reach a consensus on what makes a genre a genre. In this paper, we examine the degree to which the character structure of a novel is indicative of the genre it belongs to. With the premise that novels are societies in miniature, we build static and dynamic social networks of characters as a strategy to represent the narrative structure of novels in a quantifiable manner. For each of the novels, we compute a vector of literary-motivated features extracted from their network representation. We perform clustering on the vectors and analyze the resulting clusters in terms of genre and authorship. | Linguistic Issues in Language Technology | 12 | null | null | null | null | null | null | null | 4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 64,997 |
article | elsner-2015-abstract | Abstract Representations of Plot Structure | null | oct | 2015 | null | CSLI Publications | https://aclanthology.org/2015.lilt-12.5/ | Elsner, Micha | null | null | Since the 18th century, the novel has been one of the defining forms of English writing, a mainstay of popular entertainment and academic criticism. Despite its importance, however, there are few computational studies of the large-scale structure of novels{---}and many popular representations for discourse modeling do not work very well for novelistic texts. This paper describes a high-level representation of plot structure which tracks the frequency of mentions of different characters, topics and emotional words over time. The representation can distinguish with high accuracy between real novels and artificially permuted surrogates; characters are important for eliminating random permutations, while topics are effective at distinguishing beginnings from ends. | Linguistic Issues in Language Technology | 12 | null | null | null | null | null | null | null | 5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 64,998 |
article | xu-etal-2015-sentence | Sentence alignment for literary texts: The state-of-the-art and beyond | null | oct | 2015 | null | CSLI Publications | https://aclanthology.org/2015.lilt-12.6/ | Xu, Yong and Max, Aur{\'e}lien and Yvon, Fran{\c{c}}ois | null | null | Literary works are becoming increasingly available in electronic formats, thus quickly transforming editorial processes and reading habits. In the context of the global enthusiasm for multilingualism, the rapid spread of e-book readers, such as Amazon Kindle or Kobo Touch, fosters the development of a new generation of reading tools for bilingual books. In particular, literary works, when available in several languages, offer an attractive perspective for self-development or everyday leisure reading, but also for activities such as language learning, translation or literary studies. An important issue in the automatic processing of multilingual e-books is the alignment between textual units. Alignment could help identify corresponding text units in different languages, which would be particularly beneficial to bilingual readers and translation professionals. Computing automatic alignments for literary works, however, is a task more challenging than in the case of better behaved corpora such as parliamentary proceedings or technical manuals. In this paper, we revisit the problem of computing high-quality alignment for literary works. We first perform a large-scale evaluation of automatic alignment for literary texts, which provides a fair assessment of the actual difficulty of this task. We then introduce a two-pass approach, based on a maximum entropy model. Experimental results for novels available in English and French or in English and Spanish demonstrate the effectiveness of our method. | Linguistic Issues in Language Technology | 12 | null | null | null | null | null | null | null | 6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 64,999
inproceedings | schluter-2015-critical | A critical survey on measuring success in rank-based keyword assignment to documents | Lecarpentier, Jean-Marc and Lucas, Nadine | jun | 2015 | Caen, France | ATALA | https://aclanthology.org/2015.jeptalnrecital-court.9/ | Schluter, Natalie | Actes de la 22e conf{\'e}rence sur le Traitement Automatique des Langues Naturelles. Articles courts | 55--60 | Evaluation approaches for unsupervised rank-based keyword assignment are nearly as numerous as are the existing systems. The prolific production of each newly used metric (or metric twist) seems to stem from general dissatisfaction with the previous one and the source of that dissatisfaction has not previously been discussed in the literature. The difficulty may stem from a poor specification of the keyword assignment task in view of the rank-based approach. With a more complete specification of this task, we aim to show why the previous evaluation metrics fail to satisfy researchers' goals to distinguish and detect good rank-based keyword assignment systems. We put forward a characterisation of an ideal evaluation metric, and discuss the consistency of the evaluation metrics with this ideal, finding that the average standard normalised cumulative gain metric is most consistent with this ideal. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 65,036
inproceedings | schluter-2015-effects | Effects of Graph Generation for Unsupervised Non-Contextual Single Document Keyword Extraction | Lecarpentier, Jean-Marc and Lucas, Nadine | jun | 2015 | Caen, France | ATALA | https://aclanthology.org/2015.jeptalnrecital-court.10/ | Schluter, Natalie | Actes de la 22e conf{\'e}rence sur le Traitement Automatique des Langues Naturelles. Articles courts | 61--67 | This paper presents an exhaustive study on the generation of graph input to unsupervised graph-based non-contextual single document keyword extraction systems. A concrete hypothesis on concept coordination for documents that are scientific articles is put forward, consistent with two separate graph models: one which is based on word adjacency in the linear text{--}an approach forming the foundation of all previous graph-based keyword extraction methods, and a novel one that is based on word adjacency modulo their modifiers. In doing so, we achieve a best reported NDCG score to date of 0.431 for any system on the same data. In terms of a best parameter f-score, we achieve the highest reported to date (0.714) at a reasonable ranked list cut-off of n = 6, which is also the best reported f-score for any keyword extraction or generation system in the literature on the same data. The best-parameter f-score corresponds to a reduction in error of 12.6{\%} conservatively. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 65,037
inproceedings | le-sadat-2015-building | Building a Bilingual {V}ietnamese-{F}rench Named Entity Annotated Corpus through Cross-Linguistic Projection | Lecarpentier, Jean-Marc and Lucas, Nadine | jun | 2015 | Caen, France | ATALA | https://aclanthology.org/2015.jeptalnrecital-demonstration.6/ | Le, Ngoc Tan and Sadat, Fatiha | Actes de la 22e conf{\'e}rence sur le Traitement Automatique des Langues Naturelles. D{\'e}monstrations | 12--13 | The creation of high-quality named entity annotated resources is a time-consuming and expensive process. Most of the gold standard corpora are available for English but not for less-resourced languages such as Vietnamese. In Asian languages, this task remains problematic. This paper focuses on an automatic construction of named entity annotated corpora for Vietnamese-French, a less-resourced pair of languages. We incrementally apply different cross-projection methods using parallel corpora, such as perfect string matching and edit distance similarity. Evaluations on the Vietnamese{--}French pair of languages show a good accuracy (F-score of 94.90{\%}) when identifying named entity pairs and building a named entity annotated parallel corpus. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 65,080
inproceedings | navigli-2015-multilinguality | Multilinguality at Your Fingertips : {B}abel{N}et, Babelfy and Beyond ! | Lecarpentier, Jean-Marc and Lucas, Nadine | jun | 2015 | Caen, France | ATALA | https://aclanthology.org/2015.jeptalnrecital-invite.1/ | Navigli, Roberto | Actes de la 22e conf{\'e}rence sur le Traitement Automatique des Langues Naturelles. Conf{\'e}rences invit{\'e}es | 1--1 | Multilinguality is a key feature of today's Web, and it is this feature that we leverage and exploit in our research work at the Sapienza University of Rome's Linguistic Computing Laboratory, which I am going to overview and showcase in this talk. I will start by presenting BabelNet 3.0, available at \url{http://babelnet.org}, a very large multilingual encyclopedic dictionary and semantic network, which covers 271 languages and provides both lexicographic and encyclopedic knowledge for all the open-class parts of speech, thanks to the seamless integration of WordNet, Wikipedia, Wiktionary, OmegaWiki, Wikidata and the Open Multilingual WordNet. Next, I will present Babelfy, available at \url{http://babelfy.org}, a unified approach that leverages BabelNet to jointly perform word sense disambiguation and entity linking in arbitrary languages, with performance on both tasks on a par with, or surpassing, those of task-specific state-of-the-art supervised systems. Finally I will describe the Wikipedia Bitaxonomy, available at \url{http://wibitaxonomy.org}, a new approach to the construction of a Wikipedia bitaxonomy, that is, the largest and most accurate currently available taxonomy of Wikipedia pages and taxonomy of categories, aligned to each other. I will also give an outline of future work on multilingual resources and processing, including state-of-the-art semantic similarity with sense embeddings. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 65,088
article | king-etal-2014-heterogeneous | Heterogeneous Networks and Their Applications: Scientometrics, Name Disambiguation, and Topic Modeling | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1001/ | King, Ben and Jha, Rahul and Radev, Dragomir R. | null | 1--14 | We present heterogeneous networks as a way to unify lexical networks with relational data. We build a unified ACL Anthology network, tying together the citation, author collaboration, and term-cooccurrence networks with affiliation and venue relations. This representation proves to be convenient and allows problems such as name disambiguation, topic modeling, and the measurement of scientific impact to be easily solved using only this network and off-the-shelf graph algorithms. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00161 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,617
article | lui-etal-2014-automatic | Automatic Detection and Language Identification of Multilingual Documents | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1003/ | Lui, Marco and Lau, Jey Han and Baldwin, Timothy | null | 27--40 | Language identification is the task of automatically detecting the language(s) present in a document based on the content of the document. In this work, we address the problem of detecting documents that contain text from more than one language (multilingual documents). We introduce a method that is able to detect that a document is multilingual, identify the languages present, and estimate their relative proportions. We demonstrate the effectiveness of our method over synthetic data, as well as real-world multilingual documents collected from the web. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00163 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,619 |
article | pitler-2014-crossing | A Crossing-Sensitive Third-Order Factorization for Dependency Parsing | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1004/ | Pitler, Emily | null | 41--54 | Parsers that parametrize over wider scopes are generally more accurate than edge-factored models. For graph-based non-projective parsers, wider factorizations have so far implied large increases in the computational complexity of the parsing problem. This paper introduces a {\textquotedblleft}crossing-sensitive{\textquotedblright} generalization of a third-order factorization that trades off complexity in the model structure (i.e., scoring with features over multiple edges) with complexity in the output structure (i.e., producing crossing edges). Under this model, the optimal 1-Endpoint-Crossing tree can be found in O(n^4) time, matching the asymptotic run-time of both the third-order projective parser and the edge-factored 1-Endpoint-Crossing parser. The crossing-sensitive third-order parser is significantly more accurate than the third-order projective parser under many experimental settings and significantly less accurate on none. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00164 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,620
article | wang-manning-2014-cross | Cross-lingual Projected Expectation Regularization for Weakly Supervised Learning | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1005/ | Wang, Mengqiu and Manning, Christopher D. | null | 55--66 | We consider a multilingual weakly supervised learning scenario where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide the learning in other languages. Past approaches project labels across bitext and use them as features or gold labels for training. We propose a new method that projects model expectations rather than labels, which facilitates transfer of model uncertainty across language boundaries. We encode expectations as constraints and train a discriminative CRF model using Generalized Expectation Criteria (Mann and McCallum, 2010). Evaluated on standard Chinese-English and German-English NER datasets, our method demonstrates F1 scores of 64{\%} and 60{\%} when no labeled data is used. Attaining the same accuracy with supervised CRFs requires 12k and 1.5k labeled sentences. Furthermore, when combined with labeled examples, our method yields significant improvements over state-of-the-art supervised methods, achieving best reported numbers to date on Chinese OntoNotes and German CoNLL-03 datasets. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00165 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,621
article | young-etal-2014-image | From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1006/ | Young, Peter and Lai, Alice and Hodosh, Micah and Hockenmaier, Julia | null | 67--78 | We propose to use the visual denotations of linguistic expressions (i.e. the set of images they describe) to define novel denotational similarity metrics, which we show to be at least as beneficial as distributional similarities for two tasks that require semantic inference. To compute these denotational similarities, we construct a denotation graph, i.e. a subsumption hierarchy over constituents and their denotations, based on a large corpus of 30K images and 150K descriptive captions. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00166 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,622 |
article | pavlick-etal-2014-language | The Language Demographics of {A}mazon {M}echanical {T}urk | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1007/ | Pavlick, Ellie and Post, Matt and Irvine, Ann and Kachaev, Dmitry and Callison-Burch, Chris | null | 79--92 | We present a large scale study of the languages spoken by bilingual workers on Mechanical Turk (MTurk). We establish a methodology for determining the language skills of anonymous crowd workers that is more robust than simple surveying. We validate workers' self-reported language skill claims by measuring their ability to correctly translate words, and by geolocating workers to see if they reside in countries where the languages are likely to be spoken. Rather than posting a one-off survey, we posted paid tasks consisting of 1,000 assignments to translate a total of 10,000 words in each of 100 languages. Our study ran for several months, and was highly visible on the MTurk crowdsourcing platform, increasing the chances that bilingual workers would complete it. Our study was useful both to create bilingual dictionaries and to act as a census of the bilingual speakers on MTurk. We use this data to recommend languages with the largest speaker populations as good candidates for other researchers who want to develop crowdsourced, multilingual technologies. To further demonstrate the value of creating data via crowdsourcing, we hire workers to create bilingual parallel corpora in six Indian languages, and use them to train statistical machine translation systems. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00167 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,623
article | borschinger-johnson-2014-exploring | Exploring the Role of Stress in {B}ayesian Word Segmentation using {A}daptor {G}rammars | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1008/ | B{\"o}rschinger, Benjamin and Johnson, Mark | null | 93--104 | Stress has long been established as a major cue in word segmentation for English infants. We show that enabling a current state-of-the-art Bayesian word segmentation model to take advantage of stress cues noticeably improves its performance. We find that the improvements range from 10 to 4{\%}, depending on both the use of phonotactic cues and, to a lesser extent, the amount of evidence available to the learner. We also find that in particular early on, stress cues are much more useful for our model than phonotactic cues by themselves, consistent with the finding that children do seem to use stress cues before they use phonotactic cues. Finally, we study how the model's knowledge about stress patterns evolves over time. We not only find that our model correctly acquires the most frequent patterns relatively quickly but also that the Unique Stress Constraint that is at the heart of a previously proposed model does not need to be built in but can be acquired jointly with word segmentation. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00168 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,624
article | ravi-etal-2014-parallel | Parallel Algorithms for Unsupervised Tagging | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1009/ | Ravi, Sujith and Vassilvitskii, Sergei and Rastogi, Vibhor | null | 105--118 | We propose a new method for unsupervised tagging that finds minimal models which are then further improved by Expectation Maximization training. In contrast to previous approaches that rely on manually specified and multi-step heuristics for model minimization, our approach is a simple greedy approximation algorithm DMLC (Distributed-Minimum-Label-Cover) that solves this objective in a single step. We extend the method and show how to efficiently parallelize the algorithm on modern parallel computing platforms while preserving approximation guarantees. The new method easily scales to large data and grammar sizes, overcoming the memory bottleneck in previous approaches. We demonstrate the power of the new algorithm by evaluating on various sequence labeling tasks: Part-of-Speech tagging for multiple languages (including low-resource languages), with complete and incomplete dictionaries, and supertagging, a complex sequence labeling task, where the grammar size alone can grow to millions of entries. Our results show that for all of these settings, our method achieves state-of-the-art scalable performance that yields high quality tagging outputs. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00169 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,625
article | honnibal-johnson-2014-joint | Joint Incremental Disfluency Detection and Dependency Parsing | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1011/ | Honnibal, Matthew and Johnson, Mark | null | 131--142 | We present an incremental dependency parsing model that jointly performs disfluency detection. The model handles speech repairs using a novel non-monotonic transition system, and includes several novel classes of features. For comparison, we evaluated two pipeline systems, using state-of-the-art disfluency detectors. The joint model performed better on both tasks, with a parse accuracy of 90.5{\%} and 84.0{\%} accuracy at disfluency detection. The model runs in expected linear time, and processes over 550 tokens a second. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00171 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,627 |
article | styler-iv-etal-2014-temporal | Temporal Annotation in the Clinical Domain | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1012/ | Styler IV, William F. and Bethard, Steven and Finan, Sean and Palmer, Martha and Pradhan, Sameer and de Groen, Piet C and Erickson, Brad and Miller, Timothy and Lin, Chen and Savova, Guergana and Pustejovsky, James | null | 143--154 | This article discusses the requirements of a formal specification for the annotation of temporal information in clinical narratives. We discuss the implementation and extension of ISO-TimeML for annotating a corpus of clinical notes, known as the THYME corpus. To reflect the information task and the heavily inference-based reasoning demands in the domain, a new annotation guideline has been developed, {\textquotedblleft}the THYME Guidelines to ISO-TimeML (THYME-TimeML){\textquotedblright}. To clarify what relations merit annotation, we distinguish between linguistically-derived and inferentially-derived temporal orderings in the text. We also apply a top performing TempEval 2013 system against this new resource to measure the difficulty of adapting systems to the clinical domain. The corpus is available to the community and has been proposed for use in a SemEval 2015 task. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00172 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,628 |
article | qu-etal-2014-senti | Senti-{LSSVM}: Sentiment-Oriented Multi-Relation Extraction with Latent Structural {SVM} | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1013/ | Qu, Lizhen and Zhang, Yi and Wang, Rui and Jiang, Lili and Gemulla, Rainer and Weikum, Gerhard | null | 155--168 | Extracting instances of sentiment-oriented relations from user-generated web documents is important for online marketing analysis. Unlike previous work, we formulate this extraction task as a structured prediction problem and design the corresponding inference as an integer linear program. Our latent structural SVM based model can learn from training corpora that do not contain explicit annotations of sentiment-bearing expressions, and it can simultaneously recognize instances of both binary (polarity) and ternary (comparative) relations with regard to entity mentions of interest. The empirical evaluation shows that our approach significantly outperforms state-of-the-art systems across domains (cameras and movies) and across genres (reviews and forum posts). The gold standard corpus that we built will also be a valuable resource for the community. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00173 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,629 |
article | sperber-etal-2014-segmentation | Segmentation for Efficient Supervised Language Annotation with an Explicit Cost-Utility Tradeoff | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1014/ | Sperber, Matthias and Simantzik, Mirjam and Neubig, Graham and Nakamura, Satoshi and Waibel, Alex | null | 169--180 | In this paper, we study the problem of manually correcting automatic annotations of natural language in as efficient a manner as possible. We introduce a method for automatically segmenting a corpus into chunks such that many uncertain labels are grouped into the same chunk, while human supervision can be omitted altogether for other segments. A tradeoff must be found for segment sizes. Choosing short segments allows us to reduce the number of highly confident labels that are supervised by the annotator, which is useful because these labels are often already correct and supervising correct labels is a waste of effort. In contrast, long segments reduce the cognitive effort due to context switches. Our method helps find the segmentation that optimizes supervision efficiency by defining user models to predict the cost and utility of supervising each segment and solving a constrained optimization problem balancing these contradictory objectives. A user study demonstrates noticeable gains over pre-segmented, confidence-ordered baselines on two natural language processing tasks: speech transcription and word segmentation. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00174 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,630 |
article | yogatama-etal-2014-dynamic | Dynamic Language Models for Streaming Text | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1015/ | Yogatama, Dani and Wang, Chong and Routledge, Bryan R. and Smith, Noah A. and Xing, Eric P. | null | 181--192 | We present a probabilistic language model that captures temporal dynamics and conditions on arbitrary non-linguistic context features. These context features serve as important indicators of language changes that are otherwise difficult to capture using text data by itself. We learn our model in an efficient online fashion that is scalable for large, streaming data. With five streaming datasets from two different genres{---}economics news articles and social media{---}we evaluate our model on the task of sequential language modeling. Our model consistently outperforms competing models. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00175 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,631 |
article | schneider-etal-2014-discriminative | Discriminative Lexical Semantic Segmentation with Gaps: Running the {MWE} Gamut | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1016/ | Schneider, Nathan and Danchik, Emily and Dyer, Chris and Smith, Noah A. | null | 193--206 | We present a novel representation, evaluation measure, and supervised models for the task of identifying the multiword expressions (MWEs) in a sentence, resulting in a lexical semantic segmentation. Our approach generalizes a standard chunking representation to encode MWEs containing gaps, thereby enabling efficient sequence tagging algorithms for feature-rich discriminative models. Experiments on a new dataset of English web text offer the first linguistically-driven evaluation of MWE identification with truly heterogeneous expression types. Our statistical sequence model greatly outperforms a lookup-based segmentation procedure, achieving nearly 60{\%} F1 for MWE identification. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00176 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,632 |
article | socher-etal-2014-grounded | Grounded Compositional Semantics for Finding and Describing Images with Sentences | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1017/ | Socher, Richard and Karpathy, Andrej and Le, Quoc V. and Manning, Christopher D. and Ng, Andrew Y. | null | 207--218 | Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images. However, the sentence vectors of previous models cannot accurately represent visually grounded meaning. We introduce the DT-RNN model which uses dependency trees to embed sentences into a vector space in order to retrieve images that are described by those sentences. Unlike previous RNN-based models which use constituency trees, DT-RNNs naturally focus on the action and agents in a sentence. They are better able to abstract from the details of word order and syntactic expression. DT-RNNs outperform other recursive and recurrent neural networks, kernelized CCA and a bag-of-words baseline on the tasks of finding an image that fits a sentence description and vice versa. They also give more similar representations to sentences that describe the same image. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00177 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,633 |
article | sultan-etal-2014-back | Back to Basics for Monolingual Alignment: Exploiting Word Similarity and Contextual Evidence | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1018/ | Sultan, Md Arafat and Bethard, Steven and Sumner, Tamara | null | 219--230 | We present a simple, easy-to-replicate monolingual aligner that demonstrates state-of-the-art performance while relying on almost no supervision and a very small number of external resources. Based on the hypothesis that words with similar meanings represent potential pairs for alignment if located in similar contexts, we propose a system that operates by finding such pairs. In two intrinsic evaluations on alignment test data, our system achieves F1 scores of 88{--}92{\%}, demonstrating 1{--}3{\%} absolute improvement over the previous best system. Moreover, in two extrinsic evaluations our aligner outperforms existing aligners, and even a naive application of the aligner approaches state-of-the-art performance in each extrinsic task. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00178 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,634 |
article | moro-etal-2014-entity | Entity Linking meets Word Sense Disambiguation: a Unified Approach | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1019/ | Moro, Andrea and Raganato, Alessandro and Navigli, Roberto | null | 231--244 | Entity Linking (EL) and Word Sense Disambiguation (WSD) both address the lexical ambiguity of language. But while the two tasks are pretty similar, they differ in a fundamental respect: in EL the textual mention can be linked to a named entity which may or may not contain the exact mention, while in WSD there is a perfect match between the word form (better, its lemma) and a suitable word sense. In this paper we present Babelfy, a unified graph-based approach to EL and WSD based on a loose identification of candidate meanings coupled with a densest subgraph heuristic which selects high-coherence semantic interpretations. Our experiments show state-of-the-art performances on both tasks on 6 different datasets, including a multilingual setting. Babelfy is online at \url{http://babelfy.org} | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00179 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,635 |
article | utt-pado-2014-crosslingual | Crosslingual and Multilingual Construction of Syntax-Based Vector Space Models | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1020/ | Utt, Jason and Pad{\'o}, Sebastian | null | 245--258 | Syntax-based distributional models of lexical semantics provide a flexible and linguistically adequate representation of co-occurrence information. However, their construction requires large, accurately parsed corpora, which are unavailable for most languages. In this paper, we develop a number of methods to overcome this obstacle. We describe (a) a crosslingual approach that constructs a syntax-based model for a new language requiring only an English resource and a translation lexicon; and (b) multilingual approaches that combine crosslingual with monolingual information, subject to availability. We evaluate on two lexical semantic benchmarks in German and Croatian. We find that the models exhibit complementary profiles: crosslingual models yield higher accuracies while monolingual models provide better coverage. In addition, we show that simple multilingual models can successfully combine their strengths. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00180 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,636 |
article | fang-chang-2014-entity | Entity Linking on Microblogs with Spatial and Temporal Signals | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1021/ | Fang, Yuan and Chang, Ming-Wei | null | 259--272 | Microblogs present an excellent opportunity for monitoring and analyzing world happenings. Given that words are often ambiguous, entity linking becomes a crucial step towards understanding microblogs. In this paper, we re-examine the problem of entity linking on microblogs. We first observe that spatiotemporal (i.e., spatial and temporal) signals play a key role, but they are not utilized in existing approaches. Thus, we propose a novel entity linking framework that incorporates spatiotemporal signals through a weakly supervised process. Using entity annotations on real-world data, our experiments show that the spatiotemporal model improves F1 by more than 10 points over existing systems. Finally, we present a qualitative study to visualize the effectiveness of our approach. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00181 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,637 |
article | chambers-etal-2014-dense | Dense Event Ordering with a Multi-Pass Architecture | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1022/ | Chambers, Nathanael and Cassidy, Taylor and McDowell, Bill and Bethard, Steven | null | 273--284 | The past 10 years of event ordering research has focused on learning partial orderings over document events and time expressions. The most popular corpus, the TimeBank, contains a small subset of the possible ordering graph. Many evaluations follow suit by only testing certain pairs of events (e.g., only main verbs of neighboring sentences). This has led most research to focus on specific learners for partial labelings. This paper attempts to nudge the discussion from identifying some relations to all relations. We present new experiments on strongly connected event graphs that contain {\ensuremath{\sim}}10 times more relations per document than the TimeBank. We also describe a shift away from the single learner to a sieve-based architecture that naturally blends multiple learners into a precision-ranked cascade of sieves. Each sieve adds labels to the event graph one at a time, and earlier sieves inform later ones through transitive closure. This paper thus describes innovations in both approach and task. We experiment on the densest event graphs to date and show a 14{\%} gain over state-of-the-art. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00182 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,638 |
article | hill-etal-2014-multi | Multi-Modal Models for Concrete and Abstract Concept Meaning | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1023/ | Hill, Felix and Reichart, Roi and Korhonen, Anna | null | 285--296 | Multi-modal models that learn semantic representations from both linguistic and perceptual input outperform language-only models on a range of evaluations, and better reflect human concept acquisition. Most perceptual input to such models corresponds to concrete noun concepts and the superiority of the multi-modal approach has only been established when evaluating on such concepts. We therefore investigate which concepts can be effectively learned by multi-modal models. We show that concreteness determines both which linguistic features are most informative and the impact of perceptual input in such models. We then introduce ridge regression as a means of propagating perceptual information from concrete nouns to more abstract concepts that is more robust than previous approaches. Finally, we present weighted gram matrix combination, a means of combining representations from distinct modalities that outperforms alternatives when both modalities are sufficiently rich. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00183 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,639 |
article | west-etal-2014-exploiting | Exploiting Social Network Structure for Person-to-Person Sentiment Analysis | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1024/ | West, Robert and Paskov, Hristo S. and Leskovec, Jure and Potts, Christopher | null | 297--310 | Person-to-person evaluations are prevalent in all kinds of discourse and important for establishing reputations, building social bonds, and shaping public opinion. Such evaluations can be analyzed separately using signed social networks and textual sentiment analysis, but this misses the rich interactions between language and social context. To capture such interactions, we develop a model that predicts individual A's opinion of individual B by synthesizing information from the signed social network in which A and B are embedded with sentiment analysis of the evaluative texts relating A to B. We prove that this problem is NP-hard but can be relaxed to an efficiently solvable hinge-loss Markov random field, and we show that this implementation outperforms text-only and network-only versions in two very different datasets involving community-level decision-making: the Wikipedia Requests for Adminship corpus and the Convote U.S. Congressional speech corpus. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00184 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,640
article | passonneau-carpenter-2014-benefits | The Benefits of a Model of Annotation | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1025/ | Passonneau, Rebecca J. and Carpenter, Bob | null | 311--326 | Standard agreement measures for interannotator reliability are neither necessary nor sufficient to ensure a high quality corpus. In a case study of word sense annotation, conventional methods for evaluating labels from trained annotators are contrasted with a probabilistic annotation model applied to crowdsourced data. The annotation model provides far more information, including a certainty measure for each gold standard label; the crowdsourced data was collected at less than half the cost of the conventional approach. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00185 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,641 |
article | lewis-steedman-2014-improved | Improved {CCG} Parsing with Semi-supervised Supertagging | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1026/ | Lewis, Mike and Steedman, Mark | null | 327--338 | Current supervised parsers are limited by the size of their labelled training data, making improving them with unlabelled data an important goal. We show how a state-of-the-art CCG parser can be enhanced, by predicting lexical categories using unsupervised vector-space embeddings of words. The use of word embeddings enables our model to better generalize from the labelled data, and allows us to accurately assign lexical categories without depending on a POS-tagger. Our approach leads to substantial improvements in dependency parsing results over the standard supervised CCG parser when evaluated on Wall Street Journal (0.8{\%}), Wikipedia (1.8{\%}) and biomedical (3.4{\%}) text. We compare the performance of two recently proposed approaches for classification using a wide variety of word embeddings. We also give a detailed error analysis demonstrating where using embeddings outperforms traditional feature sets, and showing how including POS features can decrease accuracy. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00186 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,642 |
article | qian-liu-2014-2 | 2-Slave Dual Decomposition for Generalized Higher Order {CRF}s | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1027/ | Qian, Xian and Liu, Yang | null | 339--350 | We show that the decoding problem in generalized Higher Order Conditional Random Fields (CRFs) can be decomposed into two parts: one is a tree labeling problem that can be solved in linear time using dynamic programming; the other is a supermodular quadratic pseudo-Boolean maximization problem, which can be solved in cubic time using a minimum cut algorithm. We use dual decomposition to force their agreement. Experimental results on Twitter named entity recognition and sentence dependency tagging tasks show that our method outperforms spanning tree based dual decomposition. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00187 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,643 |
article | kuznetsova-etal-2014-treetalk | {T}ree{T}alk: Composition and Compression of Trees for Image Descriptions | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1028/ | Kuznetsova, Polina and Ordonez, Vicente and Berg, Tamara L. and Choi, Yejin | null | 351--362 | We present a new tree based approach to composing expressive image descriptions that makes use of naturally occuring web images with captions. We investigate two related tasks: image caption generalization and generation, where the former is an optional subtask of the latter. The high-level idea of our approach is to harvest expressive phrases (as tree fragments) from existing image descriptions, then to compose a new description by selectively combining the extracted (and optionally pruned) tree fragments. Key algorithmic components are tree composition and compression, both integrating tree structure with sequence structure. Our proposed system attains significantly better performance than previous approaches for both image caption generalization and generation. In addition, our work is the first to show the empirical benefit of automatically generalized captions for composing natural image descriptions. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00188 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,644 |
article | bamman-smith-2014-unsupervised | Unsupervised Discovery of Biographical Structure from Text | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1029/ | Bamman, David and Smith, Noah A. | null | 363--376 | We present a method for discovering abstract event classes in biographies, based on a probabilistic latent-variable model. Taking as input timestamped text, we exploit latent correlations among events to learn a set of event classes (such as Born, Graduates High School, and Becomes Citizen), along with the typical times in a person's life when those events occur. In a quantitative evaluation at the task of predicting a person's age for a given event, we find that our generative model outperforms a strong linear regression baseline, along with simpler variants of the model that ablate some features. The abstract event classes that we learn allow us to perform a large-scale analysis of 242,970 Wikipedia biographies. Though it is known that women are greatly underrepresented on Wikipedia{---}not only as editors (Wikipedia, 2011) but also as subjects of articles (Reagle and Rhue, 2011){---}we find that there is a bias in their characterization as well, with biographies of women containing significantly more emphasis on events of marriage and divorce than biographies of men. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00189 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,645
article | reddy-etal-2014-large | Large-scale Semantic Parsing without Question-Answer Pairs | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1030/ | Reddy, Siva and Lapata, Mirella and Steedman, Mark | null | 377--392 | In this paper we introduce a novel semantic parsing approach to query Freebase in natural language without requiring manual annotations or question-answer pairs. Our key insight is to represent natural language via semantic graphs whose topology shares many commonalities with Freebase. Given this representation, we conceptualize semantic parsing as a graph matching problem. Our model converts sentences to semantic graphs using CCG and subsequently grounds them to Freebase guided by denotations as a form of weak supervision. Evaluation experiments on a subset of the Free917 and WebQuestions benchmark datasets show our semantic parser improves over the state of the art. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00190 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,646 |
article | clark-etal-2014-locally | Locally Non-Linear Learning for Statistical Machine Translation via Discretization and Structured Regularization | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1031/ | Clark, Jonathan H. and Dyer, Chris and Lavie, Alon | null | 393--404 | Linear models, which support efficient learning and inference, are the workhorses of statistical machine translation; however, linear decision rules are less attractive from a modeling perspective. In this work, we introduce a technique for learning arbitrary, rule-local, non-linear feature transforms that improve model expressivity, but do not sacrifice the efficient inference and learning associated with linear models. To demonstrate the value of our technique, we discard the customary log transform of lexical probabilities and drop the phrasal translation probability in favor of raw counts. We observe that our algorithm learns a variation of a log transform that leads to better translation quality compared to the explicit log transform. We conclude that non-linear responses play an important role in SMT, an observation that we hope will inform the efforts of feature engineers. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00191 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,647 |
article | rozovskaya-roth-2014-building | Building a State-of-the-Art Grammatical Error Correction System | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1033/ | Rozovskaya, Alla and Roth, Dan | null | 419--434 | This paper identifies and examines the key principles underlying building a state-of-the-art grammatical error correction system. We do this by analyzing the Illinois system that placed first among seventeen teams in the recent CoNLL-2013 shared task on grammatical error correction. The system focuses on five different types of errors common among non-native English writers. We describe four design principles that are relevant for correcting all of these errors, analyze the system along these dimensions, and show how each of these dimensions contributes to the performance. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00193 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,649 |
article | xu-etal-2014-extracting | Extracting Lexically Divergent Paraphrases from {T}witter | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1034/ | Xu, Wei and Ritter, Alan and Callison-Burch, Chris and Dolan, William B. and Ji, Yangfeng | null | 435--448 | We present MultiP (Multi-instance Learning Paraphrase Model), a new model suited to identify paraphrases within the short messages on Twitter. We jointly model paraphrase relations between word and sentence pairs and assume only sentence-level annotations during learning. Using this principled latent variable model alone, we achieve the performance competitive with a state-of-the-art method which combines a latent space model with a feature-based supervised classifier. Our model also captures lexically divergent paraphrases that differ from yet complement previous methods; combining our model with previous work significantly outperforms the state-of-the-art. In addition, we present a novel annotation methodology that has allowed us to crowdsource a paraphrase corpus from Twitter. We make this new dataset available to the research community. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00194 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,650 |
article | jurgens-navigli-2014-fun | It's All Fun and Games until Someone Annotates: Video Games with a Purpose for Linguistic Annotation | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1035/ | Jurgens, David and Navigli, Roberto | null | 449--464 | Annotated data is prerequisite for many NLP applications. Acquiring large-scale annotated corpora is a major bottleneck, requiring significant time and resources. Recent work has proposed turning annotation into a game to increase its appeal and lower its cost; however, current games are largely text-based and closely resemble traditional annotation tasks. We propose a new linguistic annotation paradigm that produces annotations from playing graphical video games. The effectiveness of this design is demonstrated using two video games: one to create a mapping from WordNet senses to images, and a second game that performs Word Sense Disambiguation. Both games produce accurate results. The first game yields annotation quality equal to that of experts and a cost reduction of 73{\%} over equivalent crowdsourcing; the second game provides a 16.3{\%} improvement in accuracy over current state-of-the-art sense disambiguation games with WordNet. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00195 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,651
article | zhai-etal-2014-online | Online {A}daptor {G}rammars with Hybrid Inference | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1036/ | Zhai, Ke and Boyd-Graber, Jordan and Cohen, Shay B. | null | 465--476 | Adaptor grammars are a flexible, powerful formalism for defining nonparametric, unsupervised models of grammar productions. This flexibility comes at the cost of expensive inference. We address the difficulty of inference through an online algorithm which uses a hybrid of Markov chain Monte Carlo and variational inference. We show that this inference strategy improves scalability without sacrificing performance on unsupervised word segmentation and topic modeling tasks. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00196 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,652 |
article | durrett-klein-2014-joint | A Joint Model for Entity Analysis: Coreference, Typing, and Linking | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1037/ | Durrett, Greg and Klein, Dan | null | 477--490 | We present a joint model of three core tasks in the entity analysis stack: coreference resolution (within-document clustering), named entity recognition (coarse semantic typing), and entity linking (matching to Wikipedia entities). Our model is formally a structured conditional random field. Unary factors encode local features from strong baselines for each task. We then add binary and ternary factors to capture cross-task interactions, such as the constraint that coreferent mentions have the same semantic type. On the ACE 2005 and OntoNotes datasets, we achieve state-of-the-art results for all three tasks. Moreover, joint modeling improves performance on each task over strong independent baselines. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00197 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,653 |
article | chandlee-etal-2014-learning | Learning Strictly Local Subsequential Functions | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1038/ | Chandlee, Jane and Eyraud, R{\'e}mi and Heinz, Jeffrey | null | 491--504 | We define two proper subclasses of subsequential functions based on the concept of Strict Locality (McNaughton and Papert, 1971; Rogers and Pullum, 2011; Rogers et al., 2013) for formal languages. They are called Input and Output Strictly Local (ISL and OSL). We provide an automata-theoretic characterization of the ISL class and theorems establishing how the classes are related to each other and to Strictly Local languages. We give evidence that local phonological and morphological processes belong to these classes. Finally we provide a learning algorithm which provably identifies the class of ISL functions in the limit from positive data in polynomial time and data. We demonstrate this learning result on appropriately synthesized artificial corpora. We leave a similar learning result for OSL functions for future work and suggest future directions for addressing non-local phonological processes. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00198 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,654 |
article | yang-cardie-2014-joint | Joint Modeling of Opinion Expression Extraction and Attribute Classification | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1039/ | Yang, Bishan and Cardie, Claire | null | 505--516 | In this paper, we study the problems of opinion expression extraction and expression-level polarity and intensity classification. Traditional fine-grained opinion analysis systems address these problems in isolation and thus cannot capture interactions among the textual spans of opinion expressions and their opinion-related properties. We present two types of joint approaches that can account for such interactions during 1) both learning and inference or 2) only during inference. Extensive experiments on a standard dataset demonstrate that our approaches provide substantial improvements over previously published results. By analyzing the results, we gain some insight into the advantages of different joint models. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00199 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,655 |
article | beinborn-etal-2014-predicting | Predicting the Difficulty of Language Proficiency Tests | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1040/ | Beinborn, Lisa and Zesch, Torsten and Gurevych, Iryna | null | 517--530 | Language proficiency tests are used to evaluate and compare the progress of language learners. We present an approach for automatic difficulty prediction of C-tests that performs on par with human experts. On the basis of detailed analysis of newly collected data, we develop a model for C-test difficulty introducing four dimensions: solution difficulty, candidate ambiguity, inter-gap dependency, and paragraph difficulty. We show that cues from all four dimensions contribute to C-test difficulty. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00200 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,656 |
article | lapesa-evert-2014-large | A Large Scale Evaluation of Distributional Semantic Models: Parameters, Interactions and Model Selection | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1041/ | Lapesa, Gabriella and Evert, Stefan | null | 531--546 | This paper presents the results of a large-scale evaluation study of window-based Distributional Semantic Models on a wide variety of tasks. Our study combines a broad coverage of model parameters with a model selection methodology that is robust to overfitting and able to capture parameter interactions. We show that our strategy allows us to identify parameter configurations that achieve good performance across different datasets and tasks. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00201 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,657 |
article | vlachos-clark-2014-new | A New Corpus and Imitation Learning Framework for Context-Dependent Semantic Parsing | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1042/ | Vlachos, Andreas and Clark, Stephen | null | 547--560 | Semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation. Most approaches to this task have been evaluated on a small number of existing corpora which assume that all utterances must be interpreted according to a database and typically ignore context. In this paper we present a new, publicly available corpus for context-dependent semantic parsing. The MRL used for the annotation was designed to support a portable, interactive tourist information system. We develop a semantic parser for this corpus by adapting the imitation learning algorithm DAgger without requiring alignment information during training. DAgger improves upon independently trained classifiers by 9.0 and 4.8 points in F-score on the development and test sets respectively. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00202 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,658 |
article | belinkov-etal-2014-exploring | Exploring Compositional Architectures and Word Vector Representations for Prepositional Phrase Attachment | Lin, Dekang and Collins, Michael and Lee, Lillian | null | 2014 | Cambridge, MA | MIT Press | https://aclanthology.org/Q14-1043/ | Belinkov, Yonatan and Lei, Tao and Barzilay, Regina and Globerson, Amir | null | 561--572 | Prepositional phrase (PP) attachment disambiguation is a known challenge in syntactic parsing. The lexical sparsity associated with PP attachments motivates research in word representations that can capture pertinent syntactic and semantic features of the word. One promising solution is to use word vectors induced from large amounts of raw text. However, state-of-the-art systems that employ such representations yield modest gains in PP attachment accuracy. In this paper, we show that word vector representations can yield significant PP attachment performance gains. This is achieved via a non-linear architecture that is discriminatively trained to maximize PP attachment accuracy. The architecture is initialized with word vectors trained from unlabeled data, and relearns those to maximize attachment accuracy. We obtain additional performance gains with alternative representations such as dependency-based word vectors. When tested on both English and Arabic datasets, our method outperforms both a strong SVM classifier and state-of-the-art parsers. For instance, we achieve 82.6{\%} PP attachment accuracy on Arabic, while the Turbo and Charniak self-trained parsers obtain 76.7{\%} and 80.8{\%} respectively. | Transactions of the Association for Computational Linguistics | 2 | 10.1162/tacl_a_00203 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 66,659 |
inproceedings | verhoeven-daelemans-2014-clips | {CL}i{PS} Stylometry Investigation ({CSI}) corpus: A {D}utch corpus for the detection of age, gender, personality, sentiment and deception in text | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1001/ | Verhoeven, Ben and Daelemans, Walter | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | 3081--3085 | We present the CLiPS Stylometry Investigation (CSI) corpus, a new Dutch corpus containing reviews and essays written by university students. It is designed to serve multiple purposes: detection of age, gender, authorship, personality, sentiment, deception, topic and genre. Another major advantage is its planned yearly expansion with each year`s new students. The corpus currently contains about 305,000 tokens spread over 749 documents. The average review length is 128 tokens; the average essay length is 1126 tokens. The corpus will be made available on the CLiPS website (www.clips.uantwerpen.be/datasets) and can freely be used for academic research purposes. An initial deception detection experiment was performed on this data. Deception detection is the task of automatically classifying a text as being either truthful or deceptive, in our case by examining the writing style of the author. This task has never been investigated for Dutch before. We performed a supervised machine learning experiment using the SVM algorithm in a 10-fold cross-validation setup. The only features were the token unigrams present in the training data. Using this simple method, we reached a state-of-the-art F-score of 72.2{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,037
inproceedings | rus-etal-2014-paraphrase | On Paraphrase Identification Corpora | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1002/ | Rus, Vasile and Banjade, Rajendra and Lintean, Mihai | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | 2422--2429 | In this paper we analyze a number of data sets proposed over the last decade or so for the task of paraphrase identification. The goal of the analysis is to identify the advantages as well as shortcomings of the previously proposed data sets. Based on the analysis, we then make recommendations about how to improve the process of creating and using such data sets for evaluating future approaches to the task of paraphrase identification or the more general task of semantic similarity. The recommendations are meant to improve our understanding of what a paraphrase is, offer a fairer ground for comparing approaches, increase the diversity of actual linguistic phenomena that future data sets will cover, and offer ways to improve our understanding of the contributions of various modules or approaches proposed for solving the task of paraphrase identification or similar tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,038
inproceedings | kessler-kuhn-2014-corpus | A Corpus of Comparisons in Product Reviews | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1003/ | Kessler, Wiltrud and Kuhn, Jonas | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | 2242--2248 | Sentiment analysis (or opinion mining) deals with the task of determining the polarity of an opinionated document or sentence. Users often express sentiment about one product by comparing it to a different product. In this work, we present a corpus of comparison sentences from English camera reviews. For our purposes we define a comparison to be any statement about the similarity or difference of two entities. For each sentence we have annotated detailed information about the comparisons it contains: The comparative predicate that expresses the comparison, the type of the comparison, the two entities that are being compared, and the aspect they are compared in. The results of our agreement study show that the decision whether a sentence contains a comparison is difficult to make even for trained human annotators. Once that decision is made, we can achieve consistent results for the very detailed annotations. In total, we have annotated 2108 comparisons in 1707 sentences from camera reviews which makes our corpus the largest resource currently available. The corpus and the annotation guidelines are publicly available on our website. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,039 |
inproceedings | claveau-kijak-2014-generating | Generating and using probabilistic morphological resources for the biomedical domain | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1004/ | Claveau, Vincent and Kijak, Ewa | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | 3348--3354 | In most Indo-European languages, many biomedical terms are rich morphological structures composed of several constituents mainly originating from Greek or Latin. The interpretation of these compounds is a keystone to accessing information. In this paper, we present morphological resources aimed at coping with these biomedical morphological compounds. Following previous work (Claveau et al., 2011; Claveau et al., 2012), these resources are automatically built using Japanese terms in Kanjis as a pivot language and alignment techniques. We show how this alignment information can be used for segmenting compounds, attaching a semantic interpretation to each part, and proposing definitions (glosses) of the compounds. When possible, these tasks are compared with state-of-the-art tools, and the results show the usefulness of our automatically built probabilistic resources. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,040
inproceedings | simov-etal-2014-system | A System for Experiments with Dependency Parsers | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1005/ | Simov, Kiril and Simova, Iliana and Ivanova, Ginka and Mateva, Maria and Osenova, Petya | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | 4061--4065 | In this paper we present a system for experimenting with combinations of dependency parsers. The system supports initial training of different parsing models, creation of parsebank(s) with these models, and different strategies for the construction of ensemble models aimed at improving the output of the individual models by voting. The system employs two algorithms for construction of dependency trees from several parses of the same sentence and several ways for ranking of the arcs in the resulting trees. We have performed experiments with state-of-the-art dependency parsers including MaltParser, MSTParser, TurboParser, and MATEParser, on the data from the Bulgarian treebank {--} BulTreeBank. Our best result from these experiments is slightly better than the best result reported in the literature for this language. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,041
inproceedings | solorio-etal-2014-sockpuppet | Sockpuppet Detection in {W}ikipedia: A Corpus of Real-World Deceptive Writing for Linking Identities | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1006/ | Solorio, Thamar and Hasan, Ragib and Mizan, Mainul | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | 1355--1358 | This paper describes a corpus of sockpuppet cases from Wikipedia. A sockpuppet is an online user account created with a fake identity for the purpose of covering abusive behavior and/or subverting the editing regulation process. We used a semi-automated method for crawling and curating a dataset of real sockpuppet investigation cases. To the best of our knowledge, this is the first corpus available on real-world deceptive writing. We describe the process for crawling the data and some preliminary results that can be used as baseline for benchmarking research. The dataset has been released under a Creative Commons license from our project website (\url{http://docsig.cis.uab.edu/tools-and-datasets/}). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,042 |