Column schema (38 columns; "classes" counts distinct values, "lengths" gives the min–max string length, and nullable columns may be empty):

column | dtype | stats
---|---|---
entry_type | string | 4 classes
citation_key | string | lengths 10–110
title | string | lengths 6–276, nullable
editor | string | 723 classes
month | string | 69 classes
year | string (date) | 1963-01-01 to 2022-01-01
address | string | 202 classes
publisher | string | 41 classes
url | string | lengths 34–62
author | string | lengths 6–2.07k, nullable
booktitle | string | 861 classes
pages | string | lengths 1–12, nullable
abstract | string | lengths 302–2.4k
journal | string | 5 classes
volume | string | 24 classes
doi | string | lengths 20–39, nullable
n | string | 3 classes
wer | string | 1 class
uas | null | always null
language | string | 3 classes
isbn | string | 34 classes
recall | null | always null
number | string | 8 classes
a | null | always null
b | null | always null
c | null | always null
k | null | always null
f1 | string | 4 classes
r | string | 2 classes
mci | string | 1 class
p | string | 2 classes
sd | string | 1 class
female | string | 0 classes
m | string | 0 classes
food | string | 1 class
f | string | 1 class
note | string | 20 classes
__index_level_0__ | int64 | 22k–106k
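
The table can also be consumed programmatically. Below is a minimal loading sketch using the Hugging Face `datasets` library; the repository ID is a placeholder (the actual dataset path is not named on this page), and the stored `year` format may differ from the rendered preview, so the filter matches on a year prefix:

```python
from datasets import load_dataset

# Placeholder repository ID -- substitute the real dataset path.
ds = load_dataset("user/acl-anthology-bib", split="train")

# Restrict to 2016 conference papers, mirroring the preview rows below.
# `year` is stored as a date-like string, so a prefix match is safest.
coling_2016 = ds.filter(
    lambda row: row["entry_type"] == "inproceedings"
    and (row["year"] or "").startswith("2016")
)
print(len(coling_2016), coling_2016[0]["citation_key"])
```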

Data preview (the metric columns are null in every row shown):

entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | oda-etal-2016-phrase | Phrase-based Machine Translation using Multiple Preordering Candidates | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1134/ | Oda, Yusuke and Kudo, Taku and Nakagawa, Tetsuji and Watanabe, Taro | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1419--1428 | In this paper, we propose a new decoding method for phrase-based statistical machine translation which directly uses multiple preordering candidates as a graph structure. Compared with previous phrase-based decoding methods, our method is based on a simple left-to-right dynamic programming in which no decoding-time reordering is performed. As a result, its runtime is very fast and the algorithm is easy to implement. Our system does not depend on specific preordering methods as long as they output multiple preordering candidates, and it is trivial to plug existing preordering methods into our system. In our experiments on translating 11 diverse languages into English, the proposed method outperforms a conventional phrase-based decoder in translation quality at comparable or faster decoding time. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,552 |
inproceedings | suggu-etal-2016-hand | Hand in Glove: Deep Feature Fusion Network Architectures for Answer Quality Prediction in Community Question Answering | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1135/ | Suggu, Sai Praneeth and Naga Goutham, Kushwanth and Chinnakotla, Manoj K. and Shrivastava, Manish | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1429--1440 | Community Question Answering (cQA) forums have become a popular medium for soliciting direct answers to specific questions of users from experts or other experienced users on a given topic. However, for a given question, users sometimes have to sift through a large number of low-quality or irrelevant answers to find out the answer which satisfies their information need. To alleviate this, the problem of Answer Quality Prediction (AQP) aims to predict the quality of an answer posted in response to a forum question. Current AQP systems either learn models using - a) various hand-crafted features (HCF) or b) Deep Learning (DL) techniques which automatically learn the required feature representations. In this paper, we propose a novel approach for AQP known as - {\textquotedblleft}Deep Feature Fusion Network (DFFN){\textquotedblright} which combines the advantages of both hand-crafted features and deep learning based systems. Given a question-answer pair along with its metadata, the DFFN architecture independently - a) learns features from the Deep Neural Network (DNN) and b) computes hand-crafted features using various external resources and then combines them using a fully connected neural network trained to predict the final answer quality. DFFN is end-to-end differentiable and trained as a single system. We propose two different DFFN architectures which vary mainly in the way they model the input question/answer pair - DFFN-CNN uses a Convolutional Neural Network (CNN) and DFFN-BLNA uses a Bi-directional LSTM with Neural Attention (BLNA). Both these proposed variants of DFFN (DFFN-CNN and DFFN-BLNA) achieve state-of-the-art performance on the standard SemEval-2015 and SemEval-2016 benchmark datasets and outperform baseline approaches which individually employ either HCF or DL based techniques alone. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,553 |
inproceedings | li-etal-2016-learning | Learning Event Expressions via Bilingual Structure Projection | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1136/ | Li, Fangyuan and Huang, Ruihong and Xiong, Deyi and Zhang, Min | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1441--1450 | Identifying events of a specific type is a challenging task as events in texts are described in numerous and diverse ways. Aiming to resolve high complexities of event descriptions, previous work (Huang and Riloff, 2013) proposes multi-faceted event recognition and a bootstrapping method to automatically acquire both event facet phrases and event expressions from unannotated texts. However, to ensure high quality of learned phrases, this method is constrained to only learn phrases that match certain syntactic structures. In this paper, we propose a bilingual structure projection algorithm that explores linguistic divergences between two languages (Chinese and English) and mines new phrases with new syntactic structures, which have been ignored in the previous work. Experiments show that our approach can successfully find novel event phrases and structures, e.g., phrases headed by nouns. Furthermore, the newly mined phrases are capable of recognizing additional event descriptions and increasing the recall of event recognition. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,554 |
inproceedings | li-etal-2016-global | Global Inference to {C}hinese Temporal Relation Extraction | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1137/ | Li, Peifeng and Zhu, Qiaoming and Zhou, Guodong and Wang, Hongling | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1451--1460 | Previous studies on temporal relation extraction focus on mining sentence-level information or enforcing coherence on different temporal relation types among various event mentions in the same sentence or neighboring sentences, largely ignoring those discourse-level temporal relations in nonadjacent sentences. In this paper, we propose a discourse-level global inference model to mine those temporal relations between event mentions at the document level, especially in nonadjacent sentences. Moreover, we provide various kinds of discourse-level constraints, which are derived from event semantics, to further improve our global inference model. Evaluation on a Chinese corpus justifies the effectiveness of our discourse-level global inference model over two strong baselines. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,555 |
inproceedings | xu-etal-2016-improved | Improved relation classification by deep recurrent neural networks with data augmentation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1138/ | Xu, Yan and Jia, Ran and Mou, Lili and Li, Ge and Chen, Yunchuan and Lu, Yangyang and Jin, Zhi | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1461--1470 | Nowadays, neural networks play an important role in the task of relation classification. By designing different neural architectures, researchers have improved the performance to a large extent in comparison with traditional methods. However, existing neural networks for relation classification are usually of shallow architectures (e.g., one-layer convolutional neural networks or recurrent networks). They may fail to explore the potential representation space at different abstraction levels. In this paper, we propose deep recurrent neural networks (DRNNs) for relation classification to tackle this challenge. Further, we propose a data augmentation method by leveraging the directionality of relations. We evaluate our DRNNs on the SemEval-2010 Task 8 dataset and achieve an F1-score of 86.1{\%}, outperforming previously recorded state-of-the-art results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,556 |
inproceedings | jiang-etal-2016-relation | Relation Extraction with Multi-instance Multi-label Convolutional Neural Networks | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1139/ | Jiang, Xiaotian and Wang, Quan and Li, Peng and Wang, Bin | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1471--1480 | Distant supervision is an efficient approach that automatically generates labeled data for relation extraction (RE). Traditional distantly supervised RE systems rely heavily on handcrafted features, and hence suffer from error propagation. Recently, a neural network architecture has been proposed to automatically extract features for relation classification. However, this approach follows the traditional expressed-at-least-once assumption, and fails to make full use of information across different sentences. Moreover, it ignores the fact that there can be multiple relations holding between the same entity pair. In this paper, we propose a multi-instance multi-label convolutional neural network for distantly supervised RE. It first relaxes the expressed-at-least-once assumption, and employs cross-sentence max-pooling so as to enable information sharing across different sentences. Then it handles overlapping relations by multi-label learning with a neural network classifier. Experimental results show that our approach performs significantly and consistently better than state-of-the-art methods. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,557 |
inproceedings | glaser-kuhn-2016-named | Named Entity Disambiguation for little known referents: a topic-based approach | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1140/ | Glaser, Andrea and Kuhn, Jonas | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1481--1492 | We propose an approach to Named Entity Disambiguation that avoids a problem of standard work on the task (likewise affecting fully supervised, weakly supervised, or distantly supervised machine learning techniques): the treatment of name mentions referring to people with no (or very little) coverage in the textual training data is systematically incorrect. We propose to indirectly take into account the property information for the {\textquotedblleft}non-prominent{\textquotedblright} name bearers, such as nationality and profession (e.g., for a Canadian law professor named Michael Jackson, with no Wikipedia article, it is very hard to obtain reliable textual training data). The target property information for the entities is directly available from name authority files, or inferrable, e.g., from listings of sportspeople etc. Our proposed approach employs topic modeling to exploit textual training data based on entities sharing the relevant properties. In experiments with a pilot implementation of the general approach, we show that the approach does indeed work well for name/referent pairs with limited textual coverage in the training data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,558 |
inproceedings | perez-beltrachini-etal-2016-building | Building {RDF} Content for Data-to-Text Generation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1141/ | Perez-Beltrachini, Laura and Sayed, Rania and Gardent, Claire | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1493--1502 | In Natural Language Generation (NLG), one important limitation is the lack of common benchmarks on which to train, evaluate and compare data-to-text generators. In this paper, we make one step in that direction and introduce a method for automatically creating an arbitrary large repertoire of data units that could serve as input for generation. Using both automated metrics and a human evaluation, we show that the data units produced by our method are both diverse and coherent. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,559 |
inproceedings | ive-yvon-2016-parallel | Parallel Sentence Compression | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1142/ | Ive, Julia and Yvon, Fran{\c{c}}ois | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1503--1513 | Sentence compression is a way to perform text simplification and is usually handled in a monolingual setting. In this paper, we study ways to extend sentence compression in a bilingual context, where the goal is to obtain parallel compressions of parallel sentences. This can be beneficial for a series of multilingual natural language processing (NLP) tasks. We compare two ways to take bilingual information into account when compressing parallel sentences. Their efficiency is contrasted on a parallel corpus of News articles. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,560 |
inproceedings | ma-etal-2016-unsupervised | An Unsupervised Multi-Document Summarization Framework Based on Neural Document Model | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1143/ | Ma, Shulei and Deng, Zhi-Hong and Yang, Yunlun | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1514--1523 | In the age of information explosion, multi-document summarization is attracting particular attention for its ability to help people get the main ideas in a short time. Traditional extractive methods simply treat the document set as a group of sentences while ignoring the global semantics of the documents. Meanwhile, neural document models are effective at representing the semantic content of documents in low-dimensional vectors. In this paper, we propose a document-level reconstruction framework named DocRebuild, which reconstructs the documents with summary sentences through a neural document model and selects summary sentences to minimize the reconstruction error. We also apply two strategies, sentence filtering and beam search, to improve the performance of our method. Experimental results on the benchmark datasets DUC 2006 and DUC 2007 show that DocRebuild is effective and outperforms the state-of-the-art unsupervised algorithms. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,561 |
inproceedings | schwenger-etal-2016-openccg | From {O}pen{CCG} to {AI} Planning: Detecting Infeasible Edges in Sentence Generation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1144/ | Schwenger, Maximilian and Torralba, {\'A}lvaro and Hoffmann, Joerg and Howcroft, David M. and Demberg, Vera | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1524--1534 | The search space in grammar-based natural language generation tasks can get very large, which is particularly problematic when generating long utterances or paragraphs. Using surface realization with OpenCCG as an example, we show that we can effectively detect partial solutions (edges) which cannot ultimately be part of a complete sentence because of their syntactic category. Formulating the completion of an edge into a sentence as finding a solution path in a large state-transition system, we demonstrate a connection to AI Planning which is concerned with this kind of problem. We design a compilation from OpenCCG into AI Planning allowing the detection of infeasible edges via AI Planning dead-end detection methods (proving the absence of a solution to the compilation). Our experiments show that this can filter out large fractions of infeasible edges in, and thus benefit the performance of, complex realization processes. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,562 |
inproceedings | zopf-etal-2016-next | The Next Step for Multi-Document Summarization: A Heterogeneous Multi-Genre Corpus Built with a Novel Construction Approach | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1145/ | Zopf, Markus and Peyrard, Maxime and Eckle-Kohler, Judith | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1535--1545 | Research in multi-document summarization has focused on newswire corpora since the early beginnings. However, the newswire genre provides genre-specific features such as sentence position which are easy to exploit in summarization systems. Such easy to exploit genre-specific features are available in other genres as well. We therefore present the new hMDS corpus for multi-document summarization, which contains heterogeneous source documents from multiple text genres, as well as summaries with different lengths. For the construction of the corpus, we developed a novel construction approach which is suited to build large and heterogeneous summarization corpora with little effort. The method reverses the usual process of writing summaries for given source documents: it combines already available summaries with appropriate source documents. In a detailed analysis, we show that our new corpus is significantly different from the homogeneous corpora commonly used, and that it is heterogeneous along several dimensions. Our experimental evaluation using well-known state-of-the-art summarization systems shows that our corpus poses new challenges in the field of multi-document summarization. Last but not least, we make our corpus publicly available to the research community at the corpus web page \url{https://github.com/AIPHES/hMDS}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,563 |
inproceedings | saeidi-etal-2016-sentihood | {S}enti{H}ood: Targeted Aspect Based Sentiment Analysis Dataset for Urban Neighbourhoods | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1146/ | Saeidi, Marzieh and Bouchard, Guillaume and Liakata, Maria and Riedel, Sebastian | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1546--1556 | In this paper, we introduce the task of targeted aspect-based sentiment analysis. The goal is to extract fine-grained information with respect to entities mentioned in user comments. This work extends both aspect-based sentiment analysis {---} which assumes a single entity per document {---} and targeted sentiment analysis {---} which assumes a single sentiment towards a target entity. In particular, we identify the sentiment towards each aspect of one or more entities. As a testbed for this task, we introduce the SentiHood dataset, extracted from a question answering (QA) platform where urban neighbourhoods are discussed by users. In this context, units of text often mention several aspects of one or more neighbourhoods. This is the first time that a generic social media platform, i.e. QA, is used for fine-grained opinion mining. Text coming from QA platforms is far less constrained than text from the review-specific platforms on which current datasets are based. We develop several strong baselines, relying on logistic regression and state-of-the-art recurrent neural networks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,564 |
inproceedings | jovanoski-etal-2016-impact | On the Impact of Seed Words on Sentiment Polarity Lexicon Induction | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1147/ | Jovanoski, Dame and Pachovski, Veno and Nakov, Preslav | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1557--1567 | Sentiment polarity lexicons are key resources for sentiment analysis, and researchers have invested a lot of effort in their manual creation. However, there has been a recent shift towards automatically extracted lexicons, which are orders of magnitude larger and perform much better. These lexicons are typically mined using bootstrapping, starting from very few seed words whose polarity is given, e.g., 50-60 words, and sometimes even just 5-6. Here we demonstrate that much higher-quality lexicons can be built by starting with hundreds of words and phrases as seeds, especially when they are in-domain. Thus, we combine (i) mid-sized high-quality manually crafted lexicons as seeds and (ii) bootstrapping, in order to build large-scale lexicons. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,565 |
inproceedings | somasundaran-etal-2016-evaluating | Evaluating Argumentative and Narrative Essays using Graphs | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1148/ | Somasundaran, Swapna and Riordan, Brian and Gyawali, Binod and Yoon, Su-Youn | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1568--1578 | This work investigates whether the development of ideas in writing can be captured by graph properties derived from the text. Focusing on student essays, we represent the essay as a graph, and encode a variety of graph properties including PageRank as features for modeling essay scores related to quality of development. We demonstrate that our approach improves on a state-of-the-art system on the task of holistic scoring of persuasive essays and on the task of scoring narrative essays along the development dimension. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,566 |
inproceedings | agrawal-an-2016-selective | Selective Co-occurrences for Word-Emotion Association | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1149/ | Agrawal, Ameeta and An, Aijun | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1579--1590 | Emotion classification from text typically requires some degree of word-emotion association, either gathered from pre-existing emotion lexicons or calculated using some measure of semantic relatedness. Most emotion lexicons contain a fixed number of emotion categories and provide a rather limited coverage. Current measures of computing semantic relatedness, on the other hand, do not adapt well to the specific task of word-emotion association and therefore, yield average results. In this work, we propose an unsupervised method of learning word-emotion association from large text corpora, called Selective Co-occurrences (SECO), by leveraging the property of mutual exclusivity generally exhibited by emotions. Extensive evaluation, using just one seed word per emotion category, indicates the effectiveness of the proposed approach over three emotion lexicons and two state-of-the-art models of word embeddings on three datasets from different domains. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,567 |
inproceedings | li-etal-2016-weighted | Weighted Neural Bag-of-n-grams Model: New Baselines for Text Classification | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1150/ | Li, Bofang and Zhao, Zhe and Liu, Tao and Wang, Puwei and Du, Xiaoyong | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1591--1600 | NBSVM is one of the most popular methods for text classification and has been widely used as a baseline for various text representation approaches. It uses Naive Bayes (NB) feature to weight sparse bag-of-n-grams representation. N-gram captures word order in short context and NB feature assigns more weights to those important words. However, NBSVM suffers from the sparsity problem and is reported to be exceeded by newly proposed distributed (dense) text representations learned by neural networks. In this paper, we transfer the n-grams and NB weighting to neural models. We train n-gram embeddings and use NB weighting to guide the neural models to focus on important words. In fact, our methods can be viewed as distributed (dense) counterparts of sparse bag-of-n-grams in NBSVM. We discover that n-grams and NB weighting are also effective in distributed representations. As a result, our models achieve new strong baselines on 9 text classification datasets, e.g. on the IMDB dataset, we reach 93.5{\%} accuracy, which exceeds previous state-of-the-art results obtained by deep neural models. All source code is publicly available at \url{https://github.com/zhezhaoa/neural_BOW_toolkit}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,568 |
inproceedings | poria-etal-2016-deeper | A Deeper Look into Sarcastic Tweets Using Deep Convolutional Neural Networks | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1151/ | Poria, Soujanya and Cambria, Erik and Hazarika, Devamanyu and Vij, Prateek | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1601--1612 | Sarcasm detection is a key task for many natural language processing tasks. In sentiment analysis, for example, sarcasm can flip the polarity of an {\textquotedblleft}apparently positive{\textquotedblright} sentence and, hence, negatively affect polarity detection performance. To date, most approaches to sarcasm detection have treated the task primarily as a text categorization problem. Sarcasm, however, can be expressed in very subtle ways and requires a deeper understanding of natural language that standard text categorization techniques cannot grasp. In this work, we develop models based on a pre-trained convolutional neural network for extracting sentiment, emotion and personality features for sarcasm detection. Such features, along with the network's baseline features, allow the proposed models to outperform the state of the art on benchmark datasets. We also address the often ignored generalizability issue of classifying data that have not been seen by the models at the learning phase. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,569 |
inproceedings | barnes-etal-2016-exploring | Exploring Distributional Representations and Machine Translation for Aspect-based Cross-lingual Sentiment Classification. | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1152/ | Barnes, Jeremy and Lambert, Patrik and Badia, Toni | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1613--1623 | Cross-lingual sentiment classification (CLSC) seeks to use resources from a source language in order to detect sentiment and classify text in a target language. Almost all research into CLSC has been carried out at sentence and document level, although this level of granularity is often less useful. This paper explores methods for performing aspect-based cross-lingual sentiment classification (aspect-based CLSC) for under-resourced languages. Given the limited nature of parallel data for many languages, we would like to make the most of this resource for our task. We compare zero-shot learning, bilingual word embeddings, stacked denoising autoencoder representations and machine translation techniques for aspect-based CLSC. Each of these approaches requires differing amounts of parallel data. We show that models based on distributed semantics can achieve comparable results to machine translation on aspect-based CLSC and give an analysis of the errors found for each method. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,570 |
inproceedings | wang-etal-2016-bilingual | A Bilingual Attention Network for Code-switched Emotion Prediction | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1153/ | Wang, Zhongqing and Zhang, Yue and Lee, Sophia and Li, Shoushan and Zhou, Guodong | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1624--1634 | Emotions in code-switching text can be expressed in either monolingual or bilingual forms. However, relatively little research has focused on code-switching text. In this paper, we propose a Bilingual Attention Network (BAN) model to aggregate the monolingual and bilingual informative words to form vectors from the document representation, and integrate the attention vectors to predict the emotion. The experiments show the effectiveness of the proposed model. Visualization of the attention layers illustrates that the model selects qualitatively informative words. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,571 |
inproceedings | chen-ku-2016-utcnn | {UTCNN}: a Deep Learning Model of Stance Classification on Social Media Text | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1154/ | Chen, Wei-Fan and Ku, Lun-Wei | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1635--1645 | Most neural network models for document classification on social media focus on text information to the neglect of other information on these platforms. In this paper, we classify post stance on social media channels and develop UTCNN, a neural network model that incorporates user tastes, topic tastes, and user comments on posts. UTCNN not only works on social media texts, but also analyzes texts in forums and message boards. Experiments performed on Chinese Facebook data and English online debate forum data show that UTCNN achieves a 0.755 macro average f-score for supportive, neutral, and unsupportive stance classes on Facebook data, which is significantly better than models in which either user, topic, or comment information is withheld. This model design greatly mitigates the lack of data for the minor class. In addition, UTCNN yields a 0.842 accuracy on English online debate forum data, which also significantly outperforms results from previous work, showing that UTCNN performs well regardless of language or platform. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,572 |
inproceedings | cornudella-etal-2016-role | The Role of Intrinsic Motivation in Artificial Language Emergence: a Case Study on Colour | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1155/ | Cornudella, Miquel and Poibeau, Thierry and van Trijp, Remi | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1646--1656 | Human languages have multiple strategies that allow us to discriminate objects in a vast variety of contexts. Colours have been extensively studied from this point of view. In particular, previous research in artificial language evolution has shown how artificial languages may emerge based on specific strategies to distinguish colours. Still, it has not been shown how several strategies of diverse complexity can be autonomously managed by artificial agents. We propose an intrinsic motivation system that allows agents in a population to create a shared artificial language and progressively increase its expressive power. Our results show that with such a system agents successfully regulate their language development, which indicates a relation between population size and consistency in the emergent communicative systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,573 |
inproceedings | hayashi-2016-predicting | Predicting the Evocation Relation between Lexicalized Concepts | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1156/ | Hayashi, Yoshihiko | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1657--1668 | Evocation is a directed yet weighted semantic relationship between lexicalized concepts. Although evocation relations are considered potentially useful in several semantic NLP tasks, the prediction of the evocation relation between an arbitrary pair of concepts remains difficult, since evocation relationships cover a broader range of semantic relations rooted in human perception and experience. This paper presents a supervised learning approach to predict the strength (by regression) and to determine the directionality (by classification) of the evocation relation that might hold between a pair of lexicalized concepts. Empirical results that were obtained by investigating useful features are shown, indicating that a combination of the proposed features largely outperformed individual baselines, and also suggesting that semantic relational vectors computed from existing semantic vectors for lexicalized concepts were indeed effective for both the prediction of strength and the determination of directionality. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,574 |
inproceedings | paetzold-specia-2016-collecting | Collecting and Exploring Everyday Language for Predicting Psycholinguistic Properties of Words | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1157/ | Paetzold, Gustavo and Specia, Lucia | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1669--1679 | Exploring language usage through frequency analysis in large corpora is a defining feature in most recent work in corpus and computational linguistics. From a psycholinguistic perspective, however, the corpora used in these contributions are often not representative of language usage: they are either domain-specific, limited in size, or extracted from unreliable sources. In an effort to address this limitation, we introduce SubIMDB, a corpus of everyday spoken language that we created, which contains over 225 million words. The corpus was extracted from 38,102 subtitles of family, comedy and children movies and series, and is the first sizeable structured corpus of subtitles made available. Our experiments show that word frequency norms extracted from this corpus are more effective than those from well-known norms such as Kucera-Francis, HAL and SUBTLEXus in predicting various psycholinguistic properties of words, such as lexical decision times, familiarity, age of acquisition and simplicity. We also provide evidence that contradicts the long-standing assumption that the ideal size for a corpus can be determined solely based on how well its word frequencies correlate with lexical decision times. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,575 |
inproceedings | wachsmuth-etal-2016-using | Using Argument Mining to Assess the Argumentation Quality of Essays | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1158/ | Wachsmuth, Henning and Al-Khatib, Khalid and Stein, Benno | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1680--1691 | Argument mining aims to determine the argumentative structure of texts. Although it is said to be crucial for future applications such as writing support systems, the benefit of its output has rarely been evaluated. This paper puts the analysis of the output into the focus. In particular, we investigate to what extent the mined structure can be leveraged to assess the argumentation quality of persuasive essays. We find insightful statistical patterns in the structure of essays. From these, we derive novel features that we evaluate in four argumentation-related essay scoring tasks. Our results reveal the benefit of argument mining for assessing argumentation quality. Among others, we improve the state of the art in scoring an essay's organization and its argument strength. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,576 |
inproceedings | wang-andersen-2016-grammatical | Grammatical Templates: Improving Text Difficulty Evaluation for Language Learners | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1159/ | Wang, Shuhan and Andersen, Erik | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1692--1702 | Language students are most engaged while reading texts at an appropriate difficulty level. However, existing methods of evaluating text difficulty focus mainly on vocabulary and do not prioritize grammatical features, hence they do not work well for language learners with limited knowledge of grammar. In this paper, we introduce grammatical templates, the expert-identified units of grammar that students learn from class, as an important feature of text difficulty evaluation. Experimental classification results show that grammatical template features significantly improve text difficulty prediction accuracy over baseline readability features by 7.4{\%}. Moreover, we build a simple and human-understandable text difficulty evaluation approach with 87.7{\%} accuracy, using only 5 grammatical template features. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,577 |
inproceedings | schnober-etal-2016-still | Still not there? Comparing Traditional Sequence-to-Sequence Models to Encoder-Decoder Neural Networks on Monotone String Translation Tasks | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1160/ | Schnober, Carsten and Eger, Steffen and Do Dinh, Erik-L{\^a}n and Gurevych, Iryna | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1703--1714 | We analyze the performance of encoder-decoder neural models and compare them with well-known established methods. The latter represent different classes of traditional approaches that are applied to the monotone sequence-to-sequence tasks OCR post-correction, spelling correction, grapheme-to-phoneme conversion, and lemmatization. Such tasks are of practical relevance for various higher-level research fields including digital humanities, automatic text correction, and speech recognition. We investigate how well generic deep-learning approaches adapt to these tasks, and how they perform in comparison with established and more specialized methods, including our own adaptation of pruned CRFs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,578 |
inproceedings | jiang-etal-2016-towards | Towards Time-Aware Knowledge Graph Completion | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1161/ | Jiang, Tingsong and Liu, Tianyu and Ge, Tao and Sha, Lei and Chang, Baobao and Li, Sujian and Sui, Zhifang | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1715--1724 | Knowledge graph (KG) completion adds new facts to a KG by making inferences from existing facts. Most existing methods ignore the time information and only learn from time-unknown fact triples. In dynamic environments that evolve over time, it is important and challenging for knowledge graph completion models to take into account the temporal aspects of facts. In this paper, we present a novel time-aware knowledge graph completion model that is able to predict links in a KG using both the existing facts and the temporal information of the facts. To incorporate the happening time of facts, we propose a time-aware KG embedding model using temporal order information among facts. To incorporate the valid time of facts, we propose a joint time-aware inference model based on Integer Linear Programming (ILP) using temporal consistency information as constraints. We further integrate the two models to make full use of global temporal information. We empirically evaluate our models on the time-aware KG completion task. Experimental results show that our time-aware models consistently achieve state-of-the-art results on temporal facts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,579 |
inproceedings | dadashkarimi-etal-2016-learning | Learning to Weight Translations using Ordinal Linear Regression and Query-generated Training Data for Ad-hoc Retrieval with Long Queries | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1162/ | Dadashkarimi, Javid and Jalili Sabet, Masoud and Shakery, Azadeh | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1725--1733 | Ordinal regression, also known as learning to rank, has long been used in information retrieval (IR). Learning-to-rank algorithms have been successfully applied to document ranking, information filtering, and building large aligned corpora. In this paper, we propose to use this algorithm for query modeling in cross-language environments. To this end, we first build query-generated training data using pseudo-relevant documents for the query and all translation candidates. The pseudo-relevant documents are the top-ranked documents retrieved in response to a translation of the original query. The class of each candidate in the training data is determined based on the presence/absence of the candidate in the pseudo-relevant documents. We learn an ordinal regression model to score the candidates based on their relevance to the context of the query, and after that, we construct a query-dependent translation model using a softmax function. Finally, we re-weight the query based on the obtained model. Experimental results on French, German, Spanish, and Italian CLEF collections demonstrate that the proposed method achieves better results compared to state-of-the-art cross-language information retrieval methods, particularly for long queries with large training data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,580 |
inproceedings | romeo-etal-2016-neural | Neural Attention for Learning to Rank Questions in Community Question Answering | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1163/ | Romeo, Salvatore and Da San Martino, Giovanni and Barr{\'o}n-Cede{\~n}o, Alberto and Moschitti, Alessandro and Belinkov, Yonatan and Hsu, Wei-Ning and Zhang, Yu and Mohtarami, Mitra and Glass, James | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1734--1745 | In real-world data, e.g., from Web forums, text is often contaminated with redundant or irrelevant content, which leads to introducing noise in machine learning algorithms. In this paper, we apply Long Short-Term Memory networks with an attention mechanism, which can select important parts of text for the task of similar question retrieval from community Question Answering (cQA) forums. In particular, we use the attention weights for both selecting entire sentences and their subparts, i.e., word/chunk, from shallow syntactic trees. More interestingly, we apply tree kernels to the filtered text representations, thus exploiting the implicit features of the subtree space for learning question reranking. Our results show that the attention-based pruning allows for achieving the top position in the cQA challenge of SemEval 2016, with a relatively large gap from the other participants while greatly decreasing running time. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,581 |
inproceedings | yin-etal-2016-simple | Simple Question Answering by Attentive Convolutional Neural Network | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1164/ | Yin, Wenpeng and Yu, Mo and Xiang, Bing and Zhou, Bowen and Sch{\"u}tze, Hinrich | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1746--1756 | This work focuses on answering single-relation factoid questions over Freebase. Each question can acquire the answer from a single fact of form (subject, predicate, object) in Freebase. This task, simple question answering (SimpleQA), can be addressed via a two-step pipeline: entity linking and fact selection. In fact selection, we match the subject entity in a fact candidate with the entity mention in the question by a character-level convolutional neural network (char-CNN), and match the predicate in that fact with the question by a word-level CNN (word-CNN). This work makes two main contributions. (i) A simple and effective entity linker over Freebase is proposed. Our entity linker outperforms the state-of-the-art entity linker on the SimpleQA task. (ii) A novel attentive maxpooling is stacked over word-CNN, so that the predicate representation can be matched with the predicate-focused question representation more effectively. Experiments show that our system sets a new state of the art for this task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,582 |
inproceedings | semeniuta-etal-2016-recurrent | Recurrent Dropout without Memory Loss | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1165/ | Semeniuta, Stanislau and Severyn, Aliaksei and Barth, Erhardt | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1757--1766 | This paper presents a novel approach to recurrent neural network (RNN) regularization. Differently from the widely adopted dropout method, which is applied to forward connections of feedforward architectures or RNNs, we propose to drop neurons directly in recurrent connections in a way that does not cause loss of long-term memory. Our approach is as easy to implement and apply as the regular feed-forward dropout and we demonstrate its effectiveness for the most effective modern recurrent network {--} Long Short-Term Memory network. Our experiments on three NLP benchmarks show consistent improvements even when combined with conventional feed-forward dropout. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,583 |
inproceedings | balikas-etal-2016-modeling | Modeling topic dependencies in semantically coherent text spans with copulas | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1166/ | Balikas, Georgios and Amoualian, Hesam and Clausel, Marianne and Gaussier, Eric and Amini, Massih R. | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1767--1776 | The exchangeability assumption in topic models like Latent Dirichlet Allocation (LDA) often results in inferring inconsistent topics for the words of text spans like noun-phrases, which are usually expected to be topically coherent. We propose copulaLDA, that extends LDA by integrating part of the text structure to the model and relaxes the conditional independence assumption between the word-specific latent topics given the per-document topic distributions. To this end, we assume that the words of text spans like noun-phrases are topically bound and we model this dependence with copulas. We demonstrate empirically the effectiveness of copulaLDA on both intrinsic and extrinsic evaluation tasks on several publicly available corpora. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,584 |
inproceedings | cui-etal-2016-consensus | Consensus Attention-based Neural Networks for {C}hinese Reading Comprehension | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1167/ | Cui, Yiming and Liu, Ting and Chen, Zhipeng and Wang, Shijin and Hu, Guoping | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1777--1786 | Reading comprehension has seen a boom in recent NLP research. Several institutes have released Cloze-style reading comprehension data, and these datasets have greatly accelerated research on machine comprehension. In this work, we first present Chinese reading comprehension datasets, which consist of a People Daily news dataset and a Children's Fairy Tale (CFT) dataset. Also, we propose a consensus attention-based neural network architecture to tackle the Cloze-style reading comprehension problem, which aims to induce a consensus attention over every word in the query. Experimental results show that the proposed neural network significantly outperforms the state-of-the-art baselines in several public datasets. Furthermore, we set up a baseline for the Chinese reading comprehension task, and hopefully this will speed up future research. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,585 |
inproceedings | felt-etal-2016-semantic | Semantic Annotation Aggregation with Conditional Crowdsourcing Models and Word Embeddings | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1168/ | Felt, Paul and Ringger, Eric and Seppi, Kevin | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1787--1796 | In modern text annotation projects, crowdsourced annotations are often aggregated using item response models or by majority vote. Recently, item response models enhanced with generative data models have been shown to yield substantial benefits over those with conditional or no data models. However, suitable generative data models do not exist for many tasks, such as semantic labeling tasks. When no generative data model exists, we demonstrate that similar benefits may be derived by conditionally modeling documents that have been previously embedded in a semantic space using recent work in vector space models. We use this approach to show state-of-the-art results on a variety of semantic annotation aggregation tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,586 |
inproceedings | ye-etal-2016-interactive | Interactive-Predictive Machine Translation based on Syntactic Constraints of Prefix | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1169/ | Ye, Na and Zhang, Guiping and Cai, Dongfeng | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1797--1806 | Interactive-predictive machine translation (IPMT) is a translation mode which combines machine translation technology and human behaviours. In an IPMT system, the utilization of the prefix greatly affects the interaction efficiency. However, state-of-the-art methods filter translation hypotheses mainly according to their matching results with the prefix at the character level, and the advantage of the prefix is not fully exploited. Focusing on this problem, this paper mines the deeper constraints of the prefix at the syntactic level to improve the performance of IPMT systems. Two syntactic subtree matching rules based on phrase structure grammar are proposed to filter the translation hypotheses more strictly. Experimental results on LDC Chinese-English corpora show that the proposed method outperforms a state-of-the-art phrase-based IPMT system while keeping comparable decoding speed. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,587 |
inproceedings | zhang-etal-2016-topic | Topic-Informed Neural Machine Translation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1170/ | Zhang, Jian and Li, Liangyou and Way, Andy and Liu, Qun | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1807--1817 | In recent years, neural machine translation (NMT) has demonstrated state-of-the-art machine translation (MT) performance. It is a new approach to MT, which tries to learn a set of parameters to maximize the conditional probability of target sentences given source sentences. In this paper, we present a novel approach to improve the translation performance in NMT by conveying topic knowledge during translation. The proposed topic-informed NMT can increase the likelihood of selecting words from the same topic and domain for translation. Experimentally, we demonstrate that topic-informed NMT can achieve a 1.15 (3.3{\%} relative) and 1.67 (5.4{\%} relative) absolute improvement in BLEU score on the Chinese-to-English language pair using NIST 2004 and 2005 test sets, respectively, compared to NMT without topic information. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,588 |
inproceedings | cao-etal-2016-distribution | A Distribution-based Model to Learn Bilingual Word Embeddings | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1171/ | Cao, Hailong and Zhao, Tiejun and Zhang, Shu and Meng, Yao | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1818--1827 | We introduce a distribution-based model to learn bilingual word embeddings from monolingual data. It is simple, effective and does not require any parallel data or any seed lexicon. We take advantage of the fact that word embeddings are usually dense, real-valued, low-dimensional vectors, and therefore their distribution can be accurately estimated. A novel cross-lingual learning objective is proposed which directly matches the distributions of word embeddings in one language with those in the other language. During the joint learning process, we dynamically estimate the distributions of word embeddings in the two languages and minimize the dissimilarity between them through the standard back-propagation algorithm. Our learned bilingual word embeddings allow us to group each word and its translations together in the shared vector space. We demonstrate the utility of the learned embeddings on the task of finding word-to-word translations from monolingual corpora. Our model achieved encouraging performance on data in both related and substantially different languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,589
inproceedings | niehues-etal-2016-pre | Pre-Translation for Neural Machine Translation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1172/ | Niehues, Jan and Cho, Eunah and Ha, Thanh-Le and Waibel, Alex | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1828--1836 | Recently, the development of neural machine translation (NMT) has significantly improved automatic translation quality. While most sentences are more accurate and fluent than translations by statistical machine translation (SMT)-based systems, in some cases the NMT system produces translations that have a completely different meaning. This is especially the case when rare words occur. For statistical machine translation, it has already been shown that significant gains can be achieved by simplifying the input in a preprocessing step. A commonly used example is the pre-reordering approach. In this work, we used phrase-based machine translation to pre-translate the input into the target language. Then a neural machine translation system generates the final hypothesis using the pre-translation. We use either only the output of the phrase-based machine translation (PBMT) system or a combination of the PBMT output and the source sentence. We evaluate the technique on the English to German translation task. Using this approach we are able to outperform the PBMT system as well as the baseline neural MT system by up to 2 BLEU points. We analyzed the influence of the quality of the initial system on the final result. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,590
inproceedings | claveau-kijak-2016-direct | Direct vs. indirect evaluation of distributional thesauri | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1173/ | Claveau, Vincent and Kijak, Ewa | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1837--1848 | With the success of word embedding methods in various Natural Language Processing tasks, the whole field of distributional semantics has experienced renewed interest. Besides the famous word2vec, recent studies have presented efficient techniques to build distributional thesauri; in particular, Claveau et al. (2014) have already shown that Information Retrieval (IR) tools and concepts can be successfully used to build a thesaurus. In this paper, we address the problem of evaluating such thesauri or embedding models and compare their results. Through several experiments and by evaluating the results directly against reference lexicons, we show that the recent IR-based distributional models outperform state-of-the-art systems such as word2vec. Following the work of Claveau and Kijak (2016), we use IR as an applicative framework to indirectly evaluate the generated thesaurus. Here again, this task-based evaluation validates the IR approach used to build the thesaurus. Moreover, it allows us to compare these results with those from the direct evaluation framework used in the literature. The observed differences bring these evaluation habits into question. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,591
inproceedings | jameel-schockaert-2016-glove | {D}-{G}lo{V}e: A Feasible Least Squares Model for Estimating Word Embedding Densities | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1174/ | Jameel, Shoaib and Schockaert, Steven | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1849--1860 | We propose a new word embedding model, inspired by GloVe, which is formulated as a feasible least squares optimization problem. In contrast to existing models, we explicitly represent the uncertainty about the exact definition of each word vector. To this end, we estimate the error that results from using noisy co-occurrence counts in the formulation of the model, and we model the imprecision that results from including uninformative context words. Our experimental results demonstrate that this model compares favourably with existing word embedding models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,592 |
inproceedings | de-deyne-etal-2016-predicting | Predicting human similarity judgments with distributional models: The value of word associations. | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1175/ | De Deyne, Simon and Perfors, Amy and Navarro, Daniel J | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1861--1870 | Most distributional lexico-semantic models derive their representations from external language resources such as text corpora. In this study, we propose that internal language models, which are more closely aligned with the mental representations of words, could provide important insights into cognitive science, including linguistics. Doing so allows us to reflect upon theoretical questions regarding the structure of the mental lexicon, and also puts into perspective a number of assumptions underlying recently proposed distributional text-based models. In particular, we focus on word-embedding models which have been proposed to learn aspects of word meaning in a manner similar to humans. These are contrasted with internal language models derived from a new extensive data set of word associations. Using relatedness and similarity judgments we evaluate these models and find that the word-association-based internal language models consistently outperform current state-of-the-art text-based external language models, often by a large margin. These results are not just a performance improvement; they also have implications for our understanding of how distributional knowledge is used by people. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,593
inproceedings | yamane-etal-2016-distributional | Distributional Hypernym Generation by Jointly Learning Clusters and Projections | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1176/ | Yamane, Josuke and Takatani, Tomoya and Yamada, Hitoshi and Miwa, Makoto and Sasaki, Yutaka | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1871--1879 | We propose a novel word embedding-based hypernym generation model that jointly learns clusters of hyponym-hypernym relations, i.e., hypernymy, and projections from hyponym to hypernym embeddings. Most recent hypernym detection models focus on a hypernymy classification problem that determines whether a pair of words is in hypernymy or not. These models do not directly deal with the hypernym generation problem, in which a model generates hypernyms for a given word. Unlike previous studies, our model jointly learns the clusters and projections while adjusting the number of clusters, so that the number of clusters can be determined depending on the learned projections and vice versa. Our model also boosts performance by incorporating inner product-based similarity measures and negative examples, i.e., sampled non-hypernyms, into our learning objectives. We evaluated our joint learning models on the task of Japanese and English hypernym generation and showed a significant improvement over an existing pipeline model. Our model also compared favorably to existing distributed hypernym detection models on the English hypernym classification task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,594
inproceedings | hou-2016-incremental | Incremental Fine-grained Information Status Classification Using Attention-based {LSTM}s | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1177/ | Hou, Yufang | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1880--1890 | Information status plays an important role in discourse processing. According to the hearer's common sense knowledge and his comprehension of the preceding text, a discourse entity could be old, mediated or new. In this paper, we propose an attention-based LSTM model to address the problem of fine-grained information status classification in an incremental manner. Our approach resembles how human beings process the task, i.e., decide the information status of the current discourse entity based on its preceding context. Experimental results on the ISNotes corpus (Markert et al., 2012) reveal that (1) despite its moderate result, our model with only word embedding features captures the semantic knowledge needed for the task to a large extent; and (2) when incorporating several additional simple features, our model achieves results competitive with the state-of-the-art approach (Hou et al., 2013), which depends heavily on many hand-crafted semantic features. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,595
inproceedings | shih-chen-2016-detection | Detection, Disambiguation and Argument Identification of Discourse Connectives in {C}hinese Discourse Parsing | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1178/ | Shih, Yong-Siang and Chen, Hsin-Hsi | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1891--1902 | In this paper, we investigate four important issues together for explicit discourse relation labelling in Chinese texts: (1) discourse connective extraction, (2) linking ambiguity resolution, (3) relation type disambiguation, and (4) argument boundary identification. In a pipelined Chinese discourse parser, we identify potential connective candidates by string matching, eliminate non-discourse usages from them with a binary classifier, resolve linking ambiguities among connective components by ranking, disambiguate relation types by a multiway classifier, and determine the argument boundaries by conditional random fields. Experiments on the Chinese Discourse Treebank show that the pipelined parser achieves F1 scores of 0.7506, 0.7693, 0.7458, and 0.3134 for discourse usage disambiguation, linking disambiguation, relation type disambiguation, and argument boundary identification, respectively. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,596
inproceedings | braud-etal-2016-multi | Multi-view and multi-task training of {RST} discourse parsers | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1179/ | Braud, Chlo{\'e} and Plank, Barbara and S{\o}gaard, Anders | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1903--1913 | We experiment with different ways of training LSTM networks to predict RST discourse trees. The main challenge for RST discourse parsing is the limited amounts of training data. We combat this by regularizing our models using task supervision from related tasks as well as alternative views on discourse structures. We show that a simple LSTM sequential discourse parser takes advantage of this multi-view and multi-task framework with 12-15{\%} error reductions over our baseline (depending on the metric) and results that rival more complex state-of-the-art parsers. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,597 |
inproceedings | qin-etal-2016-implicit | Implicit Discourse Relation Recognition with Context-aware Character-enhanced Embeddings | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1180/ | Qin, Lianhui and Zhang, Zhisong and Zhao, Hai | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1914--1924 | For the task of implicit discourse relation recognition, traditional models utilizing manual features can suffer from the data sparsity problem. Neural models provide a solution with distributed representations, which can encode latent semantic information and are suitable for recognizing semantic relations between argument pairs. However, conventional vector representations usually adopt embeddings at the word level and cannot handle the rare word problem well without carefully considering morphological information at the character level. Moreover, embeddings are assigned to individual words independently, which lacks crucial contextual information. This paper proposes a neural model utilizing context-aware character-enhanced embeddings to alleviate the drawbacks of current word-level representations. Our experiments show that the enhanced embeddings work well and the proposed model obtains state-of-the-art results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,598
inproceedings | pluss-piwek-2016-measuring | Measuring Non-cooperation in Dialogue | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1181/ | Pl{\"u}ss, Brian and Piwek, Paul | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1925--1936 | This paper introduces a novel method for measuring non-cooperation in dialogue. The key idea is that linguistic non-cooperation can be measured in terms of the extent to which dialogue participants deviate from conventions regarding the proper introduction and discharging of conversational obligations (e.g., the obligation to respond to a question). Previous work on non-cooperation has focused mainly on non-linguistic task-related non-cooperation or modelled non-cooperation in terms of special rules describing non-cooperative behaviours. In contrast, we start from rules for normal/correct dialogue behaviour - i.e., a dialogue game - which in principle can be derived from a corpus of cooperative dialogues, and provide a quantitative measure for the degree to which participants comply with these rules. We evaluated the model on a corpus of political interviews, with encouraging results. The model accurately predicts the degree of cooperation for one of the two dialogue game roles (interviewer) and also the relative cooperation for both roles (i.e., which interlocutor in the conversation was most cooperative). Being able to measure cooperation has applications in many areas, from the analysis - manual, semi-automatic and fully automatic - of natural language interactions to human-like virtual personal assistants, tutoring agents, sophisticated dialogue systems, and role-playing virtual humans. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,599
inproceedings | derczynski-2016-representation | Representation and Learning of Temporal Relations | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1182/ | Derczynski, Leon | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1937--1948 | Determining the relative order of events and times described in text is an important problem in natural language processing. It is also a difficult one: general state-of-the-art performance has been stuck at a relatively low ceiling for years. We investigate the representation of temporal relations, and empirically evaluate the effect that various temporal relation representations have on machine learning performance. While machine learning performance decreases with increased representational expressiveness, not all representation simplifications have equal impact. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,600 |
inproceedings | upadhyay-etal-2016-revisiting | Revisiting the Evaluation for Cross Document Event Coreference | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1183/ | Upadhyay, Shyam and Gupta, Nitish and Christodoulopoulos, Christos and Roth, Dan | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1949--1958 | Cross document event coreference (CDEC) is an important task that aims at aggregating event-related information across multiple documents. We revisit the evaluation for CDEC, and discover that past works have adopted different, often inconsistent, evaluation settings, which either overlook certain mistakes in coreference decisions, or make assumptions that simplify the coreference task considerably. We suggest a new evaluation methodology which overcomes these limitations, and allows for an accurate assessment of CDEC systems. Our new evaluation setting better reflects the corpus-wide information aggregation ability of CDEC systems by separating event-coreference decisions made across documents from those made within a document. In addition, we suggest a better baseline for the task and semi-automatically identify several inconsistent annotations in the evaluation dataset. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,601 |
inproceedings | watanabe-etal-2016-modeling | Modeling Discourse Segments in Lyrics Using Repeated Patterns | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1184/ | Watanabe, Kento and Matsubayashi, Yuichiroh and Orita, Naho and Okazaki, Naoaki and Inui, Kentaro and Fukayama, Satoru and Nakano, Tomoyasu and Smith, Jordan and Goto, Masataka | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1959--1969 | This study proposes a computational model of the discourse segments in lyrics to understand and to model the structure of lyrics. To test our hypothesis that discourse segmentations in lyrics strongly correlate with repeated patterns, we conduct the first large-scale corpus study on discourse segments in lyrics. Next, we propose the task to automatically identify segment boundaries in lyrics and train a logistic regression model for the task with the repeated pattern and textual features. The results of our empirical experiments illustrate the significance of capturing repeated patterns in predicting the boundaries of discourse segments in lyrics. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,602 |
inproceedings | li-wu-2016-multi | Multi-level Gated Recurrent Neural Network for dialog act classification | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1185/ | Li, Wei and Wu, Yunfang | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1970--1979 | In this paper we focus on the problem of dialog act (DA) labelling. This problem has recently attracted a lot of attention as it is an important sub-part of an automatic question answering system, which is currently in great demand. Traditional methods tend to see this problem as a sequence labelling task and deal with it by applying classifiers with rich features. Most current neural network models still omit the sequential information in the conversation. Hence, we apply a novel multi-level gated recurrent neural network (GRNN) with non-textual information to predict the DA tag. Our model not only utilizes textual information, but also makes use of non-textual and contextual information. Our model shows a significant improvement of over 6{\%} over previous work on the Switchboard Dialog Act (SWDA) task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,603
inproceedings | patra-etal-2016-multimodal | Multimodal Mood Classification - A Case Study of Differences in {H}indi and Western Songs | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1186/ | Patra, Braja Gopal and Das, Dipankar and Bandyopadhyay, Sivaji | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1980--1989 | Music information retrieval has emerged as a mainstream research area in the past two decades. Experiments on music mood classification have been performed mainly on Western music based on audio, lyrics and a combination of both. Unfortunately, due to the scarcity of digitized resources, Indian music fares poorly in music mood retrieval research. In this paper, we identified the mood taxonomy and prepared multimodal mood-annotated datasets for Hindi and Western songs. We identified important audio and lyric features using a correlation-based feature selection technique. Finally, we developed mood classification systems using Support Vector Machines and Feed Forward Neural Networks based on the features collected from audio, lyrics, and a combination of both. The best performing multimodal systems achieved F-measures of 75.1 and 83.5 for classifying the moods of the Hindi and Western songs respectively using Feed Forward Neural Networks. A comparative analysis indicates that the selected features work well for mood classification of the Western songs and produce better results than the mood classification systems for Hindi songs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,604
inproceedings | li-etal-2016-detecting | Detecting Context Dependent Messages in a Conversational Environment | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1187/ | Li, Chaozhuo and Wu, Yu and Wu, Wei and Xing, Chen and Li, Zhoujun and Zhou, Ming | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1990--1999 | While automatic response generation for building chatbot systems has drawn a lot of attention recently, there is limited understanding of when we need to consider the linguistic context of an input text in the generation process. The task is challenging, as messages in a conversational environment are short and informal, and evidence that can indicate a message is context dependent is scarce. After a study of social conversation data crawled from the web, we observed that some characteristics estimated from the responses to messages are discriminative for identifying context-dependent messages. Using these characteristics as weak supervision, we propose using a Long Short Term Memory (LSTM) network to learn a classifier. Our method carries out text representation and classifier learning in a unified framework. Experimental results show that the proposed method can significantly outperform baseline methods in classification accuracy. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,605
inproceedings | venugopal-rus-2016-joint | Joint Inference for Mode Identification in Tutorial Dialogues | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1188/ | Venugopal, Deepak and Rus, Vasile | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2000--2011 | Identifying dialogue acts and dialogue modes during tutorial interactions is an extremely crucial sub-step in understanding patterns of effective tutor-tutee interactions. In this work, we develop a novel joint inference method that labels each utterance in a tutoring dialogue session with a dialogue act and a specific mode from a set of pre-defined dialogue acts and modes, respectively. Specifically, we develop our joint model using Markov Logic Networks (MLNs), a framework that combines first-order logic with probabilities, and is thus capable of representing complex, uncertain knowledge. We define first-order formulas in our MLN that encode the inter-dependencies between dialogue modes and more fine-grained dialogue actions. We then use a joint inference to jointly label the modes as well as the dialogue acts in an utterance. We compare our system against a pipeline system based on SVMs on a real-world dataset with tutoring sessions of over 500 students. Our results show that the joint inference system is far more effective than the pipeline system in mode detection, and improves over the performance of the pipeline system by about 6 points in F1 score. The joint inference system also performs much better than the pipeline system in the context of labeling modes that highlight important pedagogical steps in tutoring. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,606 |
inproceedings | khanpour-etal-2016-dialogue | Dialogue Act Classification in Domain-Independent Conversations Using a Deep Recurrent Neural Network | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1189/ | Khanpour, Hamed and Guntakandla, Nishitha and Nielsen, Rodney | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2012--2021 | In this study, we applied a deep LSTM structure to classify dialogue acts (DAs) in open-domain conversations. We found that the word embedding parameters, dropout regularization, decay rate and number of layers are the parameters that have the largest effect on the final system accuracy. Using the findings of these experiments, we trained a deep LSTM network that outperforms the state-of-the-art on the Switchboard corpus by 3.11{\%}, and on MRDA by 2.2{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,607
inproceedings | kumar-joshi-2016-non | Non-sentential Question Resolution using Sequence to Sequence Learning | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1190/ | Kumar, Vineet and Joshi, Sachindra | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2022--2031 | An interactive Question Answering (QA) system frequently encounters non-sentential (incomplete) questions. These non-sentential questions may not make sense to the system when a user asks them without the context of the conversation. The system thus needs to take the conversation context into account to process the question. In this work, we present a recurrent neural network (RNN) based encoder decoder network that can generate a complete (intended) question, given an incomplete question and conversation context. RNN encoder decoder networks have been shown to work well when trained on a parallel corpus with millions of sentences; however, it is extremely hard to obtain conversation data of this magnitude. We therefore propose to decompose the original problem into two separate simplified problems, where each problem focuses on an abstraction. Specifically, we train a semantic sequence model to learn semantic patterns, and a syntactic sequence model to learn linguistic patterns. We further combine the syntactic and semantic sequence models to generate an ensemble model. Our model achieves a BLEU score of 30.15, compared to 18.54 for a standard RNN encoder decoder model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,608
inproceedings | zhou-etal-2016-context | Context-aware Natural Language Generation for Spoken Dialogue Systems | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1191/ | Zhou, Hao and Huang, Minlie and Zhu, Xiaoyan | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2032--2041 | Natural language generation (NLG) is an important component of question answering (QA) systems which has a significant impact on system quality. Most traditional QA systems based on templates or rules tend to generate rigid and stylised responses without the natural variation of human language. Furthermore, such methods require a substantial amount of work to create the templates or rules. To address this problem, we propose a Context-Aware LSTM model for NLG. The model is completely driven by data, without manually designed templates or rules. In addition, the context information, including the question to be answered, the semantic values to be addressed in the response, and the dialogue act type during interaction, is incorporated into the neural network model, which enables the model to produce varied and informative responses. The quantitative evaluation and human evaluation show that CA-LSTM obtains state-of-the-art performance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,609
inproceedings | serriere-etal-2016-weakly | Weakly-supervised text-to-speech alignment confidence measure | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1192/ | Serri{\`e}re, Guillaume and Cerisara, Christophe and Fohr, Dominique and Mella, Odile | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2042--2050 | This work proposes a new confidence measure for evaluating the outputs of text-to-speech alignment systems, which are a key component for many applications, such as semi-automatic corpus anonymization, lip syncing, film dubbing, corpus preparation for speech synthesis, and acoustic model training for speech recognition. This confidence measure exploits deep neural networks that are trained on large corpora without direct supervision. It is evaluated on an open-source spontaneous speech corpus and outperforms a confidence score derived from a state-of-the-art text-to-speech aligner. We further show that this confidence measure can be used to fine-tune the output of this aligner and improve the quality of the resulting alignment. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,610
inproceedings | kim-etal-2016-domainless | Domainless Adaptation by Constrained Decoding on a Schema Lattice | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1193/ | Kim, Young-Bum and Stratos, Karl and Sarikaya, Ruhi | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2051--2060 | In many applications such as personal digital assistants, there is a constant need for new domains to increase the system's coverage of user queries. A conventional approach is to learn a separate model every time a new domain is introduced. This approach is slow, inefficient, and a bottleneck for scaling to a large number of domains. In this paper, we introduce a framework that allows us to have a single model that can handle all domains: including unknown domains that may be created in the future as long as they are covered in the master schema. The key idea is to remove the need for distinguishing domains by explicitly predicting the schema of queries. Given permitted schema of a query, we perform constrained decoding on a lattice of slot sequences allowed under the schema. The proposed model achieves competitive and often superior performance over the conventional model trained separately per domain. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,611
inproceedings | singh-etal-2016-sub | Sub-Word Similarity based Search for Embeddings: Inducing Rare-Word Embeddings for Word Similarity Tasks and Language Modelling | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1194/ | Singh, Mittul and Greenberg, Clayton and Oualil, Youssef and Klakow, Dietrich | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2061--2070 | Training good word embeddings requires large amounts of data. Out-of-vocabulary words will still be encountered at test time, leaving these words without embeddings. To overcome this lack of embeddings for rare words, existing methods leverage morphological features to generate embeddings. While the existing methods use computationally-intensive rule-based (Soricut and Och, 2015) or tool-based (Botha and Blunsom, 2014) morphological analysis to generate embeddings, our system applies a computationally-simpler sub-word search on words that have existing embeddings. Embeddings of the sub-word search results are then combined using string similarity functions to generate rare word embeddings. We augmented pre-trained word embeddings with these novel embeddings and evaluated on a rare word similarity task, obtaining up to a 3-fold improvement in correlation over the original set of embeddings. Applying our technique to embeddings trained on larger datasets led to on-par performance with the existing state-of-the-art for this task. Additionally, when analysing the augmented embeddings in a log-bilinear language model, we observed up to a 50{\%} reduction in rare word perplexity in comparison to other, more complex language models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,612
inproceedings | meyer-etal-2016-semi | Semi-automatic Detection of Cross-lingual Marketing Blunders based on Pragmatic Label Propagation in {W}iktionary | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1195/ | Meyer, Christian M. and Eckle-Kohler, Judith and Gurevych, Iryna | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2071--2081 | We introduce the task of detecting cross-lingual marketing blunders, which occur if a trade name resembles an inappropriate or negatively connotated word in a target language. To this end, we suggest a formal task definition and a semi-automatic method based on the propagation of pragmatic labels from Wiktionary across sense-disambiguated translations. Our final tool assists users by providing clues for problematic names in any language, which we simulate in two experiments on detecting marketing blunders that have previously occurred and identifying relevant clues for established international brands. We conclude the paper with a suggested research roadmap for this new task. To initiate further research, we publish our online demo along with the source code and data at \url{http://uby.ukp.informatik.tu-darmstadt.de/blunder/}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,613
inproceedings | milde-etal-2016-ambient | Ambient Search: A Document Retrieval System for Speech Streams | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1196/ | Milde, Benjamin and Wacker, Jonas and Radomski, Stefan and M{\"u}hlh{\"a}user, Max and Biemann, Chris | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2082--2091 | We present Ambient Search, an open source system for displaying and retrieving relevant documents in real time for speech input. The system works ambiently, that is, it unobtrusively listens to speech streams in the background, identifies keywords and keyphrases for query construction and continuously serves relevant documents from its index. Query terms are ranked with Word2Vec and TF-IDF and are continuously updated to allow for ongoing querying of a document collection. The retrieved documents, in our case Wikipedia articles, are visualized in real time in a browser interface. Our evaluation shows that Ambient Search compares favorably to another implicit information retrieval system on speech streams. Furthermore, we extrinsically evaluate multiword keyphrase generation, showing positive impact for manual transcriptions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,614
inproceedings | li-etal-2016-semi | Semi-supervised Gender Classification with Joint Textual and Social Modeling | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1197/ | Li, Shoushan and Dai, Bin and Gong, Zhengxian and Zhou, Guodong | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2092--2100 | In gender classification, labeled data is often limited while unlabeled data is ample. This motivates semi-supervised learning for gender classification to improve performance by exploiting the knowledge in both labeled and unlabeled data. In this paper, we propose a semi-supervised approach to gender classification by leveraging textual features and a specific kind of indirect link among users, which we call {\textquotedblleft}same-interest{\textquotedblright} links. Specifically, we propose a factor graph, namely the Textual and Social Factor Graph (TSFG), to model both the textual and the {\textquotedblleft}same-interest{\textquotedblright} link information. Empirical studies demonstrate the effectiveness of the proposed approach to semi-supervised gender classification. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,615
inproceedings | pilan-etal-2016-predicting | Predicting proficiency levels in learner writings by transferring a linguistic complexity model from expert-written coursebooks | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1198/ | Pil{\'a}n, Ildik{\'o} and Volodina, Elena and Zesch, Torsten | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2101--2111 | The lack of a sufficient amount of data tailored for a task is a well-recognized problem for many statistical NLP methods. In this paper, we explore whether data sparsity can be successfully tackled when classifying language proficiency levels in the domain of learner-written output texts. We aim at overcoming data sparsity by incorporating knowledge in the trained model from another domain consisting of input texts written by teaching professionals for learners. We compare different domain adaptation techniques and find that a weighted combination of the two types of data performs best, which can even rival systems based on considerably larger amounts of in-domain data. Moreover, we show that normalizing errors in learners' texts can substantially improve classification when level-annotated in-domain data is not available. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,616 |
inproceedings | zhang-etal-2016-user | User Classification with Multiple Textual Perspectives | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1199/ | Zhang, Dong and Li, Shoushan and Wang, Hongling and Zhou, Guodong | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2112--2121 | Textual information is of critical importance for automatic user classification in social media. However, most previous studies model textual features from a single perspective, while the text in a user homepage typically possesses different styles, such as original messages and comments from others. In this paper, we propose a novel approach, namely ensemble LSTM, to user classification by incorporating multiple textual perspectives. Specifically, our approach first learns an LSTM representation with an LSTM recurrent neural network and then presents a joint learning method to integrate all naturally-divided textual perspectives. Empirical studies on two basic user classification tasks, i.e., gender classification and age classification, demonstrate the effectiveness of the proposed approach to user classification with multiple textual perspectives. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,617
inproceedings | jiang-diesner-2016-says | Says Who{\textellipsis}? Identification of Expert versus Layman Critics' Reviews of Documentary Films | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1200/ | Jiang, Ming and Diesner, Jana | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2122--2132 | We extend classic review mining work by building a binary classifier that predicts whether a review of a documentary film was written by an expert or a layman with 90.70{\%} accuracy (F1 score), and compare the characteristics of the predicted classes. A variety of standard lexical and syntactic features was used for this supervised learning task. Our results suggest that experts write comparatively lengthier and more detailed reviews that feature more complex grammar and a higher diversity in their vocabulary. Layman reviews are more subjective and contextualized in people's everyday lives. Our error analysis shows that laymen are about twice as likely to be mistaken for experts as vice versa. We argue that the type of author might be a useful new feature for improving the accuracy of predicting the rating, helpfulness and authenticity of reviews. Finally, the outcomes of this work might help researchers and practitioners in the field of impact assessment to gain a more fine-grained understanding of the perception of different types of media consumers and reviewers of a topic, genre or information product. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,618
inproceedings | ding-etal-2016-knowledge | Knowledge-Driven Event Embedding for Stock Prediction | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1201/ | Ding, Xiao and Zhang, Yue and Liu, Ting and Duan, Junwen | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2133--2142 | Representing structured events as vectors in continuous space offers a new way of defining dense features for natural language processing (NLP) applications. Prior work has proposed effective methods to learn event representations that can capture syntactic and semantic information over text corpora, demonstrating their effectiveness for downstream tasks such as event-driven stock prediction. On the other hand, events extracted from raw texts do not contain background knowledge on the entities and relations they mention. To address this issue, this paper proposes to leverage extra information from a knowledge graph, which provides ground truth such as attributes and properties of entities and encodes valuable relations between entities. Specifically, we propose a joint model to combine knowledge graph information into the objective function of an event embedding learning model. Experiments on event similarity and stock market prediction show that our model is more capable of obtaining better event embeddings and making more accurate predictions of stock market volatility. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,619
inproceedings | chen-etal-2016-distributed | Distributed Representations for Building Profiles of Users and Items from Text Reviews | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1202/ | Chen, Wenliang and Zhang, Zhenjie and Li, Zhenghua and Zhang, Min | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2143--2153 | In this paper, we propose an approach to learn distributed representations of users and items from text comments for recommendation systems. Traditional recommendation algorithms, e.g. collaborative filtering and matrix completion, are not designed to exploit the key information hidden in the text comments, while existing opinion mining methods do not provide direct support to recommendation systems with useful features on users and items. Our approach attempts to construct vectors to represent profiles of users and items under a unified framework to maximize word appearance likelihood. Then, the vector representations are used for a recommendation task in which we predict scores on unobserved user-item pairs without given texts. The recommendation-aware distributed representation approach is fully supported by effective and efficient learning algorithms over massive text archives. Our empirical evaluations on real datasets show that our system outperforms the state-of-the-art baseline systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,620
inproceedings | tang-etal-2016-improving | Improving Statistical Machine Translation with Selectional Preferences | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1203/ | Tang, Haiqing and Xiong, Deyi and Zhang, Min and Gong, Zhengxian | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2154--2163 | Long-distance semantic dependencies are crucial for lexical choice in statistical machine translation. In this paper, we study semantic dependencies between verbs and their arguments by modeling selectional preferences in the context of machine translation. We incorporate preferences that verbs impose on subjects and objects into translation. In addition, bilingual selectional preferences between source-side verbs and target-side arguments are also investigated. Our experiments on Chinese-to-English translation tasks with large-scale training data demonstrate that statistical machine translation using verbal selectional preferences can achieve statistically significant improvements over a state-of-the-art baseline. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,621 |
inproceedings | stanojevic-simaan-2016-hierarchical | Hierarchical Permutation Complexity for Word Order Evaluation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1204/ | Stanojevi{\'c}, Milo{\v{s}} and Sima{'}an, Khalil | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2164--2173 | Existing approaches for evaluating word order in machine translation work with metrics computed directly over a permutation of word positions in system output relative to a reference translation. However, every permutation factorizes into a permutation tree (PET) built of primal permutations, i.e., atomic units that do not factorize any further. In this paper we explore the idea that permutations factorizing into (on average) shorter primal permutations should represent simpler ordering as well. Consequently, we contribute Permutation Complexity, a class of metrics over PETs and their extension to forests, and define tight metrics, a sub-class of metrics implementing this idea. Subsequently we define example tight metrics and empirically test them in word order evaluation. Experiments on the WMT13 data sets for ten language pairs show that a tight metric is more often than not better than the baselines. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,622 |
inproceedings | meng-etal-2016-interactive | Interactive Attention for Neural Machine Translation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1205/ | Meng, Fandong and Lu, Zhengdong and Li, Hang and Liu, Qun | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2174--2185 | Conventional attention-based Neural Machine Translation (NMT) conducts dynamic alignment when generating the target sentence. By repeatedly reading the representation of the source sentence, which stays fixed after being generated by the encoder (Bahdanau et al., 2015), the attention mechanism has greatly enhanced state-of-the-art NMT. In this paper, we propose a new attention mechanism, called INTERACTIVE ATTENTION, which models the interaction between the decoder and the representation of the source sentence during translation through both reading and writing operations. INTERACTIVE ATTENTION can keep track of the interaction history and therefore improve the translation performance. Experiments on the NIST Chinese-English translation task show that INTERACTIVE ATTENTION can achieve significant improvements over both the previous attention-based NMT baseline and some state-of-the-art variants of attention-based NMT (i.e., coverage models (Tu et al., 2016)). A neural machine translator with our INTERACTIVE ATTENTION outperforms the open source attention-based NMT system Groundhog by 4.22 BLEU points and the open source phrase-based system Moses by 3.94 BLEU points on average across multiple test sets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,623
inproceedings | pado-2016-get | Get Semantic With Me! The Usefulness of Different Feature Types for Short-Answer Grading | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1206/ | Pad{\'o}, Ulrike | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2186--2195 | Automated short-answer grading is key to help close the automation loop for large-scale, computerised testing in education. A wide range of features on different levels of linguistic processing has been proposed so far. We investigate the relative importance of the different types of features across a range of standard corpora (both from a language skill and content assessment context, in English and in German). We find that features on the lexical, text similarity and dependency level often suffice to approximate full-model performance. Features derived from semantic processing particularly benefit the linguistically more varied answers in content assessment corpora. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,624 |
inproceedings | blevins-etal-2016-automatically | Automatically Processing Tweets from Gang-Involved Youth: Towards Detecting Loss and Aggression | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1207/ | Blevins, Terra and Kwiatkowski, Robert and MacBeth, Jamie and McKeown, Kathleen and Patton, Desmond and Rambow, Owen | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2196--2206 | Violence is a serious problem for cities like Chicago and has been exacerbated by the use of social media by gang-involved youths for taunting rival gangs. We present a corpus of tweets from a young and powerful female gang member and her communicators, which we have annotated with discourse intention, using a deep read to understand how and what triggered conversations to escalate into aggression. We use this corpus to develop a part-of-speech tagger and phrase table for the variant of English that is used and a classifier for identifying tweets that express grieving and aggression. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,625
inproceedings | chen-etal-2016-content | Content-based Influence Modeling for Opinion Behavior Prediction | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1208/ | Chen, Chengyao and Wang, Zhitao and Lei, Yu and Li, Wenjie | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2207--2216 | Nowadays, social media has become a popular platform for companies to understand their customers. It provides valuable opportunities to gain new insights into how a person's opinion about a product is influenced by his friends. Though various approaches have been proposed to study the opinion formation problem, they all formulate opinions as derived sentiment values, either discrete or continuous, without considering the semantic information. In this paper, we propose a Content-based Social Influence Model to study the implicit mechanism underlying the change of opinions. We then apply the learned model to predict users' future opinions. The advantages of the proposed model are its ability to handle the semantic information and to learn two influence components, including the opinion influence of the content information and the social relation factors. In the experiments conducted on Twitter datasets, our model significantly outperforms other popular opinion formation models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,626
inproceedings | doyle-levy-2016-data | Data-driven learning of symbolic constraints for a log-linear model in a phonological setting | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1209/ | Doyle, Gabriel and Levy, Roger | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2217--2226 | We propose a non-parametric Bayesian model for learning and weighting symbolically-defined constraints to populate a log-linear model. The model jointly infers a vector of binary constraint values for each candidate output and likely definitions for these constraints, combining observations of the output classes with a (potentially infinite) grammar over potential constraint definitions. We present results on a small morphophonological system, English regular plurals, as a test case. The inferred constraints, based on a grammar of articulatory features, perform as well as theoretically-defined constraints on both observed and novel forms of English regular plurals. The learned constraint values and definitions also closely resemble standard constraints defined within phonological theory. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,627 |
inproceedings | huang-etal-2016-chinese-tense | {C}hinese Tense Labelling and Causal Analysis | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1210/ | Huang, Hen-Hsen and Yang, Chang-Rui and Chen, Hsin-Hsi | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2227--2237 | This paper explores the role of tense information in Chinese causal analysis. We experiment with both causal type classification and causal directionality identification to show the significant improvement gained from tense features. To automatically extract the tense features, a Chinese tense predictor is proposed. Based on a large amount of parallel data, our semi-supervised approach improves the dependency-based convolutional neural network (DCNN) models for Chinese tense labelling and thus the causal analysis. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,628
inproceedings | yang-etal-2016-exploring | Exploring Topic Discriminating Power of Words in {L}atent {D}irichlet {A}llocation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1211/ | Yang, Kai and Cai, Yi and Chen, Zhenhong and Leung, Ho-fung and Lau, Raymond | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2238--2247 | Latent Dirichlet Allocation (LDA) and its variants have been widely used to discover latent topics in textual documents. However, some of the topics generated by LDA may be noisy, with irrelevant words scattered across them. We refer to such words as topic-indiscriminate words, which tend to make topics more ambiguous and less interpretable by humans. In our work, we propose a new topic model named TWLDA, which assigns low weights to words with low topic discriminating power (ability). Our experimental results show that the proposed approach, which effectively reduces the number of topic-indiscriminate words in discovered topics, improves the effectiveness of LDA. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,629
inproceedings | zhao-etal-2016-textual | Textual Entailment with Structured Attentions and Composition | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1212/ | Zhao, Kai and Huang, Liang and Ma, Mingbo | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2248--2258 | Deep learning techniques are increasingly popular in the textual entailment task, overcoming the fragility of traditional discrete models with hard alignments and logics. In particular, the recently proposed attention models (Rockt{\"a}schel et al., 2015; Wang and Jiang, 2015) achieve state-of-the-art accuracy by computing soft word alignments between the premise and hypothesis sentences. However, there remains a major limitation: this line of work completely ignores syntax and recursion, which are helpful in many traditional efforts. We show that it is beneficial to extend the attention model to tree nodes between premise and hypothesis. More importantly, this subtree-level attention reveals information about entailment relation. We study the recursive composition of this subtree-level entailment relation, which can be viewed as a soft version of the Natural Logic framework (MacCartney and Manning, 2009). Experiments show that our structured attention and entailment composition model can correctly identify and infer entailment relations from the bottom up, and bring significant improvements in accuracy. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,630
inproceedings | maziarz-etal-2016-plwordnet | pl{W}ord{N}et 3.0 {--} a Comprehensive Lexical-Semantic Resource | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1213/ | Maziarz, Marek and Piasecki, Maciej and Rudnicka, Ewa and Szpakowicz, Stan and K{\k{e}}dzia, Pawe{\l} | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2259--2268 | We have released plWordNet 3.0, a very large wordnet for Polish. In addition to what is expected in wordnets {--} richly interrelated synsets {--} it contains sentiment and emotion annotations, a large set of multi-word expressions, and a mapping onto WordNet 3.1. Part of the release is enWordNet 1.0, a substantially enlarged copy of WordNet 3.1, with material added to allow for a more complete mapping. The paper discusses the design principles of plWordNet, its content, its statistical portrait, a comparison with similar resources, and a partial list of applications. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,631 |
inproceedings | londhe-etal-2016-time | Time-Independent and Language-Independent Extraction of Multiword Expressions From {T}witter | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1214/ | Londhe, Nikhil and Srihari, Rohini and Gopalakrishnan, Vishrawas | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2269--2278 | Multiword Expressions (MWEs) are crucial lexico-semantic units in any language. However, most work on MWEs has been focused on standard monolingual corpora. In this work, we examine MWE usage on Twitter - an inherently multilingual medium with an extremely short average text length that is often replete with grammatical errors. We present a new graph-based, language-agnostic method for automatically extracting MWEs from tweets and show how it outperforms standard Association Measures. We also present a novel unsupervised evaluation technique to ascertain the accuracy of MWE extraction. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,632
inproceedings | judea-strube-2016-incremental | Incremental Global Event Extraction | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1215/ | Judea, Alex and Strube, Michael | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2279--2289 | Event extraction is a difficult information extraction task. Li et al. (2014) explore the benefits of modeling event extraction and two related tasks, entity mention and relation extraction, jointly. This joint system achieves state-of-the-art performance in all tasks. However, as a system operating only at the sentence level, it misses valuable information from other parts of the document. In this paper, we present an incremental easy-first approach to make the global context of the entire document available to the intra-sentential, state-of-the-art event extractor. We show that our method robustly increases performance on two datasets, namely ACE 2005 and TAC 2015. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,633 |
inproceedings | xu-etal-2016-hierarchical | Hierarchical Memory Networks for Answer Selection on Unknown Words | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1216/ | Xu, Jiaming and Shi, Jing and Yao, Yiqun and Zheng, Suncong and Xu, Bo and Xu, Bo | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2290--2299 | Recently, end-to-end memory networks have shown promising results on the Question Answering task; they encode past facts into an explicit memory and perform reasoning by making multiple computational steps over the memory. However, memory networks conduct the reasoning on sentence-level memory to output coarse semantic vectors and do not further apply any attention mechanism to focus on words, which may lead the model to lose some detailed information, especially when the answers are rare or unknown words. In this paper, we propose novel Hierarchical Memory Networks, dubbed HMN. First, we encode the past facts into sentence-level memory and word-level memory respectively. Then, $k$-max pooling is exploited following the reasoning module on the sentence-level memory to sample the $k$ most relevant sentences to a question, and these sentences are fed into an attention mechanism on the word-level memory to focus on the words in the selected sentences. Finally, the prediction is jointly learned over the outputs of the sentence-level reasoning module and the word-level attention mechanism. The experimental results demonstrate that our approach successfully conducts answer selection on unknown words and achieves a better performance than memory networks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,634
inproceedings | gupta-etal-2016-revisiting | Revisiting Taxonomy Induction over {W}ikipedia | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1217/ | Gupta, Amit and Piccinno, Francesco and Kozhevnikov, Mikhail and Pa{\c{s}}ca, Marius and Pighin, Daniele | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2300--2309 | Guided by multiple heuristics, a unified taxonomy of entities and categories is distilled from the Wikipedia category network. A comprehensive evaluation, based on the analysis of upward generalization paths, demonstrates that the taxonomy supports generalizations which are more than twice as accurate as the state of the art. The taxonomy is available at \url{http://headstaxonomy.com}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,635 |
inproceedings | nguyen-etal-2016-joint | Joint Learning of Local and Global Features for Entity Linking via Neural Networks | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1218/ | Nguyen, Thien Huu and Fauceglia, Nicolas and Rodriguez Muro, Mariano and Hassanzadeh, Oktie and Massimiliano Gliozzo, Alfio and Sadoghi, Mohammad | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2310--2320 | Previous studies have highlighted the necessity for entity linking systems to capture the local entity-mention similarities and the global topical coherence. We introduce a novel framework based on convolutional neural networks and recurrent neural networks to simultaneously model the local and global features for entity linking. The proposed model benefits from the capacity of convolutional neural networks to induce the underlying representations for local contexts and the advantage of recurrent neural networks to adaptively compress variable length sequences of predictions for global constraints. Our evaluation on multiple datasets demonstrates the effectiveness of the model and yields the state-of-the-art performance on such datasets. In addition, we examine the entity linking systems on the domain adaptation setting that further demonstrates the cross-domain robustness of the proposed model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,636 |
inproceedings | gunes-etal-2016-structured | Structured Aspect Extraction | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1219/ | Gunes, Omer and Furche, Tim and Orsi, Giorgio | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2321--2332 | Aspect extraction identifies relevant features from a textual description of an entity, e.g., a phone, and is typically targeted to product descriptions, reviews, and other short texts as an enabling task for, e.g., opinion mining and information retrieval. Current aspect extraction methods mostly focus on aspect terms and often neglect interesting modifiers of the term or embed them in the aspect term without proper distinction. Moreover, flat syntactic structures are often assumed, resulting in inaccurate extractions of complex aspects. This paper studies the problem of structured aspect extraction, a variant of traditional aspect extraction aiming at a fine-grained extraction of complex (i.e., hierarchical) aspects. We propose an unsupervised and scalable method for structured aspect extraction consisting of statistical noun phrase clustering, cPMI-based noun phrase segmentation, and hierarchical pattern induction. Our evaluation shows a substantial improvement over existing methods in terms of both quality and computational efficiency. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,637 |
inproceedings | baker-etal-2016-robust | Robust Text Classification for Sparsely Labelled Data Using Multi-level Embeddings | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1220/ | Baker, Simon and Kiela, Douwe and Korhonen, Anna | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2333--2343 | The conventional solution for handling sparsely labelled data is extensive feature engineering. This is time consuming and task and domain specific. We present a novel approach for learning embedded features that aims to alleviate this problem. Our approach jointly learns embeddings at different levels of granularity (word, sentence and document) along with the class labels. The intuition is that topic semantics represented by embeddings at multiple levels results in better classification. We evaluate this approach in unsupervised and semi-supervised settings on two sparsely labelled classification tasks, outperforming the handcrafted models and several embedding baselines. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,638 |
inproceedings | stathopoulos-teufel-2016-mathematical | Mathematical Information Retrieval based on Type Embeddings and Query Expansion | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1221/ | Stathopoulos, Yiannos and Teufel, Simone | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2344--2355 | We present an approach to mathematical information retrieval (MIR) that exploits a special kind of technical terminology, referred to as a mathematical type. In this paper, we present and evaluate a type detection mechanism and show its positive effect on the retrieval of research-level mathematics. Our best model, which performs query expansion with a type-aware embedding space, strongly outperforms standard IR models with state-of-the-art query expansion (vector space-based and language modelling-based), on a relatively new corpus of research-level queries. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,639 |
inproceedings | sneiders-2016-text | Text Retrieval by Term Co-occurrences in a Query-based Vector Space | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1222/ | Sneiders, Eriks | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2356--2365 | Term co-occurrence in a sentence or paragraph is a powerful and often overlooked feature for text matching in document retrieval. In our experiments with matching email-style query messages to webpages, such term co-occurrence helped greatly to filter and rank documents, compared to matching document-size bags-of-words. The paper presents the results of the experiments as well as a text-matching model where the query shapes the vector space, a document is modelled by two or three vectors in this vector space, and the query-document similarity score depends on the length of the vectors and the relationships between them. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,640 |
inproceedings | yu-jiang-2016-pairwise | Pairwise Relation Classification with Mirror Instances and a Combined Convolutional Neural Network | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1223/ | Yu, Jianfei and Jiang, Jing | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2366--2377 | Relation classification is the task of classifying the semantic relations between entity pairs in text. Observing that existing work has not fully explored using different representations for relation instances, especially in order to better handle the asymmetry of relation types, in this paper, we propose a neural network based method for relation classification that combines the raw sequence and the shortest dependency path representations of relation instances and uses mirror instances to perform pairwise relation classification. We evaluate our proposed models on the SemEval-2010 Task 8 dataset. The empirical results show that with two additional features, our model achieves the state-of-the-art result of F1 score of 85.7. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,641 |
inproceedings | wang-etal-2016-fasthybrid | {F}ast{H}ybrid: A Hybrid Model for Efficient Answer Selection | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1224/ | Wang, Lidan and Tan, Ming and Han, Jiawei | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2378--2388 | Answer selection is a core component in any question-answering systems. It aims to select correct answer sentences for a given question from a pool of candidate sentences. In recent years, many deep learning methods have been proposed and shown excellent results for this task. However, these methods typically require extensive parameter (and hyper-parameter) tuning, which gives rise to efficiency issues for large-scale datasets and potentially makes them less portable across new datasets and domains (as re-tuning is usually required). In this paper, we propose an extremely efficient hybrid model (FastHybrid) that tackles the problem from both an accuracy and scalability point of view. FastHybrid is a lightweight model that requires little tuning and adaptation across different domains. It combines a fast deep model (which will be introduced in the method section) with an initial information retrieval model to effectively and efficiently handle answer selection. We introduce a new efficient attention mechanism in the hybrid model and demonstrate its effectiveness on several QA datasets. Experimental results show that although the hybrid uses no training data, its accuracy is often on par with supervised deep learning techniques, while significantly reducing training and tuning costs across different domains. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,642
inproceedings | kim-lee-2016-extracting | Extracting Spatial Entities and Relations in {K}orean Text | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1225/ | Kim, Bogyum and Lee, Jae Sung | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2389--2396 | A spatial information extraction system retrieves spatial entities and their relationships for geological searches and reasoning. Spatial information systems have been developed mainly for English text, e.g., through the SpaceEval competition. Some of the techniques are useful but not directly applicable to Korean text, because of linguistic differences and the lack of language resources. In this paper, we propose a Korean spatial entity extraction model and a spatial relation extraction model; the spatial entity extraction model uses word vectors to alleviate overgeneration, and the spatial relation extraction model uses dependency parse labels to find the proper arguments in relations. Experiments with Korean text show that the two models are effective for spatial information extraction. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,643
inproceedings | xu-etal-2016-hybrid | Hybrid Question Answering over Knowledge Base and Free Text | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1226/ | Xu, Kun and Feng, Yansong and Huang, Songfang and Zhao, Dongyan | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2397--2407 | Recent trend in question answering (QA) systems focuses on using structured knowledge bases (KBs) to find answers. While these systems are able to provide more precise answers than information retrieval (IR) based QA systems, the natural incompleteness of KB inevitably limits the question scope that the system can answer. In this paper, we present a hybrid question answering (hybrid-QA) system which exploits both structured knowledge base and free text to answer a question. The main challenge is to recognize the meaning of a question using these two resources, i.e., structured KB and free text. To address this, we map relational phrases to KB predicates and textual relations simultaneously, and further develop an integer linear program (ILP) model to infer on these candidates and provide a globally optimal solution. Experiments on benchmark datasets show that our system can benefit from both structured KB and free text, outperforming the state-of-the-art systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,644 |
inproceedings | shen-liu-2016-improved | Improved Word Embeddings with Implicit Structure Information | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1227/ | Shen, Jie and Liu, Cong | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2408--2417 | Distributed word representation is an efficient method for capturing semantic and syntactic word relations. In this work, we introduce an extension to the continuous bag-of-words model for learning word representations efficiently by using implicit structure information. Instead of relying on a syntactic parser which might be noisy and slow to build, we compute weights representing probabilities of syntactic relations based on the Huffman softmax tree in an efficient heuristic. The constructed {\textquotedblleft}implicit graphs{\textquotedblright} from these weights show that these weights contain useful implicit structure information. Extensive experiments performed on several word similarity and word analogy tasks show gains compared to the basic continuous bag-of-words model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,645 |
inproceedings | dahou-etal-2016-word | Word Embeddings and Convolutional Neural Network for {A}rabic Sentiment Classification | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1228/ | Dahou, Abdelghani and Xiong, Shengwu and Zhou, Junwei and Haddoud, Mohamed Houcine and Duan, Pengfei | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2418--2427 | With the development and advancement of social networks, forums, blogs and online sales, a growing number of Arabs are expressing their opinions on the web. In this paper, a scheme of Arabic sentiment classification, which evaluates and detects the sentiment polarity from Arabic reviews and Arabic social media, is studied. We investigated several architectures to build quality neural word embeddings using a 3.4-billion-word corpus drawn from a collected 10-billion-word web-crawled corpus. Moreover, a convolutional neural network trained on top of pre-trained Arabic word embeddings is used for sentiment classification to evaluate the quality of these word embeddings. The simulation results show that the proposed scheme outperforms the existing methods on 4 out of 5 balanced and unbalanced datasets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,646
inproceedings | wang-etal-2016-combination | Combination of Convolutional and Recurrent Neural Network for Sentiment Analysis of Short Texts | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1229/ | Wang, Xingyou and Jiang, Weijie and Luo, Zhiyong | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2428--2437 | Sentiment analysis of short texts is challenging because of the limited contextual information they usually contain. In recent years, deep learning models such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have been applied to text sentiment analysis with comparatively remarkable results. In this paper, we describe a jointed CNN and RNN architecture, taking advantage of the coarse-grained local features generated by CNN and long-distance dependencies learned via RNN for sentiment analysis of short texts. Experimental results show an obvious improvement upon the state-of-the-art on three benchmark corpora, MR, SST1 and SST2, with 82.28{\%}, 51.50{\%} and 89.95{\%} accuracy, respectively. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,647 |
inproceedings | zubiaga-etal-2016-stance | Stance Classification in Rumours as a Sequential Task Exploiting the Tree Structure of Social Media Conversations | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1230/ | Zubiaga, Arkaitz and Kochkina, Elena and Liakata, Maria and Procter, Rob and Lukasik, Michal | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2438--2448 | Rumour stance classification, the task that determines if each tweet in a collection discussing a rumour is supporting, denying, questioning or simply commenting on the rumour, has been attracting substantial interest. Here we introduce a novel approach that makes use of the sequence of transitions observed in tree-structured conversation threads in Twitter. The conversation threads are formed by harvesting users' replies to one another, which results in a nested tree-like structure. Previous work addressing the stance classification task has treated each tweet as a separate unit. Here we analyse tweets by virtue of their position in a sequence and test two sequential classifiers, Linear-Chain CRF and Tree CRF, each of which makes different assumptions about the conversational structure. We experiment with eight Twitter datasets, collected during breaking news, and show that exploiting the sequential structure of Twitter conversations achieves significant improvements over the non-sequential methods. Our work is the first to model Twitter conversations as a tree structure in this manner, introducing a novel way of tackling NLP tasks on Twitter conversations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,648 |
inproceedings | zhang-etal-2016-tweet | Tweet Sarcasm Detection Using Deep Neural Network | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1231/ | Zhang, Meishan and Zhang, Yue and Fu, Guohong | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2449--2460 | Sarcasm detection has been modeled as a binary document classification task, with rich features being defined manually over input documents. Traditional models employ discrete manual features to address the task, with much research effort being devoted to the design of effective feature templates. We investigate the use of neural networks for tweet sarcasm detection, and compare the effects of the continuous automatic features with discrete manual features. In particular, we use a bi-directional gated recurrent neural network to capture syntactic and semantic information over tweets locally, and a pooling neural network to extract contextual features automatically from history tweets. Results show that neural features give improved accuracies for sarcasm detection, with different error distributions compared with discrete manual features. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,649
inproceedings | menini-tonelli-2016-agreement | Agreement and Disagreement: Comparison of Points of View in the Political Domain | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1232/ | Menini, Stefano and Tonelli, Sara | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2461--2470 | The automated comparison of points of view between two politicians is a very challenging task, due not only to the lack of annotated resources, but also to the different dimensions participating in the definition of agreement and disagreement. In order to shed light on this complex task, we first carry out a pilot study to manually annotate the components involved in detecting agreement and disagreement. Then, based on these findings, we implement different features to capture them automatically via supervised classification. We do not focus on debates in dialogical form, but we rather consider sets of documents, in which politicians may express their position with respect to different topics in an implicit or explicit way, like during an electoral campaign. We create and make available three different datasets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,650
inproceedings | welch-mihalcea-2016-targeted | Targeted Sentiment to Understand Student Comments | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1233/ | Welch, Charles and Mihalcea, Rada | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2471--2481 | We address the task of targeted sentiment as a means of understanding the sentiment that students hold toward courses and instructors, as expressed by students in their comments. We introduce a new dataset consisting of student comments annotated for targeted sentiment and describe a system that can both identify the courses and instructors mentioned in student comments, as well as label the students' sentiment toward those entities. Through several comparative evaluations, we show that our system outperforms previous work on a similar task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,651 |