Dataset schema (column names and types as reported by the viewer):

Column | Type
---|---
entry_type | string (4 classes)
citation_key | string (10–110 chars)
title | string (6–276 chars, nullable)
editor | string (723 classes)
month | string (69 classes)
year | date (1963-01-01 to 2022-01-01)
address | string (202 classes)
publisher | string (41 classes)
url | string (34–62 chars)
author | string (6–2.07k chars, nullable)
booktitle | string (861 classes)
pages | string (1–12 chars, nullable)
abstract | string (302–2.4k chars)
journal | string (5 classes)
volume | string (24 classes)
doi | string (20–39 chars, nullable)
n | string (3 classes)
wer | string (1 class)
uas | null
language | string (3 classes)
isbn | string (34 classes)
recall | null
number | string (8 classes)
a | null
b | null
c | null
k | null
f1 | string (4 classes)
r | string (2 classes)
mci | string (1 class)
p | string (2 classes)
sd | string (1 class)
female | string (0 classes)
m | string (0 classes)
food | string (1 class)
f | string (1 class)
note | string (20 classes)
__index_level_0__ | int64 (22k–106k)
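For quick inspection, here is a minimal sketch of loading such a preview with pandas. The Parquet file name is hypothetical (an assumed export of this split), and the column handling relies only on the schema above.

```python
import pandas as pd

# Minimal sketch: load an exported preview of this split and inspect it.
# "bib_entries.parquet" is a hypothetical export path, not part of the dataset.
df = pd.read_parquet("bib_entries.parquet")

# Most of the metric-style columns (uas, recall, a, b, c, k, ...) are null
# in the rows shown below; keep only columns that actually carry data.
populated = df.dropna(axis="columns", how="all")
print(populated.columns.tolist())

# The year column is stored as a date string (1963-01-01 .. 2022-01-01);
# parse it once if you want to filter by publication year.
df["year"] = pd.to_datetime(df["year"]).dt.year
print(df[df["year"] == 2016][["citation_key", "title"]].head())
```

The preview rows below follow this schema.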
entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | joshi-etal-2016-towards | Towards Sub-Word Level Compositions for Sentiment Analysis of {H}indi-{E}nglish Code Mixed Text | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1234/ | Joshi, Aditya and Prabhu, Ameya and Shrivastava, Manish and Varma, Vasudeva | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2482--2491 | Sentiment analysis (SA) using code-mixed data from social media has several applications in opinion mining ranging from customer satisfaction to social campaign analysis in multilingual societies. Advances in this area are impeded by the lack of a suitable annotated dataset. We introduce a Hindi-English (Hi-En) code-mixed dataset for sentiment analysis and perform empirical analysis comparing the suitability and performance of various state-of-the-art SA methods in social media. In this paper, we introduce learning sub-word level representations in our LSTM (Subword-LSTM) architecture instead of character-level or word-level representations. This linguistic prior in our architecture enables us to learn the information about sentiment value of important morphemes. This also seems to work well in highly noisy text containing misspellings as shown in our experiments which is demonstrated in morpheme-level feature maps learned by our model. Also, we hypothesize that encoding this linguistic prior in the Subword-LSTM architecture leads to the superior performance. Our system attains accuracy 4-5{\%} greater than traditional approaches on our dataset, and also outperforms the available system for sentiment analysis in Hi-En code-mixed text by 18{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,652 |
inproceedings | xiong-etal-2016-distance | Distance Metric Learning for Aspect Phrase Grouping | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1235/ | Xiong, Shufeng and Zhang, Yue and Ji, Donghong and Lou, Yinxia | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2492--2502 | Aspect phrase grouping is an important task in aspect-level sentiment analysis. It is a challenging problem due to polysemy and context dependency. We propose an Attention-based Deep Distance Metric Learning (ADDML) method, by considering aspect phrase representation as well as context representation. First, leveraging the characteristics of the review text, we automatically generate aspect phrase sample pairs for distant supervision. Second, we feed word embeddings of aspect phrases and their contexts into an attention-based neural network to learn feature representation of contexts. Both aspect phrase embedding and context embedding are used to learn a deep feature subspace for measure the distances between aspect phrases for K-means clustering. Experiments on four review datasets show that the proposed method outperforms state-of-the-art strong baseline methods. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,653 |
inproceedings | bao-etal-2016-constraint | Constraint-Based Question Answering with Knowledge Graph | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1236/ | Bao, Junwei and Duan, Nan and Yan, Zhao and Zhou, Ming and Zhao, Tiejun | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2503--2514 | WebQuestions and SimpleQuestions are two benchmark data-sets commonly used in recent knowledge-based question answering (KBQA) work. Most questions in them are {\textquoteleft}simple' questions which can be answered based on a single relation in the knowledge base. Such data-sets lack the capability of evaluating KBQA systems on complicated questions. Motivated by this issue, we release a new data-set, namely ComplexQuestions, aiming to measure the quality of KBQA systems on {\textquoteleft}multi-constraint' questions which require multiple knowledge base relations to get the answer. Beside, we propose a novel systematic KBQA approach to solve multi-constraint questions. Compared to state-of-the-art methods, our approach not only obtains comparable results on the two existing benchmark data-sets, but also achieves significant improvements on the ComplexQuestions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,654 |
inproceedings | barron-cedeno-etal-2016-selecting | Selecting Sentences versus Selecting Tree Constituents for Automatic Question Ranking | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1237/ | Barr{\'o}n-Cede{\~n}o, Alberto and Da San Martino, Giovanni and Romeo, Salvatore and Moschitti, Alessandro | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2515--2525 | Community question answering (cQA) websites are focused on users who query questions onto an online forum, expecting for other users to provide them answers or suggestions. Unlike other social media, the length of the posted queries has no limits and queries tend to be multi-sentence elaborations combining context, actual questions, and irrelevant information. We approach the problem of question ranking: given a user`s new question, to retrieve those previously-posted questions which could be equivalent, or highly relevant. This could prevent the posting of nearly-duplicate questions and provide the user with instantaneous answers. For the first time in cQA, we address the selection of relevant text {---}both at sentence- and at constituent-level{---} for parse tree-based representations. Our supervised models for text selection boost the performance of a tree kernel-based machine learning model, allowing it to overtake the current state of the art on a recently released cQA evaluation framework. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,655 |
inproceedings | shen-huang-2016-attention | Attention-Based Convolutional Neural Network for Semantic Relation Extraction | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1238/ | Shen, Yatian and Huang, Xuanjing | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2526--2536 | Nowadays, neural networks play an important role in the task of relation classification. In this paper, we propose a novel attention-based convolutional neural network architecture for this task. Our model makes full use of word embedding, part-of-speech tag embedding and position embedding information. Word level attention mechanism is able to better determine which parts of the sentence are most influential with respect to the two entities of interest. This architecture enables learning some important features from task-specific labeled data, forgoing the need for external knowledge such as explicit dependency structures. Experiments on the SemEval-2010 Task 8 benchmark dataset show that our model achieves better performances than several state-of-the-art neural network models and can achieve a competitive performance just with minimal feature engineering. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,656 |
inproceedings | gupta-etal-2016-table | Table Filling Multi-Task Recurrent Neural Network for Joint Entity and Relation Extraction | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1239/ | Gupta, Pankaj and Sch{\"u}tze, Hinrich and Andrassy, Bernt | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2537--2547 | This paper proposes a novel context-aware joint entity and word-level relation extraction approach through semantic composition of words, introducing a Table Filling Multi-Task Recurrent Neural Network (TF-MTRNN) model that reduces the entity recognition and relation classification tasks to a table-filling problem and models their interdependencies. The proposed neural network architecture is capable of modeling multiple relation instances without knowing the corresponding relation arguments in a sentence. The experimental results show that a simple approach of piggybacking candidate entities to model the label dependencies from relations to entities improves performance. We present state-of-the-art results with improvements of 2.0{\%} and 2.7{\%} for entity recognition and relation classification, respectively on CoNLL04 dataset. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,657
inproceedings | zhang-etal-2016-bilingual | Bilingual Autoencoders with Global Descriptors for Modeling Parallel Sentences | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1240/ | Zhang, Biao and Xiong, Deyi and Su, Jinsong and Duan, Hong and Zhang, Min | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2548--2558 | Parallel sentence representations are important for bilingual and cross-lingual tasks in natural language processing. In this paper, we explore a bilingual autoencoder approach to model parallel sentences. We extract sentence-level global descriptors (e.g. min, max) from word embeddings, and construct two monolingual autoencoders over these descriptors on the source and target language. In order to tightly connect the two autoencoders with bilingual correspondences, we force them to share the same decoding parameters and minimize a corpus-level semantic distance between the two languages. Being optimized towards a joint objective function of reconstruction and semantic errors, our bilingual antoencoder is able to learn continuous-valued latent representations for parallel sentences. Experiments on both intrinsic and extrinsic evaluations on statistical machine translation tasks show that our autoencoder achieves substantial improvements over the baselines. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,658 |
inproceedings | pal-etal-2016-multi | Multi-Engine and Multi-Alignment Based Automatic Post-Editing and its Impact on Translation Productivity | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1241/ | Pal, Santanu and Naskar, Sudip Kumar and van Genabith, Josef | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2559--2570 | In this paper we combine two strands of machine translation (MT) research: automatic post-editing (APE) and multi-engine (system combination) MT. APE systems learn a target-language-side second stage MT system from the data produced by human corrected output of a first stage MT system, to improve the output of the first stage MT in what is essentially a sequential MT system combination architecture. At the same time, there is a rich research literature on parallel MT system combination where the same input is fed to multiple engines and the best output is selected or smaller sections of the outputs are combined to obtain improved translation output. In the paper we show that parallel system combination in the APE stage of a sequential MT-APE combination yields substantial translation improvements both measured in terms of automatic evaluation metrics as well as in terms of productivity improvements measured in a post-editing experiment. We also show that system combination on the level of APE alignments yields further improvements. Overall our APE system yields statistically significant improvement of 5.9{\%} relative BLEU over a strong baseline (English{--}Italian Google MT) and 21.76{\%} productivity increase in a human post-editing experiment with professional translators. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,659 |
inproceedings | van-der-wees-etal-2016-measuring | Measuring the Effect of Conversational Aspects on Machine Translation Quality | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1242/ | van der Wees, Marlies and Bisazza, Arianna and Monz, Christof | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2571--2581 | Research in statistical machine translation (SMT) is largely driven by formal translation tasks, while translating informal text is much more challenging. In this paper we focus on SMT for the informal genre of dialogues, which has rarely been addressed to date. Concretely, we investigate the effect of dialogue acts, speakers, gender, and text register on SMT quality when translating fictional dialogues. We first create and release a corpus of multilingual movie dialogues annotated with these four dialogue-specific aspects. When measuring translation performance for each of these variables, we find that BLEU fluctuations between their categories are often significantly larger than randomly expected. Following this finding, we hypothesize and show that SMT of fictional dialogues benefits from adaptation towards dialogue acts and registers. Finally, we find that male speakers are harder to translate and use more vulgar language than female speakers, and that vulgarity is often not preserved during translation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,660 |
inproceedings | passban-etal-2016-enriching | Enriching Phrase Tables for Statistical Machine Translation Using Mixed Embeddings | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1243/ | Passban, Peyman and Liu, Qun and Way, Andy | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2582--2591 | The phrase table is considered to be the main bilingual resource for the phrase-based statistical machine translation (PBSMT) model. During translation, a source sentence is decomposed into several phrases. The best match of each source phrase is selected among several target-side counterparts within the phrase table, and processed by the decoder to generate a sentence-level translation. The best match is chosen according to several factors, including a set of bilingual features. PBSMT engines by default provide four probability scores in phrase tables which are considered as the main set of bilingual features. Our goal is to enrich that set of features, as a better feature set should yield better translations. We propose new scores generated by a Convolutional Neural Network (CNN) which indicate the semantic relatedness of phrase pairs. We evaluate our model in different experimental settings with different language pairs. We observe significant improvements when the proposed features are incorporated into the PBSMT pipeline. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,661 |
inproceedings | song-etal-2016-anecdote | Anecdote Recognition and Recommendation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1244/ | Song, Wei and Fu, Ruiji and Liu, Lizhen and Wang, Hanshi and Liu, Ting | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2592--2602 | We introduce a novel task Anecdote Recognition and Recommendation. An anecdote is a story with a point revealing account of an individual person. Recommending proper anecdotes can be used as evidence to support argumentative writing or as a clue for further reading. We represent an anecdote as a structured tuple {---} {\ensuremath{<}} person, story, implication {\ensuremath{>}}. Anecdote recognition runs on archived argumentative essays. We extract narratives containing events of a person as the anecdote story. More importantly, we uncover the anecdote implication, which reveals the meaning and topic of an anecdote. Our approach depends on discourse role identification. Discourse roles such as thesis, main ideas and support help us locate stories and their implications in essays. The experiments show that informative and interpretable anecdotes can be recognized. These anecdotes are used for anecdote recommendation. The anecdote recommender can recommend proper anecdotes in response to given topics. The anecdote implication contributes most for bridging user interested topics and relevant anecdotes. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,662 |
inproceedings | jiang-etal-2016-training | Training Data Enrichment for Infrequent Discourse Relations | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1245/ | Jiang, Kailang and Carenini, Giuseppe and Ng, Raymond | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2603--2614 | Discourse parsing is a popular technique widely used in text understanding, sentiment analysis and other NLP tasks. However, for most discourse parsers, the performance varies significantly across different discourse relations. In this paper, we first validate the underfitting hypothesis, i.e., the less frequent a relation is in the training data, the poorer the performance on that relation. We then explore how to increase the number of positive training instances, without resorting to manually creating additional labeled data. We propose a training data enrichment framework that relies on co-training of two different discourse parsers on unlabeled documents. Importantly, we show that co-training alone is not sufficient. The framework requires a filtering step to ensure that only {\textquotedblleft}good quality{\textquotedblright} unlabeled documents can be used for enrichment and re-training. We propose and evaluate two ways to perform the filtering. The first is to use an agreement score between the two parsers. The second is to use only the confidence score of the faster parser. Our empirical results show that agreement score can help to boost the performance on infrequent relations, and that the confidence score is a viable approximation of the agreement score for infrequent relations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,663 |
inproceedings | zhang-etal-2016-inferring | Inferring Discourse Relations from {PDTB}-style Discourse Labels for Argumentative Revision Classification | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1246/ | Zhang, Fan and Litman, Diane and Forbes Riley, Katherine | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2615--2624 | Penn Discourse Treebank (PDTB)-style annotation focuses on labeling local discourse relations between text spans and typically ignores larger discourse contexts. In this paper we propose two approaches to infer discourse relations in a paragraph-level context from annotated PDTB labels. We investigate the utility of inferring such discourse information using the task of revision classification. Experimental results demonstrate that the inferred information can significantly improve classification performance compared to baselines, not only when PDTB annotation comes from humans but also from automatic parsers. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,664 |
inproceedings | kabbara-etal-2016-capturing | Capturing Pragmatic Knowledge in Article Usage Prediction using {LSTM}s | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1247/ | Kabbara, Jad and Feng, Yulan and Cheung, Jackie Chi Kit | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2625--2634 | We examine the potential of recurrent neural networks for handling pragmatic inferences involving complex contextual cues for the task of article usage prediction. We train and compare several variants of Long Short-Term Memory (LSTM) networks with an attention mechanism. Our model outperforms a previous state-of-the-art system, achieving up to 96.63{\%} accuracy on the WSJ/PTB corpus. In addition, we perform a series of analyses to understand the impact of various model choices. We find that the gain in performance can be attributed to the ability of LSTMs to pick up on contextual cues, both local and further away in distance, and that the model is able to solve cases involving reasoning about coreference and synonymy. We also show how the attention mechanism contributes to the interpretability of the model`s effectiveness. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,665 |
inproceedings | pateria-2016-aspect | Aspect Based Sentiment Analysis using Sentiment Flow with Local and Non-local Neighbor Information | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1248/ | Pateria, Shubham | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2635--2646 | Aspect-level analysis of sentiments contained in a review text is important to reveal a detailed picture of consumer opinions. While a plethora of methods have been traditionally employed for this task, majority focus has been on analyzing only aspect-centered local information. However, incorporating context information from non-local aspect neighbors may capture richer structure in review text and enhance prediction. This may especially be helpful to resolve ambiguous predictions. The context around an aspect can be incorporated using semantic relations within text and inter-label dependencies in the output. On the output side, this becomes a structured prediction task. However, non-local label correlations are computationally heavy and intractable to infer for structured prediction models like Conditional Random Fields (CRF). Moreover, some prior intuition is required to incorporate non-local context. Thus, inspired by previous research on multi-stage prediction, we propose a two-level model for aspect-based analysis. The proposed model uses predicted probability estimates from first level to incorporate neighbor information in the second level. The model is evaluated on data taken from SemEval Workshops and Bing Liu`s review collection. It shows comparatively better performance against few existing methods. Overall, we get prediction accuracy in a range of 83-88{\%} and almost 3-4 point increment against baseline (first level only) scores. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,666 |
inproceedings | li-etal-2016-two | Two-View Label Propagation to Semi-supervised Reader Emotion Classification | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1249/ | Li, Shoushan and Xu, Jian and Zhang, Dong and Zhou, Guodong | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2647--2655 | In the literature, various supervised learning approaches have been adopted to address the task of reader emotion classification. However, the classification performance greatly suffers when the size of the labeled data is limited. In this paper, we propose a two-view label propagation approach to semi-supervised reader emotion classification by exploiting two views, namely source text and response text in a label propagation algorithm. Specifically, our approach depends on two word-document bipartite graphs to model the relationship among the samples in the two views respectively. Besides, the two bipartite graphs are integrated by linking each source text sample with its corresponding response text sample via a length-sensitive transition probability. In this way, our two-view label propagation approach to semi-supervised reader emotion classification largely alleviates the reliance on the strong sufficiency and independence assumptions of the two views, as required in co-training. Empirical evaluation demonstrates the effectiveness of our two-view label propagation approach to semi-supervised reader emotion classification. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,667 |
inproceedings | ebrahimi-etal-2016-joint | A Joint Sentiment-Target-Stance Model for Stance Classification in Tweets | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1250/ | Ebrahimi, Javid and Dou, Dejing and Lowd, Daniel | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2656--2665 | Classifying the stance expressed in online microblogging social media is an emerging problem in opinion mining. We propose a probabilistic approach to stance classification in tweets, which models stance, target of stance, and sentiment of tweet, jointly. Instead of simply conjoining the sentiment or target variables as extra variables to the feature space, we use a novel formulation to incorporate three-way interactions among sentiment-stance-input variables and three-way interactions among target-stance-input variables. The proposed specification intuitively aims to discriminate sentiment features from target features for stance classification. In addition, regularizing a single stance classifier, which handles all targets, acts as a soft weight-sharing among them. We demonstrate that discriminative training of this model achieves the state-of-the-art results in supervised stance classification, and its generative training obtains competitive results in the weakly supervised setting. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,668 |
inproceedings | cambria-etal-2016-senticnet | {S}entic{N}et 4: A Semantic Resource for Sentiment Analysis Based on Conceptual Primitives | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1251/ | Cambria, Erik and Poria, Soujanya and Bajpai, Rajiv and Schuller, Bjoern | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2666--2677 | An important difference between traditional AI systems and human intelligence is the human ability to harness commonsense knowledge gleaned from a lifetime of learning and experience to make informed decisions. This allows humans to adapt easily to novel situations where AI fails catastrophically due to a lack of situation-specific rules and generalization capabilities. Commonsense knowledge also provides background information that enables humans to successfully operate in social situations where such knowledge is typically assumed. Since commonsense consists of information that humans take for granted, gathering it is an extremely difficult task. Previous versions of SenticNet were focused on collecting this kind of knowledge for sentiment analysis but they were heavily limited by their inability to generalize. SenticNet 4 overcomes such limitations by leveraging on conceptual primitives automatically generated by means of hierarchical clustering and dimensionality reduction. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,669 |
inproceedings | li-etal-2016-joint | Joint Embedding of Hierarchical Categories and Entities for Concept Categorization and Dataless Classification | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1252/ | Li, Yuezhang and Zheng, Ronghuo and Tian, Tian and Hu, Zhiting and Iyer, Rahul and Sycara, Katia | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2678--2688 | Existing work learning distributed representations of knowledge base entities has largely failed to incorporate rich categorical structure, and is unable to induce category representations. We propose a new framework that embeds entities and categories jointly into a semantic space, by integrating structured knowledge and taxonomy hierarchy from large knowledge bases. Our framework enables to compute meaningful semantic relatedness between entities and categories in a principled way, and can handle both single-word and multiple-word concepts. Our method shows significant improvement on the tasks of concept categorization and dataless hierarchical classification. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,670 |
inproceedings | jiang-etal-2016-latent | Latent Topic Embedding | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1253/ | Jiang, Di and Shi, Lei and Lian, Rongzhong and Wu, Hua | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2689--2698 | Topic modeling and word embedding are two important techniques for deriving latent semantics from data. General-purpose topic models typically work in coarse granularity by capturing word co-occurrence at the document/sentence level. In contrast, word embedding models usually work in much finer granularity by modeling word co-occurrence within small sliding windows. With the aim of deriving latent semantics by considering word co-occurrence at different levels of granularity, we propose a novel model named \textit{Latent Topic Embedding} (LTE), which seamlessly integrates topic generation and embedding learning in one unified framework. We further propose an efficient Monte Carlo EM algorithm to estimate the parameters of interest. By retaining the individual advantages of topic modeling and word embedding, LTE results in better latent topics and word embedding. Extensive experiments verify the superiority of LTE over the state-of-the-arts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,671 |
inproceedings | nguyen-etal-2016-neural | Neural-based Noise Filtering from Word Embeddings | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1254/ | Nguyen, Kim Anh and Schulte im Walde, Sabine and Vu, Ngoc Thang | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2699--2707 | Word embeddings have been demonstrated to benefit NLP tasks impressively. Yet, there is room for improvements in the vector representations, because current word embeddings typically contain unnecessary information, i.e., noise. We propose two novel models to improve word embeddings by unsupervised learning, in order to yield word denoising embeddings. The word denoising embeddings are obtained by strengthening salient information and weakening noise in the original word embeddings, based on a deep feed-forward neural network filter. Results from benchmark tasks show that the filtered word denoising embeddings outperform the original word embeddings. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,672 |
inproceedings | aga-etal-2016-integrating | Integrating Distributional and Lexical Information for Semantic Classification of Words using {MRMF} | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1255/ | Aga, Rosa Tsegaye and Drumond, Lucas and Wartena, Christian and Schmidt-Thieme, Lars | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2708--2717 | Semantic classification of words using distributional features is usually based on the semantic similarity of words. We show on two different datasets that a trained classifier using the distributional features directly gives better results. We use Support Vector Machines (SVM) and Multi-relational Matrix Factorization (MRMF) to train classifiers. Both give similar results. However, MRMF, that was not used for semantic classification with distributional features before, can easily be extended with more matrices containing more information from different sources on the same problem. We demonstrate the effectiveness of the novel approach by including information from WordNet. Thus we show, that MRMF provides an interesting approach for building semantic classifiers that (1) gives better results than unsupervised approaches based on vector similarity, (2) gives similar results as other supervised methods and (3) can naturally be extended with other sources of information in order to improve the results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,673 |
inproceedings | gonen-goldberg-2016-semi | Semi Supervised Preposition-Sense Disambiguation using Multilingual Data | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1256/ | Gonen, Hila and Goldberg, Yoav | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2718--2729 | Prepositions are very common and very ambiguous, and understanding their sense is critical for understanding the meaning of the sentence. Supervised corpora for the preposition-sense disambiguation task are small, suggesting a semi-supervised approach to the task. We show that signals from unannotated multilingual data can be used to improve supervised preposition-sense disambiguation. Our approach pre-trains an LSTM encoder for predicting the translation of a preposition, and then incorporates the pre-trained encoder as a component in a supervised classification system, and fine-tunes it for the task. The multilingual signals consistently improve results on two preposition-sense datasets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,674 |
inproceedings | van-hee-etal-2016-monday | {M}onday mornings are my fave :) {\#}not Exploring the Automatic Recognition of Irony in {E}nglish tweets | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1257/ | Van Hee, Cynthia and Lefever, Els and Hoste, V{\'e}ronique | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2730--2739 | Recognising and understanding irony is crucial for the improvement natural language processing tasks including sentiment analysis. In this study, we describe the construction of an English Twitter corpus and its annotation for irony based on a newly developed fine-grained annotation scheme. We also explore the feasibility of automatic irony recognition by exploiting a varied set of features including lexical, syntactic, sentiment and semantic (Word2Vec) information. Experiments on a held-out test set show that our irony classifier benefits from this combined information, yielding an F1-score of 67.66{\%}. When explicit hashtag information like {\#}irony is included in the data, the system even obtains an F1-score of 92.77{\%}. A qualitative analysis of the output reveals that recognising irony that results from a polarity clash appears to be (much) more feasible than recognising other forms of ironic utterances (e.g., descriptions of situational irony). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,675 |
inproceedings | guggilla-etal-2016-cnn | {CNN}- and {LSTM}-based Claim Classification in Online User Comments | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1258/ | Guggilla, Chinnappa and Miller, Tristan and Gurevych, Iryna | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2740--2751 | When processing arguments in online user interactive discourse, it is often necessary to determine their bases of support. In this paper, we describe a supervised approach, based on deep neural networks, for classifying the claims made in online arguments. We conduct experiments using convolutional neural networks (CNNs) and long short-term memory networks (LSTMs) on two claim data sets compiled from online user comments. Using different types of distributional word embeddings, but without incorporating any rich, expensive set of features, we achieve a significant improvement over the state of the art for one data set (which categorizes arguments as factual vs. emotional), and performance comparable to the state of the art on the other data set (which categorizes propositions according to their verifiability). Our approach has the advantages of using a generalized, simple, and effective methodology that works for claim categorization on different data sets and tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,676 |
inproceedings | peng-feldman-2016-experiments | Experiments in Idiom Recognition | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1259/ | Peng, Jing and Feldman, Anna | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2752--2761 | Some expressions can be ambiguous between idiomatic and literal interpretations depending on the context they occur in, e.g., {\textquoteleft}sales hit the roof' vs. {\textquoteleft}hit the roof of the car'. We present a novel method of classifying whether a given instance is literal or idiomatic, focusing on verb-noun constructions. We report state-of-the-art results on this task using an approach based on the hypothesis that the distributions of the contexts of the idiomatic phrases will be different from the contexts of the literal usages. We measure contexts by using projections of the words into vector space. For comparison, we implement Fazly et al. (2009)`s, Sporleder and Li (2009)`s, and Li and Sporleder (2010b)`s methods and apply them to our data. We provide experimental results validating the proposed techniques. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,677 |
inproceedings | laha-raykar-2016-empirical | An Empirical Evaluation of various Deep Learning Architectures for Bi-Sequence Classification Tasks | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1260/ | Laha, Anirban and Raykar, Vikas | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2762--2773 | Several tasks in argumentation mining and debating, question-answering, and natural language inference involve classifying a sequence in the context of another sequence (referred as bi-sequence classification). For several single sequence classification tasks, the current state-of-the-art approaches are based on recurrent and convolutional neural networks. On the other hand, for bi-sequence classification problems, there is not much understanding as to the best deep learning architecture. In this paper, we attempt to get an understanding of this category of problems by extensive empirical evaluation of 19 different deep learning architectures (specifically on different ways of handling context) for various problems originating in natural language processing like debating, textual entailment and question-answering. Following the empirical evaluation, we offer our insights and conclusions regarding the architectures we have considered. We also establish the first deep learning baselines for three argumentation mining tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,678 |
inproceedings | senuma-aizawa-2016-learning | Learning Succinct Models: Pipelined Compression with {L}1-Regularization, Hashing, {E}lias-{F}ano Indices, and Quantization | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1261/ | Senuma, Hajime and Aizawa, Akiko | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2774--2784 | The recent proliferation of smart devices necessitates methods to learn small-sized models. This paper demonstrates that if there are $m$ features in total but only $n = o(\sqrt{m})$ features are required to distinguish examples, with $\Omega(\log m)$ training examples and reasonable settings, it is possible to obtain a good model in a \textit{succinct} representation using $n \log_2 \frac{m}{n} + o(m)$ bits, by using a pipeline of existing compression methods: L1-regularized logistic regression, feature hashing, Elias{--}Fano indices, and randomized quantization. An experiment shows that a noun phrase chunking task for which an existing library requires 27 megabytes can be compressed to less than 13 \textit{kilo}bytes without notable loss of accuracy. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,679 |
inproceedings | hellrich-hahn-2016-bad | Bad {C}ompany{---}{N}eighborhoods in Neural Embedding Spaces Considered Harmful | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1262/ | Hellrich, Johannes and Hahn, Udo | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2785--2796 | We assess the reliability and accuracy of (neural) word embeddings for both modern and historical English and German. Our research provides deeper insights into the empirically justified choice of optimal training methods and parameters. The overall low reliability we observe, nevertheless, casts doubt on the suitability of word neighborhoods in embedding spaces as a basis for qualitative conclusions on synchronic and diachronic lexico-semantic matters, an issue currently high up in the agenda of Digital Humanities. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,680 |
inproceedings | thorat-choudhari-2016-implementing | Implementing a Reverse Dictionary, based on word definitions, using a Node-Graph Architecture | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1263/ | Thorat, Sushrut and Choudhari, Varad | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2797--2806 | In this paper, we outline an approach to build graph-based reverse dictionaries using word definitions. A reverse dictionary takes a phrase as an input and outputs a list of words semantically similar to that phrase. It is a solution to the Tip-of-the-Tongue problem. We use a distance-based similarity measure, computed on a graph, to assess the similarity between a word and the input phrase. We compare the performance of our approach with the Onelook Reverse Dictionary and a distributional semantics method based on word2vec, and show that our approach is much better than the distributional semantics method, and as good as Onelook, on a 3k lexicon. This simple approach sets a new performance baseline for reverse dictionaries. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,681 |
inproceedings | collell-moens-2016-image | Is an Image Worth More than a Thousand Words? On the Fine-Grain Semantic Differences between Visual and Linguistic Representations | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1264/ | Collell, Guillem and Moens, Marie-Francine | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2807--2817 | Human concept representations are often grounded with visual information, yet some aspects of meaning cannot be visually represented or are better described with language. Thus, vision and language provide complementary information that, properly combined, can potentially yield more complete concept representations. Recently, state-of-the-art distributional semantic models and convolutional neural networks have achieved great success in representing linguistic and visual knowledge respectively. In this paper, we compare both, visual and linguistic representations in their ability to capture different types of fine-grain semantic knowledge{---}or attributes{---}of concepts. Humans often describe objects using attributes, that is, properties such as shape, color or functionality, which often transcend the linguistic and visual modalities. In our setting, we evaluate how well attributes can be predicted by using the unimodal representations as inputs. We are interested in first, finding out whether attributes are generally better captured by either the vision or by the language modality; and second, if none of them is clearly superior (as we hypothesize), what type of attributes or semantic knowledge are better encoded from each modality. Ultimately, our study sheds light on the potential of combining visual and textual representations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,682 |
inproceedings | mirza-tonelli-2016-contribution | On the contribution of word embeddings to temporal relation classification | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1265/ | Mirza, Paramita and Tonelli, Sara | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2818--2828 | Temporal relation classification is a challenging task, especially when there are no explicit markers to characterise the relation between temporal entities. This occurs frequently in inter-sentential relations, whose entities are not connected via direct syntactic relations making classification even more difficult. In these cases, resorting to features that focus on the semantic content of the event words may be very beneficial for inferring implicit relations. Specifically, while morpho-syntactic and context features are considered sufficient for classifying event-timex pairs, we believe that exploiting distributional semantic information about event words can benefit supervised classification of other types of pairs. In this work, we assess the impact of using word embeddings as features for event words in classifying temporal relations of event-event pairs and event-DCT (document creation time) pairs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,683 |
inproceedings | inoue-etal-2016-modeling | Modeling Context-sensitive Selectional Preference with Distributed Representations | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1266/ | Inoue, Naoya and Matsubayashi, Yuichiroh and Ono, Masayuki and Okazaki, Naoaki and Inui, Kentaro | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2829--2838 | This paper proposes a novel problem setting of selectional preference (SP) between a predicate and its arguments, called as context-sensitive SP (CSP). CSP models the narrative consistency between the predicate and preceding contexts of its arguments, in addition to the conventional SP based on semantic types. Furthermore, we present a novel CSP model that extends the neural SP model (Van de Cruys, 2014) to incorporate contextual information into the distributed representations of arguments. Experimental results demonstrate that the proposed CSP model successfully learns CSP and outperforms the conventional SP model in coreference cluster ranking. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,684 |
inproceedings | petersen-hellwig-2016-exploring | Exploring the value space of attributes: Unsupervised bidirectional clustering of adjectives in {G}erman | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1267/ | Petersen, Wiebke and Hellwig, Oliver | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2839--2848 | The paper presents an iterative bidirectional clustering of adjectives and nouns based on a co-occurrence matrix. The clustering method combines a Vector Space Models (VSM) and the results of a Latent Dirichlet Allocation (LDA), whose results are merged in each iterative step. The aim is to derive a clustering of German adjectives that reflects latent semantic classes of adjectives, and that can be used to induce frame-based representations of nouns in a later step. We are able to show that the method induces meaningful groups of adjectives, and that it outperforms a baseline k-means algorithm. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,685 |
inproceedings | kartsaklis-sadrzadeh-2016-distributional | Distributional Inclusion Hypothesis for Tensor-based Composition | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1268/ | Kartsaklis, Dimitri and Sadrzadeh, Mehrnoosh | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2849--2860 | According to the distributional inclusion hypothesis, entailment between words can be measured via the feature inclusions of their distributional vectors. In recent work, we showed how this hypothesis can be extended from words to phrases and sentences in the setting of compositional distributional semantics. This paper focuses on inclusion properties of tensors; its main contribution is a theoretical and experimental analysis of how feature inclusion works in different concrete models of verb tensors. We present results for relational, Frobenius, projective, and holistic methods and compare them to the simple vector addition, multiplication, min, and max models. The degrees of entailment thus obtained are evaluated via a variety of existing word-based measures, such as Weed`s and Clarke`s, KL-divergence, APinc, balAPinc, and two of our previously proposed metrics at the phrase/sentence level. We perform experiments on three entailment datasets, investigating which version of tensor-based composition achieves the highest performance when combined with the sentence-level measures. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,686 |
inproceedings | maki-etal-2016-parameter | Parameter estimation of {J}apanese predicate argument structure analysis model using eye gaze information | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1269/ | Maki, Ryosuke and Nishikawa, Hitoshi and Tokunaga, Takenobu | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2861--2869 | In this paper, we propose utilising eye gaze information for estimating parameters of a Japanese predicate argument structure (PAS) analysis model. We employ not only linguistic information in the text, but also the information of annotator eye gaze during their annotation process. We hypothesise that annotator`s frequent looks at certain candidates imply their plausibility of being the argument of the predicate. Based on this hypothesis, we consider annotator eye gaze for estimating the model parameters of the PAS analysis. The evaluation experiment showed that introducing eye gaze information increased the accuracy of the PAS analysis by 0.05 compared with the conventional methods. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,687 |
inproceedings | sha-etal-2016-reading | Reading and Thinking: Re-read {LSTM} Unit for Textual Entailment Recognition | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1270/ | Sha, Lei and Chang, Baobao and Sui, Zhifang and Li, Sujian | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2870--2879 | Recognizing Textual Entailment (RTE) is a fundamentally important task in natural language processing that has many applications. The recently released Stanford Natural Language Inference (SNLI) corpus has made it possible to develop and evaluate deep neural network methods for the RTE task. Previous neural network based methods usually try to encode the two sentences (premise and hypothesis) and send them together into a multi-layer perceptron to get their entailment type, or use an LSTM-RNN to link the two sentences together while using an attention mechanism to enhance the model`s ability. In this paper, we propose to use a re-read mechanism, which means to read the premise again and again while reading the hypothesis. After reading the premise again, the model can get a better understanding of the premise, which can also affect the understanding of the hypothesis. Conversely, a better understanding of the hypothesis can also affect the understanding of the premise. With this alternating re-read process, the model can {\textquotedblleft}think{\textquotedblright} its way to a better decision on the entailment type. We designed a new LSTM unit called re-read LSTM (rLSTM) to implement this {\textquotedblleft}thinking{\textquotedblright} process. Experiments show that we achieve results better than current state-of-the-art equivalents. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,688
inproceedings | dey-etal-2016-paraphrase | A Paraphrase and Semantic Similarity Detection System for User Generated Short-Text Content on Microblogs | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1271/ | Dey, Kuntal and Shrivastava, Ritvik and Kaushik, Saroj | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2880--2890 | Existing systems deliver high accuracy and F1-scores for detecting paraphrase and semantic similarity on traditional clean-text corpora. For instance, on the clean-text Microsoft Paraphrase benchmark database, the existing systems attain an accuracy as high as 0.8596. However, existing systems for detecting paraphrases and semantic similarity on user-generated short-text content on microblogs such as Twitter, comprising noisy and ad hoc short-text, need significant research attention. In this paper, we propose a machine learning based approach towards this. We propose a set of features that, although well-known in the NLP literature for solving other problems, have not been explored for detecting paraphrase or semantic similarity, on noisy user-generated short-text data such as Twitter. We apply support vector machine (SVM) based learning. We use the benchmark Twitter paraphrase data, released as a part of SemEval 2015, for experiments. Our system delivers a paraphrase detection F1-score of 0.717 and a semantic similarity detection F1-score of 0.741, thereby significantly outperforming the existing systems, which deliver F1-scores of 0.696 and 0.724 for the two problems respectively. Our features also allow us to obtain a rank among the top-10 when trained on the Microsoft Paraphrase corpus and tested on the corresponding test data, thereby empirically establishing our approach as generalizable across the different paraphrase detection databases. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,689
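A hedged sketch of the general recipe above, SVM learning over pairwise features, using a few generic lexical-overlap features and toy tweet pairs; the paper's actual feature set differs:

```python
# Illustrative SVM setup for short-text paraphrase detection; features and
# data here are toy assumptions, not the paper's exact configuration.
import numpy as np
from sklearn.svm import SVC

def features(s1, s2):
    t1, t2 = set(s1.lower().split()), set(s2.lower().split())
    inter = len(t1 & t2)
    return [
        inter / max(len(t1), 1),          # overlap relative to first tweet
        inter / max(len(t2), 1),          # overlap relative to second tweet
        inter / max(len(t1 | t2), 1),     # Jaccard similarity
        abs(len(t1) - len(t2)),           # length difference
    ]

pairs = [("obama wins the debate", "the debate was won by obama", 1),
         ("obama wins the debate", "raining hard in osaka today", 0),
         ("new iphone looks great", "the new iphone is gorgeous", 1),
         ("new iphone looks great", "my cat knocked over a plant", 0)]
X = np.array([features(a, b) for a, b, _ in pairs])
y = np.array([lbl for _, _, lbl in pairs])
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([features("obama won the debate tonight", "debate won by obama")]))
```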
inproceedings | levy-etal-2016-modeling | Modeling Extractive Sentence Intersection via Subtree Entailment | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1272/ | Levy, Omer and Dagan, Ido and Stanovsky, Gabriel and Eckle-Kohler, Judith and Gurevych, Iryna | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2891--2901 | Sentence intersection captures the semantic overlap of two texts, generalizing over paradigms such as textual entailment and semantic text similarity. Despite its modeling power, it has received little attention because it is difficult for non-experts to annotate. We analyze 200 pairs of similar sentences and identify several underlying properties of sentence intersection. We leverage these insights to design an algorithm that decomposes the sentence intersection task into several simpler annotation tasks, facilitating the construction of a high quality dataset via crowdsourcing. We implement this approach and provide an annotated dataset of 1,764 sentence intersections. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,690 |
inproceedings | han-sun-2016-context | Context-Sensitive Inference Rule Discovery: A Graph-Based Method | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1273/ | Han, Xianpei and Sun, Le | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2902--2911 | Inference rule discovery aims to identify entailment relations between predicates, e.g., {\textquoteleft}X acquire Y {--}{\ensuremath{>}} X purchase Y' and {\textquoteleft}X is author of Y {--}{\ensuremath{>}} X write Y'. Traditional methods discover inference rules by computing distributional similarities between predicates, where each predicate is represented as one or more feature vectors of its instantiations. These methods, however, have two main drawbacks. Firstly, these methods are mostly context-insensitive and cannot accurately measure the similarity between two predicates in a specific context. Secondly, traditional methods usually model predicates independently, ignoring the rich inter-dependencies between predicates. To address the above two issues, this paper proposes a graph-based method, which can discover inference rules by effectively modelling and exploiting both the context and the inter-dependencies between predicates. Specifically, we propose a graph-based representation{---}Predicate Graph, which can capture the semantic relevance between predicates using both the predicate-feature co-occurrence statistics and the inter-dependencies between predicates. Based on the predicate graph, we propose a context-sensitive random walk algorithm, which can learn context-specific predicate representations by distinguishing context-relevant information from context-irrelevant information. Experimental results show that our method significantly outperforms traditional inference rule discovery methods. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,691
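To make the random-walk component concrete, here is a minimal random walk with restart over a tiny hypothetical predicate graph; the predicates, edge weights, and restart probability are invented for illustration:

```python
# Sketch of a random walk with restart over a small predicate graph.
import numpy as np

preds = ["acquire", "purchase", "buy", "write", "author"]
# Symmetric relevance weights between predicates (hypothetical values).
W = np.array([
    [0, 5, 4, 0, 0],
    [5, 0, 6, 0, 0],
    [4, 6, 0, 1, 0],
    [0, 0, 1, 0, 7],
    [0, 0, 0, 7, 0],
], dtype=float)
P = W / W.sum(axis=1, keepdims=True)   # row-stochastic transition matrix

def random_walk(seed_idx, alpha=0.85, iters=50):
    restart = np.zeros(len(preds)); restart[seed_idx] = 1.0
    r = restart.copy()
    for _ in range(iters):
        r = alpha * (P.T @ r) + (1 - alpha) * restart
    return r

scores = random_walk(preds.index("acquire"))
for p, s in sorted(zip(preds, scores), key=lambda t: -t[1]):
    print(f"{p}: {s:.3f}")   # acquisition-related predicates rank highest
```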
inproceedings | zhou-etal-2016-modelling | Modelling Sentence Pairs with Tree-structured Attentive Encoder | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1274/ | Zhou, Yao and Liu, Cong and Pan, Yan | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2912--2922 | We describe an attentive encoder that combines tree-structured recursive neural networks and sequential recurrent neural networks for modelling sentence pairs. Since existing attentive models exert attention on the sequential structure, we propose a way to incorporate attention into the tree topology. Specifically, given a pair of sentences, our attentive encoder uses the representation of one sentence, which is generated via an RNN, to guide the structural encoding of the other sentence on the dependency parse tree. We evaluate the proposed attentive encoder on three tasks: semantic similarity, paraphrase identification and true-false question selection. Experimental results show that our encoder outperforms all baselines and achieves state-of-the-art results on two tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,692
inproceedings | prakash-etal-2016-neural | Neural Paraphrase Generation with Stacked Residual {LSTM} Networks | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1275/ | Prakash, Aaditya and Hasan, Sadid A. and Lee, Kathy and Datla, Vivek and Qadir, Ashequl and Liu, Joey and Farri, Oladimeji | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2923--2934 | In this paper, we propose a novel neural approach for paraphrase generation. Conventional paraphrase generation methods either leverage hand-written rules and thesauri-based alignments, or use statistical machine learning principles. To the best of our knowledge, this work is the first to explore deep learning models for paraphrase generation. Our primary contribution is a stacked residual LSTM network, where we add residual connections between LSTM layers. This allows for efficient training of deep LSTMs. We evaluate our model and other state-of-the-art deep learning models on three different datasets: PPDB, WikiAnswers, and MSCOCO. Evaluation results demonstrate that our model outperforms sequence to sequence, attention-based, and bi-directional LSTM models on BLEU, METEOR, TER, and an embedding-based sentence similarity metric. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,693 |
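A compact PyTorch reading of the core architectural idea above, residual connections between stacked LSTM layers; the dimensions are arbitrary and the full sequence-to-sequence machinery of the paper is omitted:

```python
# Minimal sketch of stacking LSTM layers with residual connections between
# them, which eases the training of deep LSTM stacks.
import torch
import torch.nn as nn

class StackedResidualLSTM(nn.Module):
    def __init__(self, dim, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.LSTM(dim, dim, batch_first=True) for _ in range(num_layers)]
        )

    def forward(self, x):
        for lstm in self.layers:
            out, _ = lstm(x)
            x = x + out        # residual connection between LSTM layers
        return x

model = StackedResidualLSTM(dim=64)
tokens = torch.randn(2, 10, 64)      # (batch, time, features)
print(model(tokens).shape)           # torch.Size([2, 10, 64])
```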
inproceedings | feng-etal-2016-english | {E}nglish-{C}hinese Knowledge Base Translation with Neural Network | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1276/ | Feng, Xiaocheng and Tang, Duyu and Qin, Bing and Liu, Ting | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2935--2944 | Knowledge base (KB) such as Freebase plays an important role for many natural language processing tasks. English knowledge base is obviously larger and of higher quality than low resource language like Chinese. To expand Chinese KB by leveraging English KB resources, an effective way is to translate English KB (source) into Chinese (target). In this direction, two major challenges are to model triple semantics and to build a robust KB translator. We address these challenges by presenting a neural network approach, which learns continuous triple representation with a gated neural network. Accordingly, source triples and target triples are mapped in the same semantic vector space. We build a new dataset for English-Chinese KB translation from Freebase, and compare with several baselines on it. Experimental results show that the proposed method improves translation accuracy compared with baseline methods. We show that adaptive composition model improves standard solution such as neural tensor network in terms of translation accuracy. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,694 |
inproceedings | bougouin-etal-2016-keyphrase | Keyphrase Annotation with Graph Co-Ranking | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1277/ | Bougouin, Adrien and Boudin, Florian and Daille, B{\'e}atrice | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2945--2955 | Keyphrase annotation is the task of identifying textual units that represent the main content of a document. Keyphrase annotation is either carried out by extracting the most important phrases from a document, keyphrase extraction, or by assigning entries from a controlled domain-specific vocabulary, keyphrase assignment. Assignment methods are generally more reliable. They provide better-formed keyphrases, as well as keyphrases that do not occur in the document. But they are often silent on the contrary of extraction methods that do not depend on manually built resources. This paper proposes a new method to perform both keyphrase extraction and keyphrase assignment in an integrated and mutual reinforcing manner. Experiments have been carried out on datasets covering different domains of humanities and social sciences. They show statistically significant improvements compared to both keyphrase extraction and keyphrase assignment state-of-the art methods. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,695 |
inproceedings | jansen-etal-2016-whats | What`s in an Explanation? Characterizing Knowledge and Inference Requirements for Elementary Science Exams | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1278/ | Jansen, Peter and Balasubramanian, Niranjan and Surdeanu, Mihai and Clark, Peter | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2956--2965 | QA systems have been making steady advances in the challenging elementary science exam domain. In this work, we develop an explanation-based analysis of knowledge and inference requirements, which supports a fine-grained characterization of the challenges. In particular, we model the requirements based on appropriate sources of evidence to be used for the QA task. We create requirements by first identifying suitable sentences in a knowledge base that support the correct answer, then use these to build explanations, filling in any necessary missing information. These explanations are used to create a fine-grained categorization of the requirements. Using these requirements, we compare a retrieval and an inference solver on 212 questions. The analysis validates the gains of the inference solver, demonstrating that it answers more questions requiring complex inference, while also providing insights into the relative strengths of the solvers and knowledge sources. We release the annotated questions and explanations as a resource with broad utility for science exam QA, including determining knowledge base construction targets, as well as supporting information aggregation in automated inference. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,696 |
inproceedings | johnson-goldwasser-2016-know | {\textquotedblleft}All {I} know about politics is what {I} read in {T}witter{\textquotedblright}: Weakly Supervised Models for Extracting Politicians' Stances From {T}witter | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1279/ | Johnson, Kristen and Goldwasser, Dan | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2966--2977 | During the 2016 United States presidential election, politicians have increasingly used Twitter to express their beliefs, stances on current political issues, and reactions concerning national and international events. Given the limited length of tweets and the scrutiny politicians face for what they choose or neglect to say, they must craft and time their tweets carefully. The content and delivery of these tweets is therefore highly indicative of a politician`s stances. We present a weakly supervised method for extracting how issues are framed and temporal activity patterns on Twitter for popular politicians and issues of the 2016 election. These behavioral components are combined into a global model which collectively infers the most likely stance and agreement patterns among politicians, with respective accuracies of 86.44{\%} and 84.6{\%} on average. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,697 |
inproceedings | yang-etal-2016-leveraging | Leveraging Multiple Domains for Sentiment Classification | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1280/ | Yang, Fan and Mukherjee, Arjun and Zhang, Yifan | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2978--2988 | Sentiment classification becomes more and more important with the rapid growth of user generated content. However, the sentiment classification task usually comes with two challenges: first, sentiment classification is highly domain-dependent, and training a sentiment classifier for every domain is inefficient and often impractical; second, since the quantity of labeled data is important for assessing the quality of a classifier, it is hard to evaluate classifiers when labeled data is limited for certain domains. To address the challenges mentioned above, we focus on learning high-level features that are able to generalize across domains, so a global classifier can benefit from a simple combination of documents from multiple domains. In this paper, the proposed model incorporates both sentiment polarity and unlabeled data from multiple domains and learns new feature representations. Our model doesn`t require labels from every domain, which means the learned feature representation can be generalized for sentiment domain adaptation. In addition, the learned feature representation can be used as a classifier, since our model defines the meaning of feature values and arranges high-level features in a prefixed order, so it is not necessary to train another classifier on top of the new features. Empirical evaluations demonstrate that our model outperforms baselines and yields competitive results to other state-of-the-art works on benchmark datasets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,698
inproceedings | bakken-etal-2016-political | Political News Sentiment Analysis for Under-resourced Languages | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1281/ | Bakken, Patrik F. and Bratlie, Terje A. and Marco, Cristina and Gulla, Jon Atle | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2989--2996 | This paper presents classification results for the analysis of sentiment in political news articles. The domain of political news is particularly challenging, as journalists are presumably objective, whilst at the same time opinions can be subtly expressed. To deal with this challenge, in this work we build a two-step classification model, distinguishing first subjective texts and second positive and negative sentiment texts. More specifically, we propose a shallow machine learning approach where only minimal features are needed to train the classifier, including sentiment-bearing Co-Occurring Terms (COTs) and negation words. This approach yields close to state-of-the-art results. Contrary to results in other domains, the use of negations as features does not have a positive impact on the evaluation results. This method is particularly suited for languages that suffer from a lack of resources, such as sentiment lexicons or parsers, and for those systems that need to function in real-time. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,699
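An illustrative sketch of the two-step scheme described above, subjectivity filtering followed by polarity classification; the training snippets, features, and classifiers are placeholders rather than the paper's setup:

```python
# Toy sketch of two-step sentiment classification: first filter subjective
# sentences, then classify the polarity of those that pass the filter.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

subj_texts = ["the bill is a disaster", "parliament met on monday",
              "a brilliant reform", "the vote is scheduled for june"]
subj_labels = [1, 0, 1, 0]                     # 1 = subjective
pol_texts = ["the bill is a disaster", "a brilliant reform"]
pol_labels = [0, 1]                            # 0 = negative, 1 = positive

subj_clf = make_pipeline(CountVectorizer(), LogisticRegression()).fit(subj_texts, subj_labels)
pol_clf = make_pipeline(CountVectorizer(), LogisticRegression()).fit(pol_texts, pol_labels)

def classify(sentence):
    if subj_clf.predict([sentence])[0] == 0:
        return "objective"
    return "positive" if pol_clf.predict([sentence])[0] == 1 else "negative"

print(classify("a disaster of a reform"))
```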
inproceedings | lund-etal-2016-fast | Fast Inference for Interactive Models of Text | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1282/ | Lund, Jeffrey and Felt, Paul and Seppi, Kevin and Ringger, Eric | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 2997--3006 | Probabilistic models are a useful means for analyzing large text corpora. Integrating such models with human interaction enables many new use cases. However, adding human interaction to probabilistic models requires inference algorithms which are both fast and accurate. We explore the use of Iterated Conditional Modes as a fast alternative to Gibbs sampling or variational EM. We demonstrate superior performance both in run time and model quality on three different models of text including a DP Mixture of Multinomials for web search result clustering, the Interactive Topic Model, and MomResp, a multinomial crowdsourcing model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,700
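A toy sketch of Iterated Conditional Modes for a (non-DP) mixture of multinomials: each document's cluster is repeatedly set to its single most probable value instead of being sampled, which is the source of the speed-up. The data, smoothing, and fixed cluster count are assumptions:

```python
# ICM for a simple mixture of multinomials over toy word-count vectors.
import numpy as np

rng = np.random.default_rng(0)
docs = np.array([[8, 1, 0], [7, 2, 1], [0, 1, 9], [1, 0, 8]])  # word counts
K, V = 2, docs.shape[1]
z = rng.integers(K, size=len(docs))            # initial cluster assignments

for _ in range(20):
    changed = False
    for d in range(len(docs)):
        best, best_lp = z[d], -np.inf
        for k in range(K):
            # Cluster counts excluding document d itself.
            counts = docs[z == k].sum(axis=0) - (docs[d] if z[d] == k else 0)
            theta = (counts + 1.0) / (counts.sum() + V)     # smoothed estimate
            lp = (docs[d] * np.log(theta)).sum()            # log-likelihood of d
            if lp > best_lp:
                best, best_lp = k, lp
        changed |= best != z[d]
        z[d] = best                                         # take the mode, not a sample
    if not changed:
        break
print(z)
```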
inproceedings | tsakalidis-etal-2016-combining | Combining Heterogeneous User Generated Data to Sense Well-being | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1283/ | Tsakalidis, Adam and Liakata, Maria and Damoulas, Theo and Jellinek, Brigitte and Guo, Weisi and Cristea, Alexandra | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3007--3018 | In this paper we address a new problem of predicting affect and well-being scales in a real-world setting of heterogeneous, longitudinal and non-synchronous textual as well as non-linguistic data that can be harvested from on-line media and mobile phones. We describe the method for collecting the heterogeneous longitudinal data, how features are extracted to address missing information and differences in temporal alignment, and how the latter are combined to yield promising predictions of affect and well-being on the basis of widely used psychological scales. We achieve a coefficient of determination ($R^2$) of 0.71-0.76 and a correlation coefficient of 0.68-0.87 which is higher than the state-of-the art in equivalent multi-modal tasks for affect. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,701 |
inproceedings | li-etal-2016-hashtag | Hashtag Recommendation with Topical Attention-Based {LSTM} | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1284/ | Li, Yang and Liu, Ting and Jiang, Jing and Zhang, Liang | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3019--3029 | Microblogging services allow users to create hashtags to categorize their posts. In recent years, the task of recommending hashtags for microblogs has been given increasing attention. However, most existing methods depend on hand-crafted features. Motivated by the successful use of long short-term memory (LSTM) for many natural language processing tasks, in this paper, we adopt LSTM to learn the representation of a microblog post. Observing that hashtags indicate the primary topics of microblog posts, we propose a novel attention-based LSTM model which incorporates topic modeling into the LSTM architecture through an attention mechanism. We evaluate our model using a large real-world dataset. Experimental results show that our model significantly outperforms various competitive baseline methods. Furthermore, the incorporation of the topical attention mechanism gives more than 7.4{\%} improvement in F1 score compared with the standard LSTM method. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,702
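One plausible reading of topic-conditioned attention over LSTM states, in PyTorch: a topic vector scores each hidden state, and the attention-weighted sum feeds the hashtag classifier. The scoring form and all dimensions are assumptions, not the paper's exact parameterisation:

```python
# Sketch of topical attention over LSTM hidden states for hashtag prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopicalAttentionLSTM(nn.Module):
    def __init__(self, emb_dim, hid_dim, topic_dim, num_tags):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.score = nn.Linear(hid_dim + topic_dim, 1)
        self.out = nn.Linear(hid_dim, num_tags)

    def forward(self, x, topic):
        h, _ = self.lstm(x)                               # (B, T, H)
        t = topic.unsqueeze(1).expand(-1, h.size(1), -1)  # broadcast topic over time
        a = F.softmax(self.score(torch.cat([h, t], -1)).squeeze(-1), dim=1)
        context = (a.unsqueeze(-1) * h).sum(dim=1)        # attention-weighted sum
        return self.out(context)

model = TopicalAttentionLSTM(emb_dim=32, hid_dim=64, topic_dim=16, num_tags=100)
logits = model(torch.randn(4, 12, 32), torch.randn(4, 16))
print(logits.shape)   # torch.Size([4, 100])
```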
inproceedings | kordjamshidi-etal-2016-better | Better call {S}aul: Flexible Programming for Learning and Inference in {NLP} | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1285/ | Kordjamshidi, Parisa and Khashabi, Daniel and Christodoulopoulos, Christos and Mangipudi, Bhargav and Singh, Sameer and Roth, Dan | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3030--3040 | We present a novel way for designing complex joint inference and learning models using Saul (Kordjamshidi et al., 2015), a recently-introduced declarative learning-based programming language (DeLBP). We enrich Saul with components that are necessary for a broad range of learning based Natural Language Processing tasks at various levels of granularity. We illustrate these advances using three different, well-known NLP problems, and show how these generic learning and inference modules can directly exploit Saul`s graph-based data representation. These properties allow the programmer to easily switch between different model formulations and configurations, and consider various kinds of dependencies and correlations among variables of interest with minimal programming effort. We argue that Saul provides an extremely useful paradigm both for the design of advanced NLP systems and for supporting advanced research in NLP. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,703 |
inproceedings | guillaume-etal-2016-crowdsourcing | Crowdsourcing Complex Language Resources: Playing to Annotate Dependency Syntax | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1286/ | Guillaume, Bruno and Fort, Kar{\"e}n and Lefebvre, Nicolas | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3041--3052 | This article presents the results we obtained on a complex annotation task (that of dependency syntax) using a specifically designed Game with a Purpose, ZombiLingo. We show that with suitable mechanisms (decomposition of the task, training of the players and regular control of the annotation quality during the game), it is possible to obtain annotations whose quality is significantly higher than that obtainable with a parser, provided that enough players participate. The source code of the game and the resulting annotated corpora (for French) are freely available. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,704
inproceedings | singhal-bhattacharyya-2016-borrow | Borrow a Little from your Rich Cousin: Using Embeddings and Polarities of {E}nglish Words for Multilingual Sentiment Classification | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1287/ | Singhal, Prerana and Bhattacharyya, Pushpak | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3053--3062 | In this paper, we provide a solution to multilingual sentiment classification using deep learning. Given input text in a language, we use word translation into English and then the embeddings of these English words to train a classifier. This projection into the English space plus word embeddings gives a simple and uniform framework for multilingual sentiment analysis. A novel idea is augmentation of the training data with polar words, appearing in these sentences, along with their polarities. This approach leads to a performance gain of 7-10{\%} over traditional classifiers on many languages, irrespective of text genre, despite the scarcity of resources in most languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,705 |
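A minimal sketch of the projection pipeline described above: translate each source word into English via a lexicon, average pre-trained English embeddings, and train a single classifier. The toy Hindi-like lexicon and the two-dimensional embeddings are stand-ins for real resources:

```python
# Sketch of multilingual sentiment classification via word translation into
# English plus averaged English word embeddings (all data here is toy).
import numpy as np
from sklearn.linear_model import LogisticRegression

lexicon = {"bahut": "very", "accha": "good", "bura": "bad", "khana": "food"}
embeddings = {"very": np.array([0.9, 0.1]), "good": np.array([0.8, 0.9]),
              "bad": np.array([0.1, -0.8]), "food": np.array([0.2, 0.0])}

def embed(sentence):
    # Translate each known word and average the English embeddings.
    vecs = [embeddings[lexicon[w]] for w in sentence.split() if w in lexicon]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

train = [("bahut accha khana", 1), ("bahut bura khana", 0)]
X = np.array([embed(s) for s, _ in train])
y = np.array([lbl for _, lbl in train])
clf = LogisticRegression().fit(X, y)
print(clf.predict([embed("accha khana")]))   # likely predicts 1 (positive)
```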
inproceedings | yang-etal-2016-character | A Character-Aware Encoder for Neural Machine Translation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1288/ | Yang, Zhen and Chen, Wei and Wang, Feng and Xu, Bo | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3063--3070 | This article proposes a novel character-aware neural machine translation (NMT) model that views the input sequences as sequences of characters rather than words. Using row convolution (Amodei et al., 2015), the encoder of the proposed model automatically composes word-level information from the input sequences of characters. Since our model doesn`t rely on the boundaries between words (such as the whitespace boundaries in English), it can also be applied to languages without explicit word segmentation (like Chinese). Experimental results on Chinese-English translation tasks show that the proposed character-aware NMT model can achieve translation performance comparable to traditional word-based NMT models. Although the target side is still word-based, the proposed model is able to generate far fewer unknown words. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,706
inproceedings | su-etal-2016-convolution | Convolution-Enhanced Bilingual Recursive Neural Network for Bilingual Semantic Modeling | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1289/ | Su, Jinsong and Zhang, Biao and Xiong, Deyi and Li, Ruochen and Yin, Jianmin | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3071--3081 | Estimating similarities at different levels of linguistic units, such as words, sub-phrases and phrases, is helpful for measuring semantic similarity of an entire bilingual phrase. In this paper, we propose a convolution-enhanced bilingual recursive neural network (ConvBRNN), which not only exploits word alignments to guide the generation of phrase structures but also integrates multiple-level information of the generated phrase structures into bilingual semantic modeling. In order to accurately learn the semantic hierarchy of a bilingual phrase, we develop a recursive neural network to constrain the learned bilingual phrase structures to be consistent with word alignments. Upon the generated source and target phrase structures, we stack a convolutional neural network to integrate vector representations of linguistic units on the structures into bilingual phrase embeddings. After that, we fully incorporate information of different linguistic units into a bilinear semantic similarity model. We introduce two max-margin losses to train the ConvBRNN model: one for the phrase structure inference and the other for the semantic similarity model. Experiments on NIST Chinese-English translation tasks demonstrate the high quality of the generated bilingual phrase structures with respect to word alignments and the effectiveness of learned semantic similarities on machine translation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,707 |
inproceedings | feng-etal-2016-improving | Improving Attention Modeling with Implicit Distortion and Fertility for Machine Translation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1290/ | Feng, Shi and Liu, Shujie and Yang, Nan and Li, Mu and Zhou, Ming and Zhu, Kenny Q. | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3082--3092 | In neural machine translation, the attention mechanism facilitates the translation process by producing a soft alignment between the source sentence and the target sentence. However, without dedicated distortion and fertility models seen in traditional SMT systems, the learned alignment may not be accurate, which can lead to low translation quality. In this paper, we propose two novel models to improve attention-based neural machine translation. We propose a recurrent attention mechanism as an implicit distortion model, and a fertility conditioned decoder as an implicit fertility model. We conduct experiments on large-scale Chinese{--}English translation tasks. The results show that our models significantly improve both the alignment and translation quality compared to the original attention mechanism and several other variations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,708 |
inproceedings | liu-etal-2016-neural | Neural Machine Translation with Supervised Attention | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1291/ | Liu, Lemao and Utiyama, Masao and Finch, Andrew and Sumita, Eiichiro | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3093--3102 | The attention mechanism is appealing for neural machine translation, since it is able to dynamically encode a source sentence by generating an alignment between a target word and source words. Unfortunately, it has been shown to be worse than conventional alignment models in alignment accuracy. In this paper, we analyze and explain this issue from the point of view of reordering, and propose a supervised attention which is learned with guidance from conventional alignment models. Experiments on two Chinese-to-English translation tasks show that the supervised attention mechanism yields better alignments, leading to substantial gains over the standard attention-based NMT. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,709
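A small PyTorch sketch of the supervision idea above: alongside the usual translation loss, add a term penalising divergence between the model's attention weights and an alignment distribution derived from a conventional aligner. The shapes, the gold distribution, and the interpolation weight are assumptions:

```python
# Sketch of a supervised-attention objective: translation loss plus a
# cross-entropy penalty between model attention and gold alignments.
import torch
import torch.nn.functional as F

B, T_tgt, T_src = 2, 5, 7
attn = torch.softmax(torch.randn(B, T_tgt, T_src), dim=-1)   # model attention
gold = torch.softmax(torch.randn(B, T_tgt, T_src), dim=-1)   # aligner-derived (placeholder)
translation_loss = torch.tensor(3.2)                          # placeholder NLL

# Cross-entropy of the model's attention w.r.t. the gold alignment distribution.
attention_loss = -(gold * torch.log(attn + 1e-9)).sum(-1).mean()
loss = translation_loss + 0.5 * attention_loss                # joint objective
print(loss.item())
```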
inproceedings | sperber-etal-2016-lightly | Lightly Supervised Quality Estimation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1292/ | Sperber, Matthias and Neubig, Graham and Niehues, Jan and St{\"u}ker, Sebastian and Waibel, Alex | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3103--3113 | Evaluating the quality of output from language processing systems such as machine translation or speech recognition is an essential step in ensuring that they are sufficient for practical use. However, depending on the practical requirements, evaluation approaches can differ strongly. Often, reference-based evaluation measures (such as BLEU or WER) are appealing because they are cheap and allow rapid quantitative comparison. On the other hand, practitioners often focus on manual evaluation because they must deal with frequently changing domains and quality standards requested by customers, for which reference-based evaluation is insufficient or not possible due to missing in-domain reference data (Harris et al., 2016). In this paper, we attempt to bridge this gap by proposing a framework for lightly supervised quality estimation. We collect manually annotated scores for a small number of segments in a test corpus or document, and combine them with automatically predicted quality scores for the remaining segments to predict an overall quality estimate. An evaluation shows that our framework estimates quality more reliably than using fully automatic quality estimation approaches, while keeping annotation effort low by not requiring full references to be available for the particular domain. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,710
inproceedings | tang-etal-2016-improving-translation | Improving Translation Selection with Supersenses | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1293/ | Tang, Haiqing and Xiong, Deyi and Lopez de Lacalle, Oier and Agirre, Eneko | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3114--3123 | Selecting appropriate translations for source words with multiple meanings still remains a challenge for statistical machine translation (SMT). One reason for this is that most SMT systems are not good at detecting the proper sense for a polysemic word when it appears in different contexts. In this paper, we adopt a supersense tagging method to annotate source words with coarse-grained ontological concepts. In order to enable the system to choose an appropriate translation for a word or phrase according to the annotated supersense of the word or phrase, we propose two translation models with supersense knowledge: a maximum entropy based model and a supersense embedding model. The effectiveness of our proposed models is validated on a large-scale English-to-Spanish translation task. Results indicate that our method can significantly improve translation quality via correctly conveying the meaning of the source language to the target language. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,711 |
inproceedings | graham-etal-2016-glitters | Is all that Glitters in Machine Translation Quality Estimation really Gold? | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1294/ | Graham, Yvette and Baldwin, Timothy and Dowling, Meghan and Eskevich, Maria and Lynn, Teresa and Tounsi, Lamia | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3124--3134 | Human-targeted metrics provide a compromise between human evaluation of machine translation, where high inter-annotator agreement is difficult to achieve, and fully automatic metrics, such as BLEU or TER, that lack the validity of human assessment. Human-targeted translation edit rate (HTER) is by far the most widely employed human-targeted metric in machine translation, commonly employed, for example, as a gold standard in evaluation of quality estimation. Original experiments justifying the design of HTER, as opposed to other possible formulations, were limited to a small sample of translations and a single language pair, however, and this motivates our re-evaluation of a range of human-targeted metrics on a substantially larger scale. Results show significantly stronger correlation with human judgment for HBLEU over HTER for two of the nine language pairs we include and no significant difference between correlations achieved by HTER and HBLEU for the remaining language pairs. Finally, we evaluate a range of quality estimation systems employing HTER and direct assessment (DA) of translation adequacy as gold labels, resulting in a divergence in system rankings, and propose employment of DA for future quality estimation evaluations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,712 |
inproceedings | wang-etal-2016-connecting | Connecting Phrase based Statistical Machine Translation Adaptation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1295/ | Wang, Rui and Zhao, Hai and Lu, Bao-Liang and Utiyama, Masao and Sumita, Eiichiro | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3135--3145 | Although more additional corpora are now available for Statistical Machine Translation (SMT), only the ones which belong to the same or similar domains of the original corpus can indeed enhance SMT performance directly. A series of SMT adaptation methods have been proposed to select these similar-domain data, and most of them focus on sentence selection. In comparison, phrase is a smaller and more fine grained unit for data selection, therefore we propose a straightforward and efficient connecting phrase based adaptation method, which is applied to both bilingual phrase pair and monolingual n-gram adaptation. The proposed method is evaluated on IWSLT/NIST data sets, and the results show that phrase based SMT performances are significantly improved (up to +1.6 in comparison with phrase based SMT baseline system and +0.9 in comparison with existing methods). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,713 |
inproceedings | schulz-aziz-2016-fast | Fast Collocation-Based {B}ayesian {HMM} Word Alignment | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1296/ | Schulz, Philip and Aziz, Wilker | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3146--3155 | We present a new Bayesian HMM word alignment model for statistical machine translation. The model is a mixture of an alignment model and a language model. The alignment component is a Bayesian extension of the standard HMM. The language model component is responsible for the generation of words needed for source fluency reasons from source language context. This allows for untranslatable source words to remain unaligned and at the same time avoids the introduction of artificial NULL words which introduces unusually long alignment jumps. Existing Bayesian word alignment models are unpractically slow because they consider each target position when resampling a given alignment link. The sampling complexity therefore grows linearly in the target sentence length. In order to make our model useful in practice, we devise an auxiliary variable Gibbs sampler that allows us to resample alignment links in constant time independently of the target sentence length. This leads to considerable speed improvements. Experimental results show that our model performs as well as existing word alignment toolkits in terms of resulting BLEU score. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,714 |
inproceedings | jehl-riezler-2016-learning | Learning to translate from graded and negative relevance information | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1297/ | Jehl, Laura and Riezler, Stefan | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3156--3166 | We present an approach for learning to translate by exploiting cross-lingual link structure in multilingual document collections. We propose a new learning objective based on structured ramp loss, which learns from graded relevance, explicitly including negative relevance information. Our results on English-German translation of Wikipedia entries show small, but significant, improvements of our method over an unadapted baseline, even when only a weak relevance signal is used. We also compare our method to monolingual language model adaptation and automatic pseudo-parallel data extraction and find small improvements even over these strong baselines. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,715
inproceedings | daiber-etal-2016-universal | Universal Reordering via Linguistic Typology | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1298/ | Daiber, Joachim and Stanojevi{\'c}, Milo{\v{s}} and Sima{'}an, Khalil | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3167--3176 | In this paper we explore the novel idea of building a single universal reordering model from English to a large number of target languages. To build this model we exploit typological features of word order for a large number of target languages together with source (English) syntactic features and we train this model on a single combined parallel corpus representing all (22) involved language pairs. We contribute experimental evidence for the usefulness of linguistically defined typological features for building such a model. When the universal reordering model is used for preordering followed by monotone translation (no reordering inside the decoder), our experiments show that this pipeline gives comparable or improved translation performance with a phrase-based baseline for a large number of language pairs (12 out of 22) from diverse language families. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,716 |
inproceedings | durrani-etal-2016-deep | A Deep Fusion Model for Domain Adaptation in Phrase-based {MT} | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1299/ | Durrani, Nadir and Sajjad, Hassan and Joty, Shafiq and Abdelali, Ahmed | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3177--3187 | We present a novel fusion model for domain adaptation in Statistical Machine Translation. Our model is based on the joint source-target neural network (Devlin et al., 2014), and is learned by fusing in- and out-domain models. The adaptation is performed by backpropagating errors from the output layer to the word embedding layer of each model, subsequently adjusting parameters of the composite model towards the in-domain data. On the standard tasks of translating English-to-German and Arabic-to-English TED talks, we observed average improvements of +0.9 and +0.7 BLEU points, respectively, over a competition-grade phrase-based system. We also demonstrate improvements over existing adaptation methods. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,717
inproceedings | zhang-etal-2016-inducing | Inducing Bilingual Lexica From Non-Parallel Data With Earth Mover`s Distance Regularization | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1300/ | Zhang, Meng and Liu, Yang and Luan, Huanbo and Liu, Yiqun and Sun, Maosong | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3188--3198 | Being able to induce word translations from non-parallel data is often a prerequisite for cross-lingual processing in resource-scarce languages and domains. Previous endeavors typically simplify this task by imposing the one-to-one translation assumption, which is too strong to hold for natural languages. We remove this constraint by introducing the Earth Mover`s Distance into the training of bilingual word embeddings. In this way, we take advantage of its capability to handle multiple alternative word translations in a natural form of regularization. Our approach shows significant and consistent improvements across four language pairs. We also demonstrate that our approach is particularly preferable in resource-scarce settings as it only requires a minimal seed lexicon. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,718 |
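For illustration, a self-contained entropic (Sinkhorn) approximation to the Earth Mover's Distance between two small sets of word embeddings. The paper uses EMD as a regulariser during embedding training, whereas this sketch only shows the distance computation on toy vectors:

```python
# Sinkhorn approximation to the Earth Mover's Distance between two point sets.
import numpy as np

def sinkhorn_emd(X, Y, reg=0.05, iters=200):
    a = np.full(len(X), 1.0 / len(X))          # uniform mass on source words
    b = np.full(len(Y), 1.0 / len(Y))          # uniform mass on target words
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)  # pairwise costs
    K = np.exp(-C / reg)                       # entropic kernel
    u = np.ones_like(a)
    for _ in range(iters):                     # alternating scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    T = u[:, None] * K * v[None, :]            # approximate transport plan
    return (T * C).sum()

src = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])   # "source" embeddings (toy)
tgt = np.array([[0.95, 0.05], [0.05, 0.95]])            # "target" embeddings (toy)
print(sinkhorn_emd(src, tgt))
```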
inproceedings | hirschmann-etal-2016-makes | What Makes Word-level Neural Machine Translation Hard: A Case Study on {E}nglish-{G}erman Translation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1301/ | Hirschmann, Fabian and Nam, Jinseok and F{\"u}rnkranz, Johannes | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3199--3208 | Traditional machine translation systems often require heavy feature engineering and the combination of multiple techniques for solving different subproblems. In recent years, several end-to-end learning architectures based on recurrent neural networks have been proposed. Unlike traditional systems, Neural Machine Translation (NMT) systems learn the parameters of the model and require only minimal preprocessing. Memory and time constraints allow taking only a fixed number of words into account, which leads to the out-of-vocabulary (OOV) problem. In this work, we analyze why the OOV problem arises and why it is considered a serious problem in German. We study the effectiveness of compound word splitters for alleviating the OOV problem, resulting in a 2.5+ BLEU points improvement over a baseline on the WMT`14 German-to-English translation task. For English-to-German translation, we use target-side compound splitting through a special syntax during training that allows the model to merge compound words and gain 0.2 BLEU points. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,719
inproceedings | jalili-sabet-etal-2016-improving | Improving Word Alignment of Rare Words with Word Embeddings | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1302/ | Jalili Sabet, Masoud and Faili, Heshaam and Haffari, Gholamreza | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3209--3215 | We address the problem of inducing word alignment for language pairs by developing an unsupervised model that can also be applied to other generative alignment models. We approach the task by: i) proposing a new alignment model based on the IBM alignment model 1 that uses vector representations of words, and ii) examining the use of similar source words to overcome the problem of rare source words and improve the alignments. We apply our method to English-French corpora and run the experiments with different sizes of sentence pairs. Our results show competitive performance against the baseline and in some cases improve the results by up to 6.9{\%} in terms of precision. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,720
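A compact EM trainer for IBM alignment model 1, the base model the approach above extends with word embeddings; the three-sentence-pair bitext is a toy example:

```python
# Minimal EM training of IBM Model 1 translation probabilities t(f|e).
from collections import defaultdict

bitext = [("the house", "la maison"), ("the car", "la voiture"),
          ("a house", "une maison")]
pairs = [(e.split(), f.split()) for e, f in bitext]
e_vocab = {w for e, _ in pairs for w in e}

t = defaultdict(lambda: 1.0 / len(e_vocab))   # uniform initialisation
for _ in range(10):
    count = defaultdict(float)
    total = defaultdict(float)
    for e_sent, f_sent in pairs:              # E-step: expected alignment counts
        for f in f_sent:
            z = sum(t[(e, f)] for e in e_sent)
            for e in e_sent:
                c = t[(e, f)] / z
                count[(e, f)] += c
                total[e] += c
    for (e, f), c in count.items():           # M-step: re-estimate t(f|e)
        t[(e, f)] = c / total[e]

print(round(t[("house", "maison")], 3))       # should dominate the alternatives
```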
inproceedings | chang-etal-2016-measuring | Measuring the Information Content of Financial News | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1303/ | Chang, Ching-Yun and Zhang, Yue and Teng, Zhiyang and Bozanic, Zahn and Ke, Bin | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3216--3225 | Measuring the information content of news text is useful for decision makers in their investments since news information can influence the intrinsic values of companies. We propose a model to automatically measure the information content given news text, trained using news and corresponding cumulative abnormal returns of listed companies. Existing methods in finance literature exploit sentiment signal features, which are limited by not considering factors such as events. We address this issue by leveraging deep neural models to extract rich semantic features from news text. In particular, a novel tree-structured LSTM is used to find target-specific representations of news text given syntax structures. Empirical results show that the neural models can outperform sentiment-based models, demonstrating the effectiveness of recent NLP technology advances for computational finance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,721 |
inproceedings | godea-etal-2016-automatic | Automatic Generation and Classification of Minimal Meaningful Propositions in Educational Systems | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1304/ | Godea, Andreea and Bulgarov, Florin and Nielsen, Rodney | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3226--3236 | Truly effective and practical educational systems will only be achievable when they have the ability to fully recognize deep relationships between a learner`s interpretation of a subject and the desired conceptual understanding. In this paper, we take important steps in this direction by introducing a new representation of sentences {--} Minimal Meaningful Propositions (MMPs), which will allow us to significantly improve the mapping between a learner`s answer and the ideal response. Using this technique, we make significant progress towards highly scalable and domain independent educational systems, that will be able to operate without human intervention. Even though this is a new task, we show very good results both for the extraction of MMPs and for classification with respect to their importance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,722 |
inproceedings | panagiotou-etal-2016-first | First Story Detection using Entities and Relations | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1305/ | Panagiotou, Nikolaos and Akkaya, Cem and Tsioutsiouliklis, Kostas and Kalogeraki, Vana and Gunopulos, Dimitrios | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3237--3244 | News portals, such as Yahoo News or Google News, collect large amounts of documents from a variety of sources on a daily basis. Only a small portion of these documents can be selected and displayed on the homepage. Thus, there is a strong preference for major, recent events. In this work, we propose a scalable and accurate First Story Detection (FSD) pipeline that identifies fresh news. In comparison to other FSD systems, our method relies on relation extraction methods exploiting entities and their relations. We evaluate our pipeline using two distinct datasets from Yahoo News and Google News. Experimental results demonstrate that our method improves over the state-of-the-art systems on both datasets with constant space and time requirements. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,723 |
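A baseline skeleton of first story detection with TF-IDF nearest neighbours: a document is flagged as a first story if its closest earlier document falls below a similarity threshold. The stream, threshold, and batch-fitted vectoriser are simplifications (a real system fits incrementally); the paper's contribution is enriching this with entities and relations:

```python
# Nearest-neighbour novelty baseline for first story detection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

stream = ["earthquake strikes northern japan",
          "strong earthquake hits japan coast",
          "central bank raises interest rates"]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(stream)   # batch fit; a simplification
THRESHOLD = 0.3

for i, doc in enumerate(stream):
    if i == 0:
        print("FIRST STORY:", doc)
        continue
    sims = cosine_similarity(tfidf[i], tfidf[:i]).ravel()
    print("FIRST STORY:" if sims.max() < THRESHOLD else "follow-up:  ", doc)
```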
inproceedings | loukina-etal-2016-textual | Textual complexity as a predictor of difficulty of listening items in language proficiency tests | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1306/ | Loukina, Anastassia and Yoon, Su-Youn and Sakano, Jennifer and Wei, Youhua and Sheehan, Kathy | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3245--3253 | In this paper we explore to what extent the difficulty of listening items in an English language proficiency test can be predicted by the textual properties of the prompt. We show that a system based on multiple text complexity features can predict item difficulty for several different item types and for some items achieves higher accuracy than human estimates of item difficulty. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,724 |
inproceedings | hu-etal-2016-construction | The Construction of a {C}hinese Collocational Knowledge Resource and Its Application for Second Language Acquisition | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1307/ | Hu, Renfen and Chen, Jiayong and Chen, Kuang-hua | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3254--3263 | The appropriate use of collocations is a challenge for second language acquisition. However, high-quality and easily accessible Chinese collocation resources are not available for both teachers and students. This paper presents the design and construction of a large-scale resource of Chinese collocational knowledge, and a web-based application (OCCA, Online Chinese Collocation Assistant) which offers a free and convenient collocation search service to end users. We define and classify collocations based on practical language acquisition needs and utilize a syntax-based method to extract nine types of collocations. In total, 37 extraction rules are compiled with word, POS and dependency relation features; 1,750,000 collocations are extracted from a corpus for L2 learning and complementary Wikipedia data; and OCCA is implemented based on these extracted collocations. By comparing OCCA with two traditional collocation dictionaries, we find that OCCA has higher entry coverage and collocation quantity, and that our method achieves a low error rate of less than 5{\%}. We also discuss how to apply collocational knowledge to grammatical error detection and demonstrate comparable performance to the best results in the 2015 NLP-TEA CGED shared task. The preliminary experiment shows that the collocation knowledge is helpful in detecting all four types of grammatical errors. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,725
inproceedings | lu-etal-2016-joint | Joint Inference for Event Coreference Resolution | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1308/ | Lu, Jing and Venugopal, Deepak and Gogate, Vibhav and Ng, Vincent | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3264--3275 | Event coreference resolution is a challenging problem since it relies on several components of the information extraction pipeline that typically yield noisy outputs. We hypothesize that exploiting the inter-dependencies between these components can significantly improve the performance of an event coreference resolver, and subsequently propose a novel joint inference based event coreference resolver using Markov Logic Networks (MLNs). However, the rich features that are important for this task are typically very hard to explicitly encode as MLN formulas since they significantly increase the size of the MLN, thereby making joint inference and learning infeasible. To address this problem, we propose a novel solution where we implicitly encode rich features into our model by augmenting the MLN distribution with low dimensional unit clauses. Our approach achieves state-of-the-art results on two standard evaluation corpora. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,726 |
inproceedings | ge-etal-2016-event | Event Detection with Burst Information Networks | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1309/ | Ge, Tao and Cui, Lei and Chang, Baobao and Sui, Zhifang and Zhou, Ming | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3276--3286 | Retrospective event detection is an important task for discovering previously unidentified events in a text stream. In this paper, we propose two fast centroid-aware event detection models based on a novel text stream representation {--} Burst Information Networks (BINets) for addressing the challenge. The BINets are time-aware, efficient and can be easily analyzed for identifying key information (centroids). These advantages allow the BINet-based approaches to achieve the state-of-the-art performance on multiple datasets, demonstrating the efficacy of BINets for the task of event detection. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,727 |
inproceedings | zhu-etal-2016-corpus | Corpus Fusion for Emotion Classification | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1310/ | Zhu, Suyang and Li, Shoushan and Chen, Ying and Zhou, Guodong | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3287--3297 | Machine learning-based methods have achieved great progress on emotion classification. However, in most previous studies, the models are learned based on a single corpus, which often suffers from insufficient labeled data. In this paper, we propose a corpus fusion approach to address emotion classification across two corpora which use different emotion taxonomies. The objective of this approach is to utilize the annotated data from one corpus to help the emotion classification on another corpus. An Integer Linear Programming (ILP) optimization is proposed to refine the classification results. Empirical studies show the effectiveness of the proposed approach to corpus fusion for emotion classification. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,728
inproceedings | tang-etal-2016-effective | Effective {LSTM}s for Target-Dependent Sentiment Classification | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1311/ | Tang, Duyu and Qin, Bing and Feng, Xiaocheng and Liu, Ting | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3298--3307 | Target-dependent sentiment classification remains a challenge: modeling the semantic relatedness of a target with its context words in a sentence. Different context words have different influences on determining the sentiment polarity of a sentence towards the target. Therefore, it is desirable to integrate the connections between target word and context words when building a learning system. In this paper, we develop two target-dependent long short-term memory (LSTM) models, where target information is automatically taken into account. We evaluate our methods on a benchmark dataset from Twitter. Empirical results show that modeling sentence representation with standard LSTM does not perform well. Incorporating target information into LSTM can significantly boost the classification accuracy. The target-dependent LSTM models achieve state-of-the-art performance without using a syntactic parser or external sentiment lexicons. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,729
inproceedings | stede-2016-towards | Towards assessing depth of argumentation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1312/ | Stede, Manfred | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3308--3317 | For analyzing argumentative text, we propose to study the {\textquoteleft}depth' of argumentation as one important component, which we distinguish from argument quality. In a pilot study with German newspaper commentary texts, we asked students to rate the degree of argumentativeness, and then looked for correlations with features of the annotated argumentation structure and the rhetorical structure (in terms of RST). The results indicate that the human judgements correlate with our operationalization of depth and with certain structural features of RST trees. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,730 |
inproceedings | phan-etal-2016-video | Video Event Detection by Exploiting Word Dependencies from Image Captions | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1313/ | Phan, Sang and Miyao, Yusuke and Le, Duy-Dinh and Satoh, Shin{'}ichi | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3318--3327 | Video event detection is a challenging problem in information and multimedia retrieval. Different from single action detection, event detection requires a richer level of semantic information from video. In order to overcome this challenge, existing solutions often represent videos using high-level features such as concepts. However, concept-based representation can be confusing because it does not encode the relationship between concepts. This issue can be addressed by exploiting the co-occurrences of the concepts; however, this often leads to a huge number of possible combinations. In this paper, we propose a new approach to obtain the relationship between concepts by exploiting the syntactic dependencies between words in the image captions. The main advantage of this approach is that it significantly reduces the number of informative combinations between concepts. We conduct extensive experiments to analyze the effectiveness of using the new dependency representation for event detection on two large-scale TRECVID Multimedia Event Detection 2013 and 2014 datasets. Experimental results show that i) Dependency features are more discriminative than concept-based features. ii) Dependency features can be combined with our current event detection system to further improve the performance. For instance, the relative improvement can be as high as 8.6{\%} on the MEDTEST14 10Ex setting. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,731
inproceedings | xiao-etal-2016-predicting | Predicting Restaurant Consumption Level through Social Media Footprints | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1314/ | Xiao, Yang and Wang, Yuan and Mao, Hangyu and Xiao, Zhen | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3328--3338 | Accurate prediction of user attributes from social media is valuable for both social science analysis and consumer targeting. In this paper, we propose a systematic method to leverage users' online social media content for predicting offline restaurant consumption level. We utilize the social login as a bridge and construct a dataset of 8,844 users who have been linked across Dianping (similar to Yelp) and Sina Weibo. More specifically, we construct consumption-level ground truth based on users' self-reported spending. We build predictive models using both raw features and, especially, latent features, such as topic distributions and celebrity clusters. The employed methods demonstrate that online social media content has strong predictive power for offline spending. Finally, combined with qualitative feature analysis, we present the differences in word usage, topic interests and following behavior between different consumption-level groups. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,732
inproceedings | mao-etal-2016-novel | A Novel Fast Framework for Topic Labeling Based on Similarity-preserved Hashing | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1315/ | Mao, Xian-Ling and Hao, Yi-Jing and Zhou, Qiang and Yuan, Wen-Qing and Yang, Liner and Huang, Heyan | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3339--3348 | Recently, topic modeling has been widely applied in data mining due to its powerful modeling ability. A common, major challenge in applying such topic models to other tasks is to accurately interpret the meaning of each topic. Topic labeling, as a major interpreting method, has attracted significant attention recently. However, most previous works focus only on the effectiveness of topic labeling, and less attention has been paid to quickly creating good topic descriptors; meanwhile, it is hard to assign labels to newly emerging topics using most existing methods. To solve the problems above, in this paper, we propose a novel fast topic labeling framework that casts the labeling problem as a k-nearest neighbor (KNN) search problem in a probability vector set. Our experimental results show that the proposed sequential interleaving method based on locality sensitive hashing (LSH) technology is efficient in boosting the comparison speed among probability distributions, and that the proposed framework can generate meaningful labels to interpret topics, including newly emerging topics. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,733
inproceedings | mou-etal-2016-sequence | Sequence to Backward and Forward Sequences: A Content-Introducing Approach to Generative Short-Text Conversation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1316/ | Mou, Lili and Song, Yiping and Yan, Rui and Li, Ge and Zhang, Lu and Jin, Zhi | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3349--3358 | Using neural networks to generate replies in human-computer dialogue systems is attracting increasing attention over the past few years. However, the performance is not satisfactory: the neural network tends to generate safe, universally relevant replies which carry little meaning. In this paper, we propose a content-introducing approach to neural network-based generative dialogue systems. We first use pointwise mutual information (PMI) to predict a noun as a keyword, reflecting the main gist of the reply. We then propose seq2BF, a {\textquotedblleft}sequence to backward and forward sequences{\textquotedblright} model, which generates a reply containing the given keyword. Experimental results show that our approach significantly outperforms traditional sequence-to-sequence models in terms of human evaluation and the entropy measure, and that the predicted keyword can appear at an appropriate position in the reply. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,734 |
inproceedings | gervits-etal-2016-disfluent | Disfluent but effective? A quantitative study of disfluencies and conversational moves in team discourse | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1317/ | Gervits, Felix and Eberhard, Kathleen and Scheutz, Matthias | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3359--3369 | Situated dialogue systems that interact with humans as part of a team (e.g., robot teammates) need to be able to use information from communication channels to gauge the coordination level and effectiveness of the team. Currently, the feasibility of this end goal is limited by several gaps in both the empirical and computational literature. The purpose of this paper is to address those gaps in the following ways: (1) investigate which properties of task-oriented discourse correspond with effective performance in human teams, and (2) discuss how and to what extent these properties can be utilized in spoken dialogue systems. To this end, we analyzed natural language data from a unique corpus of spontaneous, task-oriented dialogue (CReST corpus), which was annotated for disfluencies and conversational moves. We found that effective teams made more self-repair disfluencies and used specific communication strategies to facilitate grounding and coordination. Our results indicate that truly robust and natural dialogue systems will need to interpret highly disfluent utterances and also utilize specific collaborative mechanisms to facilitate grounding. These data shed light on effective communication in performance scenarios and directly inform the development of robust dialogue systems for situated artificial agents. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,735 |
inproceedings | vougiouklis-etal-2016-neural | A Neural Network Approach for Knowledge-Driven Response Generation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1318/ | Vougiouklis, Pavlos and Hare, Jonathon and Simperl, Elena | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3370--3380 | We present a novel response generation system. The system is based on the hypothesis that participants in a conversation base their response not only on previous dialog utterances but also on their background knowledge. Our model is based on a Recurrent Neural Network (RNN) that is trained over concatenated sequences of comments, a Convolutional Neural Network that is trained over Wikipedia sentences and a formulation that couples the two trained embeddings in a multimodal space. We create a dataset of aligned Wikipedia sentences and sequences of Reddit utterances, which we use to train our model. Given a sequence of past utterances and a set of sentences that represent the background knowledge, our end-to-end learnable model is able to generate context-sensitive and knowledge-driven responses by leveraging the alignment of two different data sources. Our approach achieves up to 55{\%} improvement in perplexity compared to purely sequential models based on RNNs that are trained only on sequences of utterances. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,736
inproceedings | poostchi-etal-2016-personer | {P}erso{NER}: {P}ersian Named-Entity Recognition | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1319/ | Poostchi, Hanieh and Zare Borzeshi, Ehsan and Abdous, Mohammad and Piccardi, Massimo | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3381--3389 | Named-Entity Recognition (NER) is still a challenging task for languages with low digital resources. The main difficulties arise from the scarcity of annotated corpora and the consequent problematic training of an effective NER pipeline. To bridge this gap, in this paper we target the Persian language, which is spoken by a population of over a hundred million people world-wide. We first present and provide ArmanPersoNERCorpus, the first manually-annotated Persian NER corpus. Then, we introduce PersoNER, an NER pipeline for Persian that leverages a word embedding and a sequential max-margin classifier. The experimental results show that the proposed approach is capable of achieving interesting MUC7 and CoNLL scores while outperforming two alternatives based on a CRF and a recurrent neural network. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,737
inproceedings | singh-etal-2016-ocr | {OCR}++: A Robust Framework For Information Extraction from Scholarly Articles | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1320/ | Singh, Mayank and Barua, Barnopriyo and Palod, Priyank and Garg, Manvi and Satapathy, Sidhartha and Bushi, Samuel and Ayush, Kumar and Sai Rohith, Krishna and Gamidi, Tulasi and Goyal, Pawan and Mukherjee, Animesh | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3390--3400 | This paper proposes OCR++, an open-source framework designed for a variety of information extraction tasks from scholarly articles including metadata (title, author names, affiliation and e-mail), structure (section headings and body text, table and figure headings, URLs and footnotes) and bibliography (citation instances and references). We analyze a diverse set of scientific articles written in English to understand generic writing patterns and formulate rules to develop this hybrid framework. Extensive evaluations show that the proposed framework outperforms the existing state-of-the-art tools by a large margin in structural information extraction along with improved performance in metadata and bibliography extraction tasks, both in terms of accuracy (around 50{\%} improvement) and processing time (around 52{\%} improvement). A user experience study conducted with the help of 30 researchers reveals that the researchers found this system to be very helpful. As an additional objective, we discuss two novel use cases including automatically extracting links to public datasets from the proceedings, which would further accelerate the advancement in digital libraries. The result of the framework can be exported as a whole into structured TEI-encoded documents. Our framework is accessible online at \url{http://www.cnergres.iitkgp.ac.in/OCR++/home/}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,738 |
inproceedings | hazem-morin-2016-efficient | Efficient Data Selection for Bilingual Terminology Extraction from Comparable Corpora | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1321/ | Hazem, Amir and Morin, Emmanuel | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3401--3411 | Comparable corpora are the main alternative to the use of parallel corpora to extract bilingual lexicons. Although it is easier to build comparable corpora, specialized comparable corpora are often of modest size in comparison with corpora issued from the general domain. Consequently, the observations of word co-occurrences, which are the basis of context-based methods, are unreliable. We propose in this article to improve the word co-occurrences of specialized comparable corpora, and thus the context representation, by using general-domain data. This idea, which has already been used in machine translation for more than a decade, is not straightforward for the task of bilingual lexicon extraction from specific-domain comparable corpora. We go against the mainstream of this task, where many studies support the idea that adding out-of-domain documents decreases the quality of lexicons. Our empirical evaluation shows the advantages of this approach, which induces a significant gain in the accuracy of extracted lexicons. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,739
inproceedings | ljubesic-etal-2016-tweetgeo | {T}weet{G}eo - A Tool for Collecting, Processing and Analysing Geo-encoded Linguistic Data | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1322/ | Ljube{\v{s}}i{\'c}, Nikola and Samard{\v{z}}i{\'c}, Tanja and Derungs, Curdin | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3412--3421 | In this paper we present a newly developed tool that enables researchers interested in spatial variation of language to define a geographic perimeter of interest, collect data from the Twitter streaming API published in that perimeter, filter the obtained data by language and country, define and extract variables of interest and analyse the extracted variables by one spatial statistic and two spatial visualisations. We showcase the tool on the area and a selection of languages spoken in former Yugoslavia. By defining the perimeter, languages and a series of linguistic variables of interest, we demonstrate the data collection, processing and analysis capabilities of the tool. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,740
inproceedings | espinosa-anke-etal-2016-extending | Extending {W}ord{N}et with Fine-Grained Collocational Information via Supervised Distributional Learning | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1323/ | Espinosa-Anke, Luis and Camacho-Collados, Jose and Rodr{\'i}guez-Fern{\'a}ndez, Sara and Saggion, Horacio and Wanner, Leo | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3422--3432 | WordNet is probably the best known lexical resource in Natural Language Processing. While it is widely regarded as a high quality repository of concepts and semantic relations, updating and extending it manually is costly. One important type of relation which could potentially add enormous value to WordNet is the inclusion of collocational information, which is paramount in tasks such as Machine Translation, Natural Language Generation and Second Language Learning. In this paper, we present ColWordNet (CWN), an extended WordNet version with fine-grained collocational information, automatically introduced thanks to a method exploiting linear relations between analogous sense-level embeddings spaces. We perform both intrinsic and extrinsic evaluations, and release CWN for the use and scrutiny of the community. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,741 |
inproceedings | al-khatib-etal-2016-news | A News Editorial Corpus for Mining Argumentation Strategies | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1324/ | Al-Khatib, Khalid and Wachsmuth, Henning and Kiesel, Johannes and Hagen, Matthias and Stein, Benno | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3433--3443 | Many argumentative texts, and news editorials in particular, follow a specific strategy to persuade their readers of some opinion or attitude. This includes decisions such as when to tell an anecdote or where to support an assumption with statistics, which is reflected by the composition of different types of argumentative discourse units in a text. While several argument mining corpora have recently been published, they do not allow the study of argumentation strategies due to incomplete or coarse-grained unit annotations. This paper presents a novel corpus with 300 editorials from three diverse news portals that provides the basis for mining argumentation strategies. Each unit in all editorials has been assigned one of six types by three annotators with a high Fleiss' Kappa agreement of 0.56. We investigate various challenges of the annotation process and we conduct a first corpus analysis. Our results reveal different strategies across the news portals, exemplifying the benefit of studying editorials{---}a so far underresourced text genre in argument mining. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,742 |
inproceedings | sulubacak-etal-2016-universal | {U}niversal {D}ependencies for {T}urkish | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1325/ | Sulubacak, Umut and Gokirmak, Memduh and Tyers, Francis and {\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i} and Nivre, Joakim and Eryi{\u{g}}it, G{\"u}l{\c{s}}en | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3444--3454 | The Universal Dependencies (UD) project was conceived after the substantial recent interest in unifying annotation schemes across languages. With its own annotation principles and abstract inventory for parts of speech, morphosyntactic features and dependency relations, UD aims to facilitate multilingual parser development, cross-lingual learning, and parsing research from a language typology perspective. This paper presents the Turkish IMST-UD Treebank, the first Turkish treebank to be in a UD release. The IMST-UD Treebank was automatically converted from the IMST Treebank, which was also recently released. We describe this conversion procedure in detail, complete with mapping tables. We also present our evaluation of the parsing performances of both versions of the IMST Treebank. Our findings suggest that the UD framework is at least as viable for Turkish as the original annotation framework of the IMST Treebank. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,743
inproceedings | eskander-etal-2016-creating | Creating Resources for Dialectal {A}rabic from a Single Annotation: A Case Study on {E}gyptian and {L}evantine | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1326/ | Eskander, Ramy and Habash, Nizar and Rambow, Owen and Pasha, Arfath | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3455--3465 | Arabic dialects present a special problem for natural language processing because there are few resources, they have no standard orthography, and have not been studied much. However, as more and more written dialectal Arabic is found in social media, NLP for Arabic dialects becomes an important goal. We present a methodology for creating a morphological analyzer and a morphological tagger for dialectal Arabic, and we illustrate it on Egyptian and Levantine Arabic. To our knowledge, these are the first analyzer and tagger for Levantine. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,744 |
inproceedings | akbik-etal-2016-multilingual | Multilingual Aliasing for Auto-Generating Proposition {B}anks | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1327/ | Akbik, Alan and Guan, Xinyu and Li, Yunyao | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3466--3474 | Semantic Role Labeling (SRL) is the task of identifying the predicate-argument structure in sentences with semantic frame and role labels. For the English language, the Proposition Bank provides both a lexicon of all possible semantic frames and large amounts of labeled training data. In order to expand SRL beyond English, previous work investigated automatic approaches based on parallel corpora to automatically generate Proposition Banks for new target languages (TLs). However, this approach heuristically produces the frame lexicon from word alignments, leading to a range of lexicon-level errors and inconsistencies. To address these issues, we propose to manually alias TL verbs to existing English frames. For instance, the German verb drehen may evoke several meanings, including {\textquotedblleft}turn something{\textquotedblright} and {\textquotedblleft}film something{\textquotedblright}. Accordingly, we alias the former to the frame TURN.01 and the latter to a group of frames that includes FILM.01 and SHOOT.03. We execute a large-scale manual aliasing effort for three target languages and apply the new lexicons to automatically generate large Proposition Banks for Chinese, French and German with manually curated frames. We present a detailed evaluation in which we find that our proposed approach significantly increases the quality and consistency of the generated Proposition Banks. We release these resources to the research community. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,745 |
inproceedings | mortensen-etal-2016-panphon | {P}an{P}hon: A Resource for Mapping {IPA} Segments to Articulatory Feature Vectors | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1328/ | Mortensen, David R. and Littell, Patrick and Bharadwaj, Akash and Goyal, Kartik and Dyer, Chris and Levin, Lori | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3475--3484 | This paper contributes to a growing body of evidence that{---}when coupled with appropriate machine-learning techniques{---}linguistically motivated, information-rich representations can outperform one-hot encodings of linguistic data. In particular, we show that phonological features outperform character-based models. PanPhon is a database relating over 5,000 IPA segments to 21 subsegmental articulatory features. We show that this database boosts performance in various NER-related tasks. Phonologically aware, neural CRF models built on PanPhon features are able to perform better on monolingual Spanish and Turkish NER tasks than character-based models. They have also been shown to work well in transfer models (as between Uzbek and Turkish). PanPhon features also contribute measurably to Orthography-to-IPA conversion tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,746
inproceedings | zhou-etal-2016-text | Text Classification Improved by Integrating Bidirectional {LSTM} with Two-dimensional Max Pooling | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1329/ | Zhou, Peng and Qi, Zhenyu and Zheng, Suncong and Xu, Jiaming and Bao, Hongyun and Xu, Bo | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3485--3495 | Recurrent Neural Network (RNN) is one of the most popular architectures used in Natural Language Processing (NLP) tasks because its recurrent structure is very suitable to process variable-length text. RNN can utilize distributed representations of words by first converting the tokens comprising each text into vectors, which form a matrix. This matrix includes two dimensions: the time-step dimension and the feature vector dimension. Then most existing models usually utilize one-dimensional (1D) max pooling operation or attention-based operation only on the time-step dimension to obtain a fixed-length vector. However, the features on the feature vector dimension are not mutually independent, and simply applying 1D pooling operation over the time-step dimension independently may destroy the structure of the feature representation. On the other hand, applying two-dimensional (2D) pooling operation over the two dimensions may sample more meaningful features for sequence modeling tasks. To integrate the features on both dimensions of the matrix, this paper explores applying 2D max pooling operation to obtain a fixed-length representation of the text. This paper also utilizes 2D convolution to sample more meaningful information of the matrix. Experiments are conducted on six text classification tasks, including sentiment analysis, question classification, subjectivity classification and newsgroup classification. Compared with the state-of-the-art models, the proposed models achieve excellent performance on 4 out of 6 tasks. Specifically, one of the proposed models achieves the highest accuracy on the Stanford Sentiment Treebank binary classification and fine-grained classification tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,747
inproceedings | postma-etal-2016-always | More is not always better: balancing sense distributions for all-words Word Sense Disambiguation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1330/ | Postma, Marten and Izquierdo Bevia, Ruben and Vossen, Piek | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3496--3506 | Current Word Sense Disambiguation systems show an extremely poor performance on low-frequency senses, which is mainly caused by the difference in sense distributions between training and test data. The main focus in tackling this problem has been on acquiring more data or selecting a single predominant sense and not necessarily on the meta properties of the data itself. We demonstrate that these properties, such as the volume, provenance, and balancing, play an important role with respect to system performance. In this paper, we describe a set of experiments to analyze these meta properties in the framework of a state-of-the-art WSD system when evaluated on the SemEval-2013 English all-words dataset. We show that volume and provenance are indeed important, but that approximating the perfect balancing of the selected training data leads to an improvement of 21 points and exceeds state-of-the-art systems by 14 points while using only simple features. We therefore conclude that unsupervised acquisition of training data should be guided by strategies aimed at matching meta properties. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,748
inproceedings | eger-etal-2016-language | Language classification from bilingual word embedding graphs | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1331/ | Eger, Steffen and Hoenen, Armin and Mehler, Alexander | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3507--3518 | We study the role of the second language in bilingual word embeddings in monolingual semantic evaluation tasks. We find strongly and weakly positive correlations between down-stream task performance and second language similarity to the target language. Additionally, we show how bilingual word embeddings can be employed for the task of semantic language classification and that joint semantic spaces vary in meaningful ways across second languages. Our results support the hypothesis that semantic language similarity is influenced by both structural similarity as well as geography/contact. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,749 |
inproceedings | drozd-etal-2016-word | Word Embeddings, Analogies, and Machine Learning: Beyond king - man + woman = queen | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1332/ | Drozd, Aleksandr and Gladkova, Anna and Matsuoka, Satoshi | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3519--3530 | Solving word analogies became one of the most popular benchmarks for word embeddings on the assumption that linear relations between word pairs (such as \textit{king}:\textit{man} :: \textit{woman}:\textit{queen}) are indicative of the quality of the embedding. We question this assumption by showing that the information not detected by linear offset may still be recoverable by a more sophisticated search method, and thus is actually encoded in the embedding. The general problem with linear offset is its sensitivity to the idiosyncrasies of individual words. We show that simple averaging over multiple word pairs improves over the state-of-the-art. A further improvement in accuracy (up to 30{\%} for some embeddings and relations) is achieved by combining cosine similarity with an estimation of the extent to which a candidate answer belongs to the correct word class. In addition to this practical contribution, this work highlights the problem of the interaction between word embeddings and analogy retrieval algorithms, and its implications for the evaluation of word embeddings and the use of analogies in extrinsic tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,750 |
inproceedings | bjerva-etal-2016-semantic | Semantic Tagging with Deep Residual Networks | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1333/ | Bjerva, Johannes and Plank, Barbara and Bos, Johan | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 3531--3541 | We propose a novel semantic tagging task, semtagging, tailored for the purpose of multilingual semantic parsing, and present the first tagger using deep residual networks (ResNets). Our tagger uses both word and character representations, and includes a novel residual bypass architecture. We evaluate the tagset both intrinsically on the new task of semantic tagging, as well as on Part-of-Speech (POS) tagging. Our system, consisting of a ResNet and an auxiliary loss function predicting our semantic tags, significantly outperforms prior results on English Universal Dependencies POS tagging (95.71{\%} accuracy on UD v1.2 and 95.67{\%} accuracy on UD v1.3). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,751 |