entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | gyanendro-singh-etal-2016-automatic | Automatic Syllabification for {M}anipuri language | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1034/ | Gyanendro Singh, Loitongbam and Laitonjam, Lenin and Ranbir Singh, Sanasam | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 349--357 | Development of hand-crafted rules for syllabifying words of a language is an expensive task. This paper proposes several data-driven methods for automatic syllabification of words written in Manipuri language. Manipuri is one of the scheduled Indian languages. First, we propose a language-independent rule-based approach formulated using entropy based phonotactic segmentation. Second, we project the syllabification problem as a sequence labeling problem and investigate its effect using various sequence labeling approaches. Third, we combine the effect of sequence labeling and rule-based method and investigate the performance of the hybrid approach. From various experimental observations, it is evident that the proposed methods outperform the baseline rule-based method. The entropy based phonotactic segmentation provides a word accuracy of 96{\%}, CRF (sequence labeling approach) provides 97{\%} and hybrid approach provides 98{\%} word accuracy. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,452 |
inproceedings | chen-etal-2016-learning | Learning to Distill: The Essence Vector Modeling Framework | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1035/ | Chen, Kuan-Yu and Liu, Shih-Hung and Chen, Berlin and Wang, Hsin-Min | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 358--368 | In the context of natural language processing, representation learning has emerged as a newly active research subject because of its excellent performance in many applications. Learning representations of words is a pioneering study in this school of research. However, paragraph (or sentence and document) embedding learning is more suitable/reasonable for some tasks, such as sentiment classification and document summarization. Nevertheless, as far as we are aware, there is only a dearth of research focusing on launching unsupervised paragraph embedding methods. Classic paragraph embedding methods infer the representation of a given paragraph by considering all of the words occurring in the paragraph. Consequently, those stop or function words that occur frequently may mislead the embedding learning process to produce a misty paragraph representation. Motivated by these observations, our major contributions are twofold. First, we propose a novel unsupervised paragraph embedding method, named the essence vector (EV) model, which aims at not only distilling the most representative information from a paragraph but also excluding the general background information to produce a more informative low-dimensional vector representation for the paragraph. We evaluate the proposed EV model on benchmark sentiment classification and multi-document summarization tasks. The experimental results demonstrate the effectiveness and applicability of the proposed embedding method. Second, in view of the increasing importance of spoken content processing, an extension of the EV model, named the denoising essence vector (D-EV) model, is proposed. The D-EV model not only inherits the advantages of the EV model but also can infer a more robust representation for a given spoken paragraph against imperfect speech recognition. The utility of the D-EV model is evaluated on a spoken document summarization task, confirming the effectiveness of the proposed embedding method in relation to several well-practiced and state-of-the-art summarization methods. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,453 |
inproceedings | lorenzo-trueba-etal-2016-continuous | Continuous Expressive Speaking Styles Synthesis based on {CVSM} and {MR}-{HMM} | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1036/ | Lorenzo-Trueba, Jaime and Barra-Chicote, Roberto and Gallardo-Antolin, Ascension and Yamagishi, Junichi and Montero, Juan M. | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 369--376 | This paper introduces a continuous system capable of automatically producing the most adequate speaking style to synthesize a desired target text. This is done thanks to a joint modeling of the acoustic and lexical parameters of the speaker models by adapting the CVSM projection of the training texts using MR-HMM techniques. As such, we consider that as long as sufficient variety in the training data is available, we should be able to model a continuous lexical space into a continuous acoustic space. The proposed continuous automatic text to speech system was evaluated by means of a perceptual evaluation in order to compare them with traditional approaches to the task. The system proved to be capable of conveying the correct expressiveness (average adequacy of 3.6) with an expressive strength comparable to oracle traditional expressive speech synthesis (average of 3.6) although with a drop in speech quality mainly due to the semi-continuous nature of the data (average quality of 2.9). This means that the proposed system is capable of improving traditional neutral systems without requiring any additional user interaction. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,454 |
inproceedings | dominguez-etal-2016-automatic | An Automatic Prosody Tagger for Spontaneous Speech | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1037/ | Dom{\'i}nguez, M{\'o}nica and Farr{\'u}s, Mireia and Wanner, Leo | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 377--386 | Speech prosody is known to be central in advanced communication technologies. However, despite the advances of theoretical studies in speech prosody, so far, no large scale prosody annotated resources that would facilitate empirical research and the development of empirical computational approaches are available. This is to a large extent due to the fact that current common prosody annotation conventions offer a descriptive framework of intonation contours and phrasing based on labels. This makes it difficult to reach a satisfactory inter-annotator agreement during the annotation of gold standard annotations and, subsequently, to create consistent large scale annotations. To address this problem, we present an annotation schema for prominence and boundary labeling of prosodic phrases based upon acoustic parameters and a tagger for prosody annotation at the prosodic phrase level. Evaluation proves that inter-annotator agreement reaches satisfactory values, from 0.60 to 0.80 Cohen's kappa, while the prosody tagger achieves acceptable recall and f-measure figures for five spontaneous samples used in the evaluation of monologue and dialogue formats in English and Spanish. The work presented in this paper is a first step towards a semi-automatic acquisition of large corpora for empirical prosodic analysis. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,455 |
inproceedings | kim-etal-2016-frustratingly | Frustratingly Easy Neural Domain Adaptation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1038/ | Kim, Young-Bum and Stratos, Karl and Sarikaya, Ruhi | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 387--396 | Popular techniques for domain adaptation such as the feature augmentation method of Daum{\'e} III (2009) have mostly been considered for sparse binary-valued features, but not for dense real-valued features such as those used in neural networks. In this paper, we describe simple neural extensions of these techniques. First, we propose a natural generalization of the feature augmentation method that uses K + 1 LSTMs where one model captures global patterns across all K domains and the remaining K models capture domain-specific information. Second, we propose a novel application of the framework for learning shared structures by Ando and Zhang (2005) to domain adaptation, and also provide a neural extension of their approach. In experiments on slot tagging over 17 domains, our methods give clear performance improvement over Daum{\'e} III (2009) applied on feature-rich CRFs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,456 |
inproceedings | bhat-etal-2016-house | A House United: Bridging the Script and Lexical Barrier between {H}indi and {U}rdu | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1039/ | Bhat, Riyaz A. and Bhat, Irshad A. and Jain, Naman and Sharma, Dipti Misra | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 397--408 | In Computational Linguistics, Hindi and Urdu are not viewed as a monolithic entity and have received separate attention with respect to their text processing. From part-of-speech tagging to machine translation, models are separately trained for both Hindi and Urdu despite the fact that they represent the same language. The reasons mainly are their divergent literary vocabularies and separate orthographies, and probably also their political status and the social perception that they are two separate languages. In this article, we propose a simple but efficient approach to bridge the lexical and orthographic differences between Hindi and Urdu texts. With respect to text processing, addressing the differences between the Hindi and Urdu texts would be beneficial in the following ways: (a) instead of training separate models, their individual resources can be augmented to train single, unified models for better generalization, and (b) their individual text processing applications can be used interchangeably under varied resource conditions. To remove the script barrier, we learn accurate statistical transliteration models which use sentence-level decoding to resolve word ambiguity. Similarly, we learn cross-register word embeddings from the harmonized Hindi and Urdu corpora to nullify their lexical divergences. As a proof of the concept, we evaluate our approach on the Hindi and Urdu dependency parsing under two scenarios: (a) resource sharing, and (b) resource augmentation. We demonstrate that a neural network-based dependency parser trained on augmented, harmonized Hindi and Urdu resources performs significantly better than the parsing models trained separately on the individual resources. We also show that we can achieve near state-of-the-art results when the parsers are used interchangeably. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,457 |
inproceedings | michalon-etal-2016-deeper | Deeper syntax for better semantic parsing | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1040/ | Michalon, Olivier and Ribeyre, Corentin and Candito, Marie and Nasr, Alexis | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 409--420 | Syntax plays an important role in the task of predicting the semantic structure of a sentence. But syntactic phenomena such as alternations, control and raising tend to obfuscate the relation between syntax and semantics. In this paper we predict the semantic structure of a sentence using a deeper syntax than what is usually done. This deep syntactic representation abstracts away from purely syntactic phenomena and proposes a structural organization of the sentence that is closer to the semantic representation. Experiments conducted on a French corpus annotated with semantic frames showed that a semantic parser reaches better performances with such a deep syntactic input. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,458 |
inproceedings | lee-wang-2016-language | Language Independent Dependency to Constituent Tree Conversion | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1041/ | Lee, Young-Suk and Wang, Zhiguo | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 421--428 | We present a dependency to constituent tree conversion technique that aims to improve constituent parsing accuracies by leveraging dependency treebanks available in a wide variety in many languages. The technique works in two steps. First, a partial constituent tree is derived from a dependency tree with a very simple deterministic algorithm that is both language and dependency type independent. Second, a complete high accuracy constituent tree is derived with a constraint-based parser, which uses the partial constituent tree as external constraints. Evaluated on Section 22 of the WSJ Treebank, the technique achieves the state-of-the-art conversion F-score 95.6. When applied to English Universal Dependency treebank and German CoNLL2006 treebank, the converted treebanks added to the human-annotated constituent parser training corpus improve parsing F-scores significantly for both languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,459 |
inproceedings | waszczuk-etal-2016-promoting | Promoting multiword expressions in {A}* {TAG} parsing | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1042/ | Waszczuk, Jakub and Savary, Agata and Parmentier, Yannick | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 429--439 | Multiword expressions (MWEs) are pervasive in natural languages and often have both idiomatic and compositional readings, which leads to high syntactic ambiguity. We show that for some MWE types idiomatic readings are usually the correct ones. We propose a heuristic for an A* parser for Tree Adjoining Grammars which benefits from this knowledge by promoting MWE-oriented analyses. This strategy leads to a substantial reduction in the parsing search space in case of true positive MWE occurrences, while avoiding parsing failures in case of false positives. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,460 |
inproceedings | ulinski-etal-2016-incrementally | Incrementally Learning a Dependency Parser to Support Language Documentation in Field Linguistics | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1043/ | Ulinski, Morgan and Hirschberg, Julia and Rambow, Owen | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 440--449 | We present experiments in incrementally learning a dependency parser. The parser will be used in the WordsEye Linguistics Tools (WELT) (Ulinski et al., 2014) which supports field linguists documenting a language's syntax and semantics. Our goal is to make syntactic annotation faster for field linguists. We have created a new parallel corpus of descriptions of spatial relations and motion events, based on pictures and video clips used by field linguists for elicitation of language from native speaker informants. We collected descriptions for each picture and video from native speakers in English, Spanish, German, and Egyptian Arabic. We compare the performance of MSTParser (McDonald et al., 2006) and MaltParser (Nivre et al., 2006) when trained on small amounts of this data. We find that MaltParser achieves the best performance. We also present the results of experiments using the parser to assist with annotation. We find that even when the parser is trained on a single sentence from the corpus, annotation time significantly decreases. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,461 |
inproceedings | zennaki-etal-2016-inducing | Inducing Multilingual Text Analysis Tools Using Bidirectional Recurrent Neural Networks | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1044/ | Zennaki, Othman and Semmar, Nasredine and Besacier, Laurent | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 450--460 | This work focuses on the development of linguistic analysis tools for resource-poor languages. We use a parallel corpus to produce a multilingual word representation based only on sentence level alignment. This representation is combined with the annotated source side (resource-rich language) of the parallel corpus to train text analysis tools for resource-poor languages. Our approach is based on Recurrent Neural Networks (RNN) and has the following advantages: (a) it does not use word alignment information, (b) it does not assume any knowledge about foreign languages, which makes it applicable to a wide range of resource-poor languages, (c) it provides truly multilingual taggers. In a previous study, we proposed a method based on Simple RNN to automatically induce a Part-Of-Speech (POS) tagger. In this paper, we propose an improvement of our neural model. We investigate the Bidirectional RNN and the inclusion of external information (for instance low level information from Part-Of-Speech tags) in the RNN to train a more complex tagger (for instance, a multilingual super sense tagger). We demonstrate the validity and genericity of our method by using parallel corpora (obtained by manual or automatic translation). Our experiments are conducted to induce cross-lingual POS and super sense taggers. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,462 |
inproceedings | zhang-etal-2016-bitext | Bitext Name Tagging for Cross-lingual Entity Annotation Projection | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1045/ | Zhang, Dongxu and Zhang, Boliang and Pan, Xiaoman and Feng, Xiaocheng and Ji, Heng and Xu, Weiran | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 461--470 | Annotation projection is a practical method to deal with the low resource problem in incident languages (IL) processing. Previous methods on annotation projection mainly relied on word alignment results without any training process, which led to noise propagation caused by word alignment errors. In this paper, we focus on the named entity recognition (NER) task and propose a weakly-supervised framework to project entity annotations from English to IL through bitexts. Instead of directly relying on word alignment results, this framework combines advantages of rule-based methods and deep learning methods by implementing two steps: First, generates a high-confidence entity annotation set on IL side with strict searching methods; Second, uses this high-confidence set to weakly supervise the model training. The model is finally used to accomplish the projecting process. Experimental results on two low-resource ILs show that the proposed method can generate better annotations projected from English-IL parallel corpora. The performance of IL name tagger can also be improved significantly by training on the newly projected IL annotation set. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,463 |
inproceedings | salehi-etal-2016-determining | Determining the Multiword Expression Inventory of a Surprise Language | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1046/ | Salehi, Bahar and Cook, Paul and Baldwin, Timothy | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 471--481 | Much previous research on multiword expressions (MWEs) has focused on the token- and type-level tasks of MWE identification and extraction, respectively. Such studies typically target known prevalent MWE types in a given language. This paper describes the first attempt to learn the MWE inventory of a {\textquotedblleft}surprise{\textquotedblright} language for which we have no explicit prior knowledge of MWE patterns, certainly no annotated MWE data, and not even a parallel corpus. Our proposed model is trained on a treebank with MWE relations of a source language, and can be applied to the monolingual corpus of the surprise language to identify its MWE construction types. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,464 |
inproceedings | akhtar-etal-2016-hybrid | A Hybrid Deep Learning Architecture for Sentiment Analysis | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1047/ | Akhtar, Md Shad and Kumar, Ayush and Ekbal, Asif and Bhattacharyya, Pushpak | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 482--493 | In this paper, we propose a novel hybrid deep learning architecture which is highly efficient for sentiment analysis in resource-poor languages. We learn sentiment embedded vectors from the Convolutional Neural Network (CNN). These are augmented to a set of optimized features selected through a multi-objective optimization (MOO) framework. The sentiment augmented optimized vector obtained at the end is used for the training of SVM for sentiment classification. We evaluate our proposed approach for coarse-grained (i.e. sentence level) as well as fine-grained (i.e. aspect level) sentiment analysis on four Hindi datasets covering varying domains. In order to show that our proposed method is generic in nature we also evaluate it on two benchmark English datasets. Evaluation shows that the results of the proposed method are consistent across all the datasets and often outperforms the state-of-the-art systems. To the best of our knowledge, this is the very first attempt where such a deep learning model is used for less-resourced languages such as Hindi. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,465 |
inproceedings | krishna-etal-2016-word | Word Segmentation in {S}anskrit Using Path Constrained Random Walks | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1048/ | Krishna, Amrith and Santra, Bishal and Satuluri, Pavankumar and Bandaru, Sasi Prasanth and Faldu, Bhumi and Singh, Yajuvendra and Goyal, Pawan | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 494--504 | In Sanskrit, the phonemes at the word boundaries undergo changes to form new phonemes through a process called as sandhi. A fused sentence can be segmented into multiple possible segmentations. We propose a word segmentation approach that predicts the most semantically valid segmentation for a given sentence. We treat the problem as a query expansion problem and use the path-constrained random walks framework to predict the correct segments. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,466 |
inproceedings | wang-etal-2016-mongolian | {M}ongolian Named Entity Recognition System with Rich Features | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1049/ | Wang, Weihua and Bao, Feilong and Gao, Guanglai | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 505--512 | In this paper, we first build a manually annotated named entity corpus of Mongolian. Then, we propose three morphological processing methods and study comprehensive features, including syllable features, lexical features, context features, morphological features and semantic features in Mongolian named entity recognition. Moreover, we also evaluate the influence of word cluster features on the system and combine all features together eventually. The experimental result shows that segmenting each suffix into an individual token achieves better results than deleting suffixes or using the suffixes as feature. The system based on segmenting suffixes with all proposed features yields benchmark result of F-measure=84.65 on this corpus. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,467 |
inproceedings | shafieibavani-etal-2016-appraising | Appraising {UMLS} Coverage for Summarizing Medical Evidence | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1050/ | ShafieiBavani, Elaheh and Ebrahimi, Mohammad and Wong, Raymond and Chen, Fang | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 513--524 | When making clinical decisions, practitioners need to rely on the most relevant evidence available. However, accessing a vast body of medical evidence and confronting with the issue of information overload can be challenging and time consuming. This paper proposes an effective summarizer for medical evidence by utilizing both UMLS and WordNet. Given a clinical query and a set of relevant abstracts, our aim is to generate a fluent, well-organized, and compact summary that answers the query. Analysis via ROUGE metrics shows that using WordNet as a general-purpose lexicon helps to capture the concepts not covered by the UMLS Metathesaurus, and hence significantly increases the performance. The effectiveness of our proposed approach is demonstrated by conducting a set of experiments over a specialized evidence-based medicine (EBM) corpus - which has been gathered and annotated for the purpose of biomedical text summarization. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,468 |
inproceedings | cevahir-murakami-2016-large | Large-scale Multi-class and Hierarchical Product Categorization for an {E}-commerce Giant | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1051/ | Cevahir, Ali and Murakami, Koji | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 525--535 | In order to organize the large number of products listed in e-commerce sites, each product is usually assigned to one of the multi-level categories in the taxonomy tree. It is a time-consuming and difficult task for merchants to select proper categories within thousands of options for the products they sell. In this work, we propose an automatic classification tool to predict the matching category for a given product title and description. We used a combination of two different neural models, i.e., deep belief nets and deep autoencoders, for both titles and descriptions. We implemented a selective reconstruction approach for the input layer during the training of the deep neural networks, in order to scale-out for large-sized sparse feature vectors. GPUs are utilized in order to train neural networks in a reasonable time. We have trained our models for around 150 million products with a taxonomy tree with at most 5 levels that contains 28,338 leaf categories. Tests with millions of products show that our first predictions match 81{\%} of merchants' assignments, when {\textquotedblleft}others{\textquotedblright} categories are excluded. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,469 |
inproceedings | gupta-etal-2016-product | Product Classification in {E}-Commerce using Distributional Semantics | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1052/ | Gupta, Vivek and Karnick, Harish and Bansal, Ashendra and Jhala, Pradhuman | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 536--546 | Product classification is the task of automatically predicting a taxonomy path for a product in a predefined taxonomy hierarchy given a textual product description or title. For efficient product classification we require a suitable representation for a document (the textual description of a product) feature vector and efficient and fast algorithms for prediction. To address the above challenges, we propose a new distributional semantics representation for document vector formation. We also develop a new two-level ensemble approach utilising (with respect to the taxonomy tree) path-wise, node-wise and depth-wise classifiers to reduce error in the final product classification task. Our experiments show the effectiveness of the distributional representation and the ensemble approach on data sets from a leading e-commerce platform and achieve improved results on various evaluation metrics compared to earlier approaches. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,470 |
inproceedings | cao-etal-2016-attsum | {A}tt{S}um: Joint Learning of Focusing and Summarization with Neural Attention | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1053/ | Cao, Ziqiang and Li, Wenjie and Li, Sujian and Wei, Furu and Li, Yanran | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 547--556 | Query relevance ranking and sentence saliency ranking are the two main tasks in extractive query-focused summarization. Previous supervised summarization systems often perform the two tasks in isolation. However, since reference summaries are the trade-off between relevance and saliency, using them as supervision, neither of the two rankers could be trained well. This paper proposes a novel summarization system called AttSum, which tackles the two tasks jointly. It automatically learns distributed representations for sentences as well as the document cluster. Meanwhile, it applies the attention mechanism to simulate the attentive reading of human behavior when a query is given. Extensive experiments are conducted on DUC query-focused summarization benchmark datasets. Without using any hand-crafted features, AttSum achieves competitive performance. We also observe that the sentences recognized to focus on the query indeed meet the query need. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,471 |
inproceedings | li-etal-2016-using | Using Relevant Public Posts to Enhance News Article Summarization | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1054/ | Li, Chen and Wei, Zhongyu and Liu, Yang and Jin, Yang and Huang, Fei | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 557--566 | A news article summary usually consists of 2-3 key sentences that reflect the gist of that news article. In this paper we explore using public posts following a new article to improve automatic summary generation for the news article. We propose different approaches to incorporate information from public posts, including using frequency information from the posts to re-estimate bigram weights in the ILP-based summarization model and to re-weight a dependency tree edge`s importance for sentence compression, directly selecting sentences from posts as the final summary, and finally a strategy to combine the summarization results generated from news articles and posts. Our experiments on data collected from Facebook show that relevant public posts provide useful information and can be effectively leveraged to improve news article summarization results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,472 |
inproceedings | fang-etal-2016-proposition | A Proposition-Based Abstractive Summariser | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1055/ | Fang, Yimai and Zhu, Haoyue and Muszy{\'n}ska, Ewa and Kuhnle, Alexander and Teufel, Simone | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 567--578 | Abstractive summarisation is not yet common amongst today`s deployed and research systems. Most existing systems either extract sentences or compress individual sentences. In this paper, we present a summariser that works by a different paradigm. It is a further development of an existing summariser that has an incremental, proposition-based content selection process but lacks a natural language (NL) generator for the final output. Using an NL generator, we can now produce the summary text to directly reflect the selected propositions. Our evaluation compares textual quality of our system to the earlier preliminary output method, and also uses ROUGE to compare to various summarisers that use the traditional method of sentence extraction, followed by compression. Our results suggest that cutting out the middle-man of sentence extraction can lead to better abstractive summaries. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,473 |
inproceedings | evang-bos-2016-cross | Cross-lingual Learning of an Open-domain Semantic Parser | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1056/ | Evang, Kilian and Bos, Johan | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 579--588 | We propose a method for learning semantic CCG parsers by projecting annotations via a parallel corpus. The method opens an avenue towards cheaply creating multilingual semantic parsers mapping open-domain text to formal meaning representations. A first cross-lingually learned Dutch (from English) semantic parser obtains f-scores ranging from 42.99{\%} to 69.22{\%} depending on the level of label informativity taken into account, compared to 58.40{\%} to 78.88{\%} for the underlying source-language system. These are promising numbers compared to state-of-the-art semantic parsing in open domains. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,474 |
inproceedings | zhao-liu-2016-subtree | A subtree-based factorization of dependency parsing | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1057/ | Zhao, Qiuye and Liu, Qun | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 589--598 | We propose a dependency parsing pipeline, in which the parsing of long-distance projections and localized dependencies are explicitly decomposed at the input level. A chosen baseline dependency parsing model performs only on {\textquoteleft}carved' sequences at the second stage, which are transformed from coarse constituent parsing outputs at the first stage. When k-best constituent parsing outputs are kept, a third-stage is required to search for an optimal combination of the overlapped dependency subtrees. In this sense, our dependency model is subtree-factored. We explore alternative approaches for scoring subtrees, including feature-based models as well as continuous representations. The search for optimal subset to combine is formulated as an ILP problem. This framework especially benefits the models poor on long sentences, generally improving baselines by 0.75-1.28 (UAS) on English, achieving comparable performance with high-order models but faster. For Chinese, the most notable increase is as high as 3.63 (UAS) when the proposed framework is applied to first-order parsing models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,475 |
inproceedings | akbik-li-2016-k | K-{SRL}: Instance-based Learning for Semantic Role Labeling | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1058/ | Akbik, Alan and Li, Yunyao | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 599--608 | Semantic role labeling (SRL) is the task of identifying and labeling predicate-argument structures in sentences with semantic frame and role labels. A known challenge in SRL is the large number of low-frequency exceptions in training data, which are highly context-specific and difficult to generalize. To overcome this challenge, we propose the use of instance-based learning that performs no explicit generalization, but rather extrapolates predictions from the most similar instances in the training data. We present a variant of k-nearest neighbors (kNN) classification with composite features to identify nearest neighbors for SRL. We show that high-quality predictions can be derived from a very small number of similar instances. In a comparative evaluation we experimentally demonstrate that our instance-based learning approach significantly outperforms current state-of-the-art systems on both in-domain and out-of-domain data, reaching F1-scores of 89,28{\%} and 79.91{\%} respectively. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,476 |
inproceedings | plank-2016-keystroke | Keystroke dynamics as signal for shallow syntactic parsing | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1059/ | Plank, Barbara | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 609--619 | Keystroke dynamics have been extensively used in psycholinguistic and writing research to gain insights into cognitive processing. But do keystroke logs contain actual signal that can be used to learn better natural language processing models? We postulate that keystroke dynamics contain information about syntactic structure that can inform shallow syntactic parsing. To test this hypothesis, we explore labels derived from keystroke logs as auxiliary task in a multi-task bidirectional Long Short-Term Memory (bi-LSTM). Our results show promising results on two shallow syntactic parsing tasks, chunking and CCG supertagging. Our model is simple, has the advantage that data can come from distinct sources, and produces models that are significantly better than models trained on the text annotations alone. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,477 |
inproceedings | ostling-2016-bayesian | A {B}ayesian model for joint word alignment and part-of-speech transfer | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1060/ | {\"Ostling, Robert | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 620--629 | Current methods for word alignment require considerable amounts of parallel text to deliver accurate results, a requirement which is met only for a small minority of the world`s approximately 7,000 languages. We show that by jointly performing word alignment and annotation transfer in a novel Bayesian model, alignment accuracy can be improved for language pairs where annotations are available for only one of the languages{---}a finding which could facilitate the study and processing of a vast number of low-resource languages. We also present an evaluation where our method is used to perform single-source and multi-source part-of-speech transfer with 22 translations of the same text in four different languages. This allows us to quantify the considerable variation in accuracy depending on the specific source text(s) used, even with different translations into the same language. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,478 |
inproceedings | shapiro-2016-splitting | Splitting compounds with ngrams | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1061/ | Shapiro, Naomi Tachikawa | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 630--640 | Compound words with unmarked word boundaries are problematic for many tasks in NLP and computational linguistics, including information extraction, machine translation, and syllabification. This paper introduces a simple, proof-of-concept language modeling approach to automatic compound segmentation, as applied to Finnish. This approach utilizes an off-the-shelf morphological analyzer to split training words into their constituent morphemes. A language model is subsequently trained on ngrams composed of morphemes, morpheme boundaries, and word boundaries. Linguistic constraints are then used to weed out phonotactically ill-formed segmentations, thereby allowing the language model to select the best grammatical segmentation. This approach achieves an accuracy of {\textasciitilde}97{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,479 |
inproceedings | feng-etal-2016-gake | {GAKE}: Graph Aware Knowledge Embedding | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1062/ | Feng, Jun and Huang, Minlie and Yang, Yang and Zhu, Xiaoyan | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 641--651 | Knowledge embedding, which projects triples in a given knowledge base to d-dimensional vectors, has attracted considerable research efforts recently. Most existing approaches treat the given knowledge base as a set of triplets, each of whose representation is then learned separately. However, as a fact, triples are connected and depend on each other. In this paper, we propose a graph aware knowledge embedding method (GAKE), which formulates knowledge base as a directed graph, and learns representations for any vertices or edges by leveraging the graph`s structural information. We introduce three types of graph context for embedding: neighbor context, path context, and edge context, each reflects properties of knowledge from different perspectives. We also design an attention mechanism to learn representative power of different vertices or edges. To validate our method, we conduct several experiments on two tasks. Experimental results suggest that our method outperforms several state-of-art knowledge embedding models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,480 |
inproceedings | wu-etal-2016-ranking | Ranking Responses Oriented to Conversational Relevance in Chat-bots | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1063/ | Wu, Bowen and Wang, Baoxun and Xue, Hui | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 652--662 | For automatic chatting systems, it is indeed a great challenge to reply the given query considering the conversation history, rather than based on the query only. This paper proposes a deep neural network to address the context-aware response ranking problem by end-to-end learning, so as to help to select conversationally relevant candidate. By combining the multi-column convolutional layer and the recurrent layer, our model is able to model the semantics of the utterance sequence by grasping the semantic clue within the conversation, on the basis of the effective representation for each sentence. Especially, the network utilizes attention pooling to further emphasis the importance of essential words in conversations, thus the representations of contexts tend to be more meaningful and the performance of candidate ranking is notably improved. Meanwhile, due to the adoption of attention pooling, it is possible to visualize the semantic clues. The experimental results on the large amount of conversation data from social media have shown that our approach is promising for quantifying the conversational relevance of responses, and indicated its good potential for building practical IR based chat-bots. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,481 |
inproceedings | lee-etal-2016-probabilistic | Probabilistic Prototype Model for Serendipitous Property Mining | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1064/ | Lee, Taesung and Hwang, Seung-won and Wang, Zhongyuan | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 663--673 | Besides providing the relevant information, amusing users has been an important role of the web. Many web sites provide serendipitous (unexpected but relevant) information to draw user traffic. In this paper, we study the representative scenario of mining an amusing quiz. An existing approach leverages a knowledge base to mine an unexpected property then find quiz questions on such property, based on prototype theory in cognitive science. However, existing deterministic model is vulnerable to noise in the knowledge base. Therefore, we instead propose to leverage probabilistic approach to build a prototype that can overcome noise. Our extensive empirical study shows that our approach not only significantly outperforms baselines by 0.06 in accuracy, and 0.11 in serendipity but also shows higher relevance than the traditional relevance-pursuing baseline using TF-IDF. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,482 |
inproceedings | garimella-etal-2016-identifying | Identifying Cross-Cultural Differences in Word Usage | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1065/ | Garimella, Aparna and Mihalcea, Rada and Pennebaker, James | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 674--683 | Personal writings have inspired researchers in the fields of linguistics and psychology to study the relationship between language and culture to better understand the psychology of people across different cultures. In this paper, we explore this relation by developing cross-cultural word models to identify words with cultural bias {--} i.e., words that are used in significantly different ways by speakers from different cultures. Focusing specifically on two cultures: United States and Australia, we identify a set of words with significant usage differences, and further investigate these words through feature analysis and topic modeling, shedding light on the attributes of language that contribute to these differences. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,483 |
inproceedings | asahara-etal-2016-reading | Reading-Time Annotations for {\textquotedblleft}{B}alanced {C}orpus of {C}ontemporary {W}ritten {J}apanese{\textquotedblright} | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1066/ | Asahara, Masayuki and Ono, Hajime and Miyamoto, Edson T. | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 684--694 | The Dundee Eyetracking Corpus contains eyetracking data collected while native speakers of English and French read newspaper editorial articles. Similar resources for other languages are still rare, especially for languages in which words are not overtly delimited with spaces. This is a report on a project to build an eyetracking corpus for Japanese. Measurements were collected while 24 native speakers of Japanese read excerpts from the Balanced Corpus of Contemporary Written Japanese Texts were presented with or without segmentation (i.e. with or without space at the boundaries between bunsetsu segmentations) and with two types of methodologies (eyetracking and self-paced reading presentation). Readers' background information including vocabulary-size estimation and Japanese reading-span score were also collected. As an example of the possible uses for the corpus, we also report analyses investigating the phenomena of anti-locality. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,484 |
inproceedings | nand-etal-2016-bullying | {\textquotedblleft}How Bullying is this Message?{\textquotedblright}: A Psychometric Thermometer for Bullying | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1067/ | Nand, Parma and Perera, Rivindu and Kasture, Abhijeet | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 695--706 | Cyberbullying statistics are shocking, the number of affected young people is increasing dramatically with the affordability of mobile technology devices combined with a growing number of social networks. This paper proposes a framework to analyse Tweets with the goal to identify cyberharassment in social networks as an important step to protect people from cyberbullying. The proposed framework incorporates latent or hidden variables with supervised learning to determine potential bullying cases resembling short blogging type texts such as Tweets. It uses the LIWC2007 - tool that translates Tweet messages into 67 numeric values, representing 67 word categories. The output vectors are then used as features for four different classifiers implemented in Weka. Tests on all four classifiers delivered encouraging predictive capability of Tweet messages. Overall it was found that the use of numeric psychometric values outperformed the same algorithms using both filtered and unfiltered words as features. The best performing algorithms was Random Forest with an F1-value of 0.947 using psychometric features compared to a value of 0.847 for the same algorithm using words as features. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,485 |
inproceedings | yatbaz-etal-2016-learning | Learning grammatical categories using paradigmatic representations: Substitute words for language acquisition | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1068/ | Yatbaz, Mehmet Ali and Cirik, Volkan and K{\"untay, Aylin and Yuret, Deniz | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 707--716 | Learning syntactic categories is a fundamental task in language acquisition. Previous studies show that co-occurrence patterns of preceding and following words are essential to group words into categories. However, the neighboring words, or frames, are rarely repeated exactly in the data. This creates data sparsity and hampers learning for frame based models. In this work, we propose a paradigmatic representation of word context which uses probable substitutes instead of frames. Our experiments on child-directed speech show that models based on probable substitutes learn more accurate categories with fewer examples compared to models based on frames. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,486 |
inproceedings | paetzold-specia-2016-understanding | Understanding the Lexical Simplification Needs of Non-Native Speakers of {E}nglish | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1069/ | Paetzold, Gustavo and Specia, Lucia | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 717--727 | We report three user studies in which the Lexical Simplification needs of non-native English speakers are investigated. Our analyses feature valuable new insight on the relationship between the non-natives' notion of complexity and various morphological, semantic and lexical word properties. Some of our findings contradict long-standing misconceptions about word simplicity. The data produced in our studies consists of 211,564 annotations made by 1,100 volunteers, which we hope will guide forthcoming research on Text Simplification for non-native speakers of English. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,487 |
inproceedings | alam-etal-2016-interlocutors | How Interlocutors Coordinate with each other within Emotional Segments? | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1070/ | Alam, Firoj and Chowdhury, Shammur Absar and Danieli, Morena and Riccardi, Giuseppe | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 728--738 | In this paper, we aim to investigate the coordination of interlocutors behavior in different emotional segments. Conversational coordination between the interlocutors is the tendency of speakers to predict and adjust each other accordingly on an ongoing conversation. In order to find such a coordination, we investigated 1) lexical similarities between the speakers in each emotional segments, 2) correlation between the interlocutors using psycholinguistic features, such as linguistic styles, psychological process, personal concerns among others, and 3) relation of interlocutors turn-taking behaviors such as competitiveness. To study the degree of coordination in different emotional segments, we conducted our experiments using real dyadic conversations collected from call centers in which agent`s emotional state include empathy and customer`s emotional states include anger and frustration. Our findings suggest that the most coordination occurs between the interlocutors inside anger segments, where as, a little coordination was observed when the agent was empathic, even though an increase in the amount of non-competitive overlaps was observed. We found no significant difference between anger and frustration segment in terms of turn-taking behaviors. However, the length of pause significantly decreases in the preceding segment of anger where as it increases in the preceding segment of frustration. 
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,488 |
inproceedings | bykh-meurers-2016-advancing | Advancing Linguistic Features and Insights by Label-informed Feature Grouping: An Exploration in the Context of Native Language Identification | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1071/ | Bykh, Serhiy and Meurers, Detmar | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 739--749 | We propose a hierarchical clustering approach designed to group linguistic features for supervised machine learning that is inspired by variationist linguistics. The method makes it possible to abstract away from the individual feature occurrences by grouping features together that behave alike with respect to the target class, thus providing a new, more general perspective on the data. On the one hand, it reduces data sparsity, leading to quantitative performance gains. On the other, it supports the formation and evaluation of hypotheses about individual choices of linguistic structures. We explore the method using features based on verb subcategorization information and evaluate the approach in the context of the Native Language Identification (NLI) task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,489 |
inproceedings | rubino-etal-2016-modeling | Modeling Diachronic Change in Scientific Writing with Information Density | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1072/ | Rubino, Raphael and Degaetano-Ortlieb, Stefania and Teich, Elke and van Genabith, Josef | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 750--761 | Previous linguistic research on scientific writing has shown that language use in the scientific domain varies considerably in register and style over time. In this paper we investigate the introduction of information theory inspired features to study long term diachronic change on three levels: lexis, part-of-speech and syntax. Our approach is based on distinguishing between sentences from 19th and 20th century scientific abstracts using supervised classification models. To the best of our knowledge, the introduction of information theoretic features to this task is novel. We show that these features outperform more traditional features, such as token or character n-grams, while leading to more compact models. We present a detailed analysis of feature informativeness in order to gain a better understanding of diachronic change on different linguistic levels. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,490 |
inproceedings | hu-etal-2016-different | Different Contexts Lead to Different Word Embeddings | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1073/ | Hu, Wenpeng and Zhang, Jiajun and Zheng, Nan | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 762--771 | Recent work for learning word representations has applied successfully to many NLP applications, such as sentiment analysis and question answering. However, most of these models assume a single vector per word type without considering polysemy and homonymy. In this paper, we present an extension to the CBOW model which not only improves the quality of embeddings but also makes embeddings suitable for polysemy. It differs from most of the related work in that it learns one semantic center embedding and one context bias instead of training multiple embeddings per word type. Different context leads to different bias which is defined as the weighted average embeddings of local context. Experimental results on similarity task and analogy task show that the word representations learned by the proposed method outperform the competitive baselines. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,491 |
inproceedings | agirrezabal-etal-2016-machine | Machine Learning for Metrical Analysis of {E}nglish Poetry | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1074/ | Agirrezabal, Manex and Alegria, I{\~n}aki and Hulden, Mans | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 772--781 | In this work we tackle the challenge of identifying rhythmic patterns in poetry written in English. Although poetry is a literary form that makes use standard meters usually repeated among different authors, we will see in this paper how performing such analyses is a difficult task in machine learning due to the unexpected deviations from such standard patterns. After breaking down some examples of classical poetry, we apply a number of NLP techniques for the scansion of poetry, training and testing our systems against a human-annotated corpus. With these experiments, our purpose is establish a baseline of automatic scansion of poetry using NLP tools in a straightforward manner and to raise awareness of the difficulties of this task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,492 |
inproceedings | moore-etal-2016-automated | Automated speech-unit delimitation in spoken learner {E}nglish | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1075/ | Moore, Russell and Caines, Andrew and Graham, Calbert and Buttery, Paula | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 782--793 | In order to apply computational linguistic analyses and pass information to downstream applications, transcriptions of speech obtained via automatic speech recognition (ASR) need to be divided into smaller meaningful units, in a task we refer to as {\textquoteleft}speech-unit (SU) delimitation'. We closely recreate the automatic delimitation system described by Lee and Glass (2012), {\textquoteleft}Sentence detection using multiple annotations', Proceedings of INTERSPEECH, which combines a prosodic model, language model and speech-unit length model in log-linear fashion. Since state-of-the-art natural language processing (NLP) tools have been developed to deal with written text and its characteristic sentence-like units, SU delimitation helps bridge the gap between ASR and NLP, by normalising spoken data into a more canonical format. Previous work has focused on native speaker recordings; we test the system of Lee and Glass (2012) on non-native speaker (or {\textquoteleft}learner') data, achieving performance above the state-of-the-art. We also consider alternative evaluation metrics which move away from the idea of a single {\textquoteleft}truth' in SU delimitation, and frame this work in the context of downstream NLP applications. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,493 |
inproceedings | song-etal-2016-learning | Learning to Identify Sentence Parallelism in Student Essays | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1076/ | Song, Wei and Liu, Tong and Fu, Ruiji and Liu, Lizhen and Wang, Hanshi and Liu, Ting | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 794--803 | Parallelism is an important rhetorical device. We propose a machine learning approach for automated sentence parallelism identification in student essays. We build an essay dataset with sentence level parallelism annotated. We derive features by combining generalized word alignment strategies and the alignment measures between word sequences. The experimental results show that sentence parallelism can be effectively identified with an F1 score of 82{\%} at pair-wise level and 72{\%} at parallelism chunk level. Based on this approach, we automatically identify sentence parallelism in more than 2000 student essays and study the correlation between the use of sentence parallelism and the types and quality of essays. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,494 |
inproceedings | basaldella-etal-2016-evaluating | Evaluating anaphora and coreference resolution to improve automatic keyphrase extraction | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1077/ | Basaldella, Marco and Chiaradia, Giorgia and Tasso, Carlo | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 804--814 | In this paper we analyze the effectiveness of using linguistic knowledge from coreference and anaphora resolution for improving the performance for supervised keyphrase extraction. In order to verify the impact of these features, we define a baseline keyphrase extraction system and evaluate its performance on a standard dataset using different machine learning algorithms. Then, we consider new sets of features by adding combinations of the linguistic features we propose and we evaluate the new performance of the system. We also use anaphora and coreference resolution to transform the documents, trying to simulate the cohesion process performed by the human mind. We found that our approach has a slightly positive impact on the performance of automatic keyphrase extraction, in particular when considering the ranking of the results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,495 |
inproceedings | ehrlemark-etal-2016-retrieving | Retrieving Occurrences of Grammatical Constructions | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1078/ | Ehrlemark, Anna and Johansson, Richard and Lyngfelt, Benjamin | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 815--824 | Finding authentic examples of grammatical constructions is central in constructionist approaches to linguistics, language processing, and second language learning. In this paper, we address this problem as an information retrieval (IR) task. To facilitate research in this area, we built a benchmark collection by annotating the occurrences of six constructions in a Swedish corpus. Furthermore, we implemented a simple and flexible retrieval system for finding construction occurrences, in which the user specifies a ranking function using lexical-semantic similarities (lexicon-based or distributional). The system was evaluated using standard IR metrics on the new benchmark, and we saw that lexical-semantical rerankers improve significantly over a purely surface-oriented system, but must be carefully tailored for each individual construction. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,496 |
inproceedings | felice-etal-2016-automatic | Automatic Extraction of Learner Errors in {ESL} Sentences Using Linguistically Enhanced Alignments | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1079/ | Felice, Mariano and Bryant, Christopher and Briscoe, Ted | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 825--835 | We propose a new method of automatically extracting learner errors from parallel English as a Second Language (ESL) sentences in an effort to regularise annotation formats and reduce inconsistencies. Specifically, given an original and corrected sentence, our method first uses a linguistically enhanced alignment algorithm to determine the most likely mappings between tokens, and secondly employs a rule-based function to decide which alignments should be merged. Our method beats all previous approaches on the tested datasets, achieving state-of-the-art results for automatic error extraction. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,497 |
inproceedings | yamauchi-murawaki-2016-contrasting | Contrasting Vertical and Horizontal Transmission of Typological Features | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1080/ | Yamauchi, Kenji and Murawaki, Yugo | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 836--846 | Linguistic typology provides features that have a potential of uncovering deep phylogenetic relations among the world`s languages. One of the key challenges in using typological features for phylogenetic inference is that horizontal (spatial) transmission obscures vertical (phylogenetic) signals. In this paper, we characterize typological features with respect to the relative strength of vertical and horizontal transmission. To do this, we first construct (1) a spatial neighbor graph of languages and (2) a phylogenetic neighbor graph by collapsing known language families. We then develop an autologistic model that predicts a feature`s distribution from these two graphs. In the experiments, we managed to separate vertically and/or horizontally stable features from unstable ones, and the results are largely consistent with previous findings. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,498 |
inproceedings | mao-hulden-2016-regular | How Regular is {J}apanese Loanword Adaptation? A Computational Study | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1081/ | Mao, Lingshuang and Hulden, Mans | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 847--856 | The modifications that foreign loanwords undergo when adapted into Japanese have been the subject of much study in linguistics. The scholarly interest of the topic can be attributed to the fact that Japanese loanwords undergo a complex series of phonological adaptations, something which has been puzzling scholars for decades. While previous studies of Japanese loanword accommodation have focused on specific phonological phenomena of limited scope, the current study leverages computational methods to provide a more complete description of all the sound changes that occur when adopting English words into Japanese. To investigate this, we have developed a parallel corpus of 250 English transcriptions and their respective Japanese equivalents. These words were then used to develop a wide-coverage finite state transducer based phonological grammar that mimics the behavior of the Japanese adaption process. By developing rules with the goal of accounting completely for a large number of borrowings and analyzing forms mistakenly generated by the system, we discovered an internal inconsistency inside the loanword phonology of the Japanese language, something arguably underestimated by previous studies. The result of the investigation suggests that there are multiple {\textquoteleft}dimensions' that shape the output form of the current Japanese loanwords. These dimensions include orthography, phonetics, and historical changes. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,499 |
inproceedings | inurrieta-etal-2016-using | Using Linguistic Data for {E}nglish and {S}panish Verb-Noun Combination Identification | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1082/ | I{\~n}urrieta, Uxoa and D{\'i}az de Ilarraza, Arantza and Labaka, Gorka and Sarasola, Kepa and Aduriz, Itziar and Carroll, John | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 857--867 | We present a linguistic analysis of a set of English and Spanish verb+noun combinations (VNCs), and a method to use this information to improve VNC identification. Firstly, a sample of frequent VNCs are analysed in-depth and tagged along lexico-semantic and morphosyntactic dimensions, obtaining satisfactory inter-annotator agreement scores. Then, a VNC identification experiment is undertaken, where the analysed linguistic data is combined with chunking information and syntactic dependencies. A comparison between the results of the experiment and the results obtained by a basic detection method shows that VNC identification can be greatly improved by using linguistic information, as a large number of additional occurrences are detected with high precision. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,500 |
inproceedings | terkik-etal-2016-analyzing | Analyzing Gender Bias in Student Evaluations | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1083/ | Terkik, Andamlak and Prud{'}hommeaux, Emily and Ovesdotter Alm, Cecilia and Homan, Christopher and Franklin, Scott | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 868--876 | University students in the United States are routinely asked to provide feedback on the quality of the instruction they have received. Such feedback is widely used by university administrators to evaluate teaching ability, despite growing evidence that students assign lower numerical scores to women and people of color, regardless of the actual quality of instruction. In this paper, we analyze students' written comments on faculty evaluation forms spanning eight years and five STEM disciplines in order to determine whether open-ended comments reflect these same biases. First, we apply sentiment analysis techniques to the corpus of comments to determine the overall affect of each comment. We then use this information, in combination with other features, to explore whether there is bias in how students describe their instructors. We show that while the gender of the evaluated instructor does not seem to affect students' expressed level of overall satisfaction with their instruction, it does strongly influence the language that they use to describe their instructors and their experience in class. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,501 |
inproceedings | huynh-etal-2016-adverse | Adverse Drug Reaction Classification With Deep Neural Networks | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1084/ | Huynh, Trung and He, Yulan and Willis, Alistair and Rueger, Stefan | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 877--887 | We study the problem of detecting sentences describing adverse drug reactions (ADRs) and frame the problem as binary classification. We investigate different neural network (NN) architectures for ADR classification. In particular, we propose two new neural network models, Convolutional Recurrent Neural Network (CRNN) by concatenating convolutional neural networks with recurrent neural networks, and Convolutional Neural Network with Attention (CNNA) by adding attention weights into convolutional neural networks. We evaluate various NN architectures on a Twitter dataset containing informal language and an Adverse Drug Effects (ADE) dataset constructed by sampling from MEDLINE case reports. Experimental results show that all the NN architectures outperform the traditional maximum entropy classifiers trained from n-grams with different weighting strategies considerably on both datasets. On the Twitter dataset, all the NN architectures perform similarly. But on the ADE dataset, CNN performs better than other more complex CNN variants. Nevertheless, CNNA allows the visualisation of attention weights of words when making classification decisions and hence is more appropriate for the extraction of word subsequences describing ADRs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,502 |
inproceedings | huang-etal-2016-chinese | {C}hinese Preposition Selection for Grammatical Error Diagnosis | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1085/ | Huang, Hen-Hsen and Shao, Yen-Chi and Chen, Hsin-Hsi | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 888--899 | Misuse of Chinese prepositions is one of common word usage errors in grammatical error diagnosis. In this paper, we adopt the Chinese Gigaword corpus and HSK corpus as L1 and L2 corpora, respectively. We explore gated recurrent neural network model (GRU), and an ensemble of GRU model and maximum entropy language model (GRU-ME) to select the best preposition from 43 candidates for each test sentence. The experimental results show the advantage of the GRU models over simple RNN and n-gram models. We further analyze the effectiveness of linguistic information such as word boundary and part-of-speech tag in this task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,503 |
inproceedings | eskander-etal-2016-extending | Extending the Use of {A}daptor {G}rammars for Unsupervised Morphological Segmentation of Unseen Languages | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1086/ | Eskander, Ramy and Rambow, Owen and Yang, Tianchun | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 900--910 | We investigate using Adaptor Grammars for unsupervised morphological segmentation. Using six development languages, we investigate in detail different grammars, the use of morphological knowledge from outside sources, and the use of a cascaded architecture. Using cross-validation on our development languages, we propose a system which is language-independent. We show that it outperforms two state-of-the-art systems on 5 out of 6 languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,504 |
inproceedings | kuru-etal-2016-charner | {C}har{NER}: Character-Level Named Entity Recognition | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1087/ | Kuru, Onur and Can, Ozan Arkan and Yuret, Deniz | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 911--921 | We describe and evaluate a character-level tagger for language-independent Named Entity Recognition (NER). Instead of words, a sentence is represented as a sequence of characters. The model consists of stacked bidirectional LSTMs which inputs characters and outputs tag probabilities for each character. These probabilities are then converted to consistent word level named entity tags using a Viterbi decoder. We are able to achieve close to state-of-the-art NER performance in seven languages with the same basic model using only labeled NER data and no hand-engineered features or other external resources like syntactic taggers or Gazetteers. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,505 |
inproceedings | hardmeier-2016-neural | A Neural Model for Part-of-Speech Tagging in Historical Texts | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1088/ | Hardmeier, Christian | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 922--931 | Historical texts are challenging for natural language processing because they differ linguistically from modern texts and because of their lack of orthographical and grammatical standardisation. We use a character-level neural network to build a part-of-speech (POS) tagger that can process historical data directly without requiring a separate spelling normalisation stage. Its performance in a Swedish verb identification and a German POS tagging task is similar to that of a two-stage model. We analyse the performance of this tagger and a more traditional baseline system, discuss some of the remaining problems for tagging historical data and suggest how the flexibility of our neural tagger could be exploited to address diachronic divergences in morphology and syntax in early modern Swedish with the help of data from closely related languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,506 |
inproceedings | wang-etal-2016-extracting | Extracting Discriminative Keyphrases with Learned Semantic Hierarchies | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1089/ | Wang, Yunli and Jin, Yong and Zhu, Xiaodan and Goutte, Cyril | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 932--942 | The goal of keyphrase extraction is to automatically identify the most salient phrases from documents. The technique has a wide range of applications such as rendering a quick glimpse of a document, or extracting key content for further use. While previous work often assumes keyphrases are a static property of a given documents, in many applications, the appropriate set of keyphrases that should be extracted depends on the set of documents that are being considered together. In particular, good keyphrases should not only accurately describe the content of a document, but also reveal what discriminates it from the other documents. In this paper, we study this problem of extracting discriminative keyphrases. In particularly, we propose to use the hierarchical semantic structure between candidate keyphrases to promote keyphrases that have the right level of specificity to clearly distinguish the target document from others. We show that such knowledge can be used to construct better discriminative keyphrase extraction systems that do not assume a static, fixed set of keyphrases for a document. We show how this helps identify key expertise of authors from their papers, as well as competencies covered by online courses within different domains. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,507 |
inproceedings | huang-etal-2016-hashtag | Hashtag Recommendation Using End-To-End Memory Networks with Hierarchical Attention | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1090/ | Huang, Haoran and Zhang, Qi and Gong, Yeyun and Huang, Xuanjing | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 943--952 | On microblogging services, people usually use hashtags to mark microblogs, which have a specific theme or content, making them easier for users to find. Hence, how to automatically recommend hashtags for microblogs has received much attention in recent years. Previous deep neural network-based hashtag recommendation approaches converted the task into a multi-class classification problem. However, most of these methods only took the microblog itself into consideration. Motivated by the intuition that the history of users should impact the recommendation procedure, in this work, we extend end-to-end memory networks to perform this task. We incorporate the histories of users into the external memory and introduce a hierarchical attention mechanism to select more appropriate histories. To train and evaluate the proposed method, we also construct a dataset based on microblogs collected from Twitter. Experimental results demonstrate that the proposed methods can significantly outperform state-of-the-art methods. By incorporating the hierarchical attention mechanism, the relative improvement in the proposed method over the state-of-the-art method is around 67.9{\%} in the F1-score. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,508 |
inproceedings | bhatia-etal-2016-automatic | Automatic Labelling of Topics with Neural Embeddings | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1091/ | Bhatia, Shraey and Lau, Jey Han and Baldwin, Timothy | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 953--963 | Topics generated by topic models are typically represented as list of terms. To reduce the cognitive overhead of interpreting these topics for end-users, we propose labelling a topic with a succinct phrase that summarises its theme or idea. Using Wikipedia document titles as label candidates, we compute neural embeddings for documents and words to select the most relevant labels for topics. Comparing to a state-of-the-art topic labelling system, our methodology is simpler, more efficient and finds better topic labels. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,509 |
inproceedings | shain-etal-2016-memory | Memory-Bounded Left-Corner Unsupervised Grammar Induction on Child-Directed Input | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1092/ | Shain, Cory and Bryce, William and Jin, Lifeng and Krakovna, Victoria and Doshi-Velez, Finale and Miller, Timothy and Schuler, William and Schwartz, Lane | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 964--975 | This paper presents a new memory-bounded left-corner parsing model for unsupervised raw-text syntax induction, using unsupervised hierarchical hidden Markov models (UHHMM). We deploy this algorithm to shed light on the extent to which human language learners can discover hierarchical syntax through distributional statistics alone, by modeling two widely-accepted features of human language acquisition and sentence processing that have not been simultaneously modeled by any existing grammar induction algorithm: (1) a left-corner parsing strategy and (2) limited working memory capacity. To model realistic input to human language learners, we evaluate our system on a corpus of child-directed speech rather than typical newswire corpora. Results beat or closely match those of three competing systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,510 |
inproceedings | herbelot-kochmar-2016-calling | {\textquoteleft}Calling on the classical phone': a distributional model of adjective-noun errors in learners' {E}nglish | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1093/ | Herbelot, Aur{\'e}lie and Kochmar, Ekaterina | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 976--986 | In this paper we discuss three key points related to error detection (ED) in learners' English. We focus on content word ED as one of the most challenging tasks in this area, illustrating our claims on adjective{--}noun (AN) combinations. In particular, we (1) investigate the role of context in accurately capturing semantic anomalies and implement a system based on distributional topic coherence, which achieves state-of-the-art accuracy on a standard test set; (2) thoroughly investigate our system`s performance across individual adjective classes, concluding that a class-dependent approach is beneficial to the task; (3) discuss the data size bottleneck in this area, and highlight the challenges of automatic error generation for content words. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,511 |
inproceedings | todirascu-etal-2016-cohesive | Are Cohesive Features Relevant for Text Readability Evaluation? | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1094/ | Todirascu, Amalia and Fran{\c{c}}ois, Thomas and Bernhard, Delphine and Gala, N{\'u}ria and Ligozat, Anne-Laure | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 987--997 | This paper investigates the effectiveness of 65 cohesion-based variables that are commonly used in the literature as predictive features to assess text readability. We evaluate the efficiency of these variables across narrative and informative texts intended for an audience of L2 French learners. In our experiments, we use a French corpus that has been both manually and automatically annotated as regards to co-reference and anaphoric chains. The efficiency of the 65 variables for readability is analyzed through a correlational analysis and some modelling experiments. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,512 |
inproceedings | littell-etal-2016-named | Named Entity Recognition for Linguistic Rapid Response in Low-Resource Languages: {S}orani {K}urdish and {T}ajik | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1095/ | Littell, Patrick and Goyal, Kartik and Mortensen, David R. and Little, Alexa and Dyer, Chris and Levin, Lori | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 998--1006 | This paper describes our construction of named-entity recognition (NER) systems in two Western Iranian languages, Sorani Kurdish and Tajik, as a part of a pilot study of {\textquotedblleft}Linguistic Rapid Response{\textquotedblright} to potential emergency humanitarian relief situations. In the absence of large annotated corpora, parallel corpora, treebanks, bilingual lexica, etc., we found the following to be effective: exploiting distributional regularities in monolingual data, projecting information across closely related languages, and utilizing human linguist judgments. We show promising results on both a four-month exercise in Sorani and a two-day exercise in Tajik, achieved with minimal annotation costs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,513 |
inproceedings | exner-etal-2016-multilingual | Multilingual Supervision of Semantic Annotation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1096/ | Exner, Peter and Klang, Marcus and Nugues, Pierre | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1007--1017 | In this paper, we investigate the annotation projection of semantic units in a practical setting. Previous approaches have focused on using parallel corpora for semantic transfer. We evaluate an alternative approach using loosely parallel corpora that does not require the corpora to be exact translations of each other. We developed a method that transfers semantic annotations from one language to another using sentences aligned by entities, and we extended it to include alignments by entity-like linguistic units. We conducted our experiments on a large scale using the English, Swedish, and French language editions of Wikipedia. Our results show that the annotation projection using entities in combination with loosely parallel corpora provides a viable approach to extending previous attempts. In addition, it allows the generation of proposition banks upon which semantic parsers can be trained. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,514 |
inproceedings | rama-2016-siamese | {S}iamese Convolutional Networks for Cognate Identification | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1097/ | Rama, Taraka | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1018--1027 | In this paper, we present phoneme level Siamese convolutional networks for the task of pair-wise cognate identification. We represent a word as a two-dimensional matrix and employ a siamese convolutional network for learning deep representations. We present siamese architectures that jointly learn phoneme level feature representations and language relatedness from raw words for cognate identification. Compared to previous works, we train and test on larger and realistic datasets; and, show that siamese architectures consistently perform better than traditional linear classifier approach. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,515 |
inproceedings | he-etal-2016-exploring | Exploring Differential Topic Models for Comparative Summarization of Scientific Papers | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1098/ | He, Lei and Li, Wei and Zhuge, Hai | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1028--1038 | This paper investigates differential topic models (dTM) for summarizing the differences among document groups. Starting from a simple probabilistic generative model, we propose dTM-SAGE that explicitly models the deviations on group-specific word distributions to indicate how words are used differentially across different document groups from a background word distribution. It is more effective to capture unique characteristics for comparing document groups. To generate dTM-based comparative summaries, we propose two sentence scoring methods for measuring the sentence discriminative capacity. Experimental results on scientific papers dataset show that our dTM-based comparative summarization methods significantly outperform the generic baselines and the state-of-the-art comparative summarization methods under ROUGE metrics. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,516
inproceedings | benikova-etal-2016-bridging | Bridging the gap between extractive and abstractive summaries: Creation and evaluation of coherent extracts from heterogeneous sources | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1099/ | Benikova, Darina and Mieskes, Margot and Meyer, Christian M. and Gurevych, Iryna | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1039--1050 | Coherent extracts are a novel type of summary combining the advantages of manually created abstractive summaries, which are fluent but difficult to evaluate, and low-quality automatically created extractive summaries, which lack coherence and structure. We use a corpus of heterogeneous documents to address the issue that information seekers usually face {--} a variety of different types of information sources. We directly extract information from these, but minimally redact and meaningfully order it to form a coherent text. Our qualitative and quantitative evaluations show that quantitative results are not sufficient to judge the quality of a summary and that other quality criteria, such as coherence, should also be taken into account. We find that our manually created corpus is of high quality and that it has the potential to bridge the gap between reference corpora of abstracts and automatic methods producing extracts. Our corpus is available to the research community for further development. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,517 |
inproceedings | wang-etal-2016-chinese | {C}hinese Poetry Generation with Planning based Neural Network | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1100/ | Wang, Zhe and He, Wei and Wu, Hua and Wu, Haiyang and Li, Wei and Wang, Haifeng and Chen, Enhong | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1051--1060 | Chinese poetry generation is a very challenging task in natural language processing. In this paper, we propose a novel two-stage poetry generating method which first plans the sub-topics of the poem according to the user's writing intent, and then generates each line of the poem sequentially, using a modified recurrent neural network encoder-decoder framework. The proposed planning-based method can ensure that the generated poem is coherent and semantically consistent with the user's intent. A comprehensive evaluation with human judgments demonstrates that our proposed approach outperforms the state-of-the-art poetry generating methods and the poem quality is somehow comparable to human poets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,518
inproceedings | chenal-cheung-2016-predicting | Predicting sentential semantic compatibility for aggregation in text-to-text generation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1101/ | Chenal, Victor and Cheung, Jackie Chi Kit | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1061--1070 | We examine the task of aggregation in the context of text-to-text generation. We introduce a new aggregation task which frames the process as grouping input sentence fragments into clusters that are to be expressed as a single output sentence. We extract datasets for this task from a corpus using an automatic extraction process. Based on the results of a user study, we develop two gold-standard clusterings and corresponding evaluation methods for each dataset. We present a hierarchical clustering framework for predicting aggregation decisions on this task, which outperforms several baselines and can serve as a reference in future work. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,519 |
inproceedings | zopf-etal-2016-sequential | Sequential Clustering and Contextual Importance Measures for Incremental Update Summarization | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1102/ | Zopf, Markus and Loza Menc{\'i}a, Eneldo and F{\"u}rnkranz, Johannes | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1071--1082 | Unexpected events such as accidents, natural disasters and terrorist attacks represent an information situation where it is crucial to give users access to important and non-redundant information as early as possible. Incremental update summarization (IUS) aims at summarizing events which develop over time. In this paper, we propose a combination of sequential clustering and contextual importance measures to identify important sentences in a stream of documents in a timely manner. Sequential clustering is used to cluster similar sentences. The created clusters are scored by a contextual importance measure which identifies important information as well as redundant information. Experiments on the TREC Temporal Summarization 2015 shared task dataset show that our system achieves superior results compared to the best participating systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,520
inproceedings | goyal-etal-2016-natural | Natural Language Generation through Character-based {RNN}s with Finite-state Prior Knowledge | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1103/ | Goyal, Raghav and Dymetman, Marc and Gaussier, Eric | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1083--1092 | Recently Wen et al. (2015) have proposed a Recurrent Neural Network (RNN) approach to the generation of utterances from dialog acts, and shown that although their model requires less effort to develop than a rule-based system, it is able to improve certain aspects of the utterances, in particular their naturalness. However their system employs generation at the word-level, which requires one to pre-process the data by substituting named entities with placeholders. This pre-processing prevents the model from handling some contextual effects and from managing multiple occurrences of the same attribute. Our approach uses a character-level model, which unlike the word-level model makes it possible to learn to {\textquotedblleft}copy{\textquotedblright} information from the dialog act to the target without having to pre-process the input. In order to avoid generating non-words and inventing information not present in the input, we propose a method for incorporating prior knowledge into the RNN in the form of a weighted finite-state automaton over character sequences. Automatic and human evaluations show improved performance over baselines on several evaluation criteria. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,521 |
inproceedings | chachra-etal-2016-hybrid | A Hybrid Approach to Generation of Missing Abstracts in Biomedical Literature | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1104/ | Chachra, Suchet and Ben Abacha, Asma and Shooshan, Sonya and Rodriguez, Laritza and Demner-Fushman, Dina | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1093--1100 | Readers usually rely on abstracts to identify relevant medical information from scientific articles. Abstracts are also essential to advanced information retrieval methods. More than 50 thousand scientific publications in PubMed lack author-generated abstracts, and the relevancy judgements for these papers have to be based on their titles alone. In this paper, we propose a hybrid summarization technique that aims to select the most pertinent sentences from articles to generate an extractive summary in lieu of a missing abstract. We combine i) health outcome detection, ii) keyphrase extraction, and iii) textual entailment recognition between sentences. We evaluate our hybrid approach and analyze the improvements of multi-factor summarization over techniques that rely on a single method, using a collection of 295 manually generated reference summaries. The obtained results show that the hybrid approach outperforms the baseline techniques with an improvement of 13{\%} in recall and 4{\%} in F1 score. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,522 |
inproceedings | lampouras-vlachos-2016-imitation | Imitation learning for language generation from unaligned data | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1105/ | Lampouras, Gerasimos and Vlachos, Andreas | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1101--1112 | Natural language generation (NLG) is the task of generating natural language from a meaning representation. Current rule-based approaches require domain-specific and manually constructed linguistic resources, while most machine-learning based approaches rely on aligned training data and/or phrase templates. The latter are needed to restrict the search space for the structured prediction task defined by the unaligned datasets. In this work we propose the use of imitation learning for structured prediction which learns an incremental model that handles the large search space by avoiding explicit enumeration of the outputs. We focus on the Locally Optimal Learning to Search framework which allows us to train against non-decomposable loss functions such as the BLEU or ROUGE scores while not assuming gold standard alignments. We evaluate our approach on three datasets using both automatic measures and human judgements and achieve results comparable to the state-of-the-art approaches developed for each of them. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,523 |
inproceedings | yu-etal-2016-product | Product Review Summarization by Exploiting Phrase Properties | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1106/ | Yu, Naitong and Huang, Minlie and Shi, Yuanyuan and Zhu, Xiaoyan | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1113--1124 | We propose a phrase-based approach for generating product review summaries. The main idea of our method is to leverage phrase properties to choose a subset of optimal phrases for generating the final summary. Specifically, we exploit two phrase properties, popularity and specificity. Popularity describes how popular the phrase is in the original reviews. Specificity describes how descriptive a phrase is in comparison to generic comments. We formalize the phrase selection procedure as an optimization problem and solve it using integer linear programming (ILP). An aspect-based bigram language model is used for generating the final summary with the selected phrases. Experiments show that our summarizer outperforms the other baselines. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,524 |
inproceedings | araki-etal-2016-generating | Generating Questions and Multiple-Choice Answers using Semantic Analysis of Texts | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1107/ | Araki, Jun and Rajagopal, Dheeraj and Sankaranarayanan, Sreecharan and Holm, Susan and Yamakawa, Yukari and Mitamura, Teruko | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1125--1136 | We present a novel approach to automated question generation that improves upon prior work both from a technology perspective and from an assessment perspective. Our system is aimed at engaging language learners by generating multiple-choice questions which utilize specific inference steps over multiple sentences, namely coreference resolution and paraphrase detection. The system also generates correct answers and semantically-motivated phrase-level distractors as answer choices. Evaluation by human annotators indicates that our approach requires a larger number of inference steps, which necessitate deeper semantic understanding of texts than a traditional single-sentence approach. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,525 |
inproceedings | marques-beuls-2016-evaluation | Evaluation Strategies for Computational Construction Grammars | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1108/ | Marques, T{\^a}nia and Beuls, Katrien | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1137--1146 | Despite the growing number of Computational Construction Grammar implementations, the field is still lacking evaluation methods to compare grammar fragments across different platforms. Moreover, the hand-crafted nature of most grammars requires profiling tools to understand the complex interactions between constructions of different types. This paper presents a number of evaluation measures, partially based on existing measures in the field of semantic parsing, that are especially relevant for reversible grammar formalisms. The measures are tested on a grammar fragment for European Portuguese clitic placement that is currently under development. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,526 |
inproceedings | kajiwara-komachi-2016-building | Building a Monolingual Parallel Corpus for Text Simplification Using Sentence Similarity Based on Alignment between Word Embeddings | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1109/ | Kajiwara, Tomoyuki and Komachi, Mamoru | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1147--1158 | Methods for text simplification using the framework of statistical machine translation have been extensively studied in recent years. However, building the monolingual parallel corpus necessary for training the model requires costly human annotation. Monolingual parallel corpora for text simplification have therefore been built only for a limited number of languages, such as English and Portuguese. To obviate the need for human annotation, we propose an unsupervised method that automatically builds the monolingual parallel corpus for text simplification using sentence similarity based on word embeddings. For any sentence pair comprising a complex sentence and its simple counterpart, we employ a many-to-one method of aligning each word in the complex sentence with the most similar word in the simple sentence and compute sentence similarity by averaging these word similarities. The experimental results demonstrate the excellent performance of the proposed method in a monolingual parallel corpus construction task for English text simplification. The results also demonstrated the superior accuracy in text simplification that use the framework of statistical machine translation trained using the corpus built by the proposed method to that using the existing corpora. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,527 |
inproceedings | servan-etal-2016-word2vec-vs | {W}ord2{V}ec vs {DB}nary: Augmenting {METEOR} using Vector Representations or Lexical Resources? | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1110/ | Servan, Christophe and B{\'e}rard, Alexandre and Elloumi, Zied and Blanchon, Herv{\'e} and Besacier, Laurent | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1159--1168 | This paper presents an approach combining lexico-semantic resources and distributed representations of words applied to the evaluation in machine translation (MT). This study is made through the enrichment of a well-known MT evaluation metric: METEOR. METEOR enables an approximate match (synonymy or morphological similarity) between an automatic and a reference translation. Our experiments are made in the framework of the Metrics task of WMT 2014. We show that distributed representations are a good alternative to lexico-semanticresources for MT evaluation and they can even bring interesting additional information. The augmented versions of METEOR, using vector representations, are made available on our Github page. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,528 |
inproceedings | derczynski-etal-2016-broad | Broad {T}witter Corpus: A Diverse Named Entity Recognition Resource | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1111/ | Derczynski, Leon and Bontcheva, Kalina and Roberts, Ian | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1169--1179 | One of the main obstacles, hampering method development and comparative evaluation of named entity recognition in social media, is the lack of a sizeable, diverse, high quality annotated corpus, analogous to the CoNLL'2003 news dataset. For instance, the biggest Ritter tweet corpus is only 45,000 tokens {--} a mere 15{\%} the size of CoNLL'2003. Another major shortcoming is the lack of temporal, geographic, and author diversity. This paper introduces the Broad Twitter Corpus (BTC), which is not only significantly bigger, but sampled across different regions, temporal periods, and types of Twitter users. The gold-standard named entity annotations are made by a combination of NLP experts and crowd workers, which enables us to harness crowd recall while maintaining high quality. We also measure the entity drift observed in our dataset (i.e. how entity representation varies over time), and compare to newswire. The corpus is released openly, including source text and intermediate annotations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,529
inproceedings | ilievski-etal-2016-semantic | Semantic overfitting: what {\textquoteleft}world' do we consider when evaluating disambiguation of text? | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1112/ | Ilievski, Filip and Postma, Marten and Vossen, Piek | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1180--1191 | Semantic text processing faces the challenge of defining the relation between lexical expressions and the world to which they make reference within a period of time. It is unclear whether the current test sets used to evaluate disambiguation tasks are representative for the full complexity considering this time-anchored relation, resulting in semantic overfitting to a specific period and the frequent phenomena within. We conceptualize and formalize a set of metrics which evaluate this complexity of datasets. We provide evidence for their applicability on five different disambiguation tasks. To challenge semantic overfitting of disambiguation systems, we propose a time-based, metric-aware method for developing datasets in a systematic and semi-automated manner, as well as an event-based QA task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,530 |
inproceedings | suzuki-takatsuka-2016-extraction | Extraction of Keywords of Novelties From Patent Claims | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1113/ | Suzuki, Shoko and Takatsuka, Hiromichi | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1192--1200 | There are growing needs for patent analysis using Natural Language Processing (NLP)-based approaches. Although NLP-based approaches can extract various information from patents, there are very few approaches proposed to extract those parts what inventors regard as novel or having an inventive step compared to all existing works ever. To extract such parts is difficult even for human annotators except for well-trained experts. This causes many difficulties in analyzing patents. We propose a novel approach to automatically extract such keywords that relate to novelties or inventive steps from patent claims using the structure of the claims. In addition, we also propose a new framework of evaluating our approach. The experiments show that our approach outperforms the existing keyword extraction methods significantly in many technical fields. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,531 |
inproceedings | hsi-etal-2016-leveraging | Leveraging Multilingual Training for Limited Resource Event Extraction | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1114/ | Hsi, Andrew and Yang, Yiming and Carbonell, Jaime and Xu, Ruochen | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1201--1210 | Event extraction has become one of the most important topics in information extraction, but to date, there is very limited work on leveraging cross-lingual training to boost performance. We propose a new event extraction approach that trains on multiple languages using a combination of both language-dependent and language-independent features, with particular focus on the case where target domain training data is of very limited size. We show empirically that multilingual training can boost performance for the tasks of event trigger extraction and event argument extraction on the Chinese ACE 2005 dataset. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,532 |
inproceedings | al-badrashiny-diab-2016-lili | {LILI}: A Simple Language Independent Approach for Language Identification | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1115/ | Al-Badrashiny, Mohamed and Diab, Mona | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1211--1219 | We introduce a generic Language Independent Framework for Linguistic Code Switch Point Detection. The system uses characters level 5-grams and word level unigram language models to train a conditional random fields (CRF) model for classifying input words into various languages. We test our proposed framework and compare it to the state-of-the-art published systems on standard data sets from several language pairs: English-Spanish, Nepali-English, English-Hindi, Arabizi (Refers to Arabic written using the Latin/Roman script)-English, Arabic-Engari (Refers to English written using Arabic script), Modern Standard Arabic(MSA)-Egyptian, Levantine-MSA, Gulf-MSA, one more English-Spanish, and one more MSA-EGY. The overall weighted average F-score of each language pair are 96.4{\%}, 97.3{\%}, 98.0{\%}, 97.0{\%}, 98.9{\%}, 86.3{\%}, 88.2{\%}, 90.6{\%}, 95.2{\%}, and 85.0{\%} respectively. The results show that our approach despite its simplicity, either outperforms or performs at comparable levels to state-of-the-art published systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,533 |
inproceedings | tayyar-madabushi-lee-2016-high | High Accuracy Rule-based Question Classification using Question Syntax and Semantics | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1116/ | Tayyar Madabushi, Harish and Lee, Mark | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1220--1230 | We present in this paper a purely rule-based system for Question Classification which we divide into two parts: The first is the extraction of relevant words from a question by use of its structure, and the second is the classification of questions based on rules that associate these words to Concepts. We achieve an accuracy of 97.2{\%}, close to a 6 point improvement over the previous State of the Art of 91.6{\%}. Additionally, we believe that machine learning algorithms can be applied on top of this method to further improve accuracy. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,534 |
inproceedings | xiang-etal-2016-incorporating | Incorporating Label Dependency for Answer Quality Tagging in Community Question Answering via {CNN}-{LSTM}-{CRF} | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1117/ | Xiang, Yang and Zhou, Xiaoqiang and Chen, Qingcai and Zheng, Zhihui and Tang, Buzhou and Wang, Xiaolong and Qin, Yang | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1231--1241 | In community question answering (cQA), the quality of answers are determined by the matching degree between question-answer pairs and the correlation among the answers. In this paper, we show that the dependency between the answer quality labels also plays a pivotal role. To validate the effectiveness of label dependency, we propose two neural network-based models, with different combination modes of Convolutional Neural Networks, Long Short Term Memory and Conditional Random Fields. Extensive experiments are taken on the dataset released by the SemEval-2015 cQA shared task. The first model is a stacked ensemble of the networks. It achieves 58.96{\%} on macro averaged F1, which improves the state-of-the-art neural network-based method by 2.82{\%} and outperforms the Top-1 system in the shared task by 1.77{\%}. The second is a simple attention-based model whose input is the connection of the question and its corresponding answers. It produces promising results with 58.29{\%} on overall F1 and gains the best performance on the Good and Bad categories. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,535
inproceedings | liebeskind-hacohen-kerner-2016-semantically | Semantically Motivated {H}ebrew Verb-Noun Multi-Word Expressions Identification | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1118/ | Liebeskind, Chaya and HaCohen-Kerner, Yaakov | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1242--1253 | Identification of Multi-Word Expressions (MWEs) lies at the heart of many natural language processing applications. In this research, we deal with a particular type of Hebrew MWEs, Verb-Noun MWEs (VN-MWEs), which combine a verb and a noun with or without other words. Most prior work on MWEs classification focused on linguistic and statistical information. In this paper, we claim that it is essential to utilize semantic information. To this end, we propose a semantically motivated indicator for classifying VN-MWE and define features that are related to various semantic spaces and combine them as features in a supervised classification framework. We empirically demonstrate that our semantic feature set yields better performance than the common linguistic and statistical feature sets and that combining semantic features contributes to the VN-MWEs identification task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,536 |
inproceedings | xiao-liu-2016-semantic | Semantic Relation Classification via Hierarchical Recurrent Neural Network with Attention | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1119/ | Xiao, Minguang and Liu, Cong | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1254--1263 | Semantic relation classification remains a challenge in natural language processing. In this paper, we introduce a hierarchical recurrent neural network that is capable of extracting information from raw sentences for relation classification. Our model has several distinctive features: (1) Each sentence is divided into three context subsequences according to two annotated nominals, which allows the model to encode each context subsequence independently so as to selectively focus as on the important context information; (2) The hierarchical model consists of two recurrent neural networks (RNNs): the first one learns context representations of the three context subsequences respectively, and the second one computes semantic composition of these three representations and produces a sentence representation for the relationship classification of the two nominals. (3) The attention mechanism is adopted in both RNNs to encourage the model to concentrate on the important information when learning the sentence representations. Experimental results on the SemEval-2010 Task 8 dataset demonstrate that our model is comparable to the state-of-the-art without using any hand-crafted features. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,537 |
inproceedings | guo-etal-2016-unified | A Unified Architecture for Semantic Role Labeling and Relation Classification | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1120/ | Guo, Jiang and Che, Wanxiang and Wang, Haifeng and Liu, Ting and Xu, Jun | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1264--1274 | This paper describes a unified neural architecture for identifying and classifying multi-typed semantic relations between words in a sentence. We investigate two typical and well-studied tasks: semantic role labeling (SRL) which identifies the relations between predicates and arguments, and relation classification (RC) which focuses on the relation between two entities or nominals. While mostly studied separately in prior work, we show that the two tasks can be effectively connected and modeled using a general architecture. Experiments on CoNLL-2009 benchmark datasets show that our SRL models significantly outperform state-of-the-art approaches. Our RC models also yield competitive performance with the best published records. Furthermore, we show that the two tasks can be trained jointly with multi-task learning, resulting in additive significant improvements for SRL. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,538 |
inproceedings | do-etal-2016-facing | Facing the most difficult case of Semantic Role Labeling: A collaboration of word embeddings and co-training | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1121/ | Do, Quynh Ngoc Thi and Bethard, Steven and Moens, Marie-Francine | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1275--1284 | We present a successful collaboration of word embeddings and co-training to tackle in the most difficult test case of semantic role labeling: predicting out-of-domain and unseen semantic frames. Despite the fact that co-training is a successful traditional semi-supervised method, its application in SRL is very limited especially when a huge amount of labeled data is available. In this work, co-training is used together with word embeddings to improve the performance of a system trained on a large training dataset. We also introduce a semantic role labeling system with a simple learning architecture and effective inference that is easily adaptable to semi-supervised settings with new training data and/or new features. On the out-of-domain testing set of the standard benchmark CoNLL 2009 data our simple approach achieves high performance and improves state-of-the-art results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,539 |
inproceedings | pado-etal-2016-predictability | Predictability of Distributional Semantics in Derivational Word Formation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1122/ | Pad{\'o}, Sebastian and Herbelot, Aur{\'e}lie and Kisselew, Max and {\v{S}}najder, Jan | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1285--1296 | Compositional distributional semantic models (CDSMs) have successfully been applied to the task of predicting the meaning of a range of linguistic constructions. Their performance on semi-compositional word formation process of (morphological) derivation, however, has been extremely variable, with no large-scale empirical investigation to date. This paper fills that gap, performing an analysis of CDSM predictions on a large dataset (over 30,000 German derivationally related word pairs). We use linear regression models to analyze CDSM performance and obtain insights into the linguistic factors that influence how predictable the distributional context of a derived word is going to be. We identify various such factors, notably part of speech, argument structure, and semantic regularity. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,540 |
inproceedings | ohoran-etal-2016-survey | Survey on the Use of Typological Information in Natural Language Processing | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1123/ | O{'}Horan, Helen and Berzak, Yevgeni and Vuli{\'c}, Ivan and Reichart, Roi and Korhonen, Anna | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1297--1308 | In recent years linguistic typologies, which classify the world`s languages according to their functional and structural properties, have been widely used to support multilingual NLP. While the growing importance of typologies in supporting multilingual tasks has been recognised, no systematic survey of existing typological resources and their use in NLP has been published. This paper provides such a survey as well as discussion which we hope will both inform and inspire future work in the area. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,541 |
inproceedings | gelderloos-chrupala-2016-phonemes | From phonemes to images: levels of representation in a recurrent neural model of visually-grounded language learning | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1124/ | Gelderloos, Lieke and Chrupa{\l}a, Grzegorz | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1309--1319 | We present a model of visually-grounded language learning based on stacked gated recurrent neural networks which learns to predict visual features given an image description in the form of a sequence of phonemes. The learning task resembles that faced by human language learners who need to discover both structure and meaning from noisy and ambiguous data across modalities. We show that our model indeed learns to predict features of the visual context given phonetically transcribed image descriptions, and show that it represents linguistic information in a hierarchy of levels: lower layers in the stack are comparatively more sensitive to form, whereas higher layers are more sensitive to meaning. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,542 |
inproceedings | vaidya-etal-2016-linguistic | Linguistic features for {H}indi light verb construction identification | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1125/ | Vaidya, Ashwini and Agarwal, Sumeet and Palmer, Martha | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1320--1329 | Light verb constructions (LVC) in Hindi are highly productive. If we can distinguish a case such as nirnay lenaa {\textquoteleft}decision take; decide' from an ordinary verb-argument combination kaagaz lenaa {\textquoteleft}paper take; take (a) paper', it has been shown to aid NLP applications such as parsing (Begum et al., 2011) and machine translation (Pal et al., 2011). In this paper, we propose an LVC identification system using language specific features for Hindi which shows an improvement over previous work (Begum et al., 2011). To build our system, we carry out a linguistic analysis of Hindi LVCs using Hindi Treebank annotations and propose two new features that are aimed at capturing the diversity of Hindi LVCs in the corpus. We find that our model performs robustly across a diverse range of LVCs and our results underscore the importance of semantic features, which is in keeping with the findings for English. Our error analysis also demonstrates that our classifier can be used to further refine LVC annotations in the Hindi Treebank and make them more consistent across the board. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,543
inproceedings | barrett-etal-2016-cross | Cross-lingual Transfer of Correlations between Parts of Speech and Gaze Features | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1126/ | Barrett, Maria and Keller, Frank and S{\o}gaard, Anders | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1330--1339 | Several recent studies have shown that eye movements during reading provide information about grammatical and syntactic processing, which can assist the induction of NLP models. All these studies have been limited to English, however. This study shows that gaze and part of speech (PoS) correlations largely transfer across English and French. This means that we can replicate previous studies on gaze-based PoS tagging for French, but also that we can use English gaze data to assist the induction of French NLP models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,544 |
inproceedings | wang-etal-2016-sentence | Sentence Similarity Learning by Lexical Decomposition and Composition | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1127/ | Wang, Zhiguo and Mi, Haitao and Ittycheriah, Abraham | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1340--1349 | Most conventional sentence similarity methods only focus on similar parts of two input sentences, and simply ignore the dissimilar parts, which usually give us some clues and semantic meanings about the sentences. In this work, we propose a model to take into account both the similarities and dissimilarities by decomposing and composing lexical semantics over sentences. The model represents each word as a vector, and calculates a semantic matching vector for each word based on all words in the other sentence. Then, each word vector is decomposed into a similar component and a dissimilar component based on the semantic matching vector. After this, a two-channel CNN model is employed to capture features by composing the similar and dissimilar components. Finally, a similarity score is estimated over the composed feature vectors. Experimental results show that our model gets the state-of-the-art performance on the answer sentence selection task, and achieves a comparable result on the paraphrase identification task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,545 |
inproceedings | wang-he-2016-chinese | {C}hinese Hypernym-Hyponym Extraction from User Generated Categories | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1128/ | Wang, Chengyu and He, Xiaofeng | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1350--1361 | Hypernym-hyponym ({\textquotedblleft}is-a{\textquotedblright}) relations are key components in taxonomies, object hierarchies and knowledge graphs. While there is abundant research on is-a relation extraction in English, it still remains a challenge to identify such relations from Chinese knowledge sources accurately due to the flexibility of language expression. In this paper, we introduce a weakly supervised framework to extract Chinese is-a relations from user generated categories. It employs piecewise linear projection models trained on a Chinese taxonomy and an iterative learning algorithm to update models incrementally. A pattern-based relation selection method is proposed to prevent {\textquotedblleft}semantic drift{\textquotedblright} in the learning process using bi-criteria optimization. Experimental results illustrate that the proposed approach outperforms state-of-the-art methods. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,546 |
inproceedings | emms-jayapal-2016-dynamic | Dynamic Generative model for Diachronic Sense Emergence Detection | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1129/ | Emms, Martin and Jayapal, Arun Kumar | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1362--1373 | As time passes words can acquire meanings they did not previously have, such as the {\textquoteleft}twitter post' usage of {\textquoteleft}tweet'. We address how this can be detected from time-stamped raw text. We propose a generative model with senses dependent on times and context words dependent on senses but otherwise eternal, and a Gibbs sampler for estimation. We obtain promising parameter estimates for positive (resp. negative) cases of known sense emergence (resp non-emergence) and adapt the {\textquoteleft}pseudo-word' technique (Schutze, 1992) to give a novel further evaluation via {\textquoteleft}pseudo-neologisms'. The question of ground-truth is also addressed and a technique proposed to locate an emergence date for evaluation purposes. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,547 |
inproceedings | yuan-etal-2016-semi | Semi-supervised Word Sense Disambiguation with Neural Models | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1130/ | Yuan, Dayu and Richardson, Julian and Doherty, Ryan and Evans, Colin and Altendorf, Eric | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1374--1385 | Determining the intended sense of words in text {--} word sense disambiguation (WSD) {--} is a long-standing problem in natural language processing. Recently, researchers have shown promising results using word vectors extracted from a neural network language model as features in WSD algorithms. However, a simple average or concatenation of word vectors for each word in a text loses the sequential and syntactic information of the text. In this paper, we study WSD with a sequence learning neural net, LSTM, to better capture the sequential and syntactic patterns of the text. To alleviate the lack of training data in all-words WSD, we employ the same LSTM in a semi-supervised label propagation classifier. We demonstrate state-of-the-art results, especially on verbs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,548 |
inproceedings | zhang-etal-2016-fast | Fast Gated Neural Domain Adaptation: Language Model as a Case Study | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1131/ | Zhang, Jian and Wu, Xiaofeng and Way, Andy and Liu, Qun | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1386--1397 | Neural network training has been shown to be advantageous in many natural language processing applications, such as language modelling or machine translation. In this paper, we describe in detail a novel domain adaptation mechanism in neural network training. Instead of learning and adapting the neural network on millions of training sentences {--} which can be very time-consuming or even infeasible in some cases {--} we design a domain adaptation gating mechanism which can be used in recurrent neural networks and quickly learn the out-of-domain knowledge directly from the word vector representations with little speed overhead. In our experiments, we use the recurrent neural network language model (LM) as a case study. We show that the neural LM perplexity can be reduced by 7.395 and 12.011 using the proposed domain adaptation mechanism on the Penn Treebank and News data, respectively. Furthermore, we show that using the domain-adapted neural LM to re-rank the statistical machine translation n-best list on the French-to-English language pair can significantly improve translation quality. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,549 |
inproceedings | guzman-etal-2016-machine | Machine Translation Evaluation for {A}rabic using Morphologically-enriched Embeddings | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1132/ | Guzm{\'a}n, Francisco and Bouamor, Houda and Baly, Ramy and Habash, Nizar | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1398--1408 | Evaluation of machine translation (MT) into morphologically rich languages (MRL) has not been well studied despite posing many challenges. In this paper, we explore the use of embeddings obtained from different levels of lexical and morpho-syntactic linguistic analysis and show that they improve MT evaluation into an MRL. Specifically we report on Arabic, a language with complex and rich morphology. Our results show that using a neural-network model with different input representations produces results that clearly outperform the state-of-the-art for MT evaluation into Arabic, by almost over 75{\%} increase in correlation with human judgments on pairwise MT evaluation quality task. More importantly, we demonstrate the usefulness of morpho-syntactic representations to model sentence similarity for MT evaluation and address complex linguistic phenomena of Arabic. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,550 |
inproceedings | garmash-monz-2016-ensemble | Ensemble Learning for Multi-Source Neural Machine Translation | Matsumoto, Yuji and Prasad, Rashmi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/C16-1133/ | Garmash, Ekaterina and Monz, Christof | Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers | 1409--1418 | In this paper we describe and evaluate methods to perform ensemble prediction in neural machine translation (NMT). We compare two methods of ensemble set induction: sampling parameter initializations for an NMT system, which is a relatively established method in NMT (Sutskever et al., 2014), and NMT systems translating from different source languages into the same target language, i.e., multi-source ensembles, a method recently introduced by Firat et al. (2016). We are motivated by the observation that for different language pairs systems make different types of mistakes. We propose several methods with different degrees of parameterization to combine individual predictions of NMT systems so that they mutually compensate for each other`s mistakes and improve overall performance. We find that the biggest improvements can be obtained from a context-dependent weighting scheme for multi-source ensembles. This result offers stronger support for the linguistic motivation of using multi-source ensembles than previous approaches. Evaluation is carried out for German and French into English translation. The best multi-source ensemble method achieves an improvement of up to 2.2 BLEU points over the strongest single-source ensemble baseline, and a 2 BLEU improvement over a multi-source ensemble baseline. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 61,551 |