entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | kajiwara-fujita-2017-semantic | Semantic Features Based on Word Alignments for Estimating Quality of Text Simplification | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2019/ | Kajiwara, Tomoyuki and Fujita, Atsushi | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 109--115 | This paper examines the usefulness of semantic features based on word alignments for estimating the quality of text simplification. Specifically, we introduce seven types of alignment-based features computed on the basis of word embeddings and paraphrase lexicons. Through an empirical experiment using the QATS dataset, we confirm that we can achieve the state-of-the-art performance only with these features. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,078 |
inproceedings | pandey-etal-2017-injecting | Injecting Word Embeddings with Another Language's Resource: An Application of Bilingual Embeddings | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2020/ | Pandey, Prakhar and Pudi, Vikram and Shrivastava, Manish | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 116--121 | Word embeddings learned from text corpus can be improved by injecting knowledge from external resources, while at the same time also specializing them for similarity or relatedness. These knowledge resources (like WordNet, Paraphrase Database) may not exist for all languages. In this work we introduce a method to inject word embeddings of a language with knowledge resource of another language by leveraging bilingual embeddings. First we improve word embeddings of German, Italian, French and Spanish using resources of English and test them on variety of word similarity tasks. Then we demonstrate the utility of our method by creating improved embeddings for Urdu and Telugu languages using Hindi WordNet, beating the previously established baseline for Urdu. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,079 |
inproceedings | corona-etal-2017-improving | Improving Black-box Speech Recognition using Semantic Parsing | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2021/ | Corona, Rodolfo and Thomason, Jesse and Mooney, Raymond | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 122--127 | Speech is a natural channel for human-computer interaction in robotics and consumer applications. Natural language understanding pipelines that start with speech can have trouble recovering from speech recognition errors. Black-box automatic speech recognition (ASR) systems, built for general purpose use, are unable to take advantage of in-domain language models that could otherwise ameliorate these errors. In this work, we present a method for re-ranking black-box ASR hypotheses using an in-domain language model and semantic parser trained for a particular task. Our re-ranking method significantly improves both transcription accuracy and semantic understanding over a state-of-the-art ASR's vanilla output. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,080 |
inproceedings | matsubayashi-inui-2017-revisiting | Revisiting the Design Issues of Local Models for Japanese Predicate-Argument Structure Analysis | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2022/ | Matsubayashi, Yuichiroh and Inui, Kentaro | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 128--133 | The research trend in Japanese predicate-argument structure (PAS) analysis is shifting from pointwise prediction models with local features to global models designed to search for globally optimal solutions. However, the existing global models tend to employ only relatively simple local features; therefore, the overall performance gains are rather limited. The importance of designing a local model is demonstrated in this study by showing that the performance of a sophisticated local model can be considerably improved with recent feature embedding methods and a feature combination learning based on a neural network, outperforming the state-of-the-art global models in F1 on a common benchmark dataset. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,081 |
inproceedings | han-etal-2017-natural | Natural Language Informs the Interpretation of Iconic Gestures: A Computational Approach | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2023/ | Han, Ting and Hough, Julian and Schlangen, David | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 134--139 | When giving descriptions, speakers often signify object shape or size with hand gestures. Such so-called 'iconic' gestures represent their meaning through their relevance to referents in the verbal content, rather than having a conventional form. The gesture form on its own is often ambiguous, and the aspect of the referent that it highlights is constrained by what the language makes salient. We show how the verbal content guides gesture interpretation through a computational model that frames the task as a multi-label classification task that maps multimodal utterances to semantic categories, using annotated human-human data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,082 |
inproceedings | beck-2017-modelling | Modelling Representation Noise in Emotion Analysis using Gaussian Processes | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2024/ | Beck, Daniel | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 140--145 | Emotion Analysis is the task of modelling latent emotions present in natural language. Labelled datasets for this task are scarce so learning good input text representations is not trivial. Using averaged word embeddings is a simple way to leverage unlabelled corpora to build text representations but this approach can be prone to noise either coming from the embedding themselves or the averaging procedure. In this paper we propose a model for Emotion Analysis using Gaussian Processes and kernels that are better suitable for functions that exhibit noisy behaviour. Empirical evaluations in a emotion prediction task show that our model outperforms commonly used baselines for regression. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,083 |
inproceedings | li-etal-2017-manually | Are Manually Prepared Affective Lexicons Really Useful for Sentiment Analysis | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2025/ | Li, Minglei and Lu, Qin and Long, Yunfei | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 146--150 | In this paper, we investigate the effectiveness of different affective lexicons through sentiment analysis of phrases. We examine how phrases can be represented through manually prepared lexicons, extended lexicons using computational methods, or word embedding. Comparative studies clearly show that word embedding using unsupervised distributional method outperforms manually prepared lexicons no matter what affective models are used in the lexicons. Our conclusion is that although different affective lexicons are cognitively backed by theories, they do not show any advantage over the automatically obtained word embedding. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,084 |
inproceedings | xue-etal-2017-mtna | MTNA: A Neural Multi-task Model for Aspect Category Classification and Aspect Term Extraction On Restaurant Reviews | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2026/ | Xue, Wei and Zhou, Wubai and Li, Tao and Wang, Qing | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 151--156 | Online reviews are valuable resources not only for consumers to make decisions before purchase, but also for providers to get feedbacks for their services or commodities. In Aspect Based Sentiment Analysis (ABSA), it is critical to identify aspect categories and extract aspect terms from the sentences of user-generated reviews. However, the two tasks are often treated independently, even though they are closely related. Intuitively, the learned knowledge of one task should inform the other learning task. In this paper, we propose a multi-task learning model based on neural networks to solve them together. We demonstrate the improved performance of our multi-task learning model over the models trained separately on three public dataset released by SemEval workshops. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,085 |
inproceedings | yung-etal-2017-discourse | Can Discourse Relations be Identified Incrementally? | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2027/ | Yung, Frances and Noji, Hiroshi and Matsumoto, Yuji | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 157--162 | Humans process language word by word and construct partial linguistic structures on the fly before the end of the sentence is perceived. Inspired by this cognitive ability, incremental algorithms for natural language processing tasks have been proposed and demonstrated promising performance. For discourse relation (DR) parsing, however, it is not yet clear to what extent humans can recognize DRs incrementally, because the latent 'nodes' of discourse structure can span clauses and sentences. To answer this question, this work investigates incrementality in discourse processing based on a corpus annotated with DR signals. We find that DRs are dominantly signaled at the boundary between the two constituent discourse units. The findings complement existing psycholinguistic theories on expectation in discourse processing and provide direction for incremental discourse parsing. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,086 |
inproceedings | chi-etal-2017-speaker | Speaker Role Contextual Modeling for Language Understanding and Dialogue Policy Learning | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2028/ | Chi, Ta-Chung and Chen, Po-Chun and Su, Shang-Yu and Chen, Yun-Nung | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 163--168 | Language understanding (LU) and dialogue policy learning are two essential components in conversational systems. Human-human dialogues are not well-controlled and often random and unpredictable due to their own goals and speaking habits. This paper proposes a role-based contextual model to consider different speaker roles independently based on the various speaking patterns in the multi-turn dialogues. The experiments on the benchmark dataset show that the proposed role-based model successfully learns role-specific behavioral patterns for contextual encoding and then significantly improves language understanding and dialogue policy learning tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,087 |
inproceedings | song-etal-2017-diversifying | Diversifying Neural Conversation Model with Maximal Marginal Relevance | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2029/ | Song, Yiping and Tian, Zhiliang and Zhao, Dongyan and Zhang, Ming and Yan, Rui | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 169--174 | Neural conversation systems, typically using sequence-to-sequence (seq2seq) models, are showing promising progress recently. However, traditional seq2seq suffer from a severe weakness: during beam search decoding, they tend to rank universal replies at the top of the candidate list, resulting in the lack of diversity among candidate replies. Maximum Marginal Relevance (MMR) is a ranking algorithm that has been widely used for subset selection. In this paper, we propose the MMR-BS decoding method, which incorporates MMR into the beam search (BS) process of seq2seq. The MMR-BS method improves the diversity of generated replies without sacrificing their high relevance with the user-issued query. Experiments show that our proposed model achieves the best performance among other comparison methods. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,088 |
inproceedings | chaurasia-mooney-2017-dialog | Dialog for Language to Code | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2030/ | Chaurasia, Shobhit and Mooney, Raymond J. | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 175--180 | Generating computer code from natural language descriptions has been a long-standing problem. Prior work in this domain has restricted itself to generating code in one shot from a single description. To overcome this limitation, we propose a system that can engage users in a dialog to clarify their intent until it has all the information to produce correct code. To evaluate the efficacy of dialog in code generation, we focus on synthesizing conditional statements in the form of IFTTT recipes. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,089 |
inproceedings | sladoljev-agejev-snajder-2017-using | Using Analytic Scoring Rubrics in the Automatic Assessment of College-Level Summary Writing Tasks in L2 | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2031/ | Sladoljev-Agejev, Tamara and Šnajder, Jan | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 181--186 | Assessing summaries is a demanding, yet useful task which provides valuable information on language competence, especially for second language learners. We consider automated scoring of college-level summary writing task in English as a second language (EL2). We adopt the Reading-for-Understanding (RU) cognitive framework, extended with the Reading-to-Write (RW) element, and use analytic scoring with six rubrics covering content and writing quality. We show that regression models with reference-based and linguistic features considerably outperform the baselines across all the rubrics. Moreover, we find interesting correlations between summary features and analytic rubrics, revealing the links between the RU and RW constructs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,090 |
inproceedings | wang-etal-2017-statistical | A Statistical Framework for Product Description Generation | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2032/ | Wang, Jinpeng and Hou, Yutai and Liu, Jing and Cao, Yunbo and Lin, Chin-Yew | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 187--192 | We present in this paper a statistical framework that generates accurate and fluent product description from product attributes. Specifically, after extracting templates and learning writing knowledge from attribute-description parallel data, we use the learned knowledge to decide what to say and how to say for product description generation. To evaluate accuracy and fluency for the generated descriptions, in addition to BLEU and Recall, we propose to measure what to say (in terms of attribute coverage) and to measure how to say (by attribute-specified generation) separately. Experimental results show that our framework is effective. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,091 |
inproceedings | lee-lee-2017-automatic | Automatic Text Summarization Using Reinforcement Learning with Embedding Features | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2033/ | Lee, Gyoung Ho and Lee, Kong Joo | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 193--197 | An automatic text summarization system can automatically generate a short and brief summary that contains a main concept of an original document. In this work, we explore the advantages of simple embedding features in Reinforcement leaning approach to automatic text summarization tasks. In addition, we propose a novel deep learning network for estimating Q-values used in Reinforcement learning. We evaluate our model by using ROUGE scores with DUC 2001, 2002, Wikipedia, ACL-ARC data. Evaluation results show that our model is competitive with the previous models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,092 |
inproceedings | vadapalli-etal-2017-ssas | SSAS: Semantic Similarity for Abstractive Summarization | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2034/ | Vadapalli, Raghuram and J Kurisinkel, Litton and Gupta, Manish and Varma, Vasudeva | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 198--203 | Ideally a metric evaluating an abstract system summary should represent the extent to which the system-generated summary approximates the semantic inference conceived by the reader using a human-written reference summary. Most of the previous approaches relied upon word or syntactic sub-sequence overlap to evaluate system-generated summaries. Such metrics cannot evaluate the summary at semantic inference level. Through this work we introduce the metric of Semantic Similarity for Abstractive Summarization (SSAS), which leverages natural language inference and paraphrasing techniques to frame a novel approach to evaluate system summaries at semantic inference level. SSAS is based upon a weighted composition of quantities representing the level of agreement, contradiction, independence, paraphrasing, and optionally ROUGE score between a system-generated and a human-written summary. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,093 |
inproceedings | mnasri-etal-2017-taking | Taking into account Inter-sentence Similarity for Update Summarization | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2035/ | Mnasri, Maâli and de Chalendar, Gaël and Ferret, Olivier | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 204--209 | Following Gillick and Favre (2009), a lot of work about extractive summarization has modeled this task by associating two contrary constraints: one aims at maximizing the coverage of the summary with respect to its information content while the other represents its size limit. In this context, the notion of redundancy is only implicitly taken into account. In this article, we extend the framework defined by Gillick and Favre (2009) by examining how and to what extent integrating semantic sentence similarity into an update summarization system can improve its results. We show more precisely the impact of this strategy through evaluations performed on DUC 2007 and TAC 2008 and 2009 datasets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,094 |
inproceedings | masumura-etal-2017-hyperspherical | Hyperspherical Query Likelihood Models with Word Embeddings | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2036/ | Masumura, Ryo and Asami, Taichi and Masataki, Hirokazu and Sadamitsu, Kugatsu and Nishida, Kyosuke and Higashinaka, Ryuichiro | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 210--216 | This paper presents an initial study on hyperspherical query likelihood models (QLMs) for information retrieval (IR). Our motivation is to naturally utilize pre-trained word embeddings for probabilistic IR. To this end, key idea is to directly leverage the word embeddings as random variables for directional probabilistic models based on von Mises-Fisher distributions which are familiar to cosine distances. The proposed method enables us to theoretically take semantic similarities between document and target queries into consideration without introducing heuristic expansion techniques. In addition, this paper reveals relationships between hyperspherical QLMs and conventional QLMs. Experiments show document retrieval evaluation results in which a hyperspherical QLM is compared to conventional QLMs and document distance metrics using word or document embeddings. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,095 |
inproceedings | kulkarni-etal-2017-dual | Dual Constrained Question Embeddings with Relational Knowledge Bases for Simple Question Answering | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2037/ | Kulkarni, Kaustubh and Togashi, Riku and Maeda, Hideyuki and Fujita, Sumio | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 217--221 | Embedding based approaches are shown to be effective for solving simple Question Answering (QA) problems in recent works. The major drawback of current approaches is that they look only at the similarity (constraint) between a question and a head, relation pair. Due to the absence of tail (answer) in the questions, these models often require paraphrase datasets to obtain adequate embeddings. In this paper, we propose a dual constraint model which exploits the embeddings obtained by Trans* family of algorithms to solve the simple QA problem without using any additional resources such as paraphrase datasets. The results obtained prove that the embeddings learned using dual constraints are better than those with single constraint models having similar architecture. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,096 |
inproceedings | ziegler-etal-2017-efficiency | Efficiency-aware Answering of Compositional Questions using Answer Type Prediction | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2038/ | Ziegler, David and Abujabal, Abdalghani and Saha Roy, Rishiraj and Weikum, Gerhard | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 222--227 | This paper investigates the problem of answering compositional factoid questions over knowledge bases (KB) under efficiency constraints. The method, called TIPI, (i) decomposes compositional questions, (ii) predicts answer types for individual sub-questions, (iii) reasons over the compatibility of joint types, and finally, (iv) formulates compositional SPARQL queries respecting type constraints. TIPI's answer type predictor is trained using distant supervision, and exploits lexical, syntactic and embedding-based features to compute context- and hierarchy-aware candidate answer types for an input question. Experiments on a recent benchmark show that TIPI results in state-of-the-art performance under the real-world assumption that only a single SPARQL query can be executed over the KB, and substantial reduction in the number of queries in the more general case. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,097 |
inproceedings | elsahar-etal-2017-high | High Recall Open IE for Relation Discovery | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2039/ | Elsahar, Hady and Gravier, Christophe and Laforest, Frederique | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 228--233 | Relation Discovery discovers predicates (relation types) from a text corpus relying on the co-occurrence of two named entities in the same sentence. This is a very narrowing constraint: it represents only a small fraction of all relation mentions in practice. In this paper we propose a high recall approach for Open IE, which enables covering up to 16 times more sentences in a large corpus. Comparison against OpenIE systems shows that our proposed approach achieves 28% improvement over the highest recall OpenIE system and 6% improvement in precision than the same system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,098 |
inproceedings | dai-etal-2017-using | Using Context Events in Neural Network Models for Event Temporal Status Identification | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2040/ | Dai, Zeyu and Yao, Wenlin and Huang, Ruihong | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 234--239 | Focusing on the task of identifying event temporal status, we find that events directly or indirectly governing the target event in a dependency tree are most important contexts. Therefore, we extract dependency chains containing context events and use them as input in neural network models, which consistently outperform previous models using local context words as input. Visualization verifies that the dependency chain representation can effectively capture the context events which are closely related to the target event and play key roles in predicting event temporal status. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,099 |
inproceedings | hsieh-etal-2017-identifying | Identifying Protein-protein Interactions in Biomedical Literature using Recurrent Neural Networks with Long Short-Term Memory | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2041/ | Hsieh, Yu-Lun and Chang, Yung-Chun and Chang, Nai-Wen and Hsu, Wen-Lian | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 240--245 | In this paper, we propose a recurrent neural network model for identifying protein-protein interactions in biomedical literature. Experiments on two largest public benchmark datasets, AIMed and BioInfer, demonstrate that our approach significantly surpasses state-of-the-art methods with relative improvements of 10% and 18%, respectively. Cross-corpus evaluation also demonstrate that the proposed model remains robust despite using different training data. These results suggest that RNN can effectively capture semantic relationships among proteins as well as generalizes over different corpora, without any feature engineering. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,100 |
inproceedings | khanpour-etal-2017-identifying | Identifying Empathetic Messages in Online Health Communities | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2042/ | Khanpour, Hamed and Caragea, Cornelia and Biyani, Prakhar | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 246--251 | Empathy captures one's ability to correlate with and understand others' emotional states and experiences. Messages with empathetic content are considered as one of the main advantages for joining online health communities due to their potential to improve people's moods. Unfortunately, to date, no computational studies exist that automatically identify empathetic messages in online health communities. We propose a combination of Convolutional Neural Networks (CNN) and Long Short Term Memory (LSTM) networks, and show that the proposed model outperforms each individual model (CNN and LSTM) as well as several baselines. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,101
inproceedings | long-etal-2017-fake | Fake News Detection Through Multi-Perspective Speaker Profiles | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2043/ | Long, Yunfei and Lu, Qin and Xiang, Rong and Li, Minglei and Huang, Chu-Ren | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 252--256 | Automatic fake news detection is an important, yet very challenging topic. Traditional methods using lexical features have only very limited success. This paper proposes a novel method to incorporate speaker profiles into an attention based LSTM model for fake news detection. Speaker profiles contribute to the model in two ways. One is to include them in the attention model. The other includes them as additional input data. By adding speaker profiles such as party affiliation, speaker title, location and credit history, our model outperforms the state-of-the-art method by 14.5{\%} in accuracy using a benchmark fake news detection dataset. This proves that speaker profiles provide valuable information to validate the credibility of news articles. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,102 |
inproceedings | saito-etal-2017-improving | Improving Neural Text Normalization with Data Augmentation at Character- and Morphological Levels | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2044/ | Saito, Itsumi and Suzuki, Jun and Nishida, Kyosuke and Sadamitsu, Kugatsu and Kobashikawa, Satoshi and Masumura, Ryo and Matsumoto, Yuji and Tomita, Junji | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 257--262 | In this study, we investigated the effectiveness of augmented data for encoder-decoder-based neural normalization models. Attention-based encoder-decoder models are highly effective in many natural language generation tasks, such as machine translation and summarization. In general, a large amount of training data is needed to train an encoder-decoder model, but unlike machine translation, little training data is available for text-normalization tasks. In this paper, we propose two methods for generating augmented data. The experimental results with Japanese dialect normalization indicate that our methods are effective for an encoder-decoder model and achieve a higher BLEU score than the baselines. We also investigated the oracle performance and revealed that there is sufficient room for improving an encoder-decoder model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,103
inproceedings | miura-etal-2017-using | Using Social Networks to Improve Language Variety Identification with Neural Networks | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2045/ | Miura, Yasuhide and Taniguchi, Tomoki and Taniguchi, Motoki and Misawa, Shotaro and Ohkuma, Tomoko | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 263--270 | We propose a hierarchical neural network model for language variety identification that integrates information from a social network. Recently, language variety identification has enjoyed heightened popularity as an advanced task of language identification. The proposed model uses additional texts from a social network to improve language variety identification from two perspectives. First, they are used to introduce the effects of homophily. Secondly, they are used as expanded training data for shared layers of the proposed model. By introducing information from social networks, the model improved its accuracy by 1.67-5.56. Compared to state-of-the-art baselines, these improved performances are better in English and comparable in Spanish. Furthermore, we analyzed the cases of Portuguese and Arabic, where the model showed weak performance, and found that the effect of homophily is likely to be weak due to sparsity and noise compared to the languages with strong performance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,104
inproceedings | zhang-etal-2017-boosting | Boosting Neural Machine Translation | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2046/ | Zhang, Dakun and Kim, Jungi and Crego, Josep and Senellart, Jean | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 271--276 | Training efficiency is one of the main problems for Neural Machine Translation (NMT). Deep networks need very large amounts of data as well as many training iterations to achieve state-of-the-art performance. This results in very high computation cost, slowing down research and industrialisation. In this paper, we propose to alleviate this problem with several training methods based on data boosting and bootstrapping, with no modifications to the neural network. It imitates the learning process of humans, who typically spend more time when learning {\textquotedblleft}difficult{\textquotedblright} concepts than easier ones. We experiment on an English-French translation task showing accuracy improvements of up to 1.63 BLEU while saving 20{\%} of training time. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,105
inproceedings | yamagishi-etal-2017-improving | Improving {J}apanese-to-{E}nglish Neural Machine Translation by Voice Prediction | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2047/ | Yamagishi, Hayahide and Kanouchi, Shin and Sato, Takayuki and Komachi, Mamoru | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 277--282 | This study reports an attempt to predict the voice of the reference using the information from the input sentences or previous input/output sentences. Our previous study presented a voice controlling method to generate sentences for neural machine translation, wherein it was demonstrated that the BLEU score improved when the voice of the generated sentence was controlled relative to that of the reference. However, it is impractical to use the reference information because we cannot discern the voice of the correct translation in advance. Thus, this study presents a voice prediction method for generated sentences for neural machine translation. In evaluations on Japanese-to-English translation, we obtain a 0.70-point improvement in BLEU using the predicted voice. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,106
inproceedings | kunchukuttan-etal-2017-utilizing | Utilizing Lexical Similarity between Related, Low-resource Languages for Pivot-based {SMT} | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2048/ | Kunchukuttan, Anoop and Shah, Maulik and Prakash, Pradyot and Bhattacharyya, Pushpak | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 283--289 | We investigate pivot-based translation between related languages in a low resource, phrase-based SMT setting. We show that a subword-level pivot-based SMT model using a related pivot language is substantially better than word and morpheme-level pivot models. It is also highly competitive with the best direct translation model, which is encouraging as no direct source-target training corpus is used. We also show that combining multiple related language pivot models can rival a direct translation model. Thus, the use of subwords as translation units coupled with multiple related pivot languages can compensate for the lack of a direct parallel corpus. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,107 |
inproceedings | mino-etal-2017-key | Key-value Attention Mechanism for Neural Machine Translation | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2049/ | Mino, Hideya and Utiyama, Masao and Sumita, Eiichiro and Tokunaga, Takenobu | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 290--295 | In this paper, we propose a neural machine translation (NMT) with a key-value attention mechanism on the source-side encoder. The key-value attention mechanism separates the source-side content vector into two types of memory known as the key and the value. The key is used for calculating the attention distribution, and the value is used for encoding the context representation. Experiments on three different tasks indicate that our model outperforms an NMT model with a conventional attention mechanism. Furthermore, we perform experiments with a conventional NMT framework, in which a part of the initial value of a weight matrix is set to zero so that the matrix is as the same initial-state as the key-value attention mechanism. As a result, we obtain comparable results with the key-value attention mechanism without changing the network structure. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,108 |
inproceedings | nguyen-chiang-2017-transfer | Transfer Learning across Low-Resource, Related Languages for Neural Machine Translation | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2050/ | Nguyen, Toan Q. and Chiang, David | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 296--301 | We present a simple method to improve neural translation of a low-resource language pair using parallel data from a related, also low-resource, language pair. The method is based on the transfer method of Zoph et al., but whereas their method ignores any source vocabulary overlap, ours exploits it. First, we split words using Byte Pair Encoding (BPE) to increase vocabulary overlap. Then, we train a model on the first language pair and transfer its parameters, including its source word embeddings, to another model and continue training on the second language pair. Our experiments show that transfer learning helps word-based translation only slightly, but when used on top of a much stronger BPE baseline, it yields larger improvements of up to 4.3 BLEU. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,109 |
inproceedings | kim-etal-2017-concept | Concept Equalization to Guide Correct Training of Neural Machine Translation | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2051/ | Kim, Kangil and Shin, Jong-Hun and Na, Seung-Hoon and Jung, SangKeun | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 302--307 | Neural machine translation decoders are usually conditional language models that sequentially generate words for target sentences. This approach is limited in finding the best word composition and requires the help of explicit methods such as beam search. To help learning correct compositional mechanisms in NMTs, we propose concept equalization using direct mapping of distributed representations of source and target sentences. In a translation experiment from English to French, the concept equalization significantly improved translation quality by 3.00 BLEU points compared to a state-of-the-art NMT model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,110
inproceedings | dernoncourt-lee-2017-pubmed | {P}ub{M}ed 200k {RCT}: a Dataset for Sequential Sentence Classification in Medical Abstracts | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2052/ | Dernoncourt, Franck and Lee, Ji Young | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 308--313 | We present PubMed 200k RCT, a new dataset based on PubMed for sequential sentence classification. The dataset consists of approximately 200,000 abstracts of randomized controlled trials, totaling 2.3 million sentences. Each sentence of each abstract is labeled with their role in the abstract using one of the following classes: background, objective, method, result, or conclusion. The purpose of releasing this dataset is twofold. First, the majority of datasets for sequential short-text classification (i.e., classification of short texts that appear in sequences) are small: we hope that releasing a new large dataset will help develop more accurate algorithms for this task. Second, from an application perspective, researchers need better tools to efficiently skim through the literature. Automatically classifying each sentence in an abstract would help researchers read abstracts more efficiently, especially in fields where abstracts may be long, such as the medical field. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,111 |
inproceedings | miceli-barone-sennrich-2017-parallel | A Parallel Corpus of Python Functions and Documentation Strings for Automated Code Documentation and Code Generation | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2053/ | Miceli Barone, Antonio Valerio and Sennrich, Rico | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 314--319 | Automated documentation of programming source code and automated code generation from natural language are challenging tasks of both practical and scientific interest. Progress in these areas has been limited by the low availability of parallel corpora of code and natural language descriptions, which tend to be small and constrained to specific domains. In this work we introduce a large and diverse parallel corpus of a hundred thousand Python functions with their documentation strings ({\textquotedblleft}docstrings{\textquotedblright}) generated by scraping open source repositories on GitHub. We describe baseline results for the code documentation and code generation tasks obtained by neural machine translation. We also experiment with data augmentation techniques to further increase the amount of training data. We release our datasets and processing scripts in order to stimulate research in these areas. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,112
inproceedings | li-wang-2017-building | Building Large {C}hinese Corpus for Spoken Dialogue Research in Specific Domains | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2054/ | Li, Changliang and Wang, Xiuying | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 320--324 | Corpus is a valuable resource for information retrieval and data-driven natural language processing systems, especially for spoken dialogue research in specific domains. However, there are few non-English corpora, particularly Chinese ones. Spoken by the nation with the largest population in the world, Chinese has become increasingly prevalent and popular among millions of people worldwide. In this paper, we build a large-scale and high-quality Chinese corpus, called CSDC (Chinese Spoken Dialogue Corpus). It contains five domains and more than 140 thousand dialogues in all. Unlike other corpora, each sentence in this corpus is additionally annotated with slot information. To the best of our knowledge, this is the largest Chinese spoken dialogue corpus, as well as the first one with slot information. With this corpus, we propose a method and conduct a well-designed experiment. Indicative results are reported at last. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,113
inproceedings | yeung-lee-2017-identifying | Identifying Speakers and Listeners of Quoted Speech in Literary Works | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2055/ | Yeung, Chak Yan and Lee, John | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 325--329 | We present the first study that evaluates both speaker and listener identification for direct speech in literary texts. Our approach consists of two steps: identification of speakers and listeners near the quotes, and dialogue chain segmentation. Evaluation results show that this approach outperforms a rule-based approach that is state-of-the-art on a corpus of literary texts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,114 |
inproceedings | ehara-2017-language | Language-Independent Prediction of Psycholinguistic Properties of Words | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2056/ | Ehara, Yo | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 330--336 | The psycholinguistic properties of words, namely, word familiarity, age of acquisition, concreteness, and imagery, have been reported to be effective for educational natural language-processing tasks. Previous studies on predicting the values of these properties rely on language-dependent features. This paper is the first to propose a practical language-independent method for predicting such values by using only a large raw corpus in a language. Through experiments, our method successfully predicted the values of these properties in two languages. The results for English were competitive with the reported accuracy achieved using features specific to English. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,115 |
inproceedings | yoon-kim-2017-correlation | Correlation Analysis of Chronic Obstructive Pulmonary Disease ({COPD}) and its Biomarkers Using the Word Embeddings | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2057/ | Yoon, Byeong-Hun and Kim, Yu-Seop | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 337--342 | It is very costly and time consuming to find new biomarkers for specific diseases in clinical laboratories. In this study, to find new biomarkers most closely related to Chronic Obstructive Pulmonary Disease (COPD), which is widely known as a respiratory disease, biomarkers known to be associated with respiratory diseases and COPD itself were converted into word embeddings, and their similarities were measured. We used Word2Vec, Canonical Correlation Analysis (CCA), and Global Vectors (GloVe) for word embedding. In order to replace the clinical evaluation, the titles and abstracts of papers retrieved from Google Scholar were analyzed and quantified to estimate the performance of the word embedding models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,116
inproceedings | asano-etal-2017-reference | Reference-based Metrics can be Replaced with Reference-less Metrics in Evaluating Grammatical Error Correction Systems | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2058/ | Asano, Hiroki and Mizumoto, Tomoya and Inui, Kentaro | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 343--348 | In grammatical error correction (GEC), automatically evaluating system outputs requires gold-standard references, which must be created manually and thus tend to be both expensive and limited in coverage. To address this problem, a reference-less approach has recently emerged; however, previous reference-less metrics that only consider the criterion of grammaticality, have not worked as well as reference-based metrics. This study explores the potential of extending a prior grammaticality-based method to establish a reference-less evaluation method for GEC systems. Further, we empirically show that a reference-less metric that combines fluency and meaning preservation with grammaticality provides a better estimate of manual scores than that of commonly used reference-based metrics. To our knowledge, this is the first study that provides empirical evidence that a reference-less metric can replace reference-based metrics in evaluating GEC systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,117 |
inproceedings | garg-etal-2017-cvbed | {CVB}ed: Structuring {CV}s using {W}ord Embeddings | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2059/ | Garg, Shweta and Singh, Sudhanshu S and Mishra, Abhijit and Dey, Kuntal | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 349--354 | Automatic analysis of curriculum vitae (CVs) of applicants is of tremendous importance in recruitment scenarios. The semi-structuredness of CVs, however, makes CV processing a challenging task. We propose a solution towards transforming CVs to follow a unified structure, thereby, paving ways for smoother CV analysis. The problem of restructuring is posed as a section relabeling problem, where each section of a given CV gets reassigned to a predefined label. Our relabeling method relies on semantic relatedness computed between section header, content and labels, based on phrase-embeddings learned from a large pool of CVs. We follow different heuristics to measure semantic relatedness. Our best heuristic achieves an F-score of 93.17{\%} on a test dataset with gold-standard labels obtained using manual annotation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,118
inproceedings | li-etal-2017-leveraging-diverse | Leveraging Diverse Lexical Chains to Construct Essays for {C}hinese College Entrance Examination | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2060/ | Li, Liunian and Wan, Xiaojun and Yao, Jin-ge and Yan, Siming | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 355--360 | In this work we study the challenging task of automatically constructing essays for Chinese college entrance examination where the topic is specified in advance. We explore a sentence extraction framework based on diversified lexical chains to capture coherence and richness. Experimental analysis shows the effectiveness of our approach and reveals the importance of information richness in essay writing. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,119 |
inproceedings | han-schlangen-2017-draw | Draw and Tell: Multimodal Descriptions Outperform Verbal- or Sketch-Only Descriptions in an Image Retrieval Task | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2061/ | Han, Ting and Schlangen, David | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 361--365 | While language conveys meaning largely symbolically, actual communication acts typically contain iconic elements as well: People gesture while they speak, or may even draw sketches while explaining something. Image retrieval prima facie seems like a task that could profit from combined symbolic and iconic reference, but it is typically set up to work either from language only, or via (iconic) sketches with no verbal contribution. Using a model of grounded language semantics and a model of sketch-to-image mapping, we show that adding even very reduced iconic information to a verbal image description improves recall. Verbal descriptions paired with fully detailed sketches still perform better than these sketches alone. We see these results as supporting the assumption that natural user interfaces should respond to multimodal input, where possible, rather than just language alone. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,120 |
inproceedings | sakaguchi-etal-2017-grammatical | Grammatical Error Correction with Neural Reinforcement Learning | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2062/ | Sakaguchi, Keisuke and Post, Matt and Van Durme, Benjamin | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 366--372 | We propose a neural encoder-decoder model with reinforcement learning (NRL) for grammatical error correction (GEC). Unlike conventional maximum likelihood estimation (MLE), the model directly optimizes towards an objective that considers a sentence-level, task-specific evaluation metric, avoiding the exposure bias issue in MLE. We demonstrate that NRL outperforms MLE both in human and automated evaluation metrics, achieving the state-of-the-art on a fluency-oriented GEC corpus. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,121 |
inproceedings | toyama-etal-2017-utilizing | Utilizing Visual Forms of {J}apanese Characters for Neural Review Classification | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2064/ | Toyama, Yota and Miwa, Makoto and Sasaki, Yutaka | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 378--382 | We propose a novel method that exploits visual information of ideograms and logograms in analyzing Japanese review documents. Our method first converts font images of Japanese characters into character embeddings using convolutional neural networks. It then constructs document embeddings from the character embeddings based on Hierarchical Attention Networks, which represent the documents based on attention mechanisms from a character level to a sentence level. The document embeddings are finally used to predict the labels of documents. Our method provides a way to exploit visual features of characters in languages with ideograms and logograms. In the experiments, our method achieved an accuracy comparable to a character embedding-based model while our method has much fewer parameters since it does not need to keep embeddings of thousands of characters. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,123 |
inproceedings | wang-etal-2017-multi | A Multi-task Learning Approach to Adapting Bilingual Word Embeddings for Cross-lingual Named Entity Recognition | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2065/ | Wang, Dingquan and Peng, Nanyun and Duh, Kevin | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 383--388 | We show how to adapt bilingual word embeddings (BWEs) to bootstrap a cross-lingual named-entity recognition (NER) system in a language with no labeled data. We assume a setting where we are given a comparable corpus with NER labels for the source language only; our goal is to build a NER model for the target language. The proposed multi-task model jointly trains bilingual word embeddings while optimizing a NER objective. This creates word embeddings that are both shared between languages and fine-tuned for the NER task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,124
inproceedings | mitsuda-etal-2017-investigating | Investigating the Effect of Conveying Understanding Results in Chat-Oriented Dialogue Systems | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2066/ | Mitsuda, Koh and Higashinaka, Ryuichiro and Tomita, Junji | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 389--394 | In dialogue systems, conveying understanding results of user utterances is important because it enables users to feel understood by the system. However, it is not clear what types of understanding results should be conveyed to users; some utterances may be offensive and some may be too commonsensical. In this paper, we explored the effect of conveying understanding results of user utterances in a chat-oriented dialogue system by an experiment using human subjects. As a result, we found that only certain types of understanding results, such as those related to a user's permanent state, are effective to improve user satisfaction. This paper clarifies the types of understanding results that can be safely uttered by a system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,125
inproceedings | ibeke-etal-2017-extracting | Extracting and Understanding Contrastive Opinion through Topic Relevant Sentences | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2067/ | Ibeke, Ebuka and Lin, Chenghua and Wyner, Adam and Barawi, Mohamad Hardyman | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 395--400 | Contrastive opinion mining is essential in identifying, extracting and organising opinions from user generated texts. Most existing studies separate input data into respective collections. In addition, the relationships between the topics extracted and the sentences in the corpus which express the topics are opaque, hindering our understanding of the opinions expressed in the corpus. We propose a novel unified latent variable model (contraLDA) which addresses the above matters. Experimental results show the effectiveness of our model in mining contrasted opinions, outperforming our baselines. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,126 |
inproceedings | yimam-etal-2017-cwig3g2 | {CWIG}3{G}2 - Complex Word Identification Task across Three Text Genres and Two User Groups | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2068/ | Yimam, Seid Muhie and {\v{S}}tajner, Sanja and Riedl, Martin and Biemann, Chris | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 401--407 | Complex word identification (CWI) is an important task in text accessibility. However, due to the scarcity of CWI datasets, previous studies have only addressed this problem on Wikipedia sentences and have solely taken into account the needs of non-native English speakers. We collect a new CWI dataset (CWIG3G2) covering three text genres (News, WikiNews, and Wikipedia) annotated by both native and non-native English speakers. Unlike previous datasets, we cover single words, as well as complex phrases, and present them for judgment in a paragraph context. We present the first study on cross-genre and cross-group CWI, showing measurable influences of native language and genre type. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,127
inproceedings | akama-etal-2017-generating | Generating Stylistically Consistent Dialog Responses with Transfer Learning | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2069/ | Akama, Reina and Inada, Kazuaki and Inoue, Naoya and Kobayashi, Sosuke and Inui, Kentaro | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 408--412 | We propose a novel, data-driven, and stylistically consistent dialog response generation system. To create a user-friendly system, it is crucial to make generated responses not only appropriate but also stylistically consistent. To learn both properties effectively, our proposed framework has two training stages inspired by transfer learning. First, we train the model to generate appropriate responses, and then we ensure that the responses have a specific style. Experimental results demonstrate that the proposed method produces stylistically consistent responses while maintaining the appropriateness of the responses learned in a general domain. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,128
inproceedings | ni-wang-2017-learning | Learning to Explain Non-Standard {E}nglish Words and Phrases | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2070/ | Ni, Ke and Wang, William Yang | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 413--417 | We describe a data-driven approach for automatically explaining new, non-standard English expressions in a given sentence, building on a large dataset that includes 15 years of crowdsourced examples from UrbanDictionary.com. Unlike prior studies that focus on matching keywords from a slang dictionary, we investigate the possibility of learning a neural sequence-to-sequence model that generates explanations of unseen non-standard English expressions given context. We propose a dual encoder approach{---}a word-level encoder learns the representation of context, and a second character-level encoder to learn the hidden representation of the target non-standard expression. Our model can produce reasonable definitions of new non-standard English expressions given their context with certain confidence. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,129 |
inproceedings | chali-etal-2017-towards | Towards Abstractive Multi-Document Summarization Using Submodular Function-Based Framework, Sentence Compression and Merging | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2071/ | Chali, Yllias and Tanvee, Moin and Nayeem, Mir Tafseer | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 418--424 | We propose a submodular function-based summarization system which integrates three important measures namely importance, coverage, and non-redundancy to detect the important sentences for the summary. We design monotone and submodular functions which allow us to apply an efficient and scalable greedy algorithm to obtain informative and well-covered summaries. In addition, we integrate two abstraction-based methods namely sentence compression and merging for generating an abstractive sentence set. We design our summarization models for both generic and query-focused summarization. Experimental results on DUC-2004 and DUC-2007 datasets show that our generic and query-focused summarizers have outperformed the state-of-the-art summarization systems in terms of ROUGE-1 and ROUGE-2 recall and F-measure. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,130 |
inproceedings | fu-etal-2017-domain | Domain Adaptation for Relation Extraction with Domain Adversarial Neural Network | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2072/ | Fu, Lisheng and Nguyen, Thien Huu and Min, Bonan and Grishman, Ralph | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 425--429 | Relations are expressed in many domains such as newswire, weblogs and phone conversations. Trained on a source domain, a relation extractor's performance degrades when applied to target domains other than the source. A common yet labor-intensive method for domain adaptation is to construct a target-domain-specific labeled dataset for adapting the extractor. In response, we present an unsupervised domain adaptation method which only requires labels from the source domain. Our method is a joint model consisting of a CNN-based relation classifier and a domain-adversarial classifier. The two components are optimized jointly to learn a domain-independent representation for prediction on the target domain. Our model outperforms the state-of-the-art on all three test domains of ACE 2005. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,131
inproceedings | hitomi-etal-2017-proofread | Proofread Sentence Generation as Multi-Task Learning with Editing Operation Prediction | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2074/ | Hitomi, Yuta and Tamori, Hideaki and Okazaki, Naoaki and Inui, Kentaro | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 436--441 | This paper explores the idea of robot editors, automated proofreaders that enable journalists to improve the quality of their articles. We propose a novel neural model of multi-task learning that both generates proofread sentences and predicts the editing operations required to rewrite the source sentences and create the proofread ones. The model is trained using logs of the revisions made by professional editors revising draft newspaper articles written by journalists. Experiments demonstrate the effectiveness of our multi-task learning approach and the potential value of using revision logs for this task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,133
inproceedings | chen-bunescu-2017-exploration | An Exploration of Data Augmentation and {RNN} Architectures for Question Ranking in Community Question Answering | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2075/ | Chen, Charles and Bunescu, Razvan | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 442--447 | The automation of tasks in community question answering (cQA) is dominated by machine learning approaches, whose performance is often limited by the number of training examples. Starting from a neural sequence learning approach with attention, we explore the impact of two data augmentation techniques on question ranking performance: a method that swaps reference questions with their paraphrases, and training on examples automatically selected from external datasets. Both methods are shown to lead to substantial gains in accuracy over a strong baseline. Further improvements are obtained by changing the model architecture to mirror the structure seen in the data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,134 |
inproceedings | xia-yarowsky-2017-deriving | Deriving Consensus for Multi-Parallel Corpora: an {E}nglish {B}ible Study | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-2076/ | Xia, Patrick and Yarowsky, David | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers) | 448--453 | What can you do with multiple noisy versions of the same text? We present a method which generates a single consensus between multi-parallel corpora. By maximizing a function of linguistic features between word pairs, we jointly learn a single corpus-wide multiway alignment: a consensus between 27 versions of the English Bible. We additionally produce English paraphrases, word-level distributions of tags, and consensus dependency parses. Our method is language independent and applicable to any multi-parallel corpora. Given the Bible's unique role as alignable bitext for over 800 of the world's languages, this consensus alignment and resulting resources offer value for multilingual annotation projection, and also shed potential insights into the Bible itself. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,135
inproceedings | paetzold-etal-2017-massalign | {MASSA}lign: Alignment and Annotation of Comparable Documents | Park, Seong-Bae and Supnithi, Thepchai | nov | 2017 | Taipei, Taiwan | Association for Computational Linguistics | https://aclanthology.org/I17-3001/ | Paetzold, Gustavo and Alva-Manchego, Fernando and Specia, Lucia | Proceedings of the {IJCNLP} 2017, System Demonstrations | 1--4 | We introduce MASSAlign: a Python library for the alignment and annotation of monolingual comparable documents. MASSAlign offers easy-to-use access to state-of-the-art algorithms for paragraph and sentence-level alignment, as well as novel algorithms for word-level annotation of transformation operations between aligned sentences. In addition, MASSAlign provides a visualization module to display and analyze the alignments and annotations performed. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,137
inproceedings | van-durme-etal-2017-cadet | {CADET}: Computer Assisted Discovery Extraction and Translation | Park, Seong-Bae and Supnithi, Thepchai | nov | 2017 | Taipei, Taiwan | Association for Computational Linguistics | https://aclanthology.org/I17-3002/ | Van Durme, Benjamin and Lippincott, Tom and Duh, Kevin and Burchfield, Deana and Poliak, Adam and Costello, Cash and Finin, Tim and Miller, Scott and Mayfield, James and Koehn, Philipp and Harman, Craig and Lawrie, Dawn and May, Chandler and Thomas, Max and Carrell, Annabelle and Chaloux, Julianne and Chen, Tongfei and Comerford, Alex and Dredze, Mark and Glass, Benjamin and Hao, Shudong and Martin, Patrick and Rastogi, Pushpendre and Sankepally, Rashmi and Wolfe, Travis and Tran, Ying-Ying and Zhang, Ted | Proceedings of the {IJCNLP} 2017, System Demonstrations | 5--8 | Computer Assisted Discovery Extraction and Translation (CADET) is a workbench for helping knowledge workers find, label, and translate documents of interest. It combines a multitude of analytics together with a flexible environment for customizing the workflow for different users. This open-source framework allows for easy development of new research prototypes using a micro-service architecture based atop Docker and Apache Thrift. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,138
inproceedings | noh-etal-2017-wisereporter | {W}ise{R}eporter: A {K}orean Report Generation System | Park, Seong-Bae and Supnithi, Thepchai | nov | 2017 | Taipei, Taiwan | Association for Computational Linguistics | https://aclanthology.org/I17-3003/ | Noh, Yunseok and Choi, Su Jeong and Park, Seong-Bae and Park, Se-Young | Proceedings of the {IJCNLP} 2017, System Demonstrations | 9--12 | We demonstrate a report generation system called WiseReporter. WiseReporter generates a text report on a specific topic, which is usually given as a keyword, by verbalizing knowledge base facts involving the topic. This demonstration presents not only the report itself, but also the process by which the sentences for the report are generated. We plan to enhance WiseReporter in the future by adding data analysis based on deep learning architecture and text summarization. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,139
inproceedings | wang-etal-2017-encyclolink | {E}ncyclolink: A Cross-Encyclopedia, Cross-language Article-Linking System and Web-based Search Interface | Park, Seong-Bae and Supnithi, Thepchai | nov | 2017 | Taipei, Taiwan | Association for Computational Linguistics | https://aclanthology.org/I17-3004/ | Wang, Yu-Chun and Wong, Ka Ming and Wu, Chun-Kai and Pan, Chao-Lin and Tsai, Richard Tzong-Han | Proceedings of the {IJCNLP} 2017, System Demonstrations | 13--16 | Cross-language article linking (CLAL) is the task of finding corresponding article pairs across encyclopedias of different languages. In this paper, we present Encyclolink, a web-based CLAL search interface designed to help users find equivalent encyclopedia articles in Baidu Baike for a given English Wikipedia article title query. Encyclolink is powered by our cross-encyclopedia entity embedding CLAL system (0.8 MRR). The browser-based interface provides users with a clear and easily readable preview of the contents of retrieved articles for comparison. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,140
inproceedings | wang-etal-2017-telecom | A Telecom-Domain Online Customer Service Assistant Based on Question Answering with Word Embedding and Intent Classification | Park, Seong-Bae and Supnithi, Thepchai | nov | 2017 | Taipei, Taiwan | Association for Computational Linguistics | https://aclanthology.org/I17-3005/ | Wang, Jui-Yang and Kuo, Min-Feng and Han, Jen-Chieh and Shih, Chao-Chuang and Chen, Chun-Hsun and Lee, Po-Ching and Tsai, Richard Tzong-Han | Proceedings of the {IJCNLP} 2017, System Demonstrations | 17--20 | In this paper, we propose an information retrieval based (IR-based) Question Answering (QA) system to assist online customer service staff in responding to users in the telecom domain. When a user asks a question, the system retrieves a set of relevant answers and ranks them. Moreover, our system uses a novel reranker to enhance the ranking result of information retrieval. It employs the word2vec model to represent the sentences as vectors. It also uses a sub-category feature, predicted by the k-nearest neighbor algorithm. Finally, the system returns the top five candidate answers, helping online staff find answers much more efficiently. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,141
inproceedings | scarton-etal-2017-musst | {MUSST}: A Multilingual Syntactic Simplification Tool | Park, Seong-Bae and Supnithi, Thepchai | nov | 2017 | Taipei, Taiwan | Association for Computational Linguistics | https://aclanthology.org/I17-3007/ | Scarton, Carolina and Palmero Aprosio, Alessio and Tonelli, Sara and Mart{\'i}n Wanton, Tamara and Specia, Lucia | Proceedings of the {IJCNLP} 2017, System Demonstrations | 25--28 | We describe MUSST, a multilingual syntactic simplification tool. The tool supports sentence simplifications for English, Italian and Spanish, and can be easily extended to other languages. Our implementation includes a set of general-purpose simplification rules, as well as a sentence selection module (to select sentences to be simplified) and a confidence model (to select only promising simplifications). The tool was implemented in the context of the European project SIMPATICO on text simplification for Public Administration (PA) texts. Our evaluation on sentences in the PA domain shows that we obtain correct simplifications for 76{\%} of the simplified cases in English and 71{\%} of the cases in Spanish. For Italian, the results are lower (38{\%}) but the tool is still under development. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,143
inproceedings | wang-etal-2017-semantics | Semantics-Enhanced Task-Oriented Dialogue Translation: A Case Study on Hotel Booking | Park, Seong-Bae and Supnithi, Thepchai | nov | 2017 | Taipei, Taiwan | Association for Computational Linguistics | https://aclanthology.org/I17-3009/ | Wang, Longyue and Du, Jinhua and Li, Liangyou and Tu, Zhaopeng and Way, Andy and Liu, Qun | Proceedings of the {IJCNLP} 2017, System Demonstrations | 33--36 | We showcase TODAY, a semantics-enhanced task-oriented dialogue translation system, whose novelties are: (i) task-oriented named entity (NE) definition and a hybrid strategy for NE recognition and translation; and (ii) a novel grounded semantic method for dialogue understanding and task-order management. TODAY is a case-study demo which can efficiently and accurately assist customers and agents in different languages to reach an agreement in a dialogue for the hotel booking. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,145
inproceedings | pham-etal-2017-nnvlp | {NNVLP}: A Neural Network-Based {V}ietnamese Language Processing Toolkit | Park, Seong-Bae and Supnithi, Thepchai | nov | 2017 | Taipei, Taiwan | Association for Computational Linguistics | https://aclanthology.org/I17-3010/ | Pham, Thai-Hoang and Pham, Xuan-Khoai and Nguyen, Tuan-Anh and Le-Hong, Phuong | Proceedings of the {IJCNLP} 2017, System Demonstrations | 37--40 | This paper demonstrates a neural network-based toolkit, NNVLP, for essential Vietnamese language processing tasks, including part-of-speech (POS) tagging, chunking, and named entity recognition (NER). Our toolkit is a combination of bidirectional Long Short-Term Memory (Bi-LSTM), Convolutional Neural Network (CNN), and Conditional Random Field (CRF), using pre-trained word embeddings as input, which outperforms previously published toolkits on these three tasks. We provide both an API and a web demo for this toolkit. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,146
inproceedings | peinelt-etal-2017-classifierguesser | {C}lassifier{G}uesser: A Context-based Classifier Prediction System for {C}hinese Language Learners | Park, Seong-Bae and Supnithi, Thepchai | nov | 2017 | Taipei, Taiwan | Association for Computational Linguistics | https://aclanthology.org/I17-3011/ | Peinelt, Nicole and Liakata, Maria and Hsieh, Shu-Kai | Proceedings of the {IJCNLP} 2017, System Demonstrations | 41--44 | Classifiers are function words that are used to express quantities in Chinese and are especially difficult for language learners. In contrast to previous studies, we argue that the choice of classifiers is highly contextual and train context-aware machine learning models based on a novel publicly available dataset, outperforming previous baselines. We further present use cases for our database and models in an interactive demo system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,147
inproceedings | lee-etal-2017-automatic | Automatic Difficulty Assessment for {C}hinese Texts | Park, Seong-Bae and Supnithi, Thepchai | nov | 2017 | Taipei, Taiwan | Association for Computational Linguistics | https://aclanthology.org/I17-3012/ | Lee, John and Liu, Meichun and Lam, Chun Yin and Lau, Tak On and Li, Bing and Li, Keying | Proceedings of the {IJCNLP} 2017, System Demonstrations | 45--48 | We present a web-based interface that automatically assesses reading difficulty of Chinese texts. The system performs word segmentation, part-of-speech tagging and dependency parsing on the input text, and then determines the difficulty levels of the vocabulary items and grammatical constructions in the text. Furthermore, the system highlights the words and phrases that must be simplified or re-written in order to conform to the user-specified target difficulty level. Evaluation results show that the system accurately identifies the vocabulary level of 89.9{\%} of the words, and detects grammar points at 0.79 precision and 0.83 recall. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,148
inproceedings | wu-etal-2017-verb | Verb Replacer: An {E}nglish Verb Error Correction System | Park, Seong-Bae and Supnithi, Thepchai | nov | 2017 | Taipei, Taiwan | Association for Computational Linguistics | https://aclanthology.org/I17-3013/ | Wu, Yu-Hsuan and Chen, Jhih-Jie and Chang, Jason | Proceedings of the {IJCNLP} 2017, System Demonstrations | 49--52 | According to the analysis of the Cambridge Learner Corpus, using a wrong verb is the most common type of grammatical error. This paper describes Verb Replacer, a system for detecting and correcting potential verb errors in a given sentence. In our approach, alternative verbs are considered to replace the verb based on an error-annotated corpus and verb-object collocations. The method involves applying regression on channel models, parsing the sentence, identifying the verbs, retrieving a small set of alternative verbs, and evaluating each alternative. Our method combines and improves channel and language models, resulting in high recall of detecting and correcting verb misuse. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,149
inproceedings | wu-etal-2017-learning | Learning Synchronous Grammar Patterns for Assisted Writing for Second Language Learners | Park, Seong-Bae and Supnithi, Thepchai | nov | 2017 | Taipei, Taiwan | Association for Computational Linguistics | https://aclanthology.org/I17-3014/ | Wu, Chi-En and Chen, Jhih-Jie and Chang, Jim and Chang, Jason | Proceedings of the {IJCNLP} 2017, System Demonstrations | 53--56 | In this paper, we present a method for extracting Synchronous Grammar Patterns (SGPs) from a given parallel corpus in order to assist second language learners in writing. A grammar pattern consists of a head word (verb, noun, or adjective) and its syntactic environment. A synchronous grammar pattern describes a grammar pattern in the target language (e.g., English) and its counterpart in another language (e.g., Mandarin), serving the purpose of native language support. Our method involves identifying the grammar patterns in the target language, aligning these patterns with the target language patterns, and finally filtering valid SGPs. The extracted SGPs with examples are then used to develop a prototype writing assistant system, called WriteAhead/bilingual. Evaluation on a set of randomly selected SGPs shows that our system provides satisfactory writing suggestions for English as a Second Language (ESL) learners. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,150
inproceedings | li-etal-2017-guess | Guess What: A Question Answering Game via On-demand Knowledge Validation | Park, Seong-Bae and Supnithi, Thepchai | nov | 2017 | Taipei, Taiwan | Association for Computational Linguistics | https://aclanthology.org/I17-3015/ | Li, Yu-Sheng and Tseng, Chien-Hui and Huang, Chian-Yun and Ma, Wei-Yun | Proceedings of the {IJCNLP} 2017, System Demonstrations | 57--60 | In this paper, we propose an idea of on-demand knowledge validation and fulfill the idea through an interactive Question-Answering (QA) game system, which is named Guess What. An object (e.g. dog) is first randomly chosen by the system, and then a user can repeatedly ask the system questions in natural language to guess what the object is. The system would respond with yes/no along with a confidence score. Some useful hints can also be given if needed. The proposed framework provides a pioneering example of on-demand knowledge validation in a dialog environment to address such needs in AI agents/chatbots. Moreover, the released log data that the system gathered can be used to identify the most critical concepts/attributes of an existing knowledge base, which reflects human cognition of the world. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,151
inproceedings | xu-etal-2017-stcp | {STCP}: Simplified-Traditional {C}hinese Conversion and Proofreading | Park, Seong-Bae and Supnithi, Thepchai | nov | 2017 | Taipei, Taiwan | Association for Computational Linguistics | https://aclanthology.org/I17-3016/ | Xu, Jiarui and Ma, Xuezhe and Tsai, Chen-Tse and Hovy, Eduard | Proceedings of the {IJCNLP} 2017, System Demonstrations | 61--64 | This paper aims to provide an effective tool for conversion between Simplified Chinese and Traditional Chinese. We present STCP, a customizable system comprising a statistical conversion model and a proofreading web interface. Experiments show that our system achieves character-level conversion performance comparable to state-of-the-art systems. In addition, our proofreading interface can effectively support diagnostics and data annotation. STCP is available at \url{http://lagos.lti.cs.cmu.edu:8002/} | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,152
inproceedings | mehta-etal-2017-deep | Deep Neural Network based system for solving Arithmetic Word problems | Park, Seong-Bae and Supnithi, Thepchai | nov | 2017 | Taipei, Taiwan | Association for Computational Linguistics | https://aclanthology.org/I17-3017/ | Mehta, Purvanshi and Mishra, Pruthwik and Athavale, Vinayak and Shrivastava, Manish and Sharma, Dipti | Proceedings of the {IJCNLP} 2017, System Demonstrations | 65--68 | This paper presents DILTON, a system which solves simple arithmetic word problems. DILTON uses a deep neural network-based model to solve math word problems. DILTON divides the question into two parts - worldstate and query. The worldstate and the query are processed separately in two different networks and finally, the networks are merged to predict the final operation. We report the first deep learning approach for the prediction of the operation between two numbers. DILTON learns to predict operations with 88.81{\%} accuracy on a corpus of primary school questions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,153
inproceedings | rao-etal-2017-ijcnlp | {IJCNLP}-2017 Task 1: {C}hinese Grammatical Error Diagnosis | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4001/ | Rao, Gaoqi and Zhang, Baolin and Xun, Endong and Lee, Lung-Hao | Proceedings of the {IJCNLP} 2017, Shared Tasks | 1--8 | This paper presents the IJCNLP 2017 shared task for Chinese grammatical error diagnosis (CGED), which seeks to identify grammatical error types and their range of occurrence within sentences written by learners of Chinese as a foreign language. We describe the task definition, data preparation, performance metrics, and evaluation results. Of the 13 teams registered for this shared task, 5 teams developed systems and submitted a total of 13 runs. We expect this evaluation campaign to lead to the development of more advanced NLP techniques for educational applications, especially for Chinese error detection. All data sets with gold standards and scoring scripts are made publicly available to researchers. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,155
inproceedings | yu-etal-2017-ijcnlp | {IJCNLP}-2017 Task 2: Dimensional Sentiment Analysis for {C}hinese Phrases | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4002/ | Yu, Liang-Chih and Lee, Lung-Hao and Wang, Jin and Wong, Kam-Fai | Proceedings of the {IJCNLP} 2017, Shared Tasks | 9--16 | This paper presents the IJCNLP 2017 shared task on Dimensional Sentiment Analysis for Chinese Phrases (DSAP), which seeks to identify real-valued sentiment scores of Chinese single words and multi-word phrases in both the valence and arousal dimensions. Valence represents the degree of pleasant and unpleasant (or positive and negative) feelings, and arousal represents the degree of excitement and calm. Of the 19 teams registered for this shared task on two-dimensional sentiment analysis, 13 submitted results. We expect this evaluation campaign to produce more advanced dimensional sentiment analysis techniques, especially for Chinese affective computing. All data sets with gold standards and the scoring script are made publicly available to researchers. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,156
inproceedings | kumar-singh-etal-2017-ijcnlp | {IJCNLP}-2017 Task 3: Review Opinion Diversification ({R}ev{O}pi{D}-2017) | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4003/ | Kumar Singh, Anil and Thawani, Avijit and Panchal, Mayank and Gupta, Anubhav and McAuley, Julian | Proceedings of the {IJCNLP} 2017, Shared Tasks | 17--25 | Unlike Entity Disambiguation in web search results, Opinion Disambiguation is a relatively unexplored topic. The RevOpiD shared task at IJCNLP-2017 aimed to attract attention towards this research problem. In this paper, we summarize the first run of this task and introduce a new dataset that we have annotated for the purpose of evaluating Opinion Mining, Summarization and Disambiguation methods. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,157
inproceedings | liu-etal-2017-ijcnlp | {IJCNLP}-2017 Task 4: Customer Feedback Analysis | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4004/ | Liu, Chao-Hong and Moriya, Yasufumi and Poncelas, Alberto and Groves, Declan | Proceedings of the {IJCNLP} 2017, Shared Tasks | 26--33 | This document introduces the IJCNLP 2017 Shared Task on Customer Feedback Analysis. For this shared task we have prepared corpora of customer feedback in four languages, i.e., English, French, Spanish and Japanese. They were annotated with a common meaning categorization, which was improved from an ADAPT-Microsoft pivot study on customer feedback. Twenty teams participated in the shared task and twelve of them submitted prediction results. The results show that the performance of predicting the meanings of customer feedback is reasonably good in all four languages. Nine system description papers are archived in the shared task proceedings. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,158
inproceedings | guo-etal-2017-ijcnlp | {IJCNLP}-2017 Task 5: Multi-choice Question Answering in Examinations | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4005/ | Guo, Shangmin and Liu, Kang and He, Shizhu and Liu, Cao and Zhao, Jun and Wei, Zhuoyu | Proceedings of the {IJCNLP} 2017, Shared Tasks | 34--40 | The IJCNLP-2017 Multi-choice Question Answering (MCQA) task aims at exploring the performance of current Question Answering (QA) techniques via real-world complex questions collected from Chinese Senior High School Entrance Examination papers and the CK12 website. The questions are all 4-way multi-choice questions written in Chinese and English, respectively, that cover a wide range of subjects, e.g., Biology, History, and Life Science. All questions are restricted to the elementary and middle school level. During the whole procedure of this task, 7 teams submitted 323 runs in total. This paper describes the collected data, the format and size of these questions, formal run statistics and results, and an overview and performance statistics of the different methods. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,159
inproceedings | yang-etal-2017-alibaba | {A}libaba at {IJCNLP}-2017 Task 1: Embedding Grammatical Features into {LSTM}s for {C}hinese Grammatical Error Diagnosis Task | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4006/ | Yang, Yi and Xie, Pengjun and Tao, Jun and Xu, Guangwei and Li, Linlin and Si, Luo | Proceedings of the {IJCNLP} 2017, Shared Tasks | 41--46 | This paper introduces the Alibaba NLP team's system for IJCNLP 2017 shared task No. 1, Chinese Grammatical Error Diagnosis (CGED). The task is to diagnose four types of grammatical errors: redundant words (R), missing words (M), bad word selection (S) and disordered words (W). We treat the task as a sequence tagging problem and design handcrafted features to solve it. Our system is mainly based on an LSTM-CRF model, and 3 ensemble strategies are applied to improve the performance. At the identification level and the position level our system gets the highest F1 scores. At the position level, which is the most difficult level, we perform best on all metrics. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,160
inproceedings | wu-etal-2017-thu | {THU}{\_}{NGN} at {IJCNLP}-2017 Task 2: Dimensional Sentiment Analysis for {C}hinese Phrases with Deep {LSTM} | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4007/ | Wu, Chuhan and Wu, Fangzhao and Huang, Yongfeng and Wu, Sixing and Yuan, Zhigang | Proceedings of the {IJCNLP} 2017, Shared Tasks | 47--52 | Predicting valence-arousal ratings for words and phrases is very useful for constructing affective resources for dimensional sentiment analysis. Since the existing valence-arousal resources of Chinese are mainly in word-level and there is a lack of phrase-level ones, the Dimensional Sentiment Analysis for Chinese Phrases (DSAP) task aims to predict the valence-arousal ratings for Chinese affective words and phrases automatically. In this task, we propose an approach using a densely connected LSTM network and word features to identify dimensional sentiment on valence and arousal for words and phrases jointly. We use word embedding as major feature and choose part of speech (POS) and word clusters as additional features to train the dense LSTM network. The evaluation results of our submissions (1st and 2nd in average performance) validate the effectiveness of our system to predict valence and arousal dimensions for Chinese words and phrases. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,161 |
inproceedings | mishra-etal-2017-iiit | {IIIT}-{H} at {IJCNLP}-2017 Task 3: A Bidirectional-{LSTM} Approach for Review Opinion Diversification | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4008/ | Mishra, Pruthwik and Danda, Prathyusha and Kanneganti, Silpa and Lanka, Soujanya | Proceedings of the {IJCNLP} 2017, Shared Tasks | 53--58 | The Review Opinion Diversification (Revopid-2017) shared task focuses on selecting top-k reviews from a set of reviews for a particular product based on specific criteria. In this paper, we describe our approaches and results for modeling the ranking of reviews based on their usefulness score, this being the first of the three subtasks under this shared task. Instead of posing this as a regression problem, we modeled it as a classification task where we want to identify whether a review is useful or not. We employed a bi-directional LSTM to represent each review, used with a softmax layer to predict the usefulness score. We chose the review with the highest usefulness score, then computed its cosine similarity with the rest of the reviews. This is done in order to ensure diversity in the selection of the top-k reviews. On top-5 list prediction we finished 3rd, while on top-10 list prediction we placed 2nd in the shared task. We discuss the model and the results in detail in the paper. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,162
inproceedings | elfardy-etal-2017-bingo | Bingo at {IJCNLP}-2017 Task 4: Augmenting Data using Machine Translation for Cross-linguistic Customer Feedback Classification | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4009/ | Elfardy, Heba and Srivastava, Manisha and Xiao, Wei and Kramer, Jared and Agarwal, Tarun | Proceedings of the {IJCNLP} 2017, Shared Tasks | 59--66 | The ability to automatically and accurately process customer feedback is a necessity in the private sector. Unfortunately, customer feedback can be one of the most difficult types of data to work with due to the sheer volume and variety of services, products, languages, and cultures that comprise the customer experience. In order to address this issue, our team built a suite of classifiers trained on a four-language, multi-label corpus released as part of the shared task on {\textquotedblleft}Customer Feedback Analysis{\textquotedblright} at IJCNLP 2017. In addition to standard text preprocessing, we translated each dataset into each other language to increase the size of the training datasets. Additionally, we also used word embeddings in our feature engineering step. Ultimately, we trained classifiers using Logistic Regression, Random Forest, and Long Short-Term Memory (LSTM) Recurrent Neural Networks. Overall, we achieved a Macro-Average F-score between 48.7{\%} and 56.0{\%} for the four languages and ranked 3/12 for English, 3/7 for Spanish, 1/8 for French, and 2/7 for Japanese. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,163 |
inproceedings | dzendzik-etal-2017-adapt | {ADAPT} Centre Cone Team at {IJCNLP}-2017 Task 5: A Similarity-Based Logistic Regression Approach to Multi-choice Question Answering in an Examinations Shared Task | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4010/ | Dzendzik, Daria and Poncelas, Alberto and Vogel, Carl and Liu, Qun | Proceedings of the {IJCNLP} 2017, Shared Tasks | 67--72 | We describe the work of a team from the ADAPT Centre in Ireland in addressing automatic answer selection for the Multi-choice Question Answering in Examinations shared task. The system is based on a logistic regression over the string similarities between question, answer, and additional text. We obtain the highest grade out of six systems: 48.7{\%} accuracy on a validation set (vs. a baseline of 29.45{\%}) and 45.6{\%} on a test set. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,164 |
inproceedings | liao-etal-2017-ynu | {YNU}-{HPCC} at {IJCNLP}-2017 Task 1: {C}hinese Grammatical Error Diagnosis Using a Bi-directional {LSTM}-{CRF} Model | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4011/ | Liao, Quanlei and Wang, Jin and Yang, Jinnan and Zhang, Xuejie | Proceedings of the {IJCNLP} 2017, Shared Tasks | 73--77 | Building a system to detect Chinese grammatical errors is a challenge for natural-language processing researchers. As Chinese learners are increasing, developing such a system can help them study Chinese more easily. This paper introduces a bi-directional long short-term memory (BiLSTM) - conditional random field (CRF) model to produce the sequences that indicate an error type for every position of a sentence, since we regard Chinese grammatical error diagnosis (CGED) as a sequence-labeling problem. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,165 |
inproceedings | li-etal-2017-cvte | {CVTE} at {IJCNLP}-2017 Task 1: Character Checking System for {C}hinese Grammatical Error Diagnosis Task | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4012/ | Li, Xian and Wang, Peng and Wang, Suixue and Jiang, Guanyu and You, Tianyuan | Proceedings of the {IJCNLP} 2017, Shared Tasks | 78--83 | Grammatical error diagnosis is an important task in natural language processing. This paper introduces the CVTE Character Checking System in the NLP-TEA-4 shared task for CGED 2017. We use a Bi-LSTM to generate the probability of every character, then take two kinds of strategies to decide whether a character is correct or not. This system is probably more suitable for dealing with the error type of bad word selection, which is one of four types of errors; the rest are word redundancy, missing words and word disorder. Finally, the second strategy achieves a better F1 score than the first one at all of the detection, identification, and position levels. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,166
inproceedings | zhong-wang-2017-ldccnlp | {LDCCNLP} at {IJCNLP}-2017 Task 2: Dimensional Sentiment Analysis for {C}hinese Phrases Using Machine Learning | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4013/ | Zhong, Peng and Wang, Jingbin | Proceedings of the {IJCNLP} 2017, Shared Tasks | 84--88 | Sentiment analysis on Chinese text has been intensively studied. The basic task for related research is to construct an affective lexicon and thereby predict emotional scores at different levels. However, finite lexicon resources make it difficult to effectively and automatically distinguish between various types of sentiment information in Chinese texts. The IJCNLP2017-Task2 competition seeks to automatically calculate Valence and Arousal ratings within the hierarchies of vocabulary and phrases in Chinese. We introduce a regression methodology to automatically recognize continuous emotional values, and incorporate a word embedding technique. In our system, the MAE predictive values of Valence and Arousal were 0.811 and 0.996, respectively, for the sentiment dimension prediction of words in Chinese. In phrase prediction, the corresponding results were 0.822 and 0.489, ranking sixth among all teams. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,167
inproceedings | li-etal-2017-ckip | {CKIP} at {IJCNLP}-2017 Task 2: Neural Valence-Arousal Prediction for Phrases | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4014/ | Li, Peng-Hsuan and Ma, Wei-Yun and Wang, Hsin-Yang | Proceedings of the {IJCNLP} 2017, Shared Tasks | 89--94 | CKIP takes part in solving the Dimensional Sentiment Analysis for Chinese Phrases (DSAP) shared task of IJCNLP 2017. This task calls for systems that can predict the valence and the arousal of Chinese phrases, which are real values between 1 and 9. To achieve this, functions mapping Chinese character sequences to real numbers are built by regression techniques. In addition, the CKIP phrase Valence-Arousal (VA) predictor depends on knowledge of modifier words and head words. This includes the types of known modifier words, the VA of head words, and the distributional semantics of both these words. The predictor took second place out of 13 teams on phrase VA prediction, with 0.444 MAE and 0.935 PCC on valence, and 0.395 MAE and 0.904 PCC on arousal. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,168
inproceedings | lin-etal-2017-cial | {CIAL} at {IJCNLP}-2017 Task 2: An Ensemble Valence-Arousal Analysis System for {C}hinese Words and Phrases | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4015/ | Lin, Zheng-Wen and Chang, Yung-Chun and Wang, Chen-Ann and Hsieh, Yu-Lun and Hsu, Wen-Lian | Proceedings of the {IJCNLP} 2017, Shared Tasks | 95--99 | A sentiment lexicon is very helpful in dimensional sentiment applications. Because Chinese words are countless, a method to predict the sentiment of unseen Chinese words is required. The proposed method can handle both words and phrases by using an ADVWeight List for word prediction, which in turn improves our performance at the phrase level. The evaluation results demonstrate that our system is effective in dimensional sentiment analysis for Chinese phrases. The Mean Absolute Error (MAE) and Pearson`s Correlation Coefficient (PCC) for Valence are 0.723 and 0.835, respectively, and those for Arousal are 0.914 and 0.756, respectively. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,169
inproceedings | zhou-etal-2017-alibaba | {A}libaba at {IJCNLP}-2017 Task 2: A Boosted Deep System for Dimensional Sentiment Analysis of {C}hinese Phrases | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4016/ | Zhou, Xin and Wang, Jian and Xie, Xu and Sun, Changlong and Si, Luo | Proceedings of the {IJCNLP} 2017, Shared Tasks | 100--104 | This paper introduces Team Alibaba's systems participating in IJCNLP 2017 shared task No. 2, Dimensional Sentiment Analysis for Chinese Phrases (DSAP). The systems mainly utilize multi-layer neural networks with multiple input features such as word embeddings, part-of-speech tagging (POST), word clustering, prefix type, character embeddings, and cross sentiment input, with the AdaBoost method for model training. For the word-level task our best run achieved MAE 0.545 (ranked 2nd), PCC 0.892 (ranked 2nd) in valence prediction and MAE 0.857 (ranked 1st), PCC 0.678 (ranked 2nd) in arousal prediction. For average performance on the word and phrase tasks we achieved MAE 0.5355 (ranked 3rd), PCC 0.8965 (ranked 3rd) in valence prediction and MAE 0.661 (ranked 3rd), PCC 0.766 (ranked 2nd) in arousal prediction. Overall, our submitted system ranked 2nd in mean rank. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,170
inproceedings | chen-etal-2017-nlpsa | {NLPSA} at {IJCNLP}-2017 Task 2: Imagine Scenario: Leveraging Supportive Images for Dimensional Sentiment Analysis | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4017/ | Chen, Szu-Min and Chen, Zi-Yuan and Ku, Lun-Wei | Proceedings of the {IJCNLP} 2017, Shared Tasks | 105--111 | Categorical sentiment classification has drawn much attention in the field of NLP, while less work has been conducted for dimensional sentiment analysis (DSA). Recent works for DSA utilize either word embeddings, knowledge base features, or bilingual language resources. In this paper, we propose our model for the IJCNLP 2017 Dimensional Sentiment Analysis for Chinese Phrases shared task. Our model incorporates word embeddings as well as image features, attempting to simulate humans' imagining behavior in sentiment analysis. Though the performance is ultimately not comparable to that of other systems, we conduct several experiments, discuss possible reasons, and analyze the drawbacks of our model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,171
inproceedings | yeh-etal-2017-ncyu | {NCYU} at {IJCNLP}-2017 Task 2: Dimensional Sentiment Analysis for {C}hinese Phrases using Vector Representations | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4018/ | Yeh, Jui-Feng and Tsai, Jian-Cheng and Wu, Bo-Wei and Kuang, Tai-You | Proceedings of the {IJCNLP} 2017, Shared Tasks | 112--117 | This paper presents two vector representations proposed by National Chiayi University (NCYU) for phrase-based sentiment detection, which were used to compete in Dimensional Sentiment Analysis for Chinese Phrases (DSACP) at IJCNLP 2017. Vector-based sentiment phrase-like unit analysis models are proposed in this article. E-HowNet-based clustering is first used to obtain the valence and arousal values of sentiment words. An out-of-vocabulary function is also defined in this article to measure the dimensional emotion values of unknown words. For predicting the corresponding values of a sentiment phrase-like unit, a vector-based approach is proposed here. According to the experimental results, the proposed approach is efficacious. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,172
inproceedings | benajiba-etal-2017-mainiwayai | {M}ainiway{AI} at {IJCNLP}-2017 Task 2: Ensembles of Deep Architectures for Valence-Arousal Prediction | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4019/ | Benajiba, Yassine and Sun, Jin and Zhang, Yong and Weng, Zhiliang and Biran, Or | Proceedings of the {IJCNLP} 2017, Shared Tasks | 118--123 | This paper introduces Mainiway AI Lab's submitted system for the IJCNLP 2017 shared task on Dimensional Sentiment Analysis of Chinese Phrases (DSAP), and related experiments. Our approach consists of deep neural networks with various architectures, and our best system is a voted ensemble of networks. We achieve a Mean Absolute Error of 0.64 in valence prediction and 0.68 in arousal prediction on the test set, both placing us as the 5th ranked team in the competition. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,173
inproceedings | lee-etal-2017-nctu | {NCTU}-{NTUT} at {IJCNLP}-2017 Task 2: Deep Phrase Embedding using bi-{LSTM}s for Valence-Arousal Ratings Prediction of {C}hinese Phrases | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4020/ | Lee, Yen-Hsuan and Yeh, Han-Yun and Wang, Yih-Ru and Liao, Yuan-Fu | Proceedings of the {IJCNLP} 2017, Shared Tasks | 124--129 | In this paper, a deep phrase embedding approach using bi-directional long short-term memory (Bi-LSTM) is proposed to predict the valence-arousal ratings of Chinese words and phrases. It adopts a Chinese word segmentation frontend, local order-aware word and global phrase embedding representations, and a deep regression neural network (DRNN) model. The performance of the proposed method was benchmarked by IJCNLP 2017 shared task 2. According to the official evaluation results, our best system achieved a mean rank of 6.5 among all 24 submissions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,174
inproceedings | lin-chang-2017-ntoua | {NTOUA} at {IJCNLP}-2017 Task 2: Predicting Sentiment Scores of {C}hinese Words and Phrases | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4021/ | Lin, Chuan-Jie and Chang, Hao-Tsung | Proceedings of the {IJCNLP} 2017, Shared Tasks | 130--133 | This paper describes the approaches to sentiment score prediction in the NTOU DSA system participating in DSAP this year. The modules that predict scores for words are adapted from our system from last year. The approach to predicting scores for phrases is a keyword-based machine learning method. The performance of our system is good in predicting scores of phrases. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,175
inproceedings | wu-etal-2017-cyut | {CYUT} at {IJCNLP}-2017 Task 3: System Report for Review Opinion Diversification | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4022/ | Wu, Shih-Hung and Chang, Su-Yu and Chen, Liang-Pu | Proceedings of the {IJCNLP} 2017, Shared Tasks | 134--137 | Review Opinion Diversification (RevOpiD) 2017 is a shared task held at the International Joint Conference on Natural Language Processing (IJCNLP). The shared task aims at selecting top-k reviews, as a summary, from a set of reviews. There are three subtasks in RevOpiD: helpfulness ranking, representativeness ranking, and exhaustive coverage ranking. This year, our team submitted runs from three models. We focus on ranking reviews based on the helpfulness of the reviews. In the first two models, we use linear regression with two different loss functions: the first is least squares, and the second is cross entropy. The third run is a random baseline. For both k=5 and k=10, our second model gets the best scores in the official evaluation metrics. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,176
inproceedings | dey-etal-2017-junlp | {JUNLP} at {IJCNLP}-2017 Task 3: A Rank Prediction Model for Review Opinion Diversification | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4023/ | Dey, Monalisa and Mondal, Anupam and Das, Dipankar | Proceedings of the {IJCNLP} 2017, Shared Tasks | 138--142 | The IJCNLP-17 Review Opinion Diversification (RevOpiD-2017) task has been designed for ranking the top-k reviews of a product from a set of reviews, which assists in identifying a summarized output to express the opinion of the entire review set. The task is divided into three independent subtasks: subtask-A, subtask-B, and subtask-C. Each of these three subtasks selects the top-k reviews based on helpfulness, representativeness, and exhaustiveness of the opinions expressed in the review set individually. In order to develop the modules and predict the rank of reviews for all three subtasks, we have employed two well-known supervised classifiers, namely Na{\"i}ve Bayes and Logistic Regression, on top of several extracted features such as the number of nouns, the number of verbs, and the number of sentiment words from the provided datasets. Finally, the organizers have helped to validate the predicted outputs for all three subtasks by using their evaluation metrics. The metrics provide the scores of list size 5 as (0.80 (mth)) for subtask-A, (0.86 (cos), 0.87 (cos d), 0.71 (cpr), 4.98 (a-dcg), and 556.94 (wt)) for subtask-B, and (10.94 (unwt) and 0.67 (recall)) for subtask-C individually. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,177
inproceedings | plank-2017-1 | All-In-1 at {IJCNLP}-2017 Task 4: Short Text Classification with One Model for All Languages | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4024/ | Plank, Barbara | Proceedings of the {IJCNLP} 2017, Shared Tasks | 143--148 | We present All-In-1, a simple model for multilingual text classification that does not require any parallel data. It is based on a traditional Support Vector Machine classifier exploiting multilingual word embeddings and character n-grams. Our model is simple, easily extendable yet very effective, overall ranking 1st (out of 12 teams) in the IJCNLP 2017 shared task on customer feedback analysis in four languages: English, French, Japanese and Spanish. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,178 |
inproceedings | lin-etal-2017-sentinlp | {S}enti{NLP} at {IJCNLP}-2017 Task 4: Customer Feedback Analysis Using a {B}i-{LSTM}-{CNN} Model | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4025/ | Lin, Shuying and Xie, Huosheng and Yu, Liang-Chih and Lai, K. Robert | Proceedings of the {IJCNLP} 2017, Shared Tasks | 149--154 | The analysis of customer feedback is useful for providing good customer service. A large volume of online customer feedback is produced, and manual classification is impractical because of the high volume of data. Therefore, automatic classification of customer feedback is important for an analysis system to identify the meanings or intentions that customers express. The aim of shared Task 4 of IJCNLP 2017 is to classify customer feedback into a six-tag categorization. In this paper, we present a system that uses word embeddings to express the features of the sentences in the corpus and a neural network as the classifier to complete the shared task. An ensemble method is then used to obtain the final predictive result. The proposed method ranked first among twelve teams in terms of micro-averaged F1 and second on the accuracy metric. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,179
inproceedings | danda-etal-2017-iiit | {IIIT}-{H} at {IJCNLP}-2017 Task 4: Customer Feedback Analysis using Machine Learning and Neural Network Approaches | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4026/ | Danda, Prathyusha and Mishra, Pruthwik and Kanneganti, Silpa and Lanka, Soujanya | Proceedings of the {IJCNLP} 2017, Shared Tasks | 155--160 | The IJCNLP 2017 shared task on Customer Feedback Analysis focuses on classifying customer feedback into one of a predefined set of categories or classes. In this paper, we describe our approach to this problem and the results on four languages, i.e. English, French, Japanese and Spanish. Our system implemented a bidirectional LSTM (Graves and Schmidhuber, 2005) using pre-trained glove (Pennington et al., 2014) and fastText (Joulin et al., 2016) embeddings, and SVM (Cortes and Vapnik, 1995) with TF-IDF vectors for classifying the feedback data which is described in the later sections. We also tried different machine learning techniques and compared the results in this paper. Out of the 12 participating teams, our systems obtained 0.65, 0.86, 0.70 and 0.56 exact accuracy score in English, Spanish, French and Japanese respectively. We observed that our systems perform better than the baseline systems in three languages while we match the baseline accuracy for Japanese on our submitted systems. We noticed significant improvements in Japanese in later experiments, matching the highest performing system that was submitted in the shared task, which we will discuss in this paper. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,180 |
inproceedings | lohar-etal-2017-adapt | {ADAPT} at {IJCNLP}-2017 Task 4: A Multinomial Naive {B}ayes Classification Approach for Customer Feedback Analysis task | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4027/ | Lohar, Pintu and Dutta Chowdhury, Koel and Afli, Haithem and Hasanuzzaman, Mohammed and Way, Andy | Proceedings of the {IJCNLP} 2017, Shared Tasks | 161--169 | In this age of the digital economy, organisations do their best to engage customers in the feedback provisioning process. With the assistance of customer insights, an organisation can develop a better product and provide a better service to its customers. In this paper, we analyse real-world samples of customer feedback from Microsoft Office customers in four languages, i.e., English, French, Spanish and Japanese, and conclude a five-plus-one-classes categorisation (comment, request, bug, complaint, meaningless and undetermined) for meaning classification. The task is to access multilingual corpora annotated by the proposed meaning categorization scheme and develop a system to determine what class(es) the customer feedback sentences should be annotated as in four languages. We propose the following approaches to accomplish this task: (i) a multinomial naive bayes (MNB) approach for multi-label classification, (ii) MNB with a one-vs-rest classifier approach, and (iii) the combination of the multilabel classification-based and the sentiment classification-based approach. Our best system produces F-scores of 0.67, 0.83, 0.72 and 0.7 for English, Spanish, French and Japanese, respectively. The results are competitive with the best ones for all languages and secure 3rd and 5th position for Japanese and French, respectively, among all submitted systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,181
inproceedings | dhyani-2017-ohiostate | {O}hio{S}tate at {IJCNLP}-2017 Task 4: Exploring Neural Architectures for Multilingual Customer Feedback Analysis | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4028/ | Dhyani, Dushyanta | Proceedings of the {IJCNLP} 2017, Shared Tasks | 170--173 | This paper describes our systems for IJCNLP 2017 Shared Task on Customer Feedback Analysis. We experimented with simple neural architectures that gave competitive performance on certain tasks. This includes shallow CNN and Bi-Directional LSTM architectures with Facebook`s Fasttext as a baseline model. Our best performing model was in the Top 5 systems using the Exact-Accuracy and Micro-Average-F1 metrics for the Spanish (85.28{\%} for both) and French (70{\%} and 73.17{\%} respectively) task, and outperformed all the other models on comment (87.28{\%}) and meaningless (51.85{\%}) tags using Micro Average F1 by Tags metric for the French task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,182 |
inproceedings | wang-etal-2017-ynu | {YNU}-{HPCC} at {IJCNLP}-2017 Task 4: Attention-based Bi-directional {GRU} Model for Customer Feedback Analysis Task of {E}nglish | Liu, Chao-Hong and Nakov, Preslav and Xue, Nianwen | dec | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-4029/ | Wang, Nan and Wang, Jin and Zhang, Xuejie | Proceedings of the {IJCNLP} 2017, Shared Tasks | 174--179 | This paper describes our submission to IJCNLP 2017 shared task 4, for predicting the tags of unseen customer feedback sentences, such as comments, complaints, bugs, requests, and meaningless and undetermined statements. With the use of a neural network, a large number of deep learning methods have been developed, which perform very well on text classification. Our ensemble classification model is based on a bi-directional gated recurrent unit and an attention mechanism which shows a 3.8{\%} improvement in classification accuracy. To enhance the model performance, we also compared it with several word-embedding models. The comparative results show that a combination of both word2vec and GloVe achieves the best performance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,183 |