entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | ohman-etal-2016-challenges | The Challenges of Multi-dimensional Sentiment Analysis Across Languages | Nissim, Malvina and Patti, Viviana and Plank, Barbara | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4315/ | {\"O}hman, Emily and Honkela, Timo and Tiedemann, J{\"o}rg | Proceedings of the Workshop on Computational Modeling of People`s Opinions, Personality, and Emotions in Social Media ({PEOPLES}) | 138--142 | This paper outlines a pilot study on multi-dimensional and multilingual sentiment analysis of social media content. We use parallel corpora of movie subtitles as a proxy for colloquial language in social media channels and a multilingual emotion lexicon for fine-grained sentiment analyses. Parallel data sets make it possible to study the preservation of sentiments and emotions in translation and our assessment reveals that the lexical approach shows great inter-language agreement. However, our manual evaluation also suggests that the use of purely lexical methods is limited and further studies are necessary to pinpoint the cross-lingual differences and to develop better sentiment classifiers. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,930 |
inproceedings | alam-etal-2016-social | The Social Mood of News: Self-reported Annotations to Design Automatic Mood Detection Systems | Nissim, Malvina and Patti, Viviana and Plank, Barbara | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4316/ | Alam, Firoj and Celli, Fabio and Stepanov, Evgeny A. and Ghosh, Arindam and Riccardi, Giuseppe | Proceedings of the Workshop on Computational Modeling of People`s Opinions, Personality, and Emotions in Social Media ({PEOPLES}) | 143--152 | In this paper, we address the issue of automatic prediction of readers' mood from newspaper articles and comments. As online newspapers are becoming more and more similar to social media platforms, users can provide affective feedback, such as mood and emotion. We have exploited the self-reported annotation of mood categories obtained from the metadata of the Italian online newspaper corriere.it to design and evaluate a system for predicting five different mood categories from news articles and comments: indignation, disappointment, worry, satisfaction, and amusement. The outcome of our experiments shows that overall, bag-of-word-ngrams perform better compared to all other feature sets; however, stylometric features perform better for the mood score prediction of articles. Our study shows that self-reported annotations can be used to design automatic mood prediction systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,931 |
inproceedings | summa-etal-2016-microblog | Microblog Emotion Classification by Computing Similarity in Text, Time, and Space | Nissim, Malvina and Patti, Viviana and Plank, Barbara | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4317/ | Summa, Anja and Resch, Bernd and Strube, Michael | Proceedings of the Workshop on Computational Modeling of People`s Opinions, Personality, and Emotions in Social Media ({PEOPLES}) | 153--162 | Most work in NLP analysing microblogs focuses on textual content thus neglecting temporal and spatial information. We present a new interdisciplinary method for emotion classification that combines linguistic, temporal, and spatial information into a single metric. We create a graph of labeled and unlabeled tweets that encodes the relations between neighboring tweets with respect to their emotion labels. Graph-based semi-supervised learning labels all tweets with an emotion. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,932 |
inproceedings | santos-etal-2016-domain | A domain-agnostic approach for opinion prediction on speech | Nissim, Malvina and Patti, Viviana and Plank, Barbara | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4318/ | Santos, Pedro Bispo and Beinborn, Lisa and Gurevych, Iryna | Proceedings of the Workshop on Computational Modeling of People`s Opinions, Personality, and Emotions in Social Media ({PEOPLES}) | 163--172 | We explore a domain-agnostic approach for analyzing speech with the goal of opinion prediction. We represent the speech signal by mel-frequency cepstral coefficients and apply long short-term memory neural networks to automatically learn temporal regularities in speech. In contrast to previous work, our approach does not require complex feature engineering and works without textual transcripts. As a consequence, it can easily be applied on various speech analysis tasks for different languages and the results show that it can nevertheless be competitive to the state-of-the-art in opinion prediction. In a detailed error analysis for opinion mining we find that our approach performs well in identifying speaker-specific characteristics, but should be combined with additional information if subtle differences in the linguistic content need to be identified. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,933 |
inproceedings | lee-etal-2016-make | Can We Make Computers Laugh at Talks? | Nissim, Malvina and Patti, Viviana and Plank, Barbara | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4319/ | Lee, Chong Min and Yoon, Su-Youn and Chen, Lei | Proceedings of the Workshop on Computational Modeling of People`s Opinions, Personality, and Emotions in Social Media ({PEOPLES}) | 173--181 | Considering the importance of public speech skills, a system which makes a prediction on where audiences laugh in a talk can be helpful to a person who prepares for a talk. We investigated a possibility that a state-of-the-art humor recognition system can be used in detecting sentences inducing laughters in talks. In this study, we used TED talks and laughters in the talks as data. Our results showed that the state-of-the-art system needs to be improved in order to be used in a practical application. In addition, our analysis showed that classifying humorous sentences in talks is very challenging due to close distance between humorous and non-humorous sentences. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,934 |
inproceedings | mowery-etal-2016-towards | Towards Automatically Classifying Depressive Symptoms from {T}witter Data for Population Health | Nissim, Malvina and Patti, Viviana and Plank, Barbara | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4320/ | Mowery, Danielle L. and Park, Albert and Bryan, Craig and Conway, Mike | Proceedings of the Workshop on Computational Modeling of People`s Opinions, Personality, and Emotions in Social Media ({PEOPLES}) | 182--191 | Major depressive disorder, a debilitating and burdensome disease experienced by individuals worldwide, can be defined by several depressive symptoms (e.g., anhedonia (inability to feel pleasure), depressed mood, difficulty concentrating, etc.). Individuals often discuss their experiences with depression symptoms on public social media platforms like Twitter, providing a potentially useful data source for monitoring population-level mental health risk factors. In a step towards developing an automated method to estimate the prevalence of symptoms associated with major depressive disorder over time in the United States using Twitter, we developed classifiers for discerning whether a Twitter tweet represents no evidence of depression or evidence of depression. If there was evidence of depression, we then classified whether the tweet contained a depressive symptom and if so, which of three subtypes: depressed mood, disturbed sleep, or fatigue or loss of energy. We observed that the most accurate classifiers could predict classes with high-to-moderate F1-score performances for no evidence of depression (85), evidence of depression (52), and depressive symptoms (49). We report moderate F1-scores for depressive symptoms ranging from 75 (fatigue or loss of energy) to 43 (disturbed sleep) to 35 (depressed mood). Our work demonstrates baseline approaches for automatically encoding Twitter data with granular depressive symptoms associated with major depressive disorder. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,935 |
inproceedings | chen-etal-2016-using | Using {W}ikipedia and Semantic Resources to Find Answer Types and Appropriate Answer Candidate Sets in Question Answering | Choi, Key-Sun and Unger, Christina and Vossen, Piek and Kim, Jin-Dong and Kando, Noriko and Ngonga Ngomo, Axel-Cyrille | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4401/ | Chen, Po-Chun and Zhuang, Meng-Jie and Lin, Chuan-Jie | Proceedings of the Open Knowledge Base and Question Answering Workshop ({OKBQA} 2016) | 1--10 | This paper proposes a new idea that uses Wikipedia categories as answer types and defines candidate sets inside Wikipedia. The focus of a given question is searched in the hierarchy of Wikipedia main pages. Our searching strategy combines head-noun matching and synonym matching provided in semantic resources. The set of answer candidates is determined by the entry hierarchy in Wikipedia and the hyponymy hierarchy in WordNet. The experimental results show that the approach can find candidate sets in a smaller size but achieve better performance especially for ARTIFACT and ORGANIZATION types, where the performance is better than state-of-the-art Chinese factoid QA systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,937 |
inproceedings | otani-etal-2016-large | Large-Scale Acquisition of Commonsense Knowledge via a Quiz Game on a Dialogue System | Choi, Key-Sun and Unger, Christina and Vossen, Piek and Kim, Jin-Dong and Kando, Noriko and Ngonga Ngomo, Axel-Cyrille | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4402/ | Otani, Naoki and Kawahara, Daisuke and Kurohashi, Sadao and Kaji, Nobuhiro and Sassano, Manabu | Proceedings of the Open Knowledge Base and Question Answering Workshop ({OKBQA} 2016) | 11--20 | Commonsense knowledge is essential for fully understanding language in many situations. We acquire large-scale commonsense knowledge from humans using a game with a purpose (GWAP) developed on a smartphone spoken dialogue system. We transform the manual knowledge acquisition process into an enjoyable quiz game and have collected over 150,000 unique commonsense facts by gathering the data of more than 70,000 players over eight months. In this paper, we present a simple method for maintaining the quality of acquired knowledge and an empirical analysis of the knowledge acquisition process. To the best of our knowledge, this is the first work to collect large-scale knowledge via a GWAP on a widely-used spoken dialogue system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,938 |
inproceedings | homma-etal-2016-hierarchical | A Hierarchical Neural Network for Information Extraction of Product Attribute and Condition Sentences | Choi, Key-Sun and Unger, Christina and Vossen, Piek and Kim, Jin-Dong and Kando, Noriko and Ngonga Ngomo, Axel-Cyrille | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4403/ | Homma, Yukinori and Sadamitsu, Kugatsu and Nishida, Kyosuke and Higashinaka, Ryuichiro and Asano, Hisako and Matsuo, Yoshihiro | Proceedings of the Open Knowledge Base and Question Answering Workshop ({OKBQA} 2016) | 21--29 | This paper describes a hierarchical neural network we propose for sentence classification to extract product information from product documents. The network classifies each sentence in a document into attribute and condition classes on the basis of word sequences and sentence sequences in the document. Experimental results showed the method using the proposed network significantly outperformed baseline methods by taking semantic representation of word and sentence sequential data into account. We also evaluated the network with two different product domains (insurance and tourism domains) and found that it was effective for both the domains. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,939 |
inproceedings | shi-etal-2016-combining | Combining Lexical and Semantic-based Features for Answer Sentence Selection | Choi, Key-Sun and Unger, Christina and Vossen, Piek and Kim, Jin-Dong and Kando, Noriko and Ngonga Ngomo, Axel-Cyrille | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4404/ | Shi, Jing and Xu, Jiaming and Yao, Yiqun and Zheng, Suncong and Xu, Bo | Proceedings of the Open Knowledge Base and Question Answering Workshop ({OKBQA} 2016) | 30--38 | Question answering is always an attractive and challenging task in natural language processing area. There are some open domain question answering systems, such as IBM Watson, which take the unstructured text data as input, in some ways of humanlike thinking process and a mode of artificial intelligence. At the conference on Natural Language Processing and Chinese Computing (NLPCC) 2016, China Computer Federation hosted a shared task evaluation about Open Domain Question Answering. We achieve the 2nd place at the document-based subtask. In this paper, we present our solution, which consists of feature engineering in lexical and semantic aspects and model training methods. As the result of the evaluation shows, our solution provides a valuable and brief model which could be used in modelling question answering or sentence semantic relevance. We hope our solution would contribute to this vast and significant task with some heuristic thinking. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,940 |
inproceedings | andy-etal-2016-entity | An Entity-Based approach to Answering Recurrent and Non-Recurrent Questions with Past Answers | Choi, Key-Sun and Unger, Christina and Vossen, Piek and Kim, Jin-Dong and Kando, Noriko and Ngonga Ngomo, Axel-Cyrille | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4405/ | Andy, Anietie and Rwebangira, Mugizi and Sekine, Satoshi | Proceedings of the Open Knowledge Base and Question Answering Workshop ({OKBQA} 2016) | 39--43 | Community question answering (CQA) systems such as Yahoo! Answers allow registered-users to ask and answer questions in various question categories. However, a significant percentage of asked questions in Yahoo! Answers are unanswered. In this paper, we propose to reduce this percentage by reusing answers to past resolved questions from the site. Specifically, we propose to satisfy unanswered questions in entity rich categories by searching for and reusing the best answers to past resolved questions with shared needs. For unanswered questions that do not have a past resolved question with a shared need, we propose to use the best answer to a past resolved question with similar needs. Our experiments on a Yahoo! Answers dataset shows that our approach retrieves most of the past resolved questions that have shared and similar needs to unanswered questions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,941 |
inproceedings | perera-nand-2016-answer | Answer Presentation in Question Answering over Linked Data using Typed Dependency Subtree Patterns | Choi, Key-Sun and Unger, Christina and Vossen, Piek and Kim, Jin-Dong and Kando, Noriko and Ngonga Ngomo, Axel-Cyrille | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4406/ | Perera, Rivindu and Nand, Parma | Proceedings of the Open Knowledge Base and Question Answering Workshop ({OKBQA} 2016) | 44--48 | In an era where highly accurate Question Answering (QA) systems are being built using complex Natural Language Processing (NLP) and Information Retrieval (IR) algorithms, presenting the acquired answer to the user akin to a human answer is also crucial. In this paper we present an answer presentation strategy by embedding the answer in a sentence which is developed by incorporating the linguistic structure of the source question extracted through typed dependency parsing. The evaluation using human participants proved that the methodology is human-competitive and can result in linguistically correct sentences for more than 70{\%} of the test dataset acquired from QALD question dataset. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,942 |
inproceedings | neves-kraus-2016-biomedlat | {B}io{M}ed{LAT} Corpus: Annotation of the Lexical Answer Type for Biomedical Questions | Choi, Key-Sun and Unger, Christina and Vossen, Piek and Kim, Jin-Dong and Kando, Noriko and Ngonga Ngomo, Axel-Cyrille | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4407/ | Neves, Mariana and Kraus, Milena | Proceedings of the Open Knowledge Base and Question Answering Workshop ({OKBQA} 2016) | 49--58 | Question answering (QA) systems need to provide exact answers for the questions that are posed to the system. However, this can only be achieved through a precise processing of the question. During this procedure, one important step is the detection of the expected type of answer that the system should provide by extracting the headword of the questions and identifying its semantic type. We have annotated the headword and assigned UMLS semantic types to 643 factoid/list questions from the BioASQ training data. We present statistics on the corpus and a preliminary evaluation in baseline experiments. We also discuss the challenges on both the manual annotation and the automatic detection of the headwords and the semantic types. We believe that this is a valuable resource for both training and evaluation of biomedical QA systems. The corpus is available at: \url{https://github.com/mariananeves/BioMedLAT}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,943 |
inproceedings | jokinen-wilcock-2016-double | Double Topic Shifts in Open Domain Conversations: Natural Language Interface for a {W}ikipedia-based Robot Application | Choi, Key-Sun and Unger, Christina and Vossen, Piek and Kim, Jin-Dong and Kando, Noriko and Ngonga Ngomo, Axel-Cyrille | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4408/ | Jokinen, Kristiina and Wilcock, Graham | Proceedings of the Open Knowledge Base and Question Answering Workshop ({OKBQA} 2016) | 59--66 | The paper describes topic shifting in dialogues with a robot that provides information from Wikipedia. The work focuses on a double topical construction of dialogue coherence which refers to discourse coherence on two levels: the evolution of dialogue topics via the interaction between the user and the robot system, and the creation of discourse topics via the content of the Wikipedia article itself. The user selects topics that are of interest to her, and the system builds a list of potential topics, anticipated to be the next topic, by the links in the article and by the keywords extracted from the article. The described system deals with Wikipedia articles, but could easily be adapted to other digital information providing systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,944 |
inproceedings | klang-nugues-2016-pairing | Pairing {W}ikipedia Articles Across Languages | Choi, Key-Sun and Unger, Christina and Vossen, Piek and Kim, Jin-Dong and Kando, Noriko and Ngonga Ngomo, Axel-Cyrille | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4410/ | Klang, Marcus and Nugues, Pierre | Proceedings of the Open Knowledge Base and Question Answering Workshop ({OKBQA} 2016) | 72--76 | Wikipedia has become a reference knowledge source for scores of NLP applications. One of its invaluable features lies in its multilingual nature, where articles on a same entity or concept can have from one to more than 200 different versions. The interlinking of language versions in Wikipedia has undergone a major renewal with the advent of Wikidata, a unified scheme to identify entities and their properties using unique numbers. However, as the interlinking is still manually carried out by thousands of editors across the globe, errors may creep in the assignment of entities. In this paper, we describe an optimization technique to match automatically language versions of articles, and hence entities, that is only based on bags of words and anchors. We created a dataset of all the articles on persons we extracted from Wikipedia in six languages: English, French, German, Russian, Spanish, and Swedish. We report a correct match of at least 94.3{\%} on each pair. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,946 |
inproceedings | nam-etal-2016-srdf | {SRDF}: Extracting Lexical Knowledge Graph for Preserving Sentence Meaning | Choi, Key-Sun and Unger, Christina and Vossen, Piek and Kim, Jin-Dong and Kando, Noriko and Ngonga Ngomo, Axel-Cyrille | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4411/ | Nam, Sangha and Choi, GyuHyeon and Hahm, Younggyun and Choi, Key-Sun | Proceedings of the Open Knowledge Base and Question Answering Workshop ({OKBQA} 2016) | 77--81 | In this paper, we present an open information extraction system so-called SRDF that generates lexical knowledge graphs from unstructured texts. In semantic web, knowledge is expressed in the RDF triple form but the natural language text consist of multiple relations between arguments. For this reason, we combine open information extraction with the reification for the full text extraction to preserve meaning of sentence in our knowledge graph. And also our knowledge graph is designed to adapt for many existing semantic web applications. At the end of this paper, we introduce the result of the experiment and a Korean template generation module developed using SRDF. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,947 |
inproceedings | hahm-etal-2016-qaf | {QAF}: Frame Semantics-based Question Interpretation | Choi, Key-Sun and Unger, Christina and Vossen, Piek and Kim, Jin-Dong and Kando, Noriko and Ngonga Ngomo, Axel-Cyrille | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4412/ | Hahm, Younggyun and Nam, Sangha and Choi, Key-Sun | Proceedings of the Open Knowledge Base and Question Answering Workshop ({OKBQA} 2016) | 82--90 | Natural language questions are interpreted to a sequence of patterns to be matched with instances of patterns in a knowledge base (KB) for answering. A natural language (NL) question answering (QA) system utilizes meaningful patterns matching the syntactic/lexical features between the NL questions and KB. In the most of KBs, there are only binary relations in triple form to represent relation between two entities or entity and a value using the domain specific ontology. However, the binary relation representation is not enough to cover complex information in questions, and the ontology vocabulary sometimes does not cover the lexical meaning in questions. Complex meaning needs a knowledge representation to link the binary relation-type triples in KB. In this paper, we propose a frame semantics-based semantic parsing approach as KB-independent question pre-processing. We will propose requirements of question interpretation in the KBQA perspective, and a query form representation based on our proposed format QAF (Question Answering with the Frame Semantics), which is supposed to cover the requirements. In QAF, frame semantics roles as a model to represent complex information in questions and to disambiguate the lexical meaning in questions to match with the ontology vocabulary. Our system takes a question as an input and outputs QAF-query by the process which assigns semantic information in the question to its corresponding frame semantic structure using the semantic parsing rules. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,948 |
inproceedings | kano-2016-answering | Answering Yes-No Questions by Penalty Scoring in History Subjects of University Entrance Examinations | Choi, Key-Sun and Unger, Christina and Vossen, Piek and Kim, Jin-Dong and Kando, Noriko and Ngonga Ngomo, Axel-Cyrille | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4413/ | Kano, Yoshinobu | Proceedings of the Open Knowledge Base and Question Answering Workshop ({OKBQA} 2016) | 91--96 | Answering yes{--}no questions is more difficult than simply retrieving ranked search results. To answer yes{--}no questions, especially when the correct answer is no, one must find an objectionable keyword that makes the question`s answer no. Existing systems, such as factoid-based ones, cannot answer yes{--}no questions very well because of insufficient handling of such objectionable keywords. We suggest an algorithm that answers yes{--}no questions by assigning an importance to objectionable keywords. Concretely speaking, we suggest a penalized scoring method that finds and makes lower score for parts of documents that include such objectionable keywords. We check a keyword distribution for each part of a document such as a paragraph, calculating the keyword density as a basic score. Then we use an objectionable keyword penalty when a keyword does not appear in a target part but appears in other parts of the document. Our algorithm is robust for open domain problems because it requires no training. We achieved 4.45 point better results in F1 scores than the best score of the NTCIR-10 RITE2 shared task, also obtained the best score in 2014 mock university examination challenge of the Todai Robot project. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,949 |
inproceedings | kim-etal-2016-dedicated | Dedicated Workflow Management for {OKBQA} Framework | Choi, Key-Sun and Unger, Christina and Vossen, Piek and Kim, Jin-Dong and Kando, Noriko and Ngonga Ngomo, Axel-Cyrille | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4414/ | Kim, Jiseong and Choi, GyuHyeon and Choi, Key-Sun | Proceedings of the Open Knowledge Base and Question Answering Workshop ({OKBQA} 2016) | 97--101 | Nowadays, a question answering (QA) system is used in various areas such a quiz show, personal assistant, home device, and so on. The OKBQA framework supports developing a QA system in an intuitive and collaborative ways. To support collaborative development, the framework should be equipped with some functions, e.g., flexible system configuration, debugging supports, intuitive user interface, and so on while considering different developing groups of different domains. This paper presents OKBQA controller, a dedicated workflow manager for OKBQA framework, to boost collaborative development of a QA system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,950 |
inproceedings | wang-lepage-2016-combining | Combining fast{\_}align with Hierarchical Sub-sentential Alignment for Better Word Alignments | Lambert, Patrik and Babych, Bogdan and Eberle, Kurt and Banchs, Rafael E. and Rapp, Reinhard and Costa-juss{\`a}, Marta R. | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4501/ | Wang, Hao and Lepage, Yves | Proceedings of the Sixth Workshop on Hybrid Approaches to Translation ({H}y{T}ra6) | 1--7 | fast align is a simple and fast word alignment tool which is widely used in state-of-the-art machine translation systems. It yields comparable results in the end-to-end translation experiments of various language pairs. However, fast align does not perform as well as GIZA++ when applied to language pairs with distinct word orders, like English and Japanese. In this paper, given the lexical translation table output by fast align, we propose to realign words using the hierarchical sub-sentential alignment approach. Experimental results show that simple additional processing improves the performance of word alignment, which is measured by counting alignment matches in comparison with fast align. We also report the result of final machine translation in both English-Japanese and Japanese-English. We show our best system provided significant improvements over the baseline as measured by BLEU and RIBES. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,952 |
inproceedings | rikters-2016-neural | Neural Network Language Models for Candidate Scoring in Hybrid Multi-System Machine Translation | Lambert, Patrik and Babych, Bogdan and Eberle, Kurt and Banchs, Rafael E. and Rapp, Reinhard and Costa-juss{\`a}, Marta R. | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4502/ | Rikters, Mat{\={i}}ss | Proceedings of the Sixth Workshop on Hybrid Approaches to Translation ({H}y{T}ra6) | 8--15 | This paper presents the comparison of how using different neural network based language modeling tools for selecting the best candidate fragments affects the final output translation quality in a hybrid multi-system machine translation setup. Experiments were conducted by comparing perplexity and BLEU scores on common test cases using the same training data set. A 12-gram statistical language model was selected as a baseline to oppose three neural network based models of different characteristics. The models were integrated in a hybrid system that depends on the perplexity score of a sentence fragment to produce the best fitting translations. The results show a correlation between language model perplexity and BLEU scores as well as overall improvements in BLEU. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,953 |
inproceedings | hong-etal-2016-image | Image-Image Search for Comparable Corpora Construction | Lambert, Patrik and Babych, Bogdan and Eberle, Kurt and Banchs, Rafael E. and Rapp, Reinhard and Costa-juss{\`a}, Marta R. | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4503/ | Hong, Yu and Yao, Liang and Liu, Mengyi and Zhang, Tongtao and Zhou, Wenxuan and Yao, Jianmin and Ji, Heng | Proceedings of the Sixth Workshop on Hybrid Approaches to Translation ({H}y{T}ra6) | 16--25 | We present a novel method of comparable corpora construction. Unlike the traditional methods which heavily rely on linguistic features, our method only takes image similarity into consideration. We use an image-image search engine to obtain similar images, together with the captions in the source language and target language. On this basis, we utilize captions of similar images to construct sentence-level bilingual corpora. Experiments on 10,371 target captions show that our method achieves a precision of 0.85 in the top search results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,954
inproceedings | angelov-lobanov-2016-predicting | Predicting Translation Equivalents in Linked {W}ord{N}ets | Lambert, Patrik and Babych, Bogdan and Eberle, Kurt and Banchs, Rafael E. and Rapp, Reinhard and Costa-juss{\`a}, Marta R. | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4504/ | Angelov, Krasimir and Lobanov, Gleb | Proceedings of the Sixth Workshop on Hybrid Approaches to Translation ({H}y{T}ra6) | 26--32 | We present an algorithm for predicting translation equivalents between two languages, based on the corresponding WordNets. The assumption is that all synsets of one of the languages are linked to the corresponding synsets in the other language. In theory, given the exact sense of a word in a context it must be possible to translate it as any of the words in the linked synset. In practice, however, this does not work well since automatic and accurate sense disambiguation is difficult. Instead it is possible to define a more robust translation relation between the lexemes of the two languages. As far as we know the Finnish WordNet is the only one that includes that relation. Our algorithm can be used to predict the relation for other languages as well. This is useful for instance in hybrid machine translation systems which are usually more dependent on high-quality translation dictionaries. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,955 |
inproceedings | wang-merlo-2016-modifications | Modifications of Machine Translation Evaluation Metrics by Using Word Embeddings | Lambert, Patrik and Babych, Bogdan and Eberle, Kurt and Banchs, Rafael E. and Rapp, Reinhard and Costa-juss{\`a}, Marta R. | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4505/ | Wang, Haozhou and Merlo, Paola | Proceedings of the Sixth Workshop on Hybrid Approaches to Translation ({H}y{T}ra6) | 33--41 | Traditional machine translation evaluation metrics such as BLEU and WER have been widely used, but these metrics have poor correlations with human judgements because they badly represent word similarity and impose strict identity matching. In this paper, we propose some modifications to the traditional measures based on word embeddings for these two metrics. The evaluation results show that our modifications significantly improve their correlation with human judgements. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,956 |
inproceedings | sudarikov-etal-2016-verb | Verb sense disambiguation in Machine Translation | Lambert, Patrik and Babych, Bogdan and Eberle, Kurt and Banchs, Rafael E. and Rapp, Reinhard and Costa-juss{\`a}, Marta R. | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4506/ | Sudarikov, Roman and Du{\v{s}}ek, Ond{\v{r}}ej and Holub, Martin and Bojar, Ond{\v{r}}ej and Kr{\'i}{\v{z}}, Vincent | Proceedings of the Sixth Workshop on Hybrid Approaches to Translation ({H}y{T}ra6) | 42--50 | We describe experiments in Machine Translation using word sense disambiguation (WSD) information. This work focuses on WSD in verbs, based on two different approaches {--} verbal patterns based on corpus pattern analysis and verbal word senses from valency frames. We evaluate several options of using verb senses in the source-language sentences as an additional factor for the Moses statistical machine translation system. Our results show a statistically significant translation quality improvement in terms of the BLEU metric for the valency frames approach, but in manual evaluation, both WSD methods bring improvements. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,957 |
inproceedings | beloucif-etal-2016-improving | Improving word alignment for low resource languages using {E}nglish monolingual {SRL} | Lambert, Patrik and Babych, Bogdan and Eberle, Kurt and Banchs, Rafael E. and Rapp, Reinhard and Costa-juss{\`a}, Marta R. | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4507/ | Beloucif, Meriem and Saers, Markus and Wu, Dekai | Proceedings of the Sixth Workshop on Hybrid Approaches to Translation ({H}y{T}ra6) | 51--60 | We introduce a new statistical machine translation approach specifically geared to learning translation from low resource languages, which exploits monolingual English semantic parsing to bias inversion transduction grammar (ITG) induction. We show that in contrast to conventional statistical machine translation (SMT) training methods, which rely heavily on phrase memorization, our approach focuses on learning bilingual correlations that help translate low resource languages, by using the output language semantic structure to further narrow down ITG constraints. This approach is motivated by previous research which has shown that injecting a semantic frame based objective function while training SMT models improves the translation quality. We show that including a monolingual semantic objective function during the learning of the translation model leads towards a semantically driven alignment which is more efficient than simply tuning loglinear mixture weights against a semantic frame based evaluation metric in the final stage of statistical machine translation training. We test our approach with three different language pairs and demonstrate that our model biases the learning towards more semantically correct alignments. Both GIZA++ and ITG based techniques fail to capture meaningful bilingual constituents, which is required when trying to learn translation models for low resource languages. In contrast, our proposed model not only improves translation by injecting a monolingual objective function to learn bilingual correlations during early training of the translation model, but also helps learn more meaningful correlations with a relatively small data set, leading to a better alignment compared to either conventional ITG or traditional GIZA++ based approaches. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,958
inproceedings | mahesh-etal-2016-using | Using Bilingual Segments in Generating Word-to-word Translations | Lambert, Patrik and Babych, Bogdan and Eberle, Kurt and Banchs, Rafael E. and Rapp, Reinhard and Costa-juss{\`a}, Marta R. | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4508/ | Mahesh, Kavitha and Pereira Lopes, Gabriel and Gomes, Lu{\'i}s | Proceedings of the Sixth Workshop on Hybrid Approaches to Translation ({H}y{T}ra6) | 61--71 | We defend that bilingual lexicons automatically extracted from parallel corpora, whose entries have been meanwhile validated by linguists and classified as correct or incorrect, should constitute a specific parallel corpora. And, in this paper, we propose to use word-to-word translations to learn morph-units (comprising of bilingual stems and suffixes) from those bilingual lexicons for two language pairs L1-L2 and L1-L3 to induce a bilingual lexicon for the language pair L2-L3, apart from also learning morph-units for this other language pair. The applicability of bilingual morph-units in L1-L2 and L1-L3 is examined from the perspective of pivot-based lexicon induction for language pair L2-L3 with L1 as bridge. While the lexicon is derived by transitivity, the correspondences are identified based on previously learnt bilingual stems and suffixes rather than surface translation forms. The induced pairs are validated using a binary classifier trained on morphological and similarity-based features using an existing, automatically acquired, manually validated bilingual translation lexicon for language pair L2-L3. In this paper, we discuss the use of English (EN)-French (FR) and English (EN)-Portuguese (PT) lexicon of word-to-word translations in generating word-to-word translations for the language pair FR-PT with EN as pivot language. Generated translations are filtered out first using an SVM-based FR-PT classifier and then are manually validated. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,959
inproceedings | nakazawa-etal-2016-overview | Overview of the 3rd Workshop on {A}sian Translation | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4601/ | Nakazawa, Toshiaki and Ding, Chenchen and Mino, Hideya and Goto, Isao and Neubig, Graham and Kurohashi, Sadao | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 1--46 | This paper presents the results of the shared tasks from the 3rd workshop on Asian translation (WAT2016) including J {\ensuremath{\leftrightarrow}} E, J {\ensuremath{\leftrightarrow}} C scientific paper translation subtasks, C {\ensuremath{\leftrightarrow}} J, K {\ensuremath{\leftrightarrow}} J, E {\ensuremath{\leftrightarrow}} J patent translation subtasks, I {\ensuremath{\leftrightarrow}} E newswire subtasks and H {\ensuremath{\leftrightarrow}} E, H {\ensuremath{\leftrightarrow}} J mixed domain subtasks. For the WAT2016, 15 institutions participated in the shared tasks. About 500 translation results have been submitted to the automatic evaluation server, and selected submissions were manually evaluated. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,961 |
inproceedings | long-etal-2016-translation | Translation of Patent Sentences with a Large Vocabulary of Technical Terms Using Neural Machine Translation | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4602/ | Long, Zi and Utsuro, Takehito and Mitsuhashi, Tomoharu and Yamamoto, Mikio | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 47--57 | Neural machine translation (NMT), a new approach to machine translation, has achieved promising results comparable to those of traditional approaches such as statistical machine translation (SMT). Despite its recent success, NMT cannot handle a larger vocabulary because training complexity and decoding complexity proportionally increase with the number of target words. This problem becomes even more serious when translating patent documents, which contain many technical terms that are observed infrequently. In NMTs, words that are out of vocabulary are represented by a single unknown token. In this paper, we propose a method that enables NMT to translate patent sentences comprising a large vocabulary of technical terms. We train an NMT system on bilingual data wherein technical terms are replaced with technical term tokens; this allows it to translate most of the source sentences except technical terms. Further, we use it as a decoder to translate source sentences with technical term tokens and replace the tokens with technical term translations using SMT. We also use it to rerank the 1,000-best SMT translations on the basis of the average of the SMT score and that of the NMT rescoring of the translated sentences with technical term tokens. Our experiments on Japanese-Chinese patent sentences show that the proposed NMT system achieves a substantial improvement of up to 3.1 BLEU points and 2.3 RIBES points over traditional SMT systems and an improvement of approximately 0.6 BLEU points and 0.8 RIBES points over an equivalent NMT system without our proposed technique. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,962
inproceedings | sato-etal-2016-japanese | {J}apanese-{E}nglish Machine Translation of Recipe Texts | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4603/ | Sato, Takayuki and Harashima, Jun and Komachi, Mamoru | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 58--67 | Concomitant with the globalization of food culture, demand for the recipes of specialty dishes has been increasing. The recent growth in recipe sharing websites and food blogs has resulted in numerous recipe texts being available for diverse foods in various languages. However, little work has been done on machine translation of recipe texts. In this paper, we address the task of translating recipes and investigate the advantages and disadvantages of traditional phrase-based statistical machine translation and more recent neural machine translation. Specifically, we translate Japanese recipes into English, analyze errors in the translated recipes, and discuss available room for improvements. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,963 |
inproceedings | singh-etal-2016-iit | {IIT} {B}ombay`s {E}nglish-{I}ndonesian submission at {WAT}: Integrating Neural Language Models with {SMT} | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4604/ | Singh, Sandhya and Kunchukuttan, Anoop and Bhattacharyya, Pushpak | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 68--74 | This paper describes IIT Bombay`s submission as a part of the shared task in WAT 2016 for the English{--}Indonesian language pair. The results reported here are for both directions of the language pair. Among the various approaches experimented with, the Operation Sequence Model (OSM) and a Neural Language Model have been submitted to WAT. The OSM approach integrates the translation and reordering processes, resulting in relatively improved translation. Similarly, the neural experiment integrates a Neural Language Model with Statistical Machine Translation (SMT) as a feature for translation. The Neural Probabilistic Language Model (NPLM) gave relatively high BLEU points for the Indonesian to English translation system, while the Neural Network Joint Model (NNJM) performed better for the English to Indonesian direction. The results indicate improvement over the baseline Phrase-based SMT by 0.61 BLEU points for the English-Indonesian system and 0.55 BLEU points for the Indonesian-English translation system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,964
inproceedings | hashimoto-etal-2016-domain | Domain Adaptation and Attention-Based Unknown Word Replacement in {C}hinese-to-{J}apanese Neural Machine Translation | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4605/ | Hashimoto, Kazuma and Eriguchi, Akiko and Tsuruoka, Yoshimasa | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 75--83 | This paper describes our UT-KAY system that participated in the Workshop on Asian Translation 2016. Based on an Attention-based Neural Machine Translation (ANMT) model, we build our system by incorporating a domain adaptation method for multiple domains and an attention-based unknown word replacement method. In experiments, we verify that the attention-based unknown word replacement method is effective in improving translation scores in Chinese-to-Japanese machine translation. We further show results of manual analysis on the replaced unknown words. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,965 |
inproceedings | fuji-etal-2016-global | Global Pre-ordering for Improving Sublanguage Translation | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4606/ | Fuji, Masaru and Utiyama, Masao and Sumita, Eiichiro and Matsumoto, Yuji | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 84--93 | When translating formal documents, capturing the sentence structure specific to the sublanguage is extremely necessary to obtain high-quality translations. This paper proposes a novel global reordering method with particular focus on long-distance reordering for capturing the global sentence structure of a sublanguage. The proposed method learns global reordering models from a non-annotated parallel corpus and works in conjunction with conventional syntactic reordering. Experimental results on the patent abstract sublanguage show substantial gains of more than 25 points in the RIBES metric and comparable BLEU scores both for Japanese-to-English and English-to-Japanese translations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,966 |
inproceedings | kanouchi-etal-2016-neural | Neural Reordering Model Considering Phrase Translation and Word Alignment for Phrase-based Translation | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4607/ | Kanouchi, Shin and Sudoh, Katsuhito and Komachi, Mamoru | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 94--103 | This paper presents an improved lexicalized reordering model for phrase-based statistical machine translation using a deep neural network. Lexicalized reordering suffers from reordering ambiguity, data sparseness and noises in a phrase table. Previous neural reordering model is successful to solve the first and second problems but fails to address the third one. Therefore, we propose new features using phrase translation and word alignment to construct phrase vectors to handle inherently noisy phrase translation pairs. The experimental results show that our proposed method improves the accuracy of phrase reordering. We confirm that the proposed method works well with phrase pairs including NULL alignments. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,967 |
inproceedings | li-etal-2016-system | System Description of bjtu{\_}nlp Neural Machine Translation System | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4608/ | Li, Shaotong and Xu, JinAn and Chen, Yufeng and Zhang, Yujie | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 104--110 | This paper presents our machine translation system developed for the WAT2016 evaluation tasks of ja-en, ja-zh, en-ja, zh-ja, JPCja-en, JPCja-zh, JPCen-ja, JPCzh-ja. We build our system on an encoder{--}decoder framework by integrating a recurrent neural network (RNN) and a gated recurrent unit (GRU), and we also adopt an attention mechanism to address the problem of information loss. Additionally, we propose a simple translation-specific approach to resolve the unknown word translation problem. Experimental results show that our system performs better than the baseline statistical machine translation (SMT) systems in each task. Moreover, our proposed approach to unknown word translation effectively improves the translation results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,968
inproceedings | ehara-2016-translation | Translation systems and experimental results of the {EHR} group for {WAT}2016 tasks | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4609/ | Ehara, Terumasa | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 111--118 | System architecture, experimental settings and experimental results of the group for the WAT2016 tasks are described. We participate in six tasks: en-ja, zh-ja, JPCzh-ja, JPCko-ja, HINDENen-hi and HINDENhi-ja. Although the basic architecture of our systems is PBSMT with reordering, several additional techniques are applied. In particular, the system for the HINDENhi-ja task with pivoting by English uses the reordering technique. Because Hindi and Japanese are both OV type languages and English is a VO type language, we can apply the reordering technique to the pivot language. We can improve the BLEU score from 7.47 to 7.66 by the reordering technique for the sentence level pivoting of this task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,969
inproceedings | neubig-2016-lexicons | Lexicons and Minimum Risk Training for Neural Machine Translation: {NAIST}-{CMU} at {WAT}2016 | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4610/ | Neubig, Graham | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 119--125 | This year, the Nara Institute of Science and Technology (NAIST)/Carnegie Mellon University (CMU) submission to the Japanese-English translation track of the 2016 Workshop on Asian Translation was based on attentional neural machine translation (NMT) models. In addition to the standard NMT model, we make a number of improvements, most notably the use of discrete translation lexicons to improve probability estimates, and the use of minimum risk training to optimize the MT system for BLEU score. As a result, our system achieved the highest translation evaluation scores for the task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,970 |
inproceedings | imamura-sumita-2016-nict | {NICT}-2 Translation System for {WAT}2016: Applying Domain Adaptation to Phrase-based Statistical Machine Translation | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4611/ | Imamura, Kenji and Sumita, Eiichiro | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 126--132 | This paper describes the NICT-2 translation system for the 3rd Workshop on Asian Translation. The proposed system employs a domain adaptation method based on feature augmentation. We regarded the Japan Patent Office Corpus as a mixture of four domain corpora and improved the translation quality of each domain. In addition, we incorporated language models constructed from Google n-grams as external knowledge. Our domain adaptation method can naturally incorporate such external knowledge that contributes to translation quality. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,971 |
inproceedings | kinoshita-etal-2016-translation | Translation Using {JAPIO} Patent Corpora: {JAPIO} at {WAT}2016 | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4612/ | Kinoshita, Satoshi and Oshio, Tadaaki and Mitsuhashi, Tomoharu and Ehara, Terumasa | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 133--138 | We participate in the scientific paper subtask (ASPEC-EJ/CJ) and patent subtask (JPC-EJ/CJ/KJ) with phrase-based SMT systems which are trained with our own patent corpora. Using larger corpora than those prepared by the workshop organizer, we achieved higher BLEU scores than most participants in EJ and CJ translations of the patent subtask, but in crowdsourcing evaluation, our EJ translation, which is best in all automatic evaluations, received a very poor score. In the scientific paper subtask, our translations are given lower scores than most translations produced by translation engines trained with the in-domain corpora. But our scores are higher than those of general-purpose RBMTs and online services. Considering the result of the crowdsourcing evaluation, this suggests that a CJ SMT system trained with a large patent corpus can translate non-patent technical documents at a practical level. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,972
inproceedings | wang-etal-2016-efficient | An Efficient and Effective Online Sentence Segmenter for Simultaneous Interpretation | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4613/ | Wang, Xiaolin and Finch, Andrew and Utiyama, Masao and Sumita, Eiichiro | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 139--148 | Simultaneous interpretation is a very challenging application of machine translation in which the input is a stream of words from a speech recognition engine. The key problem is how to segment the stream in an online manner into units suitable for translation. The segmentation process proceeds by calculating a confidence score for each word that indicates the soundness of placing a sentence boundary after it, and then heuristics are employed to determine the position of the boundaries. Multiple variants of the confidence scoring method and segmentation heuristics were studied. Experimental results show that the best performing strategy is not only efficient in terms of average latency per word, but also achieved end-to-end translation quality close to an offline baseline, and close to oracle segmentation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,973 |
inproceedings | ding-etal-2016-similar | Similar {S}outheast {A}sian Languages: Corpus-Based Case Study on {T}hai-{L}aotian and {M}alay-{I}ndonesian | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4614/ | Ding, Chenchen and Utiyama, Masao and Sumita, Eiichiro | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 149--156 | This paper illustrates the similarity between Thai and Laotian, and between Malay and Indonesian, based on an investigation on raw parallel data from Asian Language Treebank. The cross-lingual similarity is investigated and demonstrated on metrics of correspondence and order of tokens, based on several standard statistical machine translation techniques. The similarity shown in this study suggests a possibility on harmonious annotation and processing of the language pairs in future development. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,974 |
inproceedings | takeno-etal-2016-integrating | Integrating empty category detection into preordering Machine Translation | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4615/ | Takeno, Shunsuke and Nagata, Masaaki and Yamamoto, Kazuhide | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 157--165 | We propose a method for integrating Japanese empty category detection into the preordering process of Japanese-to-English statistical machine translation. First, we apply machine-learning-based empty category detection to estimate the position and the type of empty categories in the constituent tree of the source sentence. Then, we apply discriminative preordering to the augmented constituent tree in which empty categories are treated as if they are normal lexical symbols. We find that it is effective to filter empty categories based on the confidence of estimation. Our experiments show that, for the IWSLT dataset consisting of short travel conversations, the insertion of empty categories alone improves the BLEU score from 33.2 to 34.3 and the RIBES score from 76.3 to 78.7, which implies that reordering has improved. For the KFTT dataset consisting of Wikipedia sentences, the proposed preordering method considering empty categories improves the BLEU score from 19.9 to 20.2 and the RIBES score from 66.2 to 66.3, which shows both translation and reordering have improved slightly. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,975
inproceedings | eriguchi-etal-2016-character | Character-based Decoding in Tree-to-Sequence Attention-based Neural Machine Translation | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4617/ | Eriguchi, Akiko and Hashimoto, Kazuma and Tsuruoka, Yoshimasa | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 175--183 | This paper reports our systems (UT-AKY) submitted in the 3rd Workshop of Asian Translation 2016 (WAT`16) and their results in the English-to-Japanese translation task. Our model is based on the tree-to-sequence Attention-based NMT (ANMT) model proposed by Eriguchi et al. (2016). We submitted two ANMT systems: one with a word-based decoder and the other with a character-based decoder. Experimenting on the English-to-Japanese translation task, we have confirmed that the character-based decoder can cover almost the full vocabulary in the target language and generate translations much faster than the word-based model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,977 |
inproceedings | tan-2016-faster | Faster and Lighter Phrase-based Machine Translation Baseline | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4618/ | Tan, Liling | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 184--193 | This paper describes the SENSE machine translation system participation in the Third Workshop for Asian Translation (WAT2016). We share our best practices to build fast and light phrase-based machine translation (PBMT) models that have comparable results to the baseline systems provided by the organizers. As Neural Machine Translation (NMT) overtakes PBMT as the state-of-the-art, deep learning and new MT practitioners might not be familiar with the PBMT paradigm and we hope that this paper will help them build a PBMT baseline system quickly and easily. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,978
inproceedings | yang-lepage-2016-improving | Improving Patent Translation using Bilingual Term Extraction and Re-tokenization for {C}hinese{--}{J}apanese | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4619/ | Yang, Wei and Lepage, Yves | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 194--202 | Unlike European languages, many Asian languages like Chinese and Japanese do not have typographic boundaries in their written systems. Word segmentation (tokenization), which breaks sentences down into individual words (tokens), is normally treated as the first step for machine translation (MT). For Chinese and Japanese, different rules and segmentation tools lead to segmentation results at different levels of granularity between the two languages. To improve translation accuracy, we adjust and balance the granularity of segmentation results around terms in a Chinese{--}Japanese patent corpus for training the translation model. In this paper, we describe a statistical machine translation (SMT) system which is built on a re-tokenized Chinese-Japanese patent training corpus using extracted bilingual multi-word terms. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,979
inproceedings | yamagishi-etal-2016-controlling | Controlling the Voice of a Sentence in {J}apanese-to-{E}nglish Neural Machine Translation | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4620/ | Yamagishi, Hayahide and Kanouchi, Shin and Sato, Takayuki and Komachi, Mamoru | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 203--210 | In machine translation, we must consider the difference in expression between languages. For example, the active/passive voice may change in Japanese-English translation. The same verb in Japanese may be translated into different voices at each translation because the voice of a generated sentence cannot be determined using only the information of the Japanese sentence. Machine translation systems should consider the information structure to improve the coherence of the output by using several topicalization techniques such as passivization. Therefore, this paper reports on our attempt to control the voice of the sentence generated by an encoder-decoder model. To control the voice of the generated sentence, we added the voice information of the target sentence to the source sentence during the training. We then generated sentences with a specified voice by appending the voice information to the source sentence. We observed experimentally whether the voice could be controlled. The results showed that we could control the voice of the generated sentence with 85.0{\%} accuracy on average. In the evaluation of Japanese-English translation, we obtained a 0.73-point improvement in BLEU score by using gold voice labels. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,980
inproceedings | sudoh-nagata-2016-chinese | {C}hinese-to-{J}apanese Patent Machine Translation based on Syntactic Pre-ordering for {WAT} 2016 | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4621/ | Sudoh, Katsuhito and Nagata, Masaaki | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 211--215 | This paper presents our Chinese-to-Japanese patent machine translation system for WAT 2016 (Group ID: ntt) that uses syntactic pre-ordering over Chinese dependency structures. Chinese words are reordered by a learning-to-rank model based on pairwise classification to obtain word order close to Japanese. In this year`s system, two different machine translation methods are compared: traditional phrase-based statistical machine translation and recent sequence-to-sequence neural machine translation with an attention mechanism. Our pre-ordering showed a significant improvement over the phrase-based baseline, but, in contrast, it degraded the neural machine translation baseline. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,981 |
inproceedings | sen-etal-2016-iitp | {IITP} {E}nglish-{H}indi Machine Translation System at {WAT} 2016 | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4622/ | Sen, Sukanta and Banik, Debajyoty and Ekbal, Asif and Bhattacharyya, Pushpak | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 216--222 | In this paper we describe the system that we develop as part of our participation in WAT 2016. We develop a system based on hierarchical phrase-based SMT for the English to Hindi language pair. We perform re-ordering and augment a bilingual dictionary to improve the performance. As a baseline we use a phrase-based SMT model. The MT models are fine-tuned on the development set, and the best configurations are used to report the evaluation on the test set. Experiments show a BLEU score of 13.71 on the benchmark test data. This is better compared to the official baseline BLEU score of 10.79. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,982
inproceedings | shu-miura-2016-residual | Residual Stacking of {RNN}s for Neural Machine Translation | Nakazawa, Toshiaki and Mino, Hideya and Ding, Chenchen and Goto, Isao and Neubig, Graham and Kurohashi, Sadao and Riza, Ir. Hammam and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4623/ | Shu, Raphael and Miura, Akiva | Proceedings of the 3rd Workshop on {A}sian Translation ({WAT}2016) | 223--229 | To enhance Neural Machine Translation models, several obvious ways such as enlarging the hidden size of recurrent layers and stacking multiple layers of RNN can be considered. Surprisingly, we observe that using naively stacked RNNs in the decoder slows down the training and leads to degradation in performance. In this paper, we demonstrate that applying residual connections in the depth of stacked RNNs can help the optimization, which is referred to as residual stacking. In empirical evaluation, residual stacking of decoder RNNs gives superior results compared to other methods of enhancing the model with a fixed parameter budget. Our submitted systems in WAT2016 are based on a NMT model ensemble with residual stacking in the decoder. To further improve the performance, we also attempt various methods of system combination in our experiments. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,983
inproceedings | song-2016-analyzing | Analyzing Impact, Trend, and Diffusion of Knowledge associated with Neoplasms Research | Drouin, Patrick and Grabar, Natalia and Hamon, Thierry and Kageura, Kyo and Takeuchi, Koichi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4701/ | Song, Min | Proceedings of the 5th International Workshop on Computational Terminology (Computerm2016) | 1 | Cancer (a.k.a. neoplasms in a broader sense) is one of the leading causes of death worldwide and its incidence is expected to increase. To respond to the critical need from the society, there have been rigorous attempts for the cancer research community to develop treatment for cancer. Accordingly, we observe a surge in the sheer volume of research products and outcomes in relation to neoplasms. In this talk, we introduce the notion of entitymetrics to provide a new lens for understanding the impact, trend, and diffusion of knowledge associated with neoplasms research. To this end, we collected over two million records from PubMed, the most popular search engine in the medical domain. Coupled with text mining techniques including named entity recognition, sentence boundary detection, and string approximate matching, entitymetrics enables us to analyze knowledge diffusion, impact, and trend at various knowledge entity units, such as bio-entity, organization, and country. At the end of the talk, the future applications and possible directions of entitymetrics will be discussed. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,985
inproceedings | amjadian-etal-2016-local | Local-Global Vectors to Improve Unigram Terminology Extraction | Drouin, Patrick and Grabar, Natalia and Hamon, Thierry and Kageura, Kyo and Takeuchi, Koichi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4702/ | Amjadian, Ehsan and Inkpen, Diana and Paribakht, Tahereh and Faez, Farahnaz | Proceedings of the 5th International Workshop on Computational Terminology (Computerm2016) | 2--11 | The present paper explores a novel method that integrates efficient distributed representations with terminology extraction. We show that the information from a small number of observed instances can be combined with local and global word embeddings to remarkably improve the term extraction results on unigram terms. To do so we pass the terms extracted by other tools to a filter made of the local-global embeddings and a classifier which in turn decides whether or not a term candidate is a term. The filter can also be used as a hub to merge different term extraction tools into a single higher-performing system. We compare filters that use the skip-gram architecture and filters that employ the CBOW architecture for the task at hand. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,986 |
inproceedings | mykowiecka-etal-2016-recognition | Recognition of non-domain phrases in automatically extracted lists of terms | Drouin, Patrick and Grabar, Natalia and Hamon, Thierry and Kageura, Kyo and Takeuchi, Koichi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4703/ | Mykowiecka, Agnieszka and Marciniak, Malgorzata and Rychlik, Piotr | Proceedings of the 5th International Workshop on Computational Terminology (Computerm2016) | 12--20 | In the paper, we address the problem of recognition of non-domain phrases in terminology lists obtained with an automatic term extraction tool. We focus on identification of multi-word phrases that are general terms and discourse function expressions. We tested several methods based on domain corpora comparison and a method based on contexts of phrases identified in a large corpus of general language. We compared the results of the methods to manual annotation. The results show that the task is quite hard as the inter-annotator agreement is low. Several tested methods achieved similar overall results, although the phrase ordering varied between methods. The most successful method with the precision about 0.75 at the half of the tested list was the context based method using a modified contextual diversity coefficient. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,987 |
inproceedings | barriere-etal-2016-contextual | Contextual term equivalent search using domain-driven disambiguation | Drouin, Patrick and Grabar, Natalia and Hamon, Thierry and Kageura, Kyo and Takeuchi, Koichi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4704/ | Barri{\`e}re, Caroline and M{\'e}nard, Pierre Andr{\'e} and Azoulay, Daphn{\'e}e | Proceedings of the 5th International Workshop on Computational Terminology (Computerm2016) | 21--29 | This article presents a domain-driven algorithm for the task of term sense disambiguation (TSD). TSD aims at automatically choosing which term record from a term bank best represents the meaning of a term occurring in a particular context. In a translation environment, finding the contextually appropriate term record is necessary to access the proper equivalent to be used in the target language text. The term bank TERMIUM Plus, recently published as an open access repository, is chosen as a domain-rich resource for testing our TSD algorithm, using English and French as source and target languages. We devise an experiment using over 1300 English terms found in scientific articles, and show that our domain-driven TSD algorithm is able to bring the best term record, and therefore the best French equivalent, at the average rank of 1.69 compared to a baseline random rank of 3.51. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,988 |
inproceedings | iwai-etal-2016-method | A Method of Augmenting Bilingual Terminology by Taking Advantage of the Conceptual Systematicity of Terminologies | Drouin, Patrick and Grabar, Natalia and Hamon, Thierry and Kageura, Kyo and Takeuchi, Koichi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4705/ | Iwai, Miki and Takeuchi, Koichi and Kageura, Kyo and Ishibashi, Kazuya | Proceedings of the 5th International Workshop on Computational Terminology (Computerm2016) | 30--40 | In this paper, we propose a method of augmenting existing bilingual terminologies. Our method belongs to a {\textquotedblleft}generate and validate{\textquotedblright} framework rather than extraction from corpora. Although many studies have proposed methods to find term translations or to augment terminology within a {\textquotedblleft}generate and validate{\textquotedblright} framework, few have taken full advantage of the systematic nature of terminologies. A terminology of a domain represents the conceptual system of the domain fairly systematically, and we contend that making full use of this systematicity will greatly contribute to the effective augmentation of terminologies. This paper proposes and evaluates a novel method to generate bilingual term candidates by using existing terminologies and delving into their systematicity. Experiments have shown that our method can generate much better term candidate pairs than the existing method and give improved performance for terminology augmentation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,989
inproceedings | roesiger-etal-2016-acquisition | Acquisition of semantic relations between terms: how far can we get with standard {NLP} tools? | Drouin, Patrick and Grabar, Natalia and Hamon, Thierry and Kageura, Kyo and Takeuchi, Koichi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4706/ | Roesiger, Ina and Bettinger, Julia and Sch{\"a}fer, Johannes and Dorna, Michael and Heid, Ulrich | Proceedings of the 5th International Workshop on Computational Terminology (Computerm2016) | 41--51 | The extraction of data exemplifying relations between terms can make use, at least to a large extent, of techniques that are similar to those used in standard hybrid term candidate extraction, namely basic corpus analysis tools (e.g. tagging, lemmatization, parsing), as well as morphological analysis of complex words (compounds and derived items). In this article, we discuss the use of such techniques for the extraction of raw material for a description of relations between terms, and we provide internal evaluation data for the devices developed. We claim that user-generated content is a rich source of term variation through paraphrasing and reformulation, and that these provide relational data at the same time as term variants. Germanic languages with their rich word formation morphology may be particularly good candidates for the approach advocated here. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,990
inproceedings | bernier-colborne-drouin-2016-evaluation-distributional | Evaluation of distributional semantic models: a holistic approach | Drouin, Patrick and Grabar, Natalia and Hamon, Thierry and Kageura, Kyo and Takeuchi, Koichi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4707/ | Bernier-Colborne, Gabriel and Drouin, Patrick | Proceedings of the 5th International Workshop on Computational Terminology (Computerm2016) | 52--61 | We investigate how both model-related factors and application-related factors affect the accuracy of distributional semantic models (DSMs) in the context of specialized lexicography, and how these factors interact. This holistic approach to the evaluation of DSMs provides valuable guidelines for the use of these models and insight into the kind of semantic information they capture. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,991 |
inproceedings | qasemizadeh-2016-study | A Study on the Interplay Between the Corpus Size and Parameters of a Distributional Model for Term Classification | Drouin, Patrick and Grabar, Natalia and Hamon, Thierry and Kageura, Kyo and Takeuchi, Koichi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4708/ | QasemiZadeh, Behrang | Proceedings of the 5th International Workshop on Computational Terminology (Computerm2016) | 62--72 | We propose and evaluate a method for identifying co-hyponym lexical units in a terminological resource. The principles of term recognition and distributional semantics are combined to extract terms from a similar category of concept. Given a set of candidate terms, random projections are employed to represent them as low-dimensional vectors. These vectors are derived automatically from the frequency of the co-occurrences of the candidate terms and words that appear within windows of text in their proximity (context-windows). In a $k$-nearest neighbours framework, these vectors are classified using a small set of manually annotated terms which exemplify concept categories. We then investigate the interplay between the size of the corpus that is used for collecting the co-occurrences and a number of factors that play roles in the performance of the proposed method: the configuration of context-windows for collecting co-occurrences, the selection of neighbourhood size ($k$), and the choice of similarity metric. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,992 |
inproceedings | leon-arauz-etal-2016-pattern | Pattern-based Word Sketches for the Extraction of Semantic Relations | Drouin, Patrick and Grabar, Natalia and Hamon, Thierry and Kageura, Kyo and Takeuchi, Koichi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4709/ | Le{\'o}n-Ara{\'u}z, Pilar and San Mart{\'i}n, Antonio and Faber, Pamela | Proceedings of the 5th International Workshop on Computational Terminology (Computerm2016) | 73--82 | Despite advances in computer technology, terminologists still tend to rely on manual work to extract all the semantic information that they need for the description of specialized concepts. In this paper we propose the creation of new word sketches in Sketch Engine for the extraction of semantic relations. Following a pattern-based approach, new sketch grammars are developed in order to extract some of the most common semantic relations used in the field of terminology: generic-specific, part-whole, location, cause and function. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,993
inproceedings | miyata-kageura-2016-constructing | Constructing and Evaluating Controlled Bilingual Terminologies | Drouin, Patrick and Grabar, Natalia and Hamon, Thierry and Kageura, Kyo and Takeuchi, Koichi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4710/ | Miyata, Rei and Kageura, Kyo | Proceedings of the 5th International Workshop on Computational Terminology (Computerm2016) | 83--93 | This paper presents the construction and evaluation of Japanese and English controlled bilingual terminologies that are particularly intended for controlled authoring and machine translation with special reference to the Japanese municipal domain. Our terminologies are constructed by extracting terms from municipal website texts, and the term variations are controlled by defining preferred and proscribed terms for both the source Japanese and the target English. To assess the coverage of the terms/concepts in the municipal domain and validate the quality of the control, we employ a quantitative extrapolation method that estimates the potential vocabulary size. Using Large-Number-of-Rare-Event (LNRE) modelling, we compare two parameters: (1) uncontrolled and controlled and (2) Japanese and English. The results show that our terminologies currently cover about 45{--}65{\%} of the terms and 50{--}65{\%} of the concepts in the municipal domain, and are well controlled. The detailed analysis of growth patterns of terminologies also provides insight into the extent to which we can enlarge the terminologies within the realistic range. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,994 |
inproceedings | francopoulo-etal-2016-providing | Providing and Analyzing {NLP} Terms for our Community | Drouin, Patrick and Grabar, Natalia and Hamon, Thierry and Kageura, Kyo and Takeuchi, Koichi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4711/ | Francopoulo, Gil and Mariani, Joseph and Paroubek, Patrick and Vernier, Fr{\'e}d{\'e}ric | Proceedings of the 5th International Workshop on Computational Terminology (Computerm2016) | 94--103 | By its own nature, the Natural Language Processing (NLP) community is a priori the best equipped to study the evolution of its own publications, but works in this direction are rare and only recently have we seen a few attempts at charting the field. In this paper, we use the algorithms, resources, standards, tools and common practices of the NLP field to build a list of terms characteristic of ongoing research, by mining a large corpus of scientific publications, aiming at the largest possible exhaustivity and covering the largest possible time span. Study of the evolution of this term list through time reveals interesting insights on the dynamics of field and the availability of the term database and of the corpus (for a large part) make possible many further comparative studies in addition to providing a test field for a new graphic interface designed to perform visual time analytics of large sized thesauri. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,995 |
inproceedings | kocbek-etal-2016-evaluating | Evaluating a dictionary of human phenotype terms focusing on rare diseases | Drouin, Patrick and Grabar, Natalia and Hamon, Thierry and Kageura, Kyo and Takeuchi, Koichi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4712/ | Kocbek, Simon and Fujiwara, Toyofumi and Kim, Jin-Dong and Takagi, Toshihisa and Groza, Tudor | Proceedings of the 5th International Workshop on Computational Terminology (Computerm2016) | 104--109 | Annotating medical text such as clinical notes with human phenotype descriptors is an important task that can, for example, assist in building patient profiles. To automatically annotate text one usually needs a dictionary of predefined terms. However, due to the variety of human expressiveness, current state-of-the-art phenotype concept recognizers and automatic annotators struggle with specific domain issues and challenges. In this paper we present results of annotating a gold standard corpus with a dictionary containing lexical variants for the Human Phenotype Ontology terms. The main purpose of the dictionary is to improve the recall of phenotype concept recognition systems. We compare the method with four other approaches and present results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,996
inproceedings | sadoun-2016-semi | A semi automatic annotation approach for ontological and terminological knowledge acquisition | Drouin, Patrick and Grabar, Natalia and Hamon, Thierry and Kageura, Kyo and Takeuchi, Koichi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4713/ | Sadoun, Driss | Proceedings of the 5th International Workshop on Computational Terminology (Computerm2016) | 110--120 | We propose a semi-automatic method for the acquisition of specialised ontological and terminological knowledge. An ontology and a terminology are automatically built from domain experts' annotations. The ontology formalizes the common and shared conceptual vocabulary of those experts. Its associated terminology defines a glossary linking annotated terms to their semantic categories. These two resources evolve incrementally and are used for an automatic annotation of a new corpus at each iteration. The annotated corpus concerns the evaluation of French higher education and science institutions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,997 |
inproceedings | krishna-hans-2016-understanding | Understanding Medical free text: A Terminology driven approach | Drouin, Patrick and Grabar, Natalia and Hamon, Thierry and Kageura, Kyo and Takeuchi, Koichi | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4714/ | Krishna, Santosh Sai and Hans, Manoj | Proceedings of the 5th International Workshop on Computational Terminology (Computerm2016) | 121--125 | With many hospitals digitalizing clinical records, opportunities have opened for researchers in NLP and Machine Learning to apply techniques for extracting meaning and deriving actionable insights. There have been previous attempts at mapping free text to medical nomenclatures like UMLS and SNOMED. However, in this paper, we analyzed diagnoses in clinical reports using ICD10 to achieve lightweight, real-time predictions by introducing concepts like WordInfo and root word identification. We were able to achieve 68.3{\%} accuracy over clinical records collected from qualified clinicians. Our study would further help healthcare institutes in organizing their clinical reports based on ICD10 mappings and derive numerous insights to achieve operational efficiency and better medical care. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,998
inproceedings | malmasi-etal-2016-discriminating | Discriminating between Similar Languages and {A}rabic Dialect Identification: A Report on the Third {DSL} Shared Task | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4801/ | Malmasi, Shervin and Zampieri, Marcos and Ljube{\v{s}}i{\'c}, Nikola and Nakov, Preslav and Ali, Ahmed and Tiedemann, J{\"o}rg | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 1--14 | We present the results of the third edition of the Discriminating between Similar Languages (DSL) shared task, which was organized as part of the VarDial'2016 workshop at COLING'2016. The challenge offered two subtasks: subtask 1 focused on the identification of very similar languages and language varieties in newswire texts, whereas subtask 2 dealt with Arabic dialect identification in speech transcripts. A total of 37 teams registered to participate in the task, 24 teams submitted test results, and 20 teams also wrote system description papers. High-order character n-grams were the most successful feature, and the best classification approaches included traditional supervised learning methods such as SVM, logistic regression, and language models, while deep learning approaches did not perform very well. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,000
inproceedings | coltekin-rama-2016-discriminating | Discriminating Similar Languages with Linear {SVM}s and Neural Networks | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4802/ | {\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i} and Rama, Taraka | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 15--24 | This paper describes the systems we experimented with for participating in the discriminating between similar languages (DSL) shared task 2016. We submitted results of a single system based on support vector machines (SVM) with linear kernel and using character ngram features, which obtained the first rank at the closed training track for test set A. Besides the linear SVM, we also report additional experiments with a number of deep learning architectures. Despite our intuition that non-linear deep learning methods should be advantageous, linear models seem to fare better in this task, at least with the amount of data and the amount of effort we spent on tuning these models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,001
inproceedings | rama-coltekin-2016-lstm | {LSTM} Autoencoders for Dialect Analysis | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4803/ | Rama, Taraka and {\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i} | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 25--32 | Computational approaches for dialectometry employed Levenshtein distance to compute an aggregate similarity between two dialects belonging to a single language group. In this paper, we apply a sequence-to-sequence autoencoder to learn a deep representation for words that can be used for meaningful comparison across dialects. In contrast to the alignment-based methods, our method does not require explicit alignments. We apply our architectures to three different datasets and show that the learned representations indicate highly similar results with the analyses based on Levenshtein distance and capture the traditional dialectal differences shown by dialectologists. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,002
inproceedings | zirikly-etal-2016-gw-lt3 | The {GW}/{LT}3 {V}ar{D}ial 2016 Shared Task System for Dialects and Similar Languages Detection | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4804/ | Zirikly, Ayah and Desmet, Bart and Diab, Mona | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 33--41 | This paper describes the GW/LT3 contribution to the 2016 VarDial shared task on the identification of similar languages (task 1) and Arabic dialects (task 2). For both tasks, we experimented with Logistic Regression and Neural Network classifiers in isolation. Additionally, we implemented a cascaded classifier that consists of coarse and fine-grained classifiers (task 1) and a classifier ensemble with majority voting for task 2. The submitted systems obtained state-of-the-art performance and ranked first for the evaluation on social media data (test sets B1 and B2 for task 1), with a maximum weighted F1 score of 91.94{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,003
inproceedings | diab-2016-processing | Processing Dialectal {A}rabic: Exploiting Variability and Similarity to Overcome Challenges and Discover Opportunities | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4805/ | Diab, Mona | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 42 | We recently witnessed an exponential growth in dialectal Arabic usage in both textual data and speech recordings, especially in social media. Processing such media is of great utility for all kinds of applications ranging from information extraction to social media analytics for political and commercial purposes to building decision support systems. Compared to other languages, Arabic, especially the informal variety, poses a significant challenge to natural language processing algorithms since it comprises multiple dialects, linguistic code switching, and a lack of standardized orthographies, on top of its relatively complex morphology. Inherently, the problem of processing Arabic in the context of social media is the problem of how to handle resource-poor languages. In this talk I will go over some of our insights into some of these problems and show how there is a silver lining where we can generalize some of our solutions to other low-resource language contexts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,004
inproceedings | popovic-etal-2016-language | Language Related Issues for Machine Translation between Closely Related {S}outh {S}lavic Languages | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4806/ | Popovi{\'c}, Maja and Ar{\v{c}}an, Mihael and Klubi{\v{c}}ka, Filip | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 43--52 | Machine translation between closely related languages is less challenging and exhibits a smaller number of translation errors than translation between distant languages, but there are still obstacles which should be addressed in order to improve such systems. This work explores the obstacles for machine translation systems between closely related South Slavic languages, namely Croatian, Serbian and Slovenian. Statistical systems for all language pairs and translation directions are trained using parallel texts from different domains, however mainly on spoken language, i.e. subtitles. For translation between Serbian and Croatian, a rule-based system is also explored. It is shown that for all language pairs and translation systems, the main obstacles are differences between structural properties. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,005
inproceedings | adouane-etal-2016-romanized | {R}omanized {B}erber and {R}omanized {A}rabic Automatic Language Identification Using Machine Learning | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4807/ | Adouane, Wafia and Semmar, Nasredine and Johansson, Richard | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 53--61 | The identification of the language of text/speech input is the first step to be able to properly do any language-dependent natural language processing. The task is called Automatic Language Identification (ALI). Being a well-studied field since the early 1960s, various methods have been applied to many standard languages. The ALI standard methods require datasets for training and use character/word-based n-gram models. However, social media and new technologies have contributed to the rise of informal and minority languages on the Web. The state-of-the-art automatic language identifiers fail to properly identify many of them. Romanized Arabic (RA) and Romanized Berber (RB) are cases of these informal languages which are under-resourced. The goal of this paper is twofold: detect RA and RB, at a document level, as separate languages and distinguish between them as they coexist in North Africa. We consider the task as a classification problem and use supervised machine learning to solve it. For both languages, character-based 5-grams combined with additional lexicons score the best, with F-scores of 99.75{\%} and 97.77{\%} for RB and RA respectively. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,006
inproceedings | ostling-2016-many | How Many Languages Can a Language Model Model? | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4808/ | {\"O}stling, Robert | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 62 | One of the purposes of the VarDial workshop series is to encourage research into NLP methods that treat human languages as a continuum, by designing models that exploit the similarities between languages and variants. In my work, I am using a continuous vector representation of languages that allows modeling and exploring the language continuum in a very direct way. The basic tool for this is a character-based recurrent neural network language model conditioned on language vectors whose values are learned during training. By feeding the model Bible translations in a thousand languages, not only does the learned vector space capture language similarity, but by interpolating between the learned vectors it is possible to generate text in unattested intermediate forms between the training languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,007
inproceedings | adouane-etal-2016-automatic | Automatic Detection of {A}rabicized {B}erber and {A}rabic Varieties | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4809/ | Adouane, Wafia and Semmar, Nasredine and Johansson, Richard and Bobicev, Victoria | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 63--72 | Automatic Language Identification (ALI) is the detection of the natural language of an input text by a machine. It is the first necessary step to do any language-dependent natural language processing task. Various methods have been successfully applied to a wide range of languages, and the state-of-the-art automatic language identifiers are mainly based on character n-gram models trained on huge corpora. However, there are many languages which are not yet automatically processed, for instance minority and informal languages. Many of these languages are only spoken and do not exist in a written format. Social media platforms and new technologies have facilitated the emergence of written format for these spoken languages based on pronunciation. The latter are not well represented on the Web, commonly referred to as under-resourced languages, and the current available ALI tools fail to properly recognize them. In this paper, we revisit the problem of ALI with the focus on Arabicized Berber and dialectal Arabic short texts. We introduce new resources and evaluate the existing methods. The results show that machine learning models combined with lexicons are well suited for detecting Arabicized Berber and different Arabic varieties and distinguishing between them, giving a macro-average F-score of 92.94{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,008
inproceedings | aminian-etal-2016-automatic | Automatic Verification and Augmentation of Multilingual Lexicons | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4810/ | Aminian, Maryam and Al-Badrashiny, Mohamed and Diab, Mona | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 73--81 | We present an approach for automatic verification and augmentation of multilingual lexica. We exploit existing parallel and monolingual corpora to extract multilingual correspondents via triangulation. We demonstrate the efficacy of our approach on two publicly available resources: Tharwa, a three-way lexicon comprising Dialectal Arabic, Modern Standard Arabic and English lemmas among other information (Diab et al., 2014); and BabelNet, a multilingual thesaurus comprising over 276 languages including Arabic variant entries (Navigli and Ponzetto, 2012). Our automated approach yields an F1-score of 71.71{\%} in generating correct multilingual correspondents against gold Tharwa, and 54.46{\%} against gold BabelNet without any human intervention. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,009
inproceedings | kunchukuttan-bhattacharyya-2016-faster | Faster Decoding for Subword Level Phrase-based {SMT} between Related Languages | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4811/ | Kunchukuttan, Anoop and Bhattacharyya, Pushpak | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 82--88 | A common and effective way to train translation systems between related languages is to consider sub-word level basic units. However, this increases the length of the sentences resulting in increased decoding time. The increase in length is also impacted by the specific choice of data format for representing the sentences as subwords. In a phrase-based SMT framework, we investigate different choices of decoder parameters as well as data format and their impact on decoding time and translation accuracy. We suggest best options for these settings that significantly improve decoding time with little impact on the translation accuracy. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,010
inproceedings | malmasi-2016-subdialectal | Subdialectal Differences in {S}orani {K}urdish | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4812/ | Malmasi, Shervin | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 89--96 | In this study we apply classification methods for detecting subdialectal differences in Sorani Kurdish texts produced in different regions, namely Iran and Iraq. As Sorani is a low-resource language, no corpus including texts from different regions was readily available. To this end, we identified data sources that could be leveraged for this task to create a dataset of 200,000 sentences. Using surface features, we attempted to classify Sorani subdialects, showing that sentences from news sources in Iraq and Iran are distinguishable with 96{\%} accuracy. This is the first preliminary study for a dialect that has not been widely studied in computational linguistics, evidencing the possible existence of distinct subdialects. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,011
inproceedings | popovic-etal-2016-enlarging | Enlarging Scarce In-domain {E}nglish-{C}roatian Corpus for {SMT} of {MOOC}s Using {S}erbian | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4813/ | Popovi{\'c}, Maja and Cholakov, Kostadin and Kordoni, Valia and Ljube{\v{s}}i{\'c}, Nikola | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 97--105 | Massive Open Online Courses have been growing rapidly in size and impact. Yet the language barrier constitutes a major growth impediment in reaching out to all people and educating all citizens. A vast majority of educational material is available only in English, and state-of-the-art machine translation systems still have not been tailored for this peculiar genre. In addition, a mere collection of appropriate in-domain training material is a challenging task. In this work, we investigate statistical machine translation of lecture subtitles from English into Croatian, which is morphologically rich and generally weakly supported, especially for the educational domain. We show that results comparable with publicly available systems trained on much larger data can be achieved if a small in-domain training set is used in combination with an additional in-domain corpus originating from the closely related Serbian language. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,012
inproceedings | malmasi-zampieri-2016-arabic | {A}rabic Dialect Identification in Speech Transcripts | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4814/ | Malmasi, Shervin and Zampieri, Marcos | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 106--113 | In this paper we describe a system developed to identify a set of four regional Arabic dialects (Egyptian, Gulf, Levantine, North African) and Modern Standard Arabic (MSA) in a transcribed speech corpus. We competed under the team name MAZA in the Arabic Dialect Identification sub-task of the 2016 Discriminating between Similar Languages (DSL) shared task. Our system achieved an F1-score of 0.51 in the closed training track, ranking first among the 18 teams that participated in the sub-task. Our system utilizes a classifier ensemble with a set of linear models as base classifiers. We experimented with three different ensemble fusion strategies, with the mean probability approach providing the best performance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,013
inproceedings | herman-etal-2016-dsl | {DSL} Shared Task 2016: Perfect Is The Enemy of Good Language Discrimination Through Expectation{--}Maximization and Chunk-based Language Model | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4815/ | Herman, Ond{\v{r}}ej and Suchomel, V{\'i}t and Baisa, V{\'i}t and Rychl{\'y}, Pavel | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 114--118 | In this paper we investigate two approaches to discrimination of similar languages: Expectation{--}maximization algorithm for estimating conditional probability P(word|language) and byte level language models similar to compression-based language modelling methods. The accuracy of these methods reached respectively 86.6{\%} and 88.3{\%} on set A of the DSL Shared task 2016 competition. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,014
inproceedings | bjerva-2016-byte | Byte-based Language Identification with Deep Convolutional Networks | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4816/ | Bjerva, Johannes | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 119--125 | We report on our system for the shared task on discriminating between similar languages (DSL 2016). The system uses only byte representations in a deep residual network (ResNet). The system, named ResIdent, is trained only on the data released with the task (closed training). We obtain 84.88{\%} accuracy on subtask A, 68.80{\%} accuracy on subtask B1, and 69.80{\%} accuracy on subtask B2. A large difference in accuracy on development data can be observed with relatively minor changes in our network's architecture and hyperparameters. We therefore expect fine-tuning of these parameters to yield higher accuracies. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,015
inproceedings | hanani-etal-2016-classifying | Classifying {ASR} Transcriptions According to {A}rabic Dialect | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4817/ | Hanani, Abualsoud and Qaroush, Aziz and Taylor, Stephen | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 126--134 | We describe several systems for identifying short samples of Arabic dialects. The systems were prepared for the shared task of the 2016 DSL Workshop. Our best system, an SVM using character tri-gram features, achieved an accuracy on the test data for the task of 0.4279, compared to a baseline of 0.20 for chance guesses or 0.2279 if we had always chosen the same most frequent class in the test set. This compares with the results of the team with the best weighted F1 score, which was an accuracy of 0.5117. The team entries seem to fall into cohorts, with all the teams in a cohort within a standard-deviation of each other, and our three entries are in the third cohort, which is about seven standard deviations from the top. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,016
inproceedings | ionescu-popescu-2016-unibuckernel | {U}nibuc{K}ernel: An Approach for {A}rabic Dialect Identification Based on Multiple String Kernels | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4818/ | Ionescu, Radu Tudor and Popescu, Marius | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 135--144 | The most common approach in text mining classification tasks is to rely on features like words, part-of-speech tags, stems, or some other high-level linguistic features. Unlike the common approach, we present a method that uses only character p-grams (also known as n-grams) as features for the Arabic Dialect Identification (ADI) Closed Shared Task of the DSL 2016 Challenge. The proposed approach combines several string kernels using multiple kernel learning. In the learning stage, we try both Kernel Discriminant Analysis (KDA) and Kernel Ridge Regression (KRR), and we choose KDA as it gives better results in a 10-fold cross-validation carried out on the training set. Our approach is shallow and simple, but the empirical results obtained in the ADI Shared Task prove that it achieves very good results. Indeed, we ranked in second place with an accuracy of 50.91{\%} and a weighted F1 score of 51.31{\%}. We also present improved results in this paper, which we obtained after the competition ended. Simply by adding more regularization into our model to make it more suitable for test data that comes from a different distribution than the training data, we obtain an accuracy of 51.82{\%} and a weighted F1 score of 52.18{\%}. Furthermore, the proposed approach has an important advantage in that it is language independent and linguistic theory neutral, as it does not require any NLP tools. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,017
inproceedings | belinkov-glass-2016-character | A Character-level Convolutional Neural Network for Distinguishing Similar Languages and Dialects | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4819/ | Belinkov, Yonatan and Glass, James | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 145--152 | Discriminating between closely-related language varieties is considered a challenging and important task. This paper describes our submission to the DSL 2016 shared-task, which included two sub-tasks: one on discriminating similar languages and one on identifying Arabic dialects. We developed a character-level neural network for this task. Given a sequence of characters, our model embeds each character in vector space, runs the sequence through multiple convolutions with different filter widths, and pools the convolutional representations to obtain a hidden vector representation of the text that is used for predicting the language or dialect. We primarily focused on the Arabic dialect identification task and obtained an F1 score of 0.4834, ranking 6th out of 18 participants. We also analyze errors made by our system on the Arabic data in some detail, and point to challenges such an approach is faced with. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,018
inproceedings | jauhiainen-etal-2016-heli | {H}e{LI}, a Word-Based Backoff Method for Language Identification | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4820/ | Jauhiainen, Tommi and Lind{\'e}n, Krister and Jauhiainen, Heidi | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 153--162 | In this paper we describe the Helsinki language identification method, HeLI, and the resources we created for and used in the 3rd edition of the Discriminating between Similar Languages (DSL) shared task, which was organized as part of the VarDial 2016 workshop. The shared task comprised a total of 8 tracks, of which we participated in 7. The shared task had a record number of participants, with 17 teams providing results for the closed track of the test set A. Our system reached the 2nd position in 4 tracks (A closed and open, B1 open and B2 open) and in this paper we are focusing on the methods and data used for those tracks. We describe our word-based backoff method in mathematical notation. We also describe how we selected the corpus we used in the open tracks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,019
inproceedings | adouane-etal-2016-asirem | {ASIREM} Participation at the Discriminating Similar Languages Shared Task 2016 | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4821/ | Adouane, Wafia and Semmar, Nasredine and Johansson, Richard | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 163--169 | This paper presents the system built by ASIREM team for the Discriminating between Similar Languages (DSL) Shared task 2016. It describes the system which uses character-based and word-based n-grams separately. ASIREM participated in both sub-tasks (sub-task 1 and sub-task 2) and in both open and closed tracks. For the sub-task 1 which deals with Discriminating between similar languages and national language varieties, the system achieved an accuracy of 87.79{\%} on the closed track, ending up ninth (the best results being 89.38{\%}). In sub-task 2, which deals with Arabic dialect identification, the system achieved its best performance using character-based n-grams (49.67{\%} accuracy), ranking fourth in the closed track (the best result being 51.16{\%}), and an accuracy of 53.18{\%}, ranking first in the open track. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,020
inproceedings | gamallo-etal-2016-comparing | Comparing Two Basic Methods for Discriminating Between Similar Languages and Varieties | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{si{\'c, Nikola and Tiedemann, J{\"org and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4822/ | Gamallo, Pablo and Alegria, I{\~n}aki and Pichel, Jos{\'e} Ramom and Agirrezabal, Manex | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 170--177 | This article describes the systems submitted by the Citius{\_}Ixa{\_}Imaxin team to the Discriminating Similar Languages Shared Task 2016. The systems are based on two different strategies: classification with ranked dictionaries and Naive Bayes classifiers. The results of the evaluation show that ranking dictionaries are more sound and stable across different domains while basic bayesian models perform reasonably well on in-domain datasets, but their performance drops when they are applied on out-of-domain texts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,021 |
inproceedings | goutte-leger-2016-advances | Advances in Ngram-based Discrimination of Similar Languages | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{si{\'c, Nikola and Tiedemann, J{\"org and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4823/ | Goutte, Cyril and L{\'e}ger, Serge | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 178--184 | We describe the systems entered by the National Research Council in the 2016 shared task on discriminating similar languages. Like previous years, we relied on character ngram features, and a mixture of discriminative and generative statistical classifiers. We mostly investigated the influence of the amount of data on the performance, in the open task, and compared the two-stage approach (predicting language/group, then variant) to a flat approach. Results suggest that ngrams are still state-of-the-art for language and variant identification, and that additional data has a small but decisive impact. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,022 |
inproceedings | guggilla-2016-discrimination | Discrimination between Similar Languages, Varieties and Dialects using {CNN}- and {LSTM}-based Deep Neural Networks | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{si{\'c, Nikola and Tiedemann, J{\"org and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4824/ | Guggilla, Chinnappa | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 185--194 | In this paper, we describe a system (CGLI) for discriminating similar languages, varieties and dialects using convolutional neural networks (CNNs) and long short-term memory (LSTM) neural networks. We have participated in the Arabic dialect identification sub-task of DSL 2016 shared task for distinguishing different Arabic language texts under closed submission track. Our proposed approach is language independent and works for discriminating any given set of languages, varieties, and dialects. We have obtained 43.29{\%} weighted-F1 accuracy in this sub-task using CNN approach using default network parameters. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,023 |
inproceedings | mcnamee-2016-language | Language and Dialect Discrimination Using Compression-Inspired Language Models | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{si{\'c, Nikola and Tiedemann, J{\"org and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4825/ | McNamee, Paul | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 195--203 | The DSL 2016 shared task continued previous evaluations from 2014 and 2015 that facilitated the study of automated language and dialect identification. This paper describes results for this year`s shared task and from several related experiments conducted at the Johns Hopkins University Human Language Technology Center of Excellence (JHU HLTCOE). Previously the HLTCOE has explored the use of compression-inspired language modeling for language and dialect identification, using news, Wikipedia, blog post, and Twitter corpora. The technique we have relied upon is based on prediction by partial matching (PPM), a state of the art text compression technique. Due to the close relationship between adaptive compression and language modeling, such compression techniques can also be applied to multi-way text classification problems, and previous studies have examined tasks such as authorship attribution, email spam detection, and topical classification. We applied our approach to the multi-class decision that considered each dialect or language as a possibility for the given shared task input line. Results for test-set A were in accord with our expectations, however results for test-sets B and C appear to be markedly worse. We had not anticipated the inclusion of multiple communications in differing languages in test-set B (social media) input lines, and had not expected the test-set C (dialectal Arabic) data to be represented phonetically instead of in native orthography. 
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,024 |
inproceedings | alshutayri-etal-2016-arabic | {A}rabic Language {WEKA}-Based Dialect Classifier for {A}rabic Automatic Speech Recognition Transcripts | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4826/ | Alshutayri, Areej and Atwell, Eric and Alosaimy, Abdulrahman and Dickins, James and Ingleby, Michael and Watson, Janet | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 204--211 | This paper describes an Arabic dialect identification system which we developed for the Discriminating Similar Languages (DSL) 2016 shared task. We classified Arabic dialects by using Waikato Environment for Knowledge Analysis (WEKA) data analytic tool which contains many alternative filters and classifiers for machine learning. We experimented with several classifiers and the best accuracy was achieved using the Sequential Minimal Optimization (SMO) algorithm for training and testing process set to three different feature-sets for each testing process. Our approach achieved an accuracy equal to 42.85{\%} which is considerably worse in comparison to the evaluation scores on the training set of 80-90{\%} and with training set {\textquotedblleft}60:40{\textquotedblright} percentage split which achieved accuracy around 50{\%}. We observed that Buckwalter transcripts from the Saarland Automatic Speech Recognition (ASR) system are given without short vowels, though the Buckwalter system has notation for these. We elaborate such observations, describe our methods and analyse the training dataset. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,025
inproceedings | barbaresi-2016-unsupervised | An Unsupervised Morphological Criterion for Discriminating Similar Languages | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4827/ | Barbaresi, Adrien | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 212--220 | In this study conducted on the occasion of the Discriminating between Similar Languages shared task, I introduce an additional decision factor focusing on the token and subtoken level. The motivation behind this submission is to test whether a morphologically-informed criterion can add linguistically relevant information to global categorization and thus improve performance. The contributions of this paper are (1) a description of the unsupervised, low-resource method; (2) an evaluation and analysis of its raw performance; and (3) an assessment of its impact within a model comprising common indicators used in language identification. I present and discuss the systems used in the task A, a 12-way language identification task comprising varieties of five main language groups. Additionally I introduce a new off-the-shelf Naive Bayes classifier using a contrastive word and subword n-gram model ({\textquotedblleft}Bayesline{\textquotedblright}) which outperforms the best submissions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,026
inproceedings | eldesouki-etal-2016-qcri | {QCRI} @ {DSL} 2016: Spoken {A}rabic Dialect Identification Using Textual Features | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4828/ | Eldesouki, Mohamed and Dalvi, Fahim and Sajjad, Hassan and Darwish, Kareem | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 221--226 | The paper describes the QCRI submissions to the task of automatic Arabic dialect classification into 5 Arabic variants, namely Egyptian, Gulf, Levantine, North-African, and Modern Standard Arabic (MSA). The training data is relatively small and is automatically generated from an ASR system. To avoid over-fitting on such small data, we carefully selected and designed the features to capture the morphological essence of the different dialects. We submitted four runs to the Arabic sub-task. For all runs, we used a combined feature vector of character bi-grams, tri-grams, 4-grams, and 5-grams. We tried several machine-learning algorithms, namely Logistic Regression, Naive Bayes, Neural Networks, and Support Vector Machines (SVM) with linear and string kernels. However, our submitted runs used SVM with a linear kernel. In the closed submission, we got the best accuracy of 0.5136 and the third best weighted F1 score, with a difference less than 0.002 from the highest score. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,027
inproceedings | franco-penya-mamani-sanchez-2016-tuning | Tuning {B}ayes Baseline for Dialect Detection | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4829/ | Franco-Penya, Hector-Hugo and Mamani Sanchez, Liliana | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 227--234 | This paper describes an analysis of our submissions to the Dialect Detection Shared Task 2016. We proposed three different systems that involved simplistic features, to name: a Naive-bayes system, a Support Vector Machines-based system and a Tree Kernel-based system. These systems underperform when compared to other submissions in this shared task, since the best one achieved an accuracy of {\textasciitilde}0.834. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,028
inproceedings | nisioi-etal-2016-vanilla | Vanilla Classifiers for Distinguishing between Similar Languages | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4830/ | Nisioi, Sergiu and Ciobanu, Alina Maria and Dinu, Liviu P. | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 235--242 | In this paper we describe the submission of the UniBuc-NLP team for the Discriminating between Similar Languages Shared Task, DSL 2016. We present and analyze the results we obtained in the closed track of sub-task 1 (Similar languages and language varieties) and sub-task 2 (Arabic dialects). For sub-task 1 we used a logistic regression classifier with tf-idf feature weighting and for sub-task 2 a character-based string kernel with an SVM classifier. Our results show that good accuracy scores can be obtained with limited feature and model engineering. While certain limitations are to be acknowledged, our approach worked surprisingly well for out-of-domain, social media data, with 0.898 accuracy (3rd place) for dataset B1 and 0.838 accuracy (4th place) for dataset B2. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,029
inproceedings | cianflone-kosseim-2016-n | N-gram and Neural Language Models for Discriminating Similar Languages | Nakov, Preslav and Zampieri, Marcos and Tan, Liling and Ljube{\v{s}}i{\'c}, Nikola and Tiedemann, J{\"o}rg and Malmasi, Shervin | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4831/ | Cianflone, Andre and Kosseim, Leila | Proceedings of the Third Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial3) | 243--250 | This paper describes our submission to the 2016 Discriminating Similar Languages (DSL) Shared Task. We participated in the closed Sub-task 1 with two separate machine learning techniques. The first approach is a character based Convolution Neural Network with an LSTM layer (CLSTM), which achieved an accuracy of 78.45{\%} with minimal tuning. The second approach is a character-based n-gram model of size 7. It achieved an accuracy of 88.45{\%} which is close to the accuracy of 89.38{\%} achieved by the best submission. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,030
inproceedings | liu-matsumoto-2016-simplification | Simplification of Example Sentences for Learners of {J}apanese Functional Expressions | Chen, Hsin-Hsi and Tseng, Yuen-Hsien and Ng, Vincent and Lu, Xiaofei | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4901/ | Liu, Jun and Matsumoto, Yuji | Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA}2016) | 1--5 | Learning functional expressions is one of the difficulties for language learners, since functional expressions tend to have multiple meanings and complicated usages in various situations. In this paper, we report an experiment of simplifying example sentences of Japanese functional expressions especially for Chinese-speaking learners. For this purpose, we developed {\textquotedblleft}Japanese Functional Expressions List{\textquotedblright} and {\textquotedblleft}Simple Japanese Replacement List{\textquotedblright}. To evaluate the method, we conduct a small-scale experiment with Chinese-speaking learners on the effectiveness of the simplified example sentences. The experimental results indicate that simplified sentences are helpful in learning Japanese functional expressions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,032 |
inproceedings | kotani-yoshimi-2016-effectiveness | Effectiveness of Linguistic and Learner Features to Listenability Measurement Using a Decision Tree Classifier | Chen, Hsin-Hsi and Tseng, Yuen-Hsien and Ng, Vincent and Lu, Xiaofei | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4902/ | Kotani, Katsunori and Yoshimi, Takehiko | Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA}2016) | 6--10 | In learning Asian languages, learners encounter the problem of character types that are different from those in their first language, for instance, between Chinese characters and the Latin alphabet. This problem also affects listening because learners reconstruct letters from speech sounds. Hence, special attention should be paid to listening practice for learners of Asian languages. However, to our knowledge, few studies have evaluated the ease of listening comprehension (listenability) in Asian languages. Therefore, as a pilot study of listenability in Asian languages, we developed a measurement method for learners of English in order to examine the discriminability of linguistic and learner features. The results showed that the accuracy of our method outperformed a simple majority vote, which suggests that a combination of linguistic and learner features should be used to measure listenability in Asian languages as well as in English. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,033 |
inproceedings | pathak-etal-2016-two | A Two-Phase Approach Towards Identifying Argument Structure in Natural Language | Chen, Hsin-Hsi and Tseng, Yuen-Hsien and Ng, Vincent and Lu, Xiaofei | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4903/ | Pathak, Arkanath and Goyal, Pawan and Bhowmick, Plaban | Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA}2016) | 11--19 | We propose a new approach for extracting argument structure from natural language texts that contain an underlying argument. Our approach comprises of two phases: Score Assignment and Structure Prediction. The Score Assignment phase trains models to classify relations between argument units (Support, Attack or Neutral). To that end, different training strategies have been explored. We identify different linguistic and lexical features for training the classifiers. Through ablation study, we observe that our novel use of word-embedding features is most effective for this task. The Structure Prediction phase makes use of the scores from the Score Assignment phase to arrive at the optimal structure. We perform experiments on three argumentation datasets, namely, AraucariaDB, Debatepedia and Wikipedia. We also propose two baselines and observe that the proposed approach outperforms baseline systems for the final task of Structure Prediction. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,034 |
inproceedings | adams-etal-2016-distributed | Distributed Vector Representations for Unsupervised Automatic Short Answer Grading | Chen, Hsin-Hsi and Tseng, Yuen-Hsien and Ng, Vincent and Lu, Xiaofei | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4904/ | Adams, Oliver and Roy, Shourya and Krishnapuram, Raghuram | Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA}2016) | 20--29 | We address the problem of automatic short answer grading, evaluating a collection of approaches inspired by recent advances in distributional text representations. In addition, we propose an unsupervised approach for determining text similarity using one-to-many alignment of word vectors. We evaluate the proposed technique across two datasets from different domains, namely, computer science and English reading comprehension, that additionally vary between highschool level and undergraduate students. Experiments demonstrate that the proposed technique often outperforms other compositional distributional semantics approaches as well as vector space methods such as latent semantic analysis. When combined with a scoring scheme, the proposed technique provides a powerful tool for tackling the complex problem of short answer grading. We also discuss a number of other key points worthy of consideration in preparing viable, easy-to-deploy automatic short-answer grading systems for the real-world. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,035 |
inproceedings | kang-etal-2016-comparison | A Comparison of Word Embeddings for {E}nglish and Cross-Lingual {C}hinese Word Sense Disambiguation | Chen, Hsin-Hsi and Tseng, Yuen-Hsien and Ng, Vincent and Lu, Xiaofei | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4905/ | Kang, Hong Jin and Chen, Tao and Chandrasekaran, Muthu Kumar and Kan, Min-Yen | Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA}2016) | 30--39 | Word embeddings are now ubiquitous forms of word representation in natural language processing. There have been applications of word embeddings for monolingual word sense disambiguation (WSD) in English, but few comparisons have been done. This paper attempts to bridge that gap by examining popular embeddings for the task of monolingual English WSD. Our simplified method leads to comparable state-of-the-art performance without expensive retraining. Cross-Lingual WSD {--} where the word senses of a word in a source language come from a separate target translation language {--} can also assist in language learning; for example, when providing translations of target vocabulary for learners. Thus we have also applied word embeddings to the novel task of cross-lingual WSD for Chinese and provide a public dataset for further benchmarking. We have also experimented with using word embeddings for LSTM networks and found surprisingly that a basic LSTM network does not work well. We discuss the ramifications of this outcome. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,036 |
inproceedings | lee-etal-2016-overview | Overview of {NLP}-{TEA} 2016 Shared Task for {C}hinese Grammatical Error Diagnosis | Chen, Hsin-Hsi and Tseng, Yuen-Hsien and Ng, Vincent and Lu, Xiaofei | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4906/ | Lee, Lung-Hao and Rao, Gaoqi and Yu, Liang-Chih and Xun, Endong and Zhang, Baolin and Chang, Li-Ping | Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA}2016) | 40--48 | This paper presents the NLP-TEA 2016 shared task for Chinese grammatical error diagnosis which seeks to identify grammatical error types and their range of occurrence within sentences written by learners of Chinese as foreign language. We describe the task definition, data preparation, performance metrics, and evaluation results. Of the 15 teams registered for this shared task, 9 teams developed the system and submitted a total of 36 runs. We expected this evaluation campaign could lead to the development of more advanced NLP techniques for educational applications, especially for Chinese error detection. All data sets with gold standards and scoring scripts are made publicly available to researchers. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 59,037 |