field               dtype          values / range
-----------------   ------------   -------------------------
entry_type          stringclasses  4 values
citation_key        stringlengths  10–110
title               stringlengths  6–276
editor              stringclasses  723 values
month               stringclasses  69 values
year                stringdate     1963-01-01 to 2022-01-01
address             stringclasses  202 values
publisher           stringclasses  41 values
url                 stringlengths  34–62
author              stringlengths  6–2.07k
booktitle           stringclasses  861 values
pages               stringlengths  1–12
abstract            stringlengths  302–2.4k
journal             stringclasses  5 values
volume              stringclasses  24 values
doi                 stringlengths  20–39
n                   stringclasses  3 values
wer                 stringclasses  1 value
uas                 null
language            stringclasses  3 values
isbn                stringclasses  34 values
recall              null
number              stringclasses  8 values
a                   null
b                   null
c                   null
k                   null
f1                  stringclasses  4 values
r                   stringclasses  2 values
mci                 stringclasses  1 value
p                   stringclasses  2 values
sd                  stringclasses  1 value
female              stringclasses  0 values
m                   stringclasses  0 values
food                stringclasses  1 value
f                   stringclasses  1 value
note                stringclasses  20 values
__index_level_0__   int64          22k–106k

Every column from journal through note is null in all of the sample rows below, so each record lists only its populated fields.
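
To make the schema concrete, here is a minimal sketch of loading and inspecting this dataset with the Hugging Face datasets library; the dataset path "user/acl-anthology-bib" is a hypothetical placeholder, not the real repository name.

```python
# Minimal sketch; "user/acl-anthology-bib" is a hypothetical placeholder path.
from datasets import load_dataset

ds = load_dataset("user/acl-anthology-bib", split="train")
print(ds.features)  # field names and dtypes, as in the table above

# The metric-like columns (n, wer, uas, recall, f1, ...) are null for the
# workshop-paper rows shown below, so filter on a field that is populated.
with_abstract = ds.filter(lambda row: row["abstract"] is not None)
print(len(with_abstract), "rows with a non-null abstract")
```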

entry_type: inproceedings
citation_key: zheng-etal-2016-chinese
title: {C}hinese Grammatical Error Diagnosis with Long Short-Term Memory Networks
editor: Chen, Hsin-Hsi and Tseng, Yuen-Hsien and Ng, Vincent and Lu, Xiaofei
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-4907/
author: Zheng, Bo and Che, Wanxiang and Guo, Jiang and Liu, Ting
booktitle: Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA}2016)
pages: 49--56
abstract: Grammatical error diagnosis is an important task in natural language processing. This paper introduces our Chinese Grammatical Error Diagnosis (CGED) system for the NLP-TEA-3 shared task on CGED. The system can diagnose four types of grammatical errors: redundant words (R), missing words (M), bad word selection (S) and disordered words (W). We treat the CGED task as a sequence labeling task and describe three models: a CRF-based model, an LSTM-based model and an ensemble model using stacking. We also show in detail how we build and train the models. Evaluation covers three levels: detection, identification and position. On the CGED-HSK dataset of the NLP-TEA-3 shared task, our system achieves the best F1-scores at all three levels and the best recall at the last two levels.
__index_level_0__: 59,038
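
The paper above casts CGED as sequence labeling. As a rough illustration of that framing (not the authors' code), the sketch below projects annotated error spans onto per-character B-/I- tags of the four error types; the 1-based, inclusive span convention is an assumption made for the example.

```python
# A minimal sketch, not the authors' code: casting grammatical error
# diagnosis as sequence labeling. Error spans (start, end, type) are
# projected onto per-character B-/I- tags, the input format a CRF- or
# LSTM-based tagger would consume. Offsets are assumed 1-based, inclusive.
def spans_to_tags(sentence, errors):
    tags = ["O"] * len(sentence)
    for start, end, etype in errors:  # etype in {"R", "M", "S", "W"}
        tags[start - 1] = f"B-{etype}"
        for i in range(start, end):
            tags[i] = f"I-{etype}"
    return tags

sent = "我非常喜欢吃苹果果"            # final character is redundant
print(spans_to_tags(sent, [(9, 9, "R")]))
# ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-R']
```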

entry_type: inproceedings
citation_key: liu-etal-2016-automatic
title: Automatic Grammatical Error Detection for {C}hinese based on Conditional Random Field
editor: Chen, Hsin-Hsi and Tseng, Yuen-Hsien and Ng, Vincent and Lu, Xiaofei
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-4908/
author: Liu, Yajun and Han, Yingjie and Zhuo, Liyan and Zan, Hongying
booktitle: Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA}2016)
pages: 57--62
abstract: In the process of learning and using Chinese, foreigners may make grammatical errors due to negative transfer from their native languages. Currently, computer-oriented automatic detection of grammatical errors is not mature enough. Based on the evaluation task CGED2016, we select and analyze a classification model and design a feature extraction method to automatically detect the grammatical error types Missing (M), Disorder (W), Selection (S) and Redundant (R). Experimental results on the dynamic HSK corpus show that the proposed method, which uses a CRF as the classification model and n-grams for feature extraction, is simple and efficient; it has a positive effect on research into automatic Chinese grammatical error detection and also plays a supporting and guiding role in the teaching of Chinese as a foreign language.
__index_level_0__: 59,039

entry_type: inproceedings
citation_key: chen-etal-2016-cyut
title: {CYUT}-{III} System at {C}hinese Grammatical Error Diagnosis Task
editor: Chen, Hsin-Hsi and Tseng, Yuen-Hsien and Ng, Vincent and Lu, Xiaofei
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-4909/
author: Chen, Po-Lin and Wu, Shih-Hung and Chen, Liang-Pu and Yang, Ping-Che
booktitle: Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA}2016)
pages: 63--72
abstract: This paper describes the CYUT-III system for grammatical error detection in the 2016 NLP-TEA Chinese Grammatical Error Detection (CGED) shared task. In this task a system has to detect four types of errors: redundant word, missing word, word selection and word ordering errors. Based on the conditional random fields (CRF) model, our system is a linear tagger that can detect the errors in learners' essays. Since system performance depends heavily on the features, in this paper we report how we integrate the collocation feature into the CRF model. Our system achieves the best detection accuracy and identification accuracy on the TOCFL dataset, which is in traditional Chinese. The same system also works well on the simplified Chinese HSK dataset.
__index_level_0__: 59,040

entry_type: inproceedings
citation_key: chou-etal-2016-word
title: Word Order Sensitive Embedding Features/Conditional Random Field-based {C}hinese Grammatical Error Detection
editor: Chen, Hsin-Hsi and Tseng, Yuen-Hsien and Ng, Vincent and Lu, Xiaofei
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-4910/
author: Chou, Wei-Chieh and Lin, Chin-Kui and Liao, Yuan-Fu and Wang, Yih-Ru
booktitle: Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA}2016)
pages: 73--81
abstract: This paper discusses how to adapt two new word embedding features to build a more efficient Chinese Grammatical Error Diagnosis (CGED) system to assist Chinese foreign learners (CFLs) in improving their written essays. The major idea is to apply word-order-sensitive Word2Vec approaches, namely the (1) structured skip-gram and (2) continuous window (CWindow) models, because they are more suitable for solving syntax-based problems. The proposed new features were evaluated on the Test of Chinese as a Foreign Language (TOCFL) learner database provided by the NLP-TEA-3{\&}CGED shared task. Experimental results showed that the new features worked better than the traditional word-order-insensitive Word2Vec approaches. Moreover, according to the official evaluation results, our system achieved the lowest false positive rate (0.1362) and the highest precision in all three measurements.
__index_level_0__: 59,041

entry_type: inproceedings
citation_key: roy-etal-2016-fluctuation
title: A Fluctuation Smoothing Approach for Unsupervised Automatic Short Answer Grading
editor: Chen, Hsin-Hsi and Tseng, Yuen-Hsien and Ng, Vincent and Lu, Xiaofei
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-4911/
author: Roy, Shourya and Dandapat, Sandipan and Narahari, Y.
booktitle: Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA}2016)
pages: 82--91
abstract: We offer a fluctuation smoothing computational approach for unsupervised automatic short answer grading (ASAG) techniques in the educational ecosystem. A major drawback of the existing techniques is the significant effect that variations in model answers could have on their performances. The proposed fluctuation smoothing approach, based on classical sequential pattern mining, exploits lexical overlap in students' answers to any typical question. We empirically demonstrate using multiple datasets that the proposed approach improves the overall performance and significantly reduces (up to 63{\%}) variation in performance (standard deviation) of unsupervised ASAG techniques. We bring in additional benchmarks such as (a) paraphrasing of model answers and (b) using answers by k top performing students as model answers, to amplify the benefits of the proposed approach.
__index_level_0__: 59,042

entry_type: inproceedings
citation_key: hading-etal-2016-japanese
title: {J}apanese Lexical Simplification for Non-Native Speakers
editor: Chen, Hsin-Hsi and Tseng, Yuen-Hsien and Ng, Vincent and Lu, Xiaofei
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-4912/
author: Hading, Muhaimin and Matsumoto, Yuji and Sakamoto, Maki
booktitle: Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA}2016)
pages: 92--96
abstract: This paper introduces Japanese lexical simplification: the task of replacing difficult words in a given sentence to produce a new sentence with simple words, without changing the original meaning of the sentence. We propose a method of supervised regression learning to estimate the difficulty ordering of words, with statistical features obtained from two types of Japanese corpora. For word similarity, we use a Japanese thesaurus and dependency-based word embeddings. The proposed method is evaluated by comparing the difficulty orderings of the words.
__index_level_0__: 59,043

entry_type: inproceedings
citation_key: cao-etal-2016-corpus
title: A Corpus-based Approach for {S}panish-{C}hinese Language Learning
editor: Chen, Hsin-Hsi and Tseng, Yuen-Hsien and Ng, Vincent and Lu, Xiaofei
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-4913/
author: Cao, Shuyuan and da Cunha, Iria and Iruskieta, Mikel
booktitle: Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA}2016)
pages: 97--106
abstract: Due to the huge populations that speak Spanish and Chinese, these languages occupy an important position in language learning studies. Although there are some automatic translation systems that benefit the learning of both languages, there is still ample room for resources that help language learners. As a quick and effective resource that can provide a large amount of language information, corpus-based learning is becoming more and more popular. In this paper we enrich a Spanish-Chinese parallel corpus automatically with part-of-speech (POS) information and manually with discourse segmentation (following Rhetorical Structure Theory (RST) (Mann and Thompson, 1988)). Two search tools allow Spanish-Chinese language learners to carry out different queries based on tokens and lemmas. The parallel corpus and the search tools are available to the academic community. We propose some examples to illustrate how learners can use the corpus to learn Spanish and Chinese.
__index_level_0__: 59,044

entry_type: inproceedings
citation_key: morgado-da-costa-etal-2016-syntactic
title: Syntactic Well-Formedness Diagnosis and Error-Based Coaching in Computer Assisted Language Learning using Machine Translation
editor: Chen, Hsin-Hsi and Tseng, Yuen-Hsien and Ng, Vincent and Lu, Xiaofei
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-4914/
author: Morgado da Costa, Luis and Bond, Francis and He, Xiaoling
booktitle: Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA}2016)
pages: 107--116
abstract: We present a novel approach to Computer Assisted Language Learning (CALL), using deep syntactic parsers and semantic based machine translation (MT) in diagnosing and providing explicit feedback on language learners' errors. We are currently developing a proof of concept system showing how semantic-based machine translation can, in conjunction with robust computational grammars, be used to interact with students, better understand their language errors, and help students correct their grammar through a series of useful feedback messages and guided language drills. Ultimately, we aim to prove the viability of a new integrated rule-based MT approach to disambiguate students' intended meaning in a CALL system. This is a necessary step to provide accurate coaching on how to correct ungrammatical input, and it will allow us to overcome a current bottleneck in the field {---} an exponential burst of ambiguity caused by ambiguous lexical items (Flickinger, 2010). From the users' interaction with the system, we will also produce a richly annotated Learner Corpus, annotated automatically with both syntactic and semantic information.
__index_level_0__: 59,045

entry_type: inproceedings
citation_key: kalitvianski-etal-2016-aligned
title: An Aligned {F}rench-{C}hinese corpus of 10{K} segments from university educational material
editor: Chen, Hsin-Hsi and Tseng, Yuen-Hsien and Ng, Vincent and Lu, Xiaofei
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-4915/
author: Kalitvianski, Ruslan and Wang, Lingxiao and Bellynck, Val{\'e}rie and Boitet, Christian
booktitle: Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA}2016)
pages: 117--121
abstract: This paper describes a corpus of nearly 10K French-Chinese aligned segments, produced by post-editing machine-translated computer science courseware. This corpus was built from 2013 to 2016 within the PROJECT{\_}NAME project, by native Chinese students. The quality, as judged by native speakers, is adequate for understanding (far better than reading only the original French) and for getting better marks. The corpus is annotated at segment level with a self-assessed quality score. It has been used directly as supplemental training data to build a statistical machine translation system dedicated to that sublanguage, and it can be used to extract the specific bilingual terminology. To our knowledge, it is the first corpus of this kind to be released.
__index_level_0__: 59,046

entry_type: inproceedings
citation_key: zalmout-etal-2016-analysis
title: Analysis of Foreign Language Teaching Methods: An Automatic Readability Approach
editor: Chen, Hsin-Hsi and Tseng, Yuen-Hsien and Ng, Vincent and Lu, Xiaofei
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-4916/
author: Zalmout, Nasser and Saddiki, Hind and Habash, Nizar
booktitle: Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA}2016)
pages: 122--130
abstract: Much research in education has been done on the study of different language teaching methods. However, there has been little investigation using computational analysis to compare such methods in terms of readability or complexity progression. In this paper, we make use of existing readability scoring techniques and our own classifiers to analyze the textbooks used in two very different teaching methods for English as a Second Language {--} the grammar-based and the communicative methods. Our analysis indicates that the grammar-based curriculum shows a more coherent readability progression compared to the communicative curriculum. This finding corroborates the expectations about the differences between these two methods and validates our approach's value in comparing different teaching methods quantitatively.
__index_level_0__: 59,047

entry_type: inproceedings
citation_key: chen-etal-2016-generating
title: Generating and Scoring Correction Candidates in {C}hinese Grammatical Error Diagnosis
editor: Chen, Hsin-Hsi and Tseng, Yuen-Hsien and Ng, Vincent and Lu, Xiaofei
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-4917/
author: Chen, Shao-Heng and Tsai, Yu-Lin and Lin, Chuan-Jie
booktitle: Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA}2016)
pages: 131--139
abstract: Grammatical error diagnosis is an essential part in a language-learning tutoring system. Based on the data sets of Chinese grammar error detection tasks, we proposed a system which measures the likelihood of correction candidates generated by deleting or inserting characters or words, moving substrings to different positions, substituting prepositions with other prepositions, or substituting words with their synonyms or similar strings. Sentence likelihood is measured based on the frequencies of substrings from the space-removed version of Google n-grams. The evaluation on the training set shows that Missing-related and Selection-related candidate generation methods have promising performance. Our final system achieved a precision of 30.28{\%} and a recall of 62.85{\%} in the identification level evaluated on the test set.
__index_level_0__: 59,048
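
The candidate-scoring idea in this abstract, ranking corrections by substring frequencies, can be illustrated with a toy n-gram likelihood; the ngram_freq counts below are invented stand-ins for the space-removed Google n-grams the paper uses.

```python
# A minimal sketch of frequency-based candidate scoring; the counts are
# toy values, not real Google n-gram statistics.
import math

ngram_freq = {"我喜": 900, "喜欢": 2500, "欢吃": 300, "吃饭": 1200}

def sentence_score(sentence, n=2):
    # Sum of log-frequencies of all n-grams; unseen n-grams get a floor of 1.
    grams = [sentence[i:i + n] for i in range(len(sentence) - n + 1)]
    return sum(math.log(ngram_freq.get(g, 1)) for g in grams)

candidates = ["我喜欢吃饭", "我喜吃欢饭"]
print(max(candidates, key=sentence_score))  # the more fluent candidate wins
```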

entry_type: inproceedings
citation_key: yeh-etal-2016-grammatical
title: Grammatical Error Detection Based on Machine Learning for {M}andarin as Second Language Learning
editor: Chen, Hsin-Hsi and Tseng, Yuen-Hsien and Ng, Vincent and Lu, Xiaofei
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-4918/
author: Yeh, Jui-Feng and Hsu, Tsung-Wei and Yeh, Chan-Kun
booktitle: Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA}2016)
pages: 140--147
abstract: Mandarin is not a simple language for foreigners; even native speakers have to spend considerable time learning it as children. The following issues explain why it causes learning problems. First, the writing system evolved from hieroglyphs, so a character can express a meaning independently but take on a different semantics as part of a word. Second, Mandarin grammar has flexible rules and special usages. The common grammatical errors can therefore be classified as missing, redundant, selection and disorder. In this paper, we propose a Recurrent Neural Network structure using Long Short-Term Memory (RNN-LSTM) that can detect the error types in foreign learners' writing. The features are based on word vectors and part-of-speech vectors. On the test data, our method's recall at the detection level was better than the others', reaching as high as 0.9755, because we allow greater freedom of choice when detecting errors.
__index_level_0__: 59,049

entry_type: inproceedings
citation_key: huang-wang-2016-bi
title: {B}i-{LSTM} Neural Networks for {C}hinese Grammatical Error Diagnosis
editor: Chen, Hsin-Hsi and Tseng, Yuen-Hsien and Ng, Vincent and Lu, Xiaofei
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-4919/
author: Huang, Shen and Wang, Houfeng
booktitle: Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA}2016)
pages: 148--154
abstract: Grammatical Error Diagnosis for Chinese has always been a challenge for both foreign learners and NLP researchers, due to the variety of grammar and the flexibility of expression. In this paper, we present a model based on Bidirectional Long Short-Term Memory (Bi-LSTM) neural networks, which treats the task as a sequence labeling problem, so as to detect Chinese grammatical errors, identify the error types and locate the error positions. In the corpora of this year's shared task, there can be multiple errors in a single offset of a sentence; to address this, we simultaneously train three Bi-LSTM models sharing word embeddings which label Missing, Redundant and Selection errors respectively. We regard word ordering errors as a special kind of word selection error which is longer during the training phase, and then separate them by length during the testing phase. In the NLP-TEA 3 shared task for Chinese Grammatical Error Diagnosis (CGED), our system achieved relatively high F1 at all three levels in the traditional Chinese track and at the detection level in the simplified Chinese track.
__index_level_0__: 59,050
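
A Bi-LSTM tagger of the general shape described above can be sketched in a few lines of PyTorch; the layer sizes and tag inventory below are illustrative placeholders, not the authors' configuration.

```python
# A minimal PyTorch sketch of a Bi-LSTM sequence tagger; sizes are
# illustrative, not the paper's settings.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_tags)

    def forward(self, token_ids):            # (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))
        return self.out(h)                   # (batch, seq_len, num_tags)

model = BiLSTMTagger(vocab_size=5000, num_tags=9)
logits = model(torch.randint(0, 5000, (2, 20)))
print(logits.shape)  # torch.Size([2, 20, 9])
```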

entry_type: inproceedings
citation_key: yang-etal-2016-chinese
title: {C}hinese Grammatical Error Diagnosis Using Single Word Embedding
editor: Chen, Hsin-Hsi and Tseng, Yuen-Hsien and Ng, Vincent and Lu, Xiaofei
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-4920/
author: Yang, Jinnan and Peng, Bo and Wang, Jin and Zhang, Jixian and Zhang, Xuejie
booktitle: Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA}2016)
pages: 155--161
abstract: Automatic grammatical error detection for Chinese has been a big challenge for NLP researchers. Due to the formal and strict grammar rules in Chinese, it is hard for foreign students to master Chinese. A computer-assisted learning tool which can automatically detect and correct Chinese grammatical errors is necessary for those foreign students. Some previous works have sought to identify Chinese grammatical errors using template- and learning-based methods. In contrast, this study introduces convolutional neural networks (CNN) and long short-term memory (LSTM) for the shared task of Chinese Grammatical Error Diagnosis (CGED). Different from traditional word-based embeddings, single-word embeddings were used as the input of the CNN and LSTM. The proposed single-word embedding can capture both semantic and syntactic information to detect the four types of grammatical errors. In the experimental evaluation, the recall and F1-score of our submitted Run1 results on the TOCFL testing data ranked fourth among all submissions at the detection level.
__index_level_0__: 59,051

entry_type: inproceedings
citation_key: joshi-etal-2016-thought
title: {\textquoteleft}Who would have thought of that!': A Hierarchical Topic Model for Extraction of Sarcasm-prevalent Topics and Sarcasm Detection
editor: Blanco, Eduardo and Morante, Roser and Saur{\'i}, Roser
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5001/
author: Joshi, Aditya and Jain, Prayas and Bhattacharyya, Pushpak and Carman, Mark
booktitle: Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics ({E}x{P}ro{M})
pages: 1--10
abstract: Topic models have been reported to be beneficial for aspect-based sentiment analysis. This paper reports the first topic model for sarcasm detection, to the best of our knowledge. Designed on the basis of the intuition that sarcastic tweets are likely to have a mixture of words of both sentiments, as against tweets with literal sentiment (either positive or negative), our hierarchical topic model discovers sarcasm-prevalent topics and topic-level sentiment. Using a dataset of tweets labeled using hashtags, the model estimates topic-level and sentiment-level distributions. Our evaluation shows that topics such as {\textquoteleft}work', {\textquoteleft}gun laws' and {\textquoteleft}weather' are sarcasm-prevalent topics. Our model is also able to discover the mixture of sentiment-bearing words that exist in a text of a given sentiment-related label. Finally, we apply our model to predict sarcasm in tweets. We outperform two prior works based on statistical classifiers with specific features by around 25{\%}.
__index_level_0__: 59,053

entry_type: inproceedings
citation_key: vincze-2016-detecting
title: Detecting Uncertainty Cues in {H}ungarian Social Media Texts
editor: Blanco, Eduardo and Morante, Roser and Saur{\'i}, Roser
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5002/
author: Vincze, Veronika
booktitle: Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics ({E}x{P}ro{M})
pages: 11--21
abstract: In this paper, we aim at identifying uncertainty cues in Hungarian social media texts. We present our machine learning based uncertainty detector, which is based on a rich feature set including lexical, morphological, syntactic, semantic and discourse-based features, and we evaluate our system on a small set of manually annotated social media texts. We also carry out cross-domain and domain adaptation experiments using an annotated corpus of standard Hungarian texts and show that domain differences significantly affect machine learning. Furthermore, we argue that differences among uncertainty cue types may also affect the efficiency of uncertainty detection.
__index_level_0__: 59,054

entry_type: inproceedings
citation_key: colomer-etal-2016-detecting
title: Detecting Level of Belief in {C}hinese and {S}panish
editor: Blanco, Eduardo and Morante, Roser and Saur{\'i}, Roser
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5003/
author: Colomer, Juan Pablo and Lai, Keyu and Rambow, Owen
booktitle: Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics ({E}x{P}ro{M})
pages: 22--30
abstract: There has been extensive work on detecting the level of committed belief (also known as {\textquotedblleft}factuality{\textquotedblright}) that an author is expressing towards the propositions in his or her utterances. Previous work on English has revealed that this can be done as a sequence tagging task. In this paper, we investigate the same task for Chinese and Spanish, two very different languages from English and from each other.
__index_level_0__: 59,055

entry_type: inproceedings
citation_key: lendvai-reichel-2016-contradiction
title: Contradiction Detection for Rumorous Claims
editor: Blanco, Eduardo and Morante, Roser and Saur{\'i}, Roser
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5004/
author: Lendvai, Piroska and Reichel, Uwe
booktitle: Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics ({E}x{P}ro{M})
pages: 31--40
abstract: The utilization of social media material in journalistic workflows is increasing, demanding automated methods for the identification of mis- and disinformation. Since textual contradiction across social media posts can be a signal of rumorousness, we seek to model how claims in Twitter posts are being textually contradicted. We identify two different contexts in which contradiction emerges: its broader form can be observed across independently posted tweets and its more specific form in threaded conversations. We define how the two scenarios differ in terms of central elements of argumentation: claims and conversation structure. We design and evaluate models for the two scenarios uniformly as 3-way Recognizing Textual Entailment tasks in order to represent claims and conversation structure implicitly in a generic inference model, while previous studies used explicit or no representation of these properties. To address noisy text, our classifiers use simple similarity features derived from the string and part-of-speech level. Corpus statistics reveal distribution differences for these features in contradictory as opposed to non-contradictory tweet relations, and the classifiers yield state of the art performance.
__index_level_0__: 59,056

entry_type: inproceedings
citation_key: nakov-2016-negation
title: Negation and Modality in Machine Translation
editor: Blanco, Eduardo and Morante, Roser and Saur{\'i}, Roser
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5005/
author: Nakov, Preslav
booktitle: Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics ({E}x{P}ro{M})
pages: 41
abstract: Negation and modality are two important grammatical phenomena that have attracted recent research attention as they can contribute to extra-propositional meaning aspects, along with factuality, attribution, irony and sarcasm. These aspects go beyond analysis such as semantic role labeling, and modeling them is important as a step towards a higher level of language understanding, which is needed for practical applications such as sentiment analysis. In this talk, I will go beyond English, and I will discuss how negation and modality are expressed in other languages. I will also go beyond sentiment analysis and I will present some challenges that the two phenomena pose for machine translation (MT). In particular, I will demonstrate how contemporary MT systems fail on them, and I will discuss some possible solutions.
__index_level_0__: 59,057

entry_type: inproceedings
citation_key: jimenez-zafra-etal-2016-problematic
title: Problematic Cases in the Annotation of Negation in {S}panish
editor: Blanco, Eduardo and Morante, Roser and Saur{\'i}, Roser
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5006/
author: Jim{\'e}nez-Zafra, Salud Mar{\'i}a and Martin, Maite and Ure{\~n}a-L{\'o}pez, L. Alfonso and Mart{\'i}, Toni and Taul{\'e}, Mariona
booktitle: Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics ({E}x{P}ro{M})
pages: 42--48
abstract: This paper presents the main sources of disagreement found during the annotation of the Spanish SFU Review Corpus with negation (SFU ReviewSP-NEG). Negation detection is a challenge in most NLP-related tasks, so the availability of corpora annotated with this phenomenon is essential in order to advance in tasks related to this area. A thorough analysis of the problems found during the annotation could help in the study of this phenomenon.
__index_level_0__: 59,058

entry_type: inproceedings
citation_key: van-son-etal-2016-building
title: Building a Dictionary of Affixal Negations
editor: Blanco, Eduardo and Morante, Roser and Saur{\'i}, Roser
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5007/
author: van Son, Chantal and van Miltenburg, Emiel and Morante, Roser
booktitle: Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics ({E}x{P}ro{M})
pages: 49--56
abstract: This paper discusses the need for a dictionary of affixal negations and regular antonyms to facilitate their automatic detection in text. Without such a dictionary, affixal negations are very difficult to detect. In addition, we show that the set of affixal negations is not homogeneous, and that different NLP tasks may require different subsets. A dictionary can store the subtypes of affixal negations, making it possible to select a certain subset or to make inferences on the basis of these subtypes. We take a first step towards creating a negation dictionary by annotating all direct antonym pairs in WordNet using an existing typology of affixal negations. By highlighting some of the issues that were encountered in this annotation experiment, we hope to provide some insights into the necessary steps of building a negation dictionary.
__index_level_0__: 59,059

entry_type: inproceedings
citation_key: baker-etal-2016-cancer
title: Cancer Hallmark Text Classification Using Convolutional Neural Networks
editor: Ananiadou, Sophia and Batista-Navarro, Riza and Cohen, Kevin Bretonnel and Demner-Fushman, Dina and Thompson, Paul
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5101/
author: Baker, Simon and Korhonen, Anna and Pyysalo, Sampo
booktitle: Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining ({B}io{T}xt{M}2016)
pages: 1--9
abstract: Methods based on deep learning approaches have recently achieved state-of-the-art performance in a range of machine learning tasks and are increasingly applied to natural language processing (NLP). Despite strong results in various established NLP tasks involving general domain texts, there is only limited work applying these models to biomedical NLP. In this paper, we consider a Convolutional Neural Network (CNN) approach to biomedical text classification. Evaluation using a recently introduced cancer domain dataset involving the categorization of documents according to the well-established hallmarks of cancer shows that a basic CNN model can achieve a level of performance competitive with a Support Vector Machine (SVM) trained using complex manually engineered features optimized to the task. We further show that simple modifications to the CNN hyperparameters, initialization, and training process allow the model to notably outperform the SVM, establishing a new state of the art result at this task. We make all of the resources and tools introduced in this study available under open licenses from \url{https://cambridgeltl.github.io/cancer-hallmark-cnn/}.
__index_level_0__: 59,061
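
The basic CNN text classifier the paper starts from can be sketched as one convolution plus global max pooling over word embeddings; the hyperparameters below are placeholders, not the tuned values behind the reported results.

```python
# A minimal PyTorch sketch of a CNN text classifier in the spirit of the
# paper above; sizes are illustrative placeholders.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size, num_classes, emb_dim=100,
                 filters=64, width=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, filters, kernel_size=width)
        self.fc = nn.Linear(filters, num_classes)

    def forward(self, ids):                       # (batch, seq_len)
        x = self.emb(ids).transpose(1, 2)         # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x)).max(dim=2).values  # global max pooling
        return self.fc(x)

model = TextCNN(vocab_size=20000, num_classes=10)
print(model(torch.randint(0, 20000, (4, 50))).shape)  # torch.Size([4, 10])
```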

entry_type: inproceedings
citation_key: limsopatham-collier-2016-learning
title: Learning Orthographic Features in Bi-directional {LSTM} for Biomedical Named Entity Recognition
editor: Ananiadou, Sophia and Batista-Navarro, Riza and Cohen, Kevin Bretonnel and Demner-Fushman, Dina and Thompson, Paul
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5102/
author: Limsopatham, Nut and Collier, Nigel
booktitle: Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining ({B}io{T}xt{M}2016)
pages: 10--19
abstract: End-to-end neural network models for named entity recognition (NER) have been shown to achieve effective performance on general domain datasets (e.g. newswire), without requiring additional hand-crafted features. However, in the biomedical domain, recent studies have shown that hand-engineered features (e.g. orthographic features) should be used to attain effective performance, due to the complexity of biomedical terminology (e.g. the use of acronyms and complex gene names). In this work, we propose a novel approach that allows a neural network model based on a long short-term memory (LSTM) to automatically learn orthographic features and incorporate them into a model for biomedical NER. Importantly, our bi-directional LSTM model learns and leverages orthographic features on an end-to-end basis. We evaluate our approach by comparing against existing neural network models for NER using three well-established biomedical datasets. Our experimental results show that the proposed approach consistently outperforms these strong baselines across all three datasets.
__index_level_0__: 59,062
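
For contrast with the learned orthographic features above, a hand-engineered orthographic feature common in biomedical NER baselines is the word-shape encoding, sketched below; this is an illustration, not the paper's feature set.

```python
# A minimal sketch of a word-shape orthographic feature: uppercase -> A,
# lowercase -> a, digit -> 0, everything else kept as-is.
def word_shape(token):
    shape = []
    for ch in token:
        if ch.isupper():
            shape.append("A")
        elif ch.islower():
            shape.append("a")
        elif ch.isdigit():
            shape.append("0")
        else:
            shape.append(ch)
    return "".join(shape)

print(word_shape("IL-2"))  # AA-0
print(word_shape("p53"))   # a00
```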

entry_type: inproceedings
citation_key: amplayo-song-2016-building
title: Building Content-driven Entity Networks for Scarce Scientific Literature using Content Information
editor: Ananiadou, Sophia and Batista-Navarro, Riza and Cohen, Kevin Bretonnel and Demner-Fushman, Dina and Thompson, Paul
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5103/
author: Amplayo, Reinald Kim and Song, Min
booktitle: Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining ({B}io{T}xt{M}2016)
pages: 20--29
abstract: This paper proposes several network construction methods for collections of scarce scientific literature data. We define scarcity as lacking in value and in volume. Instead of using the papers' metadata to construct several kinds of scientific networks, we use the full texts of the articles and automatically extract the entities needed to construct the networks. Specifically, we present seven kinds of networks using the proposed construction methods: co-occurrence networks for author, keyword and biological entities, and citation networks for author, keyword, biological and topic entities. We show two case studies that apply our proposed methods: CADASIL, a rare yet the most common form of hereditary stroke disorder, and Metformin, the first-line medication for the treatment of type 2 diabetes. We apply our proposed method to four different applications for evaluation: finding prolific authors, finding important bio-entities, finding meaningful keywords, and discovering influential topics. The results show that the co-occurrence and citation networks constructed using the proposed method outperform the traditional networks. We also compare our proposed networks to traditional citation networks constructed using sufficient data and infer that, even with the same amount of data, our methods perform comparably or better than the traditional methods.
__index_level_0__: 59,063

entry_type: inproceedings
citation_key: almgren-etal-2016-named
title: Named Entity Recognition in {S}wedish Health Records with Character-Based Deep Bidirectional {LSTM}s
editor: Ananiadou, Sophia and Batista-Navarro, Riza and Cohen, Kevin Bretonnel and Demner-Fushman, Dina and Thompson, Paul
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5104/
author: Almgren, Simon and Pavlov, Sean and Mogren, Olof
booktitle: Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining ({B}io{T}xt{M}2016)
pages: 30--39
abstract: We propose an approach for named entity recognition in medical data, using a character-based deep bidirectional recurrent neural network. Such models can learn features and patterns based on the character sequence, and are not limited to a fixed vocabulary. This makes them very well suited for the NER task in the medical domain. Our experimental evaluation shows promising results, with a 60{\%} improvement in F1 score over the baseline, and our system generalizes well between different datasets.
__index_level_0__: 59,064

entry_type: inproceedings
citation_key: schulze-neves-2016-entity
title: Entity-Supported Summarization of Biomedical Abstracts
editor: Ananiadou, Sophia and Batista-Navarro, Riza and Cohen, Kevin Bretonnel and Demner-Fushman, Dina and Thompson, Paul
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5105/
author: Schulze, Frederik and Neves, Mariana
booktitle: Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining ({B}io{T}xt{M}2016)
pages: 40--49
abstract: The increasing amount of biomedical information that is available to researchers and clinicians makes it harder to quickly find the right information. Automatic summarization of multiple texts can provide summaries specific to the user's information needs. In this paper we look into the use of named-entity recognition for graph-based summarization. We extend the LexRank algorithm with information about named entities and present EntityRank, a multi-document graph-based summarization algorithm that is solely based on named entities. We evaluate our system on a dataset of 1,009 human-written summaries provided by BioASQ and on 1,974 gene summaries fetched from the Entrez Gene database. The results show that the addition of named-entity information increases the performance of graph-based summarizers and that EntityRank significantly outperforms the other methods with regard to the ROUGE measures.
__index_level_0__: 59,065
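
The EntityRank idea, graph-based centrality computed over named-entity overlap rather than full lexical similarity, can be sketched with a small PageRank-style power iteration; the Jaccard edge weights and damping factor below are assumptions made for the illustration, not the authors' exact formulation.

```python
# A minimal sketch of entity-based graph ranking: sentences are nodes,
# edges are Jaccard overlap of their entity sets, and a power iteration
# scores centrality. Not the paper's implementation.
def entity_rank(sentence_entities, iters=50, d=0.85):
    n = len(sentence_entities)

    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    w = [[0.0 if i == j else jaccard(sentence_entities[i],
                                     sentence_entities[j])
          for j in range(n)] for i in range(n)]
    out = [sum(row) or 1.0 for row in w]  # out-weight normalizers
    scores = [1.0 / n] * n
    for _ in range(iters):
        scores = [(1 - d) / n +
                  d * sum(w[j][i] * scores[j] / out[j] for j in range(n))
                  for i in range(n)]
    return scores

ents = [{"BRCA1", "p53"}, {"p53", "MDM2"}, {"aspirin"}]
print(entity_rank(ents))  # the two p53 sentences score highest
```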

entry_type: inproceedings
citation_key: perez-etal-2016-fully
title: Fully unsupervised low-dimensional representation of adverse drug reaction events through distributional semantics
editor: Ananiadou, Sophia and Batista-Navarro, Riza and Cohen, Kevin Bretonnel and Demner-Fushman, Dina and Thompson, Paul
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5106/
author: P{\'e}rez, Alicia and Casillas, Arantza and Gojenola, Koldo
booktitle: Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining ({B}io{T}xt{M}2016)
pages: 50--59
abstract: Electronic health records show great variability since the same concept is often expressed with different terms, either scientific Latin forms, common or lay variants, or even vernacular naming. Deep learning enables distributional representation of terms in a vector space, and therefore related terms tend to be close in that space. Accordingly, embedding words through these vectors opens the way towards accounting for semantic relatedness through classical algebraic operations. In this work we propose a simple though efficient unsupervised characterization of Adverse Drug Reactions (ADRs). This approach exploits the embedded representation of the terms involved in candidate ADR events, that is, drug-disease entity pairs. In brief, the ADRs are represented as vectors that link the drug with the disease in their context through a recursive additive model. We discovered that a low-dimensional representation that makes use of the modulus and argument of the embedded representation of the ADR event shows correlation with the manually annotated class. Thus, this characterization turns out to be beneficial as predictive features for further classification tasks.
__index_level_0__: 59,066

entry_type: inproceedings
citation_key: lavergne-etal-2016-dataset
title: A Dataset for {ICD}-10 Coding of Death Certificates: Creation and Usage
editor: Ananiadou, Sophia and Batista-Navarro, Riza and Cohen, Kevin Bretonnel and Demner-Fushman, Dina and Thompson, Paul
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5107/
author: Lavergne, Thomas and N{\'e}v{\'e}ol, Aur{\'e}lie and Robert, Aude and Grouin, Cyril and Rey, Gr{\'e}goire and Zweigenbaum, Pierre
booktitle: Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining ({B}io{T}xt{M}2016)
pages: 60--69
abstract: Very few datasets have been released for the evaluation of diagnosis coding with the International Classification of Diseases, and only one so far in a language other than English. This paper describes a large-scale dataset prepared from French death certificates, and the problems which needed to be solved to turn it into a dataset suitable for the application of machine learning and natural language processing methods of ICD-10 coding. The dataset includes the free-text statements written by medical doctors, the associated meta-data, the human coder-assigned codes for each statement, as well as the statement segments which supported the coder's decision for each code. The dataset comprises 93,694 death certificates totalling 276,103 statements and 377,677 ICD-10 code assignments (3,457 unique codes). It was made available for an international automated coding shared task, which attracted five participating teams. An extended version of the dataset will be used in a new edition of the shared task.
__index_level_0__: 59,067

entry_type: inproceedings
citation_key: shmanina-etal-2016-corpus
title: A Corpus of Tables in Full-Text Biomedical Research Publications
editor: Ananiadou, Sophia and Batista-Navarro, Riza and Cohen, Kevin Bretonnel and Demner-Fushman, Dina and Thompson, Paul
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5108/
author: Shmanina, Tatyana and Zukerman, Ingrid and Cheam, Ai Lee and Bochynek, Thomas and Cavedon, Lawrence
booktitle: Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining ({B}io{T}xt{M}2016)
pages: 70--79
abstract: The development of text mining techniques for biomedical research literature has received increased attention in recent times. However, most of these techniques focus on prose, while much important biomedical data reside in tables. In this paper, we present a corpus created to serve as a gold standard for the development and evaluation of techniques for the automatic extraction of information from biomedical tables. We describe the guidelines used for corpus annotation and the manner in which they were developed. The high inter-annotator agreement achieved on the corpus, and the generic nature of our annotation approach, suggest that the developed guidelines can serve as a general framework for table annotation in biomedical and other scientific domains. The annotated corpus and the guidelines are available at \url{http://www.csse.monash.edu.au/research/umnl/data/index.shtml}.
__index_level_0__: 59,068

entry_type: inproceedings
citation_key: zweigenbaum-etal-2016-supervised
title: Supervised classification of end-of-lines in clinical text with no manual annotation
editor: Ananiadou, Sophia and Batista-Navarro, Riza and Cohen, Kevin Bretonnel and Demner-Fushman, Dina and Thompson, Paul
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5109/
author: Zweigenbaum, Pierre and Grouin, Cyril and Lavergne, Thomas
booktitle: Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining ({B}io{T}xt{M}2016)
pages: 80--88
abstract: In some plain text documents, end-of-line marks may or may not mark the boundary of a text unit (e.g., of a paragraph). This vexing problem is likely to impact subsequent natural language processing components, but is seldom addressed in the literature. We propose a method which uses no manual annotation to classify whether end-of-lines must actually be seen as simple spaces (soft line breaks) or as true text unit boundaries. This method, which includes self-training and co-training steps based on token and line length features, achieves 0.943 F-measure on a corpus of short e-books with controlled format, 0.904 on a random sample of 24 clinical texts with soft line breaks, and F=0.898 on a larger set of mixed clinical texts which may or may not contain soft line breaks, a fairly high value for a method with no manual annotation.
__index_level_0__: 59,069
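
A sketch of the kind of token- and line-length features such an end-of-line classifier could consume is given below; the specific feature names and cues are illustrative, not taken from the paper.

```python
# A minimal sketch of line-level features for deciding whether an
# end-of-line is a soft break or a true boundary; feature set is invented
# for illustration.
def eol_features(line, next_line):
    return {
        "len": len(line),
        "short_line": len(line) < 40,
        "ends_with_punct": line.rstrip().endswith((".", "!", "?", ":", ";")),
        "next_starts_upper": next_line.lstrip()[:1].isupper(),
        "next_starts_bullet": next_line.lstrip().startswith(("-", "*")),
    }

print(eol_features("The patient was discharged", "in stable condition."))
```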

entry_type: inproceedings
citation_key: gopalan-lalitha-devi-2016-biodca
title: {B}io{DCA} Identifier: A System for Automatic Identification of Discourse Connective and Arguments from Biomedical Text
editor: Ananiadou, Sophia and Batista-Navarro, Riza and Cohen, Kevin Bretonnel and Demner-Fushman, Dina and Thompson, Paul
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5110/
author: Gopalan, Sindhuja and Lalitha Devi, Sobha
booktitle: Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining ({B}io{T}xt{M}2016)
pages: 89--98
abstract: This paper describes a natural language processing system developed for the automatic identification of explicit connectives, their senses and their arguments. Prior work has shown that differences in the usage of connectives across corpora negatively affect the cross-domain connective identification task. Hence the development of a domain-specific discourse parser has become indispensable. Here, we present a corpus of Medline abstracts annotated with discourse relations. The Kappa score is calculated to check the annotation quality of our corpus. Previous works on discourse analysis of biomedical data have concentrated only on the identification of connectives, and hence we have developed an end-to-end parser for connective and argument identification using the Conditional Random Fields algorithm. The type and sub-type of the connective sense are also identified. The results obtained are encouraging.
__index_level_0__: 59,070

entry_type: inproceedings
citation_key: sarker-gonzalez-2016-data
title: Data, tools and resources for mining social media drug chatter
editor: Ananiadou, Sophia and Batista-Navarro, Riza and Cohen, Kevin Bretonnel and Demner-Fushman, Dina and Thompson, Paul
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5111/
author: Sarker, Abeed and Gonzalez, Graciela
booktitle: Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining ({B}io{T}xt{M}2016)
pages: 99--107
abstract: Social media has emerged as a crucial resource for obtaining population-based signals for various public health monitoring and surveillance tasks, such as pharmacovigilance. There is an abundance of knowledge hidden within social media data, and the volume is growing. Drug-related chatter on social media can include user-generated information that can provide insights into public health problems such as abuse, adverse reactions, long-term effects, and multi-drug interactions. Our objective in this paper is to present to the biomedical natural language processing, data science, and public health communities data sets (annotated and unannotated), tools and resources that we have collected and created from social media. The data we present was collected from Twitter using the generic and brand names of drugs as keywords, along with their common misspellings. Following the collection of the data, annotation guidelines were created over several iterations, which detail important aspects of social media data annotation and can be used by future researchers for developing similar data sets. The annotation guidelines were followed to prepare data sets for text classification, information extraction and normalization. In this paper, we discuss the preparation of these guidelines, outline the data sets prepared, and present an overview of our state-of-the-art systems for data collection, supervised classification, and information extraction. In addition to the development of supervised systems for classification and extraction, we developed and released unlabeled data and language models. We discuss the potential uses of these language models in data mining and the large volumes of unlabeled data from which they were generated. We believe that the summaries and repositories we present here of our data, annotation guidelines, models, and tools will be beneficial to the research community as a single-point entry for all these resources, and will promote further research in this area.
__index_level_0__: 59,071

entry_type: inproceedings
citation_key: dhondt-etal-2016-detection
title: Detection of Text Reuse in {F}rench Medical Corpora
editor: Ananiadou, Sophia and Batista-Navarro, Riza and Cohen, Kevin Bretonnel and Demner-Fushman, Dina and Thompson, Paul
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5112/
author: D{'}hondt, Eva and Grouin, Cyril and N{\'e}v{\'e}ol, Aur{\'e}lie and Stamatatos, Efstathios and Zweigenbaum, Pierre
booktitle: Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining ({B}io{T}xt{M}2016)
pages: 108--114
abstract: Electronic Health Records (EHRs) are increasingly available in modern health care institutions either through the direct creation of electronic documents in hospitals' health information systems, or through the digitization of historical paper records. Each EHR creation method yields the need for sophisticated text reuse detection tools in order to prepare the EHR collections for efficient secondary use relying on Natural Language Processing methods. Herein, we address the detection of two types of text reuse in French EHRs: 1) the detection of updated versions of the same document and 2) the detection of document duplicates that still bear surface differences due to OCR or de-identification processing. We present a robust text reuse detection method to automatically identify redundant document pairs in two French EHR corpora that achieves an overall macro F-measure of 0.68 and 0.60, respectively and correctly identifies all redundant document pairs of interest.
__index_level_0__: 59,072
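
One simple baseline for this kind of text reuse detection is character n-gram Jaccard similarity with a threshold, sketched below; n and the threshold are illustrative choices, not the paper's method.

```python
# A minimal sketch of near-duplicate detection via character n-gram
# Jaccard similarity; n=5 and threshold=0.6 are illustrative values.
def char_ngrams(text, n=5):
    text = " ".join(text.lower().split())  # normalize whitespace
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}

def is_reuse(doc_a, doc_b, n=5, threshold=0.6):
    a, b = char_ngrams(doc_a, n), char_ngrams(doc_b, n)
    return len(a & b) / len(a | b) >= threshold

print(is_reuse("Patient admitted with chest pain.",
               "Patient admitted with chest pain and fever."))  # True
```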

entry_type: inproceedings
citation_key: cotik-etal-2016-negation
title: Negation Detection in Clinical Reports Written in {G}erman
editor: Ananiadou, Sophia and Batista-Navarro, Riza and Cohen, Kevin Bretonnel and Demner-Fushman, Dina and Thompson, Paul
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5113/
author: Cotik, Viviana and Roller, Roland and Xu, Feiyu and Uszkoreit, Hans and Budde, Klemens and Schmidt, Danilo
booktitle: Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining ({B}io{T}xt{M}2016)
pages: 115--124
abstract: An important subtask in clinical text mining tries to identify whether a clinical finding is expressed as present, absent or unsure in a text. This work presents a system for detecting mentions of clinical findings that are negated or just speculated. The system has been applied to two different types of German clinical texts: clinical notes and discharge summaries. Our approach is built on top of NegEx, a well known algorithm for identifying non-factive mentions of medical findings. In this work, we adjust a previous adaptation of NegEx to German and evaluate the system on our data to detect negation and speculation. The results are compared to a baseline algorithm and are analyzed for both types of clinical documents. Our system achieves an F1-score above 0.9 on both types of reports.
__index_level_0__: 59,073
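
The NegEx family of algorithms the system builds on can be reduced to a window-based trigger rule, sketched below; the trigger list and window size are toy values, not those of the German adaptation evaluated above.

```python
# A minimal NegEx-style sketch: a finding is negated when a trigger
# appears within a fixed window before it. Toy German trigger list.
NEG_TRIGGERS = {"kein", "keine", "nicht", "ohne"}

def is_negated(tokens, finding_index, window=5):
    lo = max(0, finding_index - window)
    return any(t.lower() in NEG_TRIGGERS for t in tokens[lo:finding_index])

toks = "Es besteht kein Hinweis auf Pneumonie".split()
print(is_negated(toks, toks.index("Pneumonie")))  # True
```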

entry_type: inproceedings
citation_key: dandala-etal-2016-scoring
title: Scoring Disease-Medication Associations using Advanced {NLP}, Machine Learning, and Multiple Content Sources
editor: Ananiadou, Sophia and Batista-Navarro, Riza and Cohen, Kevin Bretonnel and Demner-Fushman, Dina and Thompson, Paul
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5114/
author: Dandala, Bharath and Devarakonda, Murthy and Bornea, Mihaela and Nielson, Christopher
booktitle: Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining ({B}io{T}xt{M}2016)
pages: 125--133
abstract: Effective knowledge resources are critical for developing successful clinical decision support systems that alleviate the cognitive load on physicians in patient care. In this paper, we describe two new methods for building a knowledge resource of disease to medication associations. These methods use fundamentally different content and are based on advanced natural language processing and machine learning techniques. One method uses distributional semantics on large medical text, and the other uses data mining on a large number of patient records. The methods are evaluated using 25,379 unique disease-medication pairs extracted from 100 de-identified longitudinal patient records of a large multi-provider hospital system. We measured recall (R), precision (P), and F scores for positive and negative association prediction, along with coverage and accuracy. While individual methods performed well, a combined stacked classifier achieved the best performance, indicating the limitations and unique value of each resource and method. In predicting positive associations, the stacked combination significantly outperformed the baseline (a distant semi-supervised method on large medical text), achieving F scores of 0.75 versus 0.55 on the pairs seen in the patient records, and F scores of 0.69 and 0.35 on unique pairs.
__index_level_0__: 59,074

entry_type: inproceedings
citation_key: vishnyakova-etal-2016-author
title: Author Name Disambiguation in {MEDLINE} Based on Journal Descriptors and Semantic Types
editor: Ananiadou, Sophia and Batista-Navarro, Riza and Cohen, Kevin Bretonnel and Demner-Fushman, Dina and Thompson, Paul
month: dec
year: 2016
address: Osaka, Japan
publisher: The COLING 2016 Organizing Committee
url: https://aclanthology.org/W16-5115/
author: Vishnyakova, Dina and Rodriguez-Esteban, Raul and Ozol, Khan and Rinaldi, Fabio
booktitle: Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining ({B}io{T}xt{M}2016)
pages: 134--142
abstract: Author name disambiguation (AND) in publication and citation resources is a well-known problem. Often, information about email addresses and other details in the affiliation is missing. In cases where such information is not available, identifying the authorship of publications becomes very challenging. Consequently, there have been attempts to resolve such cases by utilizing external resources as references. However, such external resources are heterogeneous and are not always reliable regarding the correctness of information. To solve the AND task, especially when information about an author is not complete, we suggest the use of new features such as journal descriptors (JD) and semantic types (ST). The evaluation of different feature models shows that their inclusion has an impact equivalent to that of other important features such as email address. Using such features we show that our system outperforms the state of the art.
__index_level_0__: 59,075
inproceedings
mohanty-etal-2016-kathaa-nlp
{K}athaa : {NLP} Systems as Edge-Labeled Directed Acyclic {M}ulti{G}raphs
Murakami, Yohei and Lin, Donghui and Ide, Nancy and Pustejovsky, James
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5201/
Mohanty, Sharada and Wani, Nehal J and Srivastava, Manish and Sharma, Dipti
Proceedings of the Third International Workshop on Worldwide Language Service Infrastructure and Second Workshop on Open Infrastructures and Analysis Frameworks for Human Language Technologies ({WLSI}/{OIAF}4{HLT}2016)
1--10
We present Kathaa, an Open Source web-based Visual Programming Framework for Natural Language Processing (NLP) Systems. Kathaa supports the design, execution and analysis of complex NLP systems by visually connecting NLP components from an easily extensible Module Library. It models NLP systems as edge-labeled Directed Acyclic MultiGraphs, and lets users employ publicly co-created modules in their own NLP applications irrespective of their technical proficiency in Natural Language Processing. Kathaa exposes an intuitive web-based interface for users to interact with and modify complex NLP systems, and a precise module definition API to allow easy integration of new state-of-the-art NLP components. Kathaa enables researchers to publish their services in a standardized format so that the masses can use their services out of the box. The vision of this work is to pave the way for a system like Kathaa to be the Lego blocks of NLP research and applications. As a practical use case, we use Kathaa to visually implement the Sampark Hindi-Panjabi and Sampark Hindi-Urdu Machine Translation Pipelines, demonstrating that Kathaa can handle truly complex NLP systems while still being intuitive for the end user.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,077
inproceedings
ide-etal-2016-lapps
{LAPPS}/Galaxy: Current State and Next Steps
Murakami, Yohei and Lin, Donghui and Ide, Nancy and Pustejovsky, James
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5202/
Ide, Nancy and Suderman, Keith and Nyberg, Eric and Pustejovsky, James and Verhagen, Marc
Proceedings of the Third International Workshop on Worldwide Language Service Infrastructure and Second Workshop on Open Infrastructures and Analysis Frameworks for Human Language Technologies ({WLSI}/{OIAF}4{HLT}2016)
11--18
The US National Science Foundation (NSF) SI2-funded LAPPS/Galaxy project has developed an open-source platform that enables complex analyses while hiding the complexities of the underlying infrastructure; it can be accessed through a web interface, deployed on any Unix system, or run from the cloud. It provides sophisticated tool integration and history capabilities, a workflow system for building automated multi-step analyses, state-of-the-art evaluation capabilities, and facilities for sharing and publishing analyses. This paper describes the facilities currently available in LAPPS/Galaxy and outlines the project's ongoing activities to enhance the framework.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,078
inproceedings
eckart-de-castilho-2016-automatic
Automatic Analysis of Flaws in Pre-Trained {NLP} Models
Murakami, Yohei and Lin, Donghui and Ide, Nancy and Pustejovsky, James
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5203/
Eckart de Castilho, Richard
Proceedings of the Third International Workshop on Worldwide Language Service Infrastructure and Second Workshop on Open Infrastructures and Analysis Frameworks for Human Language Technologies ({WLSI}/{OIAF}4{HLT}2016)
19--27
Most tools for natural language processing today are based on machine learning and come with pre-trained models. In addition, third parties provide pre-trained models for popular NLP tools. The predictive power and accuracy of these tools depend on the quality of these models. Downstream researchers often base their results on pre-trained models instead of training their own. Consequently, pre-trained models are an essential resource for our community. However, to the best of our knowledge, no systematic study of pre-trained models has been conducted so far. This paper reports on an analysis of 274 pre-trained models for six NLP tools and of four potential causes of problems: encoding, tokenization, normalization and change over time. The analysis is implemented in the open-source tool Model Investigator. Our work 1) allows model consumers to better assess whether a model is suitable for their task, 2) enables tool and model creators to sanity-check their models before distributing them, and 3) enables improvements in tool interoperability by performing automatic adjustments of normalization or other pre-processing based on the models used.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,079
inproceedings
nakaguchi-etal-2016-combining
Combining Human Inputters and Language Services to provide Multi-language support system for International Symposiums
Murakami, Yohei and Lin, Donghui and Ide, Nancy and Pustejovsky, James
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5204/
Nakaguchi, Takao and Otani, Masayuki and Takasaki, Toshiyuki and Ishida, Toru
Proceedings of the Third International Workshop on Worldwide Language Service Infrastructure and Second Workshop on Open Infrastructures and Analysis Frameworks for Human Language Technologies ({WLSI}/{OIAF}4{HLT}2016)
28--35
In this research, we introduce and implement a method that combines human inputters and machine translators. When the languages of the participants vary widely, the cost of simultaneous translation becomes very high. However, simply applying machine translation to speech text does not yield the quality needed for real use. We therefore propose a method in which people who understand the language of the speaker cooperate with a machine translation service, supporting multilingualization through the co-creation of value. We implement a system with this method and apply it to actual presentations. While direct machine translations score 1.84 (fluency) and 2.89 (adequacy), the system achieves corresponding values of 3.76 and 3.85.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,080
inproceedings
assawinjaipetch-etal-2016-recurrent
Recurrent Neural Network with Word Embedding for Complaint Classification
Murakami, Yohei and Lin, Donghui and Ide, Nancy and Pustejovsky, James
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5205/
Assawinjaipetch, Panuwat and Shirai, Kiyoaki and Sornlertlamvanich, Virach and Marukata, Sanparith
Proceedings of the Third International Workshop on Worldwide Language Service Infrastructure and Second Workshop on Open Infrastructures and Analysis Frameworks for Human Language Technologies ({WLSI}/{OIAF}4{HLT}2016)
36--43
Complaint classification aims at using information to deliver greater insights to enhance user experience after purchasing products or services. Categorized information can help us quickly collect emerging problems in order to provide the support needed. Indeed, responding to a complaint without delay grants users the highest satisfaction. In this paper, we aim to deliver a novel approach which can classify complaints precisely, assigning each complaint to one of nine predefined classes, i.e., accessibility, company brand, competitors, facilities, process, product feature, staff quality, timing, and others. Given that one word usually conveys ambiguity and has to be interpreted by its context, the word embedding technique is used to provide word features while applying deep learning techniques for classifying the type of complaint. The dataset we use contains 8,439 complaints of one company.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,081
inproceedings
eli-etal-2016-universal
Universal dependencies for {U}yghur
Murakami, Yohei and Lin, Donghui and Ide, Nancy and Pustejovsky, James
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5206/
Eli, Marhaba and Mushajiang, Weinila and Yibulayin, Tuergen and Abiderexiti, Kahaerjiang and Liu, Yan
Proceedings of the Third International Workshop on Worldwide Language Service Infrastructure and Second Workshop on Open Infrastructures and Analysis Frameworks for Human Language Technologies ({WLSI}/{OIAF}4{HLT}2016)
44--50
The Universal Dependencies (UD) project seeks to support cross-lingual studies of treebanks, linguistic structures and parsing. Its goal is to create a set of multilingual harmonized treebanks that are designed according to a universal annotation scheme. In this paper, we report on the conversion of the Uyghur dependency treebank to a UD version, which we term the Uyghur Universal Dependency Treebank (UyDT). We present the mapping of the Uyghur dependency treebank's labelling scheme to the UD scheme, along with a clear description of the structural changes required in this conversion.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,082
inproceedings
luong-vu-2016-non
A non-expert {K}aldi recipe for {V}ietnamese Speech Recognition System
Murakami, Yohei and Lin, Donghui and Ide, Nancy and Pustejovsky, James
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5207/
Luong, Hieu-Thi and Vu, Hai-Quan
Proceedings of the Third International Workshop on Worldwide Language Service Infrastructure and Second Workshop on Open Infrastructures and Analysis Frameworks for Human Language Technologies ({WLSI}/{OIAF}4{HLT}2016)
51--55
In this paper we describe a non-expert setup for a Vietnamese speech recognition system using the Kaldi toolkit. We collected a speech corpus of over fifteen hours from about fifty Vietnamese native speakers and used it to test the feasibility of our setup. The essential linguistic components for the Automatic Speech Recognition (ASR) system were prepared based on the written form of the language instead of expert knowledge of linguistics and phonology, as commonly seen in resource-rich languages like English. The modeling of tones by integrating them into the phonemes and using the phonetic decision tree is also discussed. Experimental results showed that this setup for ASR systems does yield competitive results while still having potential for further improvement.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,083
inproceedings
lu-etal-2016-evaluating
Evaluating Ensemble Based Pre-annotation on Named Entity Corpus Construction in {E}nglish and {C}hinese
Murakami, Yohei and Lin, Donghui and Ide, Nancy and Pustejovsky, James
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5208/
Lu, Tingming and Zhu, Man and Gao, Zhiqiang and Gui, Yaocheng
Proceedings of the Third International Workshop on Worldwide Language Service Infrastructure and Second Workshop on Open Infrastructures and Analysis Frameworks for Human Language Technologies ({WLSI}/{OIAF}4{HLT}2016)
56--60
Annotated corpora are crucial language resources, and pre-annotation is a usual way to reduce the cost of corpus construction. The ensemble-based pre-annotation approach combines multiple existing named entity taggers and categorizes annotations into normal annotations with high confidence and candidate annotations with low confidence, to reduce the human annotation time. In this paper, we manually annotate three English datasets under various pre-annotation conditions, report the effects of ensemble-based pre-annotation, and analyze the experimental results. In order to verify the effectiveness of ensemble-based pre-annotation in other languages, such as Chinese, three Chinese datasets are also tested. The experimental results show that the ensemble-based pre-annotation approach significantly reduces the number of annotations which human annotators have to add, and outperforms the baseline approaches in reduction of human annotation time without loss in annotation performance (in terms of F1-measure), on both English and Chinese datasets.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,084
inproceedings
murakami-etal-2016-ontology
An Ontology for Language Service Composability
Murakami, Yohei and Lin, Donghui and Ide, Nancy and Pustejovsky, James
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5209/
Murakami, Yohei and Nakaguchi, Takao and Lin, Donghui and Ishida, Toru
Proceedings of the Third International Workshop on Worldwide Language Service Infrastructure and Second Workshop on Open Infrastructures and Analysis Frameworks for Human Language Technologies ({WLSI}/{OIAF}4{HLT}2016)
61--69
Fragmentation and recombination are key to creating customized language environments for supporting various intercultural activities. Fragmentation provides various language resource components for the customized language environments, and recombination builds each language environment according to the user's request by combining these components. To realize this fragmentation and recombination process, existing language resources (both data and programs) should be shared as language services and combined beyond mismatches of their service interfaces. To address this issue, standardization is inevitable: standardized interfaces are necessary for language services, as is the data format required for language resources. Therefore, we have constructed a hierarchy of language services based on inheritance of service interfaces, which is called a language service ontology. This ontology allows users to create a new customized language service that is compatible with existing ones. Moreover, we have developed a dynamic service binding technology that instantiates various executable customized services from an abstract workflow according to the user's request. By using the ontology and service binding together, users can bind the instantiated language service to another abstract workflow for a new customized one.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,085
inproceedings
kano-2016-platform
Between Platform and {API}s: {K}achako {API} for Developers
Murakami, Yohei and Lin, Donghui and Ide, Nancy and Pustejovsky, James
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5210/
Kano, Yoshinobu
Proceedings of the Third International Workshop on Worldwide Language Service Infrastructure and Second Workshop on Open Infrastructures and Analysis Frameworks for Human Language Technologies ({WLSI}/{OIAF}4{HLT}2016)
70--75
Different types of users require different functions in NLP software. It is difficult for a single platform to cover all types of users. When a framework aims to provide more interoperability, users are required to learn more concepts, and their application designs are restricted to be compliant with the framework. While an interoperability framework is useful in certain cases, some types of users will not select the framework due to the learning cost and design restrictions. We suggest a rather simple framework for interoperability aimed at developers. Reusing an existing NLP platform, Kachako, we created an API-oriented NLP system. This system loosely couples rich high-end functions, including annotation visualizations, statistical evaluations, annotation searching, etc. The API does not impose much learning cost on users, providing customization ability for power users while also allowing novice users to employ many GUI functions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,086
inproceedings
biemann-2016-vectors
Vectors or Graphs? On Differences of Representations for Distributional Semantic Models
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5301/
Biemann, Chris
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
1--7
Distributional Semantic Models (DSMs) have recently received increased attention, together with the rise of neural architectures for scalable training of dense vector embeddings. While some of the literature even includes terms like {\textquoteleft}vectors' and {\textquoteleft}dimensionality' in the definition of DSMs, there are some good reasons why we should consider alternative formulations of distributional models. As an instance, I present a scalable graph-based solution to distributional semantics. The model belongs to the family of {\textquoteleft}count-based' DSMs, keeps its representation sparse and explicit, and thus fully interpretable. I will highlight some important differences between sparse graph-based and dense vector approaches to DSMs: while dense vector-based models are computationally easier to handle and provide a nice uniform representation that can be compared and combined in many ways, they lack interpretability, provenance and robustness. On the other hand, graph-based sparse models have a more straightforward interpretation, handle sense distinctions more naturally and can straightforwardly be linked to knowledge bases, while lacking the ability to compare arbitrary lexical units and a compositionality operation. Since both representations have their merits, I opt for exploring their combination in the outlook.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,088
inproceedings
lebani-lenci-2016-beware
{\textquotedblleft}Beware the Jabberwock, dear reader!{\textquotedblright} Testing the distributional reality of construction semantics
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5302/
Lebani, Gianluca and Lenci, Alessandro
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
8--18
Notwithstanding the success of the notion of construction, the computational tradition still lacks a way to represent the semantic content of these linguistic entities. Here we present a simple corpus-based model implementing the idea that the meaning of a syntactic construction is intimately related to the semantics of its typical verbs. It is a two-step process that starts by identifying the typical verbs occurring with a given syntactic construction and building their distributional vectors. We then calculate the weighted centroid of these vectors in order to derive the distributional signature of a construction. In order to assess the goodness of our approach, we replicated the priming effect described by Johnson and Goldberg (2013) as a function of the semantic distance between a construction and its prototypical verbs. Additional support for our view comes from a regression analysis showing that our distributional information can be used to model behavioral data collected with a crowdsourced elicitation experiment.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,089
inproceedings
lopukhina-lopukhin-2016-regular
Regular polysemy: from sense vectors to sense patterns
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5303/
Lopukhina, Anastasiya and Lopukhin, Konstantin
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
19--23
Regular polysemy was extensively investigated in lexical semantics, but this phenomenon has been studied very little in distributional semantics. We propose a model for regular polysemy detection that is based on sense vectors and allows us to work directly with senses in semantic vector space. Our method is able to detect polysemous words that have the same regular sense alternation as in a given example (a word with two automatically induced senses that represent one polysemy pattern, such as ANIMAL / FOOD). The method works equally well for nouns, verbs and adjectives and achieves an average recall of 0.55 and an average precision of 0.59 for ten different polysemy patterns.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,090
inproceedings
shwartz-dagan-2016-path
Path-based vs. Distributional Information in Recognizing Lexical Semantic Relations
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5304/
Shwartz, Vered and Dagan, Ido
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
24--29
Recognizing various semantic relations between terms is beneficial for many NLP tasks. While path-based and distributional information sources are considered complementary for this task, the superior results the latter showed recently suggested that the former's contribution might have become obsolete. We follow the recent success of an integrated neural method for hypernymy detection (Shwartz et al., 2016) and extend it to recognize multiple relations. The empirical results show that this method is effective in the multiclass setting as well. We further show that the path-based information source always contributes to the classification, and analyze the cases in which it mostly complements the distributional information.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,091
inproceedings
santos-etal-2016-semantic
Semantic Relation Classification: Task Formalisation and Refinement
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5305/
Santos, Vivian and Huerliman, Manuela and Davis, Brian and Handschuh, Siegfried and Freitas, Andr{\'e}
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
30--39
The identification of semantic relations between terms within texts is a fundamental task in Natural Language Processing which can support applications requiring a lightweight semantic interpretation model. Currently, semantic relation classification concentrates on relations which are evaluated over open-domain data. This work provides a critique of the set of abstract relations used for semantic relation classification with regard to their ability to express relationships between terms found in domain-specific corpora. Based on this analysis, this work proposes an alternative semantic relation model based on reusing and extending the set of abstract relations present in the DOLCE ontology. The resulting set of relations is well grounded, allows a wide range of relations to be captured and could thus be used as a foundation for the automatic classification of semantic relations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,092
inproceedings
attia-etal-2016-power
The Power of Language Music: {A}rabic Lemmatization through Patterns
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5306/
Attia, Mohammed and Zirikly, Ayah and Diab, Mona
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
40--50
The interaction between roots and patterns in Arabic has intrigued lexicographers and morphologists for centuries. While roots provide the consonantal building blocks, patterns provide the syllabic vocalic moulds. While roots provide abstract semantic classes, patterns realize these classes in specific instances. In this way both roots and patterns are indispensable for understanding the derivational, morphological and, to some extent, the cognitive aspects of the Arabic language. In this paper we perform lemmatization (a high-level lexical processing task) without relying on a lookup dictionary. We use a hybrid approach that consists of a machine learning classifier to predict the lemma pattern for a given stem, and mapping rules to convert stems to their respective lemmas with the vocalization defined by the pattern.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,093
inproceedings
kageback-salomonsson-2016-word
Word Sense Disambiguation using a Bidirectional {LSTM}
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5307/
K{\r{ageb{\"ack, Mikael and Salomonsson, Hans
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
51--56
In this paper we present a clean, yet effective, model for word sense disambiguation. Our approach leverages a bidirectional long short-term memory network which is shared between all words. This enables the model to share statistical strength and to scale well with vocabulary size. The model is trained end-to-end, directly from the raw text to sense labels, and makes effective use of word order. We evaluate our approach on two standard datasets, using identical hyperparameter settings, which are in turn tuned on a third set of held-out data. We employ no external resources (e.g. knowledge graphs, part-of-speech tagging, etc.), language-specific features, or hand-crafted rules, but still achieve results statistically equivalent to the best state-of-the-art systems, which operate under no such limitations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,094
inproceedings
zock-biemann-2016-towards
Towards a resource based on users' knowledge to overcome the Tip of the Tongue problem
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5308/
Zock, Michael and Biemann, Chris
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
57--68
Language production is largely a matter of words which, in the case of access problems, can be searched for in an external resource (lexicon, thesaurus). In this kind of dialogue the user provides the momentarily available knowledge concerning the target and the system responds with the best guess(es) it can make given this input. As tip-of-the-tongue (ToT) studies have shown, people always have some knowledge concerning the target (meaning fragments, number of syllables, ...) even if its complete form is eluding them. We will show here how to tap into this knowledge to build a resource likely to help authors (speakers/writers) to overcome the ToT problem. Yet, before doing so we need a better understanding of the various kinds of knowledge people have when looking for a word. To this end, we asked crowdworkers to provide some cues to describe a given target and to specify then how each one of them relates to the target, in the hope that this could help others to find the elusive word. Next, we checked how well a given search strategy worked when applied to differently built lexical networks. The results showed quite dramatic differences, which is not really surprising. After all, different networks are built for different purposes; hence each one of them is more or less suited for a given task. What was more surprising, though, is the fact that the relational information given by the users did not allow us to find the elusive word in WordNet better than without it.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,095
inproceedings
santus-etal-2016-cogalex
The {C}og{AL}ex-{V} Shared Task on the Corpus-Based Identification of Semantic Relations
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5309/
Santus, Enrico and Gladkova, Anna and Evert, Stefan and Lenci, Alessandro
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
69--79
The shared task of the 5th Workshop on Cognitive Aspects of the Lexicon (CogALex-V) aims at providing a common benchmark for testing current corpus-based methods for the identification of lexical semantic relations (synonymy, antonymy, hypernymy, part-whole meronymy) and at gaining a better understanding of their respective strengths and weaknesses. The shared task uses a challenging dataset extracted from EVALution 1.0, which contains word pairs holding the above-mentioned relations as well as semantically unrelated control items (random). The task is split into two subtasks: (i) identification of related word pairs vs. unrelated ones; (ii) classification of the word pairs according to their semantic relation. This paper describes the subtasks, the dataset, the evaluation metrics, the seven participating systems and their results. The best performing system in subtask 1 is GHHH (F1 = 0.790), while the best system in subtask 2 is LexNet (F1 = 0.445). The dataset and the task description are available at \url{https://sites.google.com/site/cogalex2016/home/shared-task}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,096
inproceedings
shwartz-dagan-2016-cogalex
{C}og{AL}ex-{V} Shared Task: {L}ex{NET} - Integrated Path-based and Distributional Method for the Identification of Semantic Relations
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5310/
Shwartz, Vered and Dagan, Ido
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
80--85
We present a submission to the CogALex 2016 shared task on the corpus-based identification of semantic relations, using LexNET (Shwartz and Dagan, 2016), an integrated path-based and distributional method for semantic relation classification. The reported results in the shared task bring this submission to the third place on subtask 1 (word relatedness), and the first place on subtask 2 (semantic relation classification), demonstrating the utility of integrating the complementary path-based and distributional information sources in recognizing concrete semantic relations. Combined with a common similarity measure, LexNET performs fairly well on the word relatedness task (subtask 1). The relatively low performance of LexNET and all other systems on subtask 2, however, confirms the difficulty of the semantic relation classification task, and stresses the need to develop additional methods for this task.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,097
inproceedings
attia-etal-2016-cogalex
{C}og{AL}ex-{V} Shared Task: {GHHH} - Detecting Semantic Relations via Word Embeddings
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5311/
Attia, Mohammed and Maharjan, Suraj and Samih, Younes and Kallmeyer, Laura and Solorio, Thamar
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
86--91
This paper describes our system submission to the CogALex-2016 Shared Task on Corpus-Based Identification of Semantic Relations. Our system won first place for Task-1 and second place for Task-2. The evaluation results of our system on the test set are 88.1{\%} (79.0{\%} for TRUE only) f-measure for Task-1 on detecting semantic similarity, and 76.0{\%} (42.3{\%} when excluding RANDOM) for Task-2 on identifying finer-grained semantic relations. In our experiments, we try word analogy, linear regression, and multi-task Convolutional Neural Networks (CNNs) with word embeddings from publicly available word vectors. We found that linear regression performs better in the binary classification (Task-1), while CNNs have better performance in the multi-class semantic classification (Task-2). We assume that word analogy is better suited to deterministic answers than to handling the ambiguity of one-to-many and many-to-many relationships. We also show that classifier performance could benefit from balancing the distribution of labels in the training data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,098
inproceedings
evert-2016-cogalex
{C}og{AL}ex-{V} Shared Task: Mach5 {--} A traditional {DSM} approach to semantic relatedness
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5312/
Evert, Stefan
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
92--97
This contribution provides a strong baseline result for the CogALex-V shared task using a traditional {\textquotedblleft}count{\textquotedblright}-type DSM (placed in rank 2 out of 7 in subtask 1 and rank 3 out of 6 in subtask 2). Parameter tuning experiments reveal some surprising effects and suggest that the use of random word pairs as negative examples may be problematic, guiding the parameter optimization in an undesirable direction.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,099
inproceedings
chersoni-etal-2016-cogalex
{C}og{AL}ex-{V} Shared Task: {ROOT}18
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5313/
Chersoni, Emmanuele and Rambelli, Giulia and Santus, Enrico
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
98--103
In this paper, we describe ROOT18, a classifier using the scores of several unsupervised distributional measures as features to discriminate between semantically related and unrelated words, and then to classify the related pairs according to their semantic relation (i.e. synonymy, antonymy, hypernymy, part-whole meronymy). Our classifier participated in the CogALex-V Shared Task, showing a solid performance on the first subtask, but a poor performance on the second subtask. The low scores reported on the second subtask suggest that distributional measures are not sufficient to discriminate between multiple semantic relations at once.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,100
inproceedings
guggilla-2016-cogalex
{C}og{AL}ex-{V} Shared Task: {CGSRC} - Classifying Semantic Relations using Convolutional Neural Networks
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5314/
Guggilla, Chinnappa
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
104--109
In this paper, we describe a system (CGSRC) for classifying four semantic relations: synonym, hypernym, antonym and meronym, using convolutional neural networks (CNNs). We participated in the CogALex-V shared task on corpus-based identification of semantic relations. The proposed approach, using CNN-based deep neural networks leveraging pre-compiled word2vec distributional neural embeddings, achieved 43.15{\%} weighted-F1 accuracy on subtask-1 (checking the existence of a relation between two terms) and 25.24{\%} weighted-F1 accuracy on subtask-2 (classifying relation types).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,101
inproceedings
luce-etal-2016-cogalex
{C}og{AL}ex-{V} Shared Task: {LOPE}
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5315/
Luce, Kanan and Yu, Jiaxing and Hsieh, Shu-Kai
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
110--113
Automatic discovery of semantically related words is one of the most important NLP tasks, and has great impact on the theoretical psycholinguistic modeling of the mental lexicon. In this shared task, we employ the word embeddings model to test two assumptions explicitly or implicitly made by the NLP community: (1) word embedding models can reflect syntagmatic similarities in usage between words as distances in the projected vector space; (2) word embedding models can reflect paradigmatic relationships between words.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,102
inproceedings
wartena-aga-2016-cogalex
{C}og{AL}ex-{V} Shared Task: {H}s{H}-Supervised {--} Supervised similarity learning using entry wise product of context vectors
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5316/
Wartena, Christian and Aga, Rosa Tsegaye
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
114--118
The CogALex-V Shared Task provides two datasets that consist of pairs of words along with a classification of their semantic relation. The dataset for the first task distinguishes only between related and unrelated, while the second dataset distinguishes several types of semantic relations. A number of recent papers propose to construct a feature vector that represents a pair of words by applying a simple pairwise operation to all elements of the feature vectors. Subsequently, the pairs can be classified by training any classification algorithm on these vectors. In the present paper we apply this method to the provided datasets. We see that the results are not better than those of the given simple baseline. We conclude that the results of the investigated method strongly depend on the type of data to which it is applied.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,103
inproceedings
aoki-nakatani-2016-study
A Study of the Bump Alternation in {J}apanese from the Perspective of Extended/Onset Causation
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5317/
Aoki, Natsuno and Nakatani, Kentaro
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
119--124
This paper deals with a seldom studied object/oblique alternation phenomenon in Japanese, which we call the bump alternation. This phenomenon, first discussed by Sadanobu (1990), is similar to the English with/against alternation. For example, compare hit the wall with the bat [=immobile-as-direct-object frame] to hit the bat against the wall [=mobile-as-direct-object frame]. However, in the Japanese version, the case frame remains constant. Although we fundamentally question Sadanobu's acceptability judgment, we also claim that the causation type (i.e., whether the event is an instance of onset or extended causation; Talmy, 1988; 2000) could make an improvement. An extended causative interpretation could improve the acceptability of the otherwise awkward immobile-as-direct-object frame. We examined this claim through a rating study, and the results showed an interaction between the Causation type (extended/onset) and the Object type (mobile/immobile) in the direction we predicted. We propose that a perspective shift on what is moving causes the {\textquotedblleft}extended causation{\textquotedblright} advantage.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,104
inproceedings
bott-etal-2016-ghost
{G}ho{S}t-{PV}: A Representative Gold Standard of {G}erman Particle Verbs
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5318/
Bott, Stefan and Khvtisavrishvili, Nana and Kisselew, Max and Schulte im Walde, Sabine
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
125--133
German particle verbs represent a frequent type of multi-word expression that forms a highly productive paradigm in the lexicon. Similarly to other multi-word expressions, particle verbs exhibit various levels of compositionality. One of the major obstacles for the study of compositionality is the lack of representative gold standards of human ratings. In order to address this bottleneck, this paper presents such a gold standard data set containing 400 randomly selected German particle verbs. It is balanced across several particle types and three frequency bands, and accompanied by human ratings on the degree of semantic compositionality.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,105
inproceedings
daoud-daoud-2016-discovering
Discovering Potential Terminological Relationships from {T}witter`s Timed Content
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5319/
Daoud, Mohammad and Daoud, Daoud
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
134--144
This paper presents a method to discover possible terminological relationships from tweets. We match the histories of terms (frequency patterns). Similar histories indicate a possible relationship between terms. For example, if two terms (t1, t2) appeared frequently in Twitter on particular days, and there is a {\textquoteleft}similarity' in the frequencies over a period of time, then t1 and t2 can be related. Maintaining a standard terminological repository with updated relationships can be difficult, especially in a dynamic domain such as social media, where thousands of new terms (neology) are coined every day. So we propose to construct a raw repository of lexical units with unconfirmed relationships. We have experimented with our method on time-sensitive Arabic terms used by the online Arabic community of Twitter. We draw relationships between these terms by matching their similar frequency patterns (timelines). We use dynamic time warping as a similarity measure. For evaluation, we selected 630 possible terms (we call them preterms) and matched the similarity of these terms over a period of 30 days. Around 270 correct relationships were discovered with a precision of 0.61. These relationships were extracted without considering the textual context of the terms.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,106
inproceedings
fonseca-etal-2016-lexfom
{L}exfom: a lexical functions ontology model
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5320/
Fonseca, Alexsandro and Sadat, Fatiha and Lareau, Fran{\c{c}}ois
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
145--155
A lexical function represents a type of relation that exists between lexical units (words or expressions) in any language. For example, antonymy is a type of relation that is represented by the lexical function Anti: Anti(big) = small. Those relations include both paradigmatic relations, i.e. vertical relations, such as synonymy, antonymy and meronymy, and syntagmatic relations, i.e. horizontal relations, such as objective qualification (legitimate demand), subjective qualification (fruitful analysis), positive evaluation (good review) and support verbs (pay a visit, subject to an interrogation). In this paper, we present the Lexical Functions Ontology Model (lexfom) to represent lexical functions and the relations among lexical units. Lexfom is divided into four modules: lexical function representation (lfrep), lexical function family (lffam), lexical function semantic perspective (lfsem) and lexical function relations (lfrel). Moreover, we show how it combines with the Lexical Model for Ontologies (lemon) for the transformation of lexical networks into semantic web formats. So far, we have implemented 100 simple and 500 complex lexical functions, and encoded about 8,000 syntagmatic and 46,000 paradigmatic relations for the French language.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,107
inproceedings
l-homme-etal-2016-proposal
A Proposal for combining {\textquotedblleft}general{\textquotedblright} and specialized frames
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5321/
L{'}Homme, Marie-Claude and Subirats, Carlos and Robichaud, Beno{\^i}t
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
156--165
The objectives of the work described in this paper are: 1. to list the differences between a general language resource (namely FrameNet) and a domain-specific resource; 2. to devise solutions to merge their contents in order to increase the coverage of the general resource. Both resources are based on Frame Semantics (Fillmore 1985; Fillmore and Baker 2010), and this raises specific challenges, since the theoretical framework and the methodology derived from it provide for both a lexical description and a conceptual representation. We propose a series of strategies that handle both lexical and conceptual (frame) differences and have implemented them in the specialized resource. We also show that most differences can be handled in a straightforward manner. However, some more domain-specific differences (such as frames defined exclusively for the specialized domain or relations between these frames) are likely to be much more difficult to take into account.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,108
inproceedings
pastena-lenci-2016-antonymy
Antonymy and Canonicity: Experimental and Distributional Evidence
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5322/
Pastena, Andreana and Lenci, Alessandro
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
166--175
The present paper investigates the phenomenon of antonym canonicity by providing new behavioural and distributional evidence on Italian adjectives. Previous studies have shown that some pairs of antonyms are perceived to be better examples of opposition than others, and are thus considered representative of the whole category (e.g., Deese, 1964; Murphy, 2003; Paradis et al., 2009). Our goal is to further investigate why such canonical pairs (Murphy, 2003) exist and how they come to be associated. In the literature, two different approaches have dealt with this issue. The lexical-categorical approach (Charles and Miller, 1989; Justeson and Katz, 1991) finds the cause of canonicity in the high co-occurrence frequency of the two adjectives. The cognitive-prototype approach (Paradis et al., 2009; Jones et al., 2012) instead claims that two adjectives form a canonical pair because they are aligned along a simple and salient dimension. Our empirical evidence, while supporting the latter view, shows that the paradigmatic distributional properties of adjectives can also contribute to explaining the phenomenon of canonicity, providing a corpus-based correlate of the cognitive notion of salience.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,109
inproceedings
silva-etal-2016-categorization
Categorization of Semantic Roles for Dictionary Definitions
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5323/
Silva, Vivian and Handschuh, Siegfried and Freitas, Andr{\'e}
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
176--184
Understanding the semantic relationships between terms is a fundamental task in natural language processing applications. While structured resources that can express those relationships in a formal way, such as ontologies, are still scarce, a large number of linguistic resources gathering dictionary definitions is becoming available; understanding the semantic structure of natural language definitions is fundamental to making them useful in semantic interpretation tasks. Based on an analysis of a subset of WordNet's glosses, we propose a set of semantic roles that compose the semantic structure of a dictionary definition, and show how they are related to the definition's syntactic configuration, identifying patterns that can be used in the development of information extraction frameworks and semantic models.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,110
inproceedings
tomokiyo-boitet-2016-corpus
Corpus and dictionary development for classifiers/quantifiers towards a {F}rench-{J}apanese machine translation
Zock, Michael and Lenci, Alessandro and Evert, Stefan
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5324/
Tomokiyo, Mutsuko and Boitet, Christian
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon ({C}og{AL}ex - V)
185--192
Although quantifier/classifier expressions occur frequently in everyday communication and written documents, there is no description of them in classical bilingual paper dictionaries, nor in machine-readable dictionaries. This paper describes corpus and dictionary development for quantifiers/classifiers, and their usage in the framework of French-Japanese machine translation (MT). They often cause problems of lexical ambiguity and of set phrase recognition during analysis, in particular for a long-distance language pair like French and Japanese. For the development of a dictionary aiming at ambiguity resolution for expressions including quantifiers and classifiers which may be ambiguous with common nouns, we have annotated our corpus with UWs (interlingual lexemes) of UNL (Universal Networking Language) found in the UNL-jp dictionary. The extraction of potential classifiers/quantifiers from the corpus is made by the UNLexplorer web service. Keywords: classifiers, quantifiers, phraseology study, corpus annotation, UNL (Universal Networking Language), UWs dictionary, Tori Bank, French-Japanese machine translation (MT).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,111
inproceedings
gotou-etal-2016-extension
An extension of {ISO}-Space for annotating object direction
Hasida, Koiti and Wong, Kam-Fai and Calzolari, Nicoletta and Choi, Key-Sun
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5401/
Gotou, Daiki and Nishikawa, Hitoshi and Tokunaga, Takenobu
Proceedings of the 12th Workshop on {A}sian Language Resources ({ALR}12)
1--9
In this paper, we extend an existing annotation scheme, ISO-Space, to annotate the spatial information necessary for the task of placing a specified object at a specified location with a specified direction according to a natural language instruction. We call such a task the spatial placement problem. Our extension particularly focuses on describing the object direction when the object is placed on a 2D plane. We conducted an annotation experiment in which a corpus of 20 situated dialogues was annotated. The annotation result showed that the number of tags newly introduced by our proposal is not negligible. We also implemented an analyser that automatically assigns the proposed tags to the corpus and evaluated its performance. The result showed that the performance for entity tags was quite high, ranging from 0.68 to 0.99 in F-measure, but this was not the case for relation tags, i.e. less than 0.4 in F-measure.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,113
inproceedings
kaneko-etal-2016-annotation
Annotation and Analysis of Discourse Relations, Temporal Relations and Multi-Layered Situational Relations in {J}apanese Texts
Hasida, Koiti and Wong, Kam-Fai and Calzolari, Nicoletta and Choi, Key-Sun
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5402/
Kaneko, Kimi and Sugawara, Saku and Mineshima, Koji and Bekki, Daisuke
Proceedings of the 12th Workshop on {A}sian Language Resources ({ALR}12)
10--19
This paper proposes a methodology for building a specialized Japanese data set for recognizing temporal relations and discourse relations. In addition to temporal and discourse relations, multi-layered situational relations that distinguish generic and specific states belonging to different layers in a discourse are annotated. Our methodology has been applied to 170 text fragments taken from Wikinews articles in Japanese. The validity of our methodology is evaluated and analyzed in terms of degree of annotator agreement and frequency of errors.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,114
inproceedings
leung-etal-2016-developing
Developing {U}niversal {D}ependencies for {M}andarin {C}hinese
Hasida, Koiti and Wong, Kam-Fai and Calzolari, Nicoletta and Choi, Key-Sun
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5403/
Leung, Herman and Poiret, Rafa{\"e}l and Wong, Tak-sum and Chen, Xinying and Gerdes, Kim and Lee, John
Proceedings of the 12th Workshop on {A}sian Language Resources ({ALR}12)
20--29
This article proposes a Universal Dependency Annotation Scheme for Mandarin Chinese, including POS tags and dependency analysis. We identify cases of idiosyncrasy in Mandarin Chinese that are difficult to fit into the current schema, which has mainly been based on descriptions of various Indo-European languages. We discuss differences between our scheme and those of the Stanford Chinese Dependencies and the Chinese Dependency Treebank.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,115
inproceedings
minamiguchi-tsuchiya-2016-developing
Developing Corpus of Lecture Utterances Aligned to Slide Components
Hasida, Koiti and Wong, Kam-Fai and Calzolari, Nicoletta and Choi, Key-Sun
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5404/
Minamiguchi, Ryo and Tsuchiya, Masatoshi
Proceedings of the 12th Workshop on {A}sian Language Resources ({ALR}12)
30--37
The approach that formulates automatic text summarization as a maximum coverage problem with a knapsack constraint over a set of textual units and a set of weighted conceptual units is promising. However, it is quite important and difficult to determine the appropriate granularity of conceptual units for this formulation. In order to resolve this problem, we are examining the use of components of presentation slides as conceptual units to generate a summary of lecture utterances, instead of other possible conceptual units like base noun phrases or important nouns. This paper explains our corpus under development, designed to evaluate our proposed approach, which consists of presentation slides and lecture utterances aligned to presentation slide components.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,116
inproceedings
nguyen-etal-2016-vsolscsum
{VS}o{LSCS}um: Building a {V}ietnamese Sentence-Comment Dataset for Social Context Summarization
Hasida, Koiti and Wong, Kam-Fai and Calzolari, Nicoletta and Choi, Key-Sun
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5405/
Nguyen, Minh-Tien and Lai, Dac Viet and Do, Phong-Khac and Tran, Duc-Vu and Nguyen, Minh-Le
Proceedings of the 12th Workshop on {A}sian Language Resources ({ALR}12)
38--48
This paper presents VSoLSCSum, a Vietnamese linked sentence-comment dataset, which was manually created to address the lack of standard corpora for social context summarization in Vietnamese. The dataset was collected through the keywords of 141 Web documents on 12 special events mentioned on Vietnamese Web pages. Social users were asked to take part in creating standard summaries and labeling each sentence or comment. Inter-annotator agreement among raters after validation, measured by Cohen's Kappa, is 0.685. To illustrate the potential use of our dataset, a learning-to-rank method was trained using a set of local and social features. Experimental results indicate that the summary model trained on our dataset outperforms state-of-the-art baselines in both ROUGE-1 and ROUGE-2 for social context summarization.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,117
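The VSoLSCSum entry above evaluates with ROUGE-1 and ROUGE-2. Purely as a hedged sketch (whitespace tokenization and the toy strings are my assumptions, not the authors' evaluation toolkit), ROUGE-1 recall reduces to clipped unigram overlap:

```python
from collections import Counter

def rouge1_recall(candidate: str, reference: str) -> float:
    """Clipped unigram overlap divided by the reference unigram count."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], n) for w, n in ref.items())
    return overlap / max(sum(ref.values()), 1)

# Toy example only; real evaluation would use a standard ROUGE package.
print(rouge1_recall("the cat sat on the mat", "a cat sat on the mat"))
```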
inproceedings
asahara-matsumoto-2016-bccwj
{BCCWJ}-{D}ep{P}ara: A Syntactic Annotation Treebank on the {\textquoteleft}{B}alanced {C}orpus of {C}ontemporary {W}ritten {J}apanese{\textquoteright}
Hasida, Koiti and Wong, Kam-Fai and Calzolari, Nicoletta and Choi, Key-Sun
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5406/
Asahara, Masayuki and Matsumoto, Yuji
Proceedings of the 12th Workshop on {A}sian Language Resources ({ALR}12)
49--58
Paratactic syntactic structures are difficult to represent in syntactic dependency tree structures. As such, we propose an annotation schema for syntactic dependency annotation of Japanese, in which coordinate structures are split from and overlaid on bunsetsu-based (base phrase unit) dependency. The schema represents nested coordinate structures, non-constituent conjuncts, and forward sharing as the set of regions. The annotation was performed on the core data of {\textquoteleft}Balanced Corpus of Contemporary Written Japanese', which comprised about one million words and 1980 samples from six registers, such as newspapers, books, magazines, and web texts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,118
inproceedings
chu-etal-2016-sctb
{SCTB}: A {C}hinese Treebank in Scientific Domain
Hasida, Koiti and Wong, Kam-Fai and Calzolari, Nicoletta and Choi, Key-Sun
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5407/
Chu, Chenhui and Nakazawa, Toshiaki and Kawahara, Daisuke and Kurohashi, Sadao
Proceedings of the 12th Workshop on {A}sian Language Resources ({ALR}12)
59--67
Treebanks are crucial for natural language processing (NLP). In this paper, we present our work on annotating a Chinese treebank in the scientific domain (SCTB), to address the lack of Chinese treebanks in this domain. Chinese analysis and machine translation experiments conducted using this treebank indicate that the annotated treebank can significantly improve performance on both tasks. This treebank is released to promote Chinese NLP research in the scientific domain.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,119
inproceedings
iwakura-etal-2016-big
Big Community Data before World Wide Web Era
Hasida, Koiti and Wong, Kam-Fai and Calzolari, Nicoletta and Choi, Key-Sun
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5408/
Iwakura, Tomoya and Takahashi, Tetsuro and Ohtani, Akihiro and Matsui, Kunio
Proceedings of the 12th Workshop on {A}sian Language Resources ({ALR}12)
68--72
This paper introduces the NIFTY-Serve corpus, a large data archive collected from Japanese discussion forums that operated via a Bulletin Board System (BBS) between 1987 and 2006. This corpus can be used in Artificial Intelligence research such as Natural Language Processing and Community Analysis. The NIFTY-Serve corpus differs from data on the WWW in three ways: (1) it is essentially spam- and duplication-free because of strict data collection procedures, (2) it is historic user-generated data predating the WWW, and (3) it is a complete data set, because the service has now shut down. We also introduce some example uses of the corpus.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,120
inproceedings
gunarso-riza-2016-overview
An Overview of {BPPT}'s {I}ndonesian Language Resources
Hasida, Koiti and Wong, Kam-Fai and Calzolari, Nicoletta and Choi, Key-Sun
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5409/
Gunarso, Gunarso and Riza, Hammam
Proceedings of the 12th Workshop on {A}sian Language Resources ({ALR}12)
73--77
This paper describes the various Indonesian language resources that the Agency for the Assessment and Application of Technology (BPPT) has developed and collected since the mid-80s, when we joined MMTS (Multilingual Machine Translation System), an international project coordinated by CICC-Japan to develop a machine translation system for five Asian languages (Bahasa Indonesia, Malay, Thai, Japanese, and Chinese). Since then, we have been actively conducting research in statistical machine translation, speech recognition, and speech synthesis, all of which require large text and speech corpora. The most recent cooperation, within ASEAN-IVO, is the development of the Indonesian ALT (Asian Language Treebank), which has added new NLP tools.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,121
inproceedings
kimura-etal-2016-creating
Creating {J}apanese Political Corpus from Local Assembly Minutes of 47 prefectures
Hasida, Koiti and Wong, Kam-Fai and Calzolari, Nicoletta and Choi, Key-Sun
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5410/
Kimura, Yasutomo and Takamaru, Keiichi and Tanaka, Takuma and Kobayashi, Akio and Sakaji, Hiroki and Uchida, Yuzu and Ototake, Hokuto and Masuyama, Shigeru
Proceedings of the 12th Workshop on {A}sian Language Resources ({ALR}12)
78--85
This paper describes a Japanese political corpus created for interdisciplinary political research. The corpus contains the local assembly minutes of 47 prefectures from April 2011 to March 2015. This four-year period coincides with the term of office for assembly members in most autonomies. We analyze statistical data, such as the number of speakers, characters, and words, to clarify the characteristics of local assembly minutes. In addition, we identify problems associated with the different web services used by the autonomies to make the minutes available to the public.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,122
inproceedings
xu-etal-2016-selective
Selective Annotation of Sentence Parts: Identification of Relevant Sub-sentential Units
Hasida, Koiti and Wong, Kam-Fai and Calzolari, Nicoletta and Choi, Key-Sun
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5411/
Xu, Ge and Yang, Xiaoyan and Huang, Chu-Ren
Proceedings of the 12th Workshop on {A}sian Language Resources ({ALR}12)
86--94
Many NLP tasks involve sentence-level annotation, yet the relevant information is encoded not at the sentence level but in certain parts of the sentence. Such tasks include but are not limited to: sentiment expression annotation, product feature annotation, and template annotation for Q{\&}A systems. However, annotating the full corpus sentence by sentence is resource-intensive. In this paper, we propose an approach that iteratively extracts frequent parts of sentences for annotation and compresses the set of sentences after each round of annotation. Our approach can also be used in preparing training sentences for binary classification (domain-related vs. noise, subjectivity vs. objectivity, etc.), assuming that sentence-type annotation can be predicted from the annotation of the most relevant sub-sentences. Two experiments are performed to test our proposal and are evaluated in terms of time saved and annotation agreement.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,123
inproceedings
yamamura-etal-2016-kyutech
The {K}yutech corpus and topic segmentation using a combined method
Hasida, Koiti and Wong, Kam-Fai and Calzolari, Nicoletta and Choi, Key-Sun
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5412/
Yamamura, Takashi and Shimada, Kazutaka and Kawahara, Shintaro
Proceedings of the 12th Workshop on {A}sian Language Resources ({ALR}12)
95--104
Summarization of multi-party conversation is one of the important tasks in natural language processing. In this paper, we describe a Japanese corpus and a topic segmentation task. To the best of our knowledge, the corpus is the first Japanese corpus annotated for summarization tasks and freely available to anyone. We call it {\textquotedblleft}the Kyutech corpus.{\textquotedblright} The corpus records a decision-making task with four participants and contains utterances with time information, topic segmentation, and reference summaries. As a case study for the corpus, we describe a method combining LCSeg and TopicTiling for topic segmentation. We discuss the effectiveness and the problems of the combined method through an experiment with the Kyutech corpus.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,124
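To make the topic-segmentation idea in the Kyutech entry above concrete: both LCSeg and TopicTiling descend from the intuition that topic boundaries show up as dips in lexical cohesion between adjacent windows of utterances. A hedged toy sketch of that shared intuition follows; window size and threshold are arbitrary assumptions of mine, and this is not the authors' combined method.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def topic_boundaries(utterances, window=3, threshold=0.1):
    """Mark a boundary wherever lexical similarity between the windows
    of utterances to the left and right of a point drops below threshold."""
    bows = [Counter(u.lower().split()) for u in utterances]
    cuts = []
    for i in range(window, len(bows) - window + 1):
        left = sum(bows[i - window:i], Counter())    # merge left window
        right = sum(bows[i:i + window], Counter())   # merge right window
        if cosine(left, right) < threshold:
            cuts.append(i)
    return cuts
```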
inproceedings
shudo-etal-2016-automatic
Automatic Evaluation of Commonsense Knowledge for Refining {J}apanese {C}oncept{N}et
Hasida, Koiti and Wong, Kam-Fai and Calzolari, Nicoletta and Choi, Key-Sun
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5413/
Shudo, Seiya and Rzepka, Rafal and Araki, Kenji
Proceedings of the 12th Workshop on {A}sian Language Resources ({ALR}12)
105--112
In this paper we present two methods for the automatic evaluation of common sense knowledge for Japanese entries in the ConceptNet ontology. Our proposed methods use a text-mining approach: one with relation clue words and WordNet synonyms, and one without. Both methods were tested with a blog corpus. The system based on our proposed methods reached relatively high precision scores for three relations (MadeOf, UsedFor, AtLocation), comparable with previous research using commercial search engines and simpler input. We analyze errors, discuss problems of common sense evaluation, both manual and automatic, and propose ideas for further improvements.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,125
inproceedings
al-badrashiny-etal-2016-samer
{SAMER}: A Semi-Automatically Created Lexical Resource for {A}rabic Verbal Multiword Expressions Tokens Paradigm and their Morphosyntactic Features
Hasida, Koiti and Wong, Kam-Fai and Calzolari, Nicoletta and Choi, Key-Sun
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5414/
Al-Badrashiny, Mohamed and Hawwari, Abdelati and Ghoneim, Mahmoud and Diab, Mona
Proceedings of the 12th Workshop on {A}sian Language Resources ({ALR}12)
113--122
Although MWEs are relatively morphologically and syntactically fixed expressions, several types of flexibility can be observed in MWEs, verbal MWEs in particular. Identifying the degree of morphological and syntactic flexibility of MWEs is very important for many lexicographic and NLP tasks. Adding MWE variants/tokens to a dictionary resource requires characterizing this flexibility among other morphosyntactic features. Carrying out the task manually faces several challenges: it is very laborious in terms of time and effort, and it suffers from limited coverage. The problem is exacerbated in morphologically rich languages, where the average word in Arabic can have 12 possible inflection forms. Accordingly, in this paper we introduce a semi-automatically created Arabic multiword expressions resource (SAMER). We propose an automated method that identifies the morphological and syntactic flexibility of Arabic Verbal Multiword Expressions (AVMWE). All observed morphological variants and syntactic pattern alternations of an AVMWE are automatically acquired using large-scale corpora. We investigate three morphosyntactic aspects of AVMWE types, covering derivational and inflectional variation and syntactic templates, namely: 1) inflectional variation (the inflectional paradigm) and the degree of flexibility; 2) derivational productivity; and 3) the identification and classification of the different syntactic types. We build a comprehensive list of AVMWE. Every token in the AVMWE list is lemmatized and tagged with POS information. We then search Arabic Gigaword and all ATBs for all possible flexible matches. For each AVMWE type we generate: a) a statistically ranked list of MWE-lexeme inflections and syntactic pattern alternations; b) an abstract syntactic template; and c) the most frequent form. Our technique is validated using a gold MWE-annotated list. The results show that the quality of the generated resource is 80.04{\%}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,126
inproceedings
le-etal-2016-sentiment
Sentiment Analysis for Low Resource Languages: A Study on Informal {I}ndonesian Tweets
Hasida, Koiti and Wong, Kam-Fai and Calzolari, Nicoletta and Choi, Key-Sun
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/W16-5415/
Le, Tuan Anh and Moeljadi, David and Miura, Yasuhide and Ohkuma, Tomoko
Proceedings of the 12th Workshop on {A}sian Language Resources ({ALR}12)
123--131
This paper describes our attempt to build a sentiment analysis system for Indonesian tweets. With this system, we can study and identify sentiments and opinions in a text or document computationally. We used four thousand manually labeled tweets collected in February and March 2016 to build the model. Because of the variety of content in tweets, we classify tweets into eight groups in total, including pos(itive), neg(ative), and neu(tral). Finally, we obtained 73.2{\%} accuracy with a Long Short-Term Memory (LSTM) network without a normalizer.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,127
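A hedged sketch of the kind of LSTM classifier the entry above reports 73.2% accuracy with: the eight output classes come from the abstract, while layer sizes, optimizer, sequence length, and the dummy data are assumptions of mine, not the authors' configuration.

```python
import numpy as np
import tensorflow as tf

VOCAB, CLASSES = 10_000, 8  # eight tweet groups, incl. pos/neg/neu

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 128),                 # token ids -> vectors
    tf.keras.layers.LSTM(64),                              # sequence -> one state
    tf.keras.layers.Dense(CLASSES, activation="softmax"),  # class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-ins for the ~4,000 manually labeled, tokenized tweets.
x = np.random.randint(0, VOCAB, size=(100, 40))
y = np.random.randint(0, CLASSES, size=(100,))
model.fit(x, y, epochs=1, verbose=0)
```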
article
faruqui-etal-2016-morpho
Morpho-syntactic Lexicon Generation Using Graph-based Semi-supervised Learning
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2016
Cambridge, MA
MIT Press
https://aclanthology.org/Q16-1001/
Faruqui, Manaal and McDonald, Ryan and Soricut, Radu
null
1--16
Morpho-syntactic lexicons provide information about the morphological and syntactic roles of words in a language. Such lexicons are not available for all languages and even when available, their coverage can be limited. We present a graph-based semi-supervised learning method that uses the morphological, syntactic and semantic relations between words to automatically construct wide coverage lexicons from small seed sets. Our method is language-independent, and we show that we can expand a 1000 word seed lexicon to more than 100 times its size with high quality for 11 languages. In addition, the automatically created lexicons provide features that improve performance in two downstream tasks: morphological tagging and dependency parsing.
Transactions of the Association for Computational Linguistics
4
10.1162/tacl_a_00079
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,606
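The entry above expands seed lexicons by propagating labels over a word graph. As a hedged toy sketch of the propagation idea only: plain neighborhood averaging with clamped seeds, a simplification of the paper's actual objective; the graph and seed values are invented.

```python
def propagate(edges, seeds, n_iter=10):
    """edges: word -> list of neighbor words; seeds: word -> label score."""
    scores = dict(seeds)
    for _ in range(n_iter):
        new = dict(scores)
        for word, nbrs in edges.items():
            if word in seeds:               # seed labels stay clamped
                continue
            known = [scores[n] for n in nbrs if n in scores]
            if known:
                new[word] = sum(known) / len(known)
        scores = new
    return scores

graph = {"happy": ["glad"], "glad": ["happy", "cheerful"], "cheerful": ["glad"]}
print(propagate(graph, {"happy": 1.0}))  # label spreads to glad, then cheerful
```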
article
hill-etal-2016-learning-understand
Learning to Understand Phrases by Embedding the Dictionary
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2016
Cambridge, MA
MIT Press
https://aclanthology.org/Q16-1002/
Hill, Felix and Cho, Kyunghyun and Korhonen, Anna and Bengio, Yoshua
null
17--30
Distributional models that learn rich semantic word representations are a success story of recent NLP research. However, developing models that learn useful representations of phrases and sentences has proved far harder. We propose using the definitions found in everyday dictionaries as a means of bridging this gap between lexical and phrasal semantics. Neural language embedding models can be effectively trained to map dictionary definitions (phrases) to (lexical) representations of the words defined by those definitions. We present two applications of these architectures: reverse dictionaries that return the name of a concept given a definition or description and general-knowledge crossword question answerers. On both tasks, neural language embedding models trained on definitions from a handful of freely-available lexical resources perform as well or better than existing commercial systems that rely on significant task-specific engineering. The results highlight the effectiveness of both neural embedding architectures and definition-based training for developing models that understand phrases and sentences.
Transactions of the Association for Computational Linguistics
4
10.1162/tacl_a_00080
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,607
article
frermann-lapata-2016-bayesian
A {B}ayesian Model of Diachronic Meaning Change
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2016
Cambridge, MA
MIT Press
https://aclanthology.org/Q16-1003/
Frermann, Lea and Lapata, Mirella
null
31--45
Word meanings change over time and an automated procedure for extracting this information from text would be useful for historical exploratory studies, information retrieval or question answering. We present a dynamic Bayesian model of diachronic meaning change, which infers temporal word representations as a set of senses and their prevalence. Unlike previous work, we explicitly model language change as a smooth, gradual process. We experimentally show that this modeling decision is beneficial: our model performs competitively on meaning change detection tasks whilst inducing discernible word senses and their development over time. Application of our model to the SemEval-2015 temporal classification benchmark datasets further reveals that it performs on par with highly optimized task-specific systems.
Transactions of the Association for Computational Linguistics
4
10.1162/tacl_a_00081
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,608
article
gutierrez-etal-2016-detecting
Detecting Cross-Cultural Differences Using a Multilingual Topic Model
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2016
Cambridge, MA
MIT Press
https://aclanthology.org/Q16-1004/
Guti{\'e}rrez, E.D. and Shutova, Ekaterina and Lichtenstein, Patricia and de Melo, Gerard and Gilardi, Luca
null
47--60
Understanding cross-cultural differences has important implications for world affairs and many aspects of the life of society. Yet, the majority of text-mining methods to date focus on the analysis of monolingual texts. In contrast, we present a statistical model that simultaneously learns a set of common topics from multilingual, non-parallel data and automatically discovers the differences in perspectives on these topics across linguistic communities. We perform a behavioural evaluation of a subset of the differences identified by our model in English and Spanish to investigate their psychological validity.
Transactions of the Association for Computational Linguistics
4
10.1162/tacl_a_00082
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,609
article
pavlick-tetreault-2016-empirical
An Empirical Analysis of Formality in Online Communication
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2016
Cambridge, MA
MIT Press
https://aclanthology.org/Q16-1005/
Pavlick, Ellie and Tetreault, Joel
null
61--74
This paper presents an empirical study of linguistic formality. We perform an analysis of humans' perceptions of formality in four different genres. These findings are used to develop a statistical model for predicting formality, which is evaluated under different feature settings and genres. We apply our model to an investigation of formality in online discussion forums, and present findings consistent with theories of formality and linguistic coordination.
Transactions of the Association for Computational Linguistics
4
10.1162/tacl_a_00083
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,610
article
hauer-kondrak-2016-decoding
Decoding Anagrammed Texts Written in an Unknown Language and Script
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2016
Cambridge, MA
MIT Press
https://aclanthology.org/Q16-1006/
Hauer, Bradley and Kondrak, Grzegorz
null
75--86
Algorithmic decipherment is a prime example of a truly unsupervised problem. The first step in the decipherment process is the identification of the encrypted language. We propose three methods for determining the source language of a document enciphered with a monoalphabetic substitution cipher. The best method achieves 97{\%} accuracy on 380 languages. We then present an approach to decoding anagrammed substitution ciphers, in which the letters within words have been arbitrarily transposed. It obtains the average decryption word accuracy of 93{\%} on a set of 50 ciphertexts in 5 languages. Finally, we report the results on the Voynich manuscript, an unsolved fifteenth century cipher, which suggest Hebrew as the language of the document.
Transactions of the Association for Computational Linguistics
4
10.1162/tacl_a_00084
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,611
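For the entry above, the first decipherment step is identifying the source language of an enciphered text. One simple permutation-invariant cue, sketched here with toy stand-ins (the paper's actual methods and 380-language setup are far richer): a monoalphabetic cipher permutes letters but preserves the sorted letter-frequency profile.

```python
from collections import Counter

def profile(text, k=26):
    """Sorted relative letter frequencies, padded or truncated to length k."""
    counts = Counter(c for c in text.lower() if c.isalpha())
    total = sum(counts.values()) or 1
    freqs = sorted((v / total for v in counts.values()), reverse=True)
    return (freqs + [0.0] * k)[:k]

def closest_language(ciphertext, samples):
    """samples: language -> representative plaintext (toy stand-ins)."""
    target = profile(ciphertext)
    def dist(lang):
        return sum((a - b) ** 2 for a, b in zip(target, profile(samples[lang])))
    return min(samples, key=dist)
```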
article
jardine-heinz-2016-learning
Learning Tier-based Strictly 2-Local Languages
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2016
Cambridge, MA
MIT Press
https://aclanthology.org/Q16-1007/
Jardine, Adam and Heinz, Jeffrey
null
87--98
The Tier-based Strictly 2-Local (TSL2) languages are a class of formal languages which have been shown to model long-distance phonotactic generalizations in natural language (Heinz et al., 2011). This paper introduces the Tier-based Strictly 2-Local Inference Algorithm (2TSLIA), the first nonenumerative learner for the TSL2 languages. We prove the 2TSLIA is guaranteed to converge in polynomial time on a data sample whose size is bounded by a constant.
Transactions of the Association for Computational Linguistics
4
10.1162/tacl_a_00085
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,612
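To unpack the formalism in the entry above: a TSL2 grammar projects each string onto a tier (a subset of the alphabet) and constrains which bigrams may appear on that tier. Below is a hedged sketch of the projection step only, with a made-up sibilant-harmony toy; the 2TSLIA learner itself, which also infers the tier, is not reproduced here.

```python
def tier_bigrams(strings, tier):
    """Attested 2-factors on the tier, with word boundaries marked by #."""
    factors = set()
    for s in strings:
        projected = "#" + "".join(c for c in s if c in tier) + "#"
        factors.update(projected[i:i + 2] for i in range(len(projected) - 1))
    return factors

# Toy sibilant harmony: on tier {s, S}, only agreeing pairs are attested,
# so the disagreeing 2-factors sS and Ss never enter the grammar.
print(sorted(tier_bigrams(["sapas", "SapaS"], tier={"s", "S"})))
```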
article
cuong-etal-2016-adapting
Adapting to All Domains at Once: Rewarding Domain Invariance in {SMT}
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2016
Cambridge, MA
MIT Press
https://aclanthology.org/Q16-1008/
Cuong, Hoang and Sima{'}an, Khalil and Titov, Ivan
null
99--112
Existing work on domain adaptation for statistical machine translation has consistently assumed access to a small sample from the test distribution (target domain) at training time. In practice, however, the target domain may not be known at training time or it may change to match user needs. In such situations, it is natural to push the system to make safer choices, giving higher preference to domain-invariant translations, which work well across domains, over risky domain-specific alternatives. We encode this intuition by (1) inducing latent subdomains from the training data only; (2) introducing features which measure how specialized phrases are to individual induced sub-domains; (3) estimating feature weights on out-of-domain data (rather than on the target domain). We conduct experiments on three language pairs and a number of different domains. We observe consistent improvements over a baseline which does not explicitly reward domain invariance.
Transactions of the Association for Computational Linguistics
4
10.1162/tacl_a_00086
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,613
article
sultan-etal-2016-joint
A Joint Model for Answer Sentence Ranking and Answer Extraction
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2016
Cambridge, MA
MIT Press
https://aclanthology.org/Q16-1009/
Sultan, Md Arafat and Castelli, Vittorio and Florian, Radu
null
113--125
Answer sentence ranking and answer extraction are two key challenges in question answering that have traditionally been treated in isolation, i.e., as independent tasks. In this article, we (1) explain how both tasks are related at their core by a common quantity, and (2) propose a simple and intuitive joint probabilistic model that addresses both via joint computation but task-specific application of that quantity. In our experiments with two TREC datasets, our joint model substantially outperforms state-of-the-art systems in both tasks.
Transactions of the Association for Computational Linguistics
4
10.1162/tacl_a_00087
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,614
article
reddy-etal-2016-transforming
Transforming Dependency Structures to Logical Forms for Semantic Parsing
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2016
Cambridge, MA
MIT Press
https://aclanthology.org/Q16-1010/
Reddy, Siva and T{\"a}ckstr{\"o}m, Oscar and Collins, Michael and Kwiatkowski, Tom and Das, Dipanjan and Steedman, Mark and Lapata, Mirella
null
127--140
The strongly typed syntax of grammar formalisms such as CCG, TAG, LFG and HPSG offers a synchronous framework for deriving syntactic structures and semantic logical forms. In contrast{---}partly due to the lack of a strong type system{---}dependency structures are easy to annotate and have become a widely used form of syntactic analysis for many languages. However, the lack of a type system makes a formal mechanism for deriving logical forms from dependency structures challenging. We address this by introducing a robust system based on the lambda calculus for deriving neo-Davidsonian logical forms from dependency trees. These logical forms are then used for semantic parsing of natural language to Freebase. Experiments on the Free917 and WebQuestions datasets show that our representation is superior to the original dependency trees and that it outperforms a CCG-based representation on this task. Compared to prior work, we obtain the strongest result to date on Free917 and competitive results on WebQuestions.
Transactions of the Association for Computational Linguistics
4
10.1162/tacl_a_00088
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,615
article
tsai-roth-2016-concept
Concept Grounding to Multiple Knowledge Bases via Indirect Supervision
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2016
Cambridge, MA
MIT Press
https://aclanthology.org/Q16-1011/
Tsai, Chen-Tse and Roth, Dan
null
141--154
We consider the problem of disambiguating concept mentions appearing in documents and grounding them in multiple knowledge bases, where each knowledge base addresses some aspects of the domain. This problem poses a few additional challenges beyond those addressed in the popular Wikification problem. Key among them is that most knowledge bases do not contain the rich textual and structural information Wikipedia does; consequently, the main supervision signal used to train Wikification rankers does not exist anymore. In this work we develop an algorithmic approach that, by carefully examining the relations between various related knowledge bases, generates an indirect supervision signal it uses to train a ranking model that accurately chooses knowledge base entries for a given mention; moreover, it also induces prior knowledge that can be used to support a global coherent mapping of all the concepts in a given document to the knowledge bases. Using the biomedical domain as our application, we show that our indirectly supervised ranking model outperforms other unsupervised baselines and that the quality of this indirect supervision scheme is very close to a supervised model. We also show that considering multiple knowledge bases together has an advantage over grounding concepts to each knowledge base individually.
Transactions of the Association for Computational Linguistics
4
10.1162/tacl_a_00089
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,616
article
richardson-kuhn-2016-learning
Learning to Make Inferences in a Semantic Parsing Task
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2016
Cambridge, MA
MIT Press
https://aclanthology.org/Q16-1012/
Richardson, Kyle and Kuhn, Jonas
null
155--168
We introduce a new approach to training a semantic parser that uses textual entailment judgements as supervision. These judgements are based on high-level inferences about whether the meaning of one sentence follows from another. When applied to an existing semantic parsing task, they prove to be a useful tool for revealing semantic distinctions and background knowledge not captured in the target representations. This information is used to improve the quality of the semantic representations being learned and to acquire generic knowledge for reasoning. Experiments are done on the benchmark Sportscaster corpus (Chen and Mooney, 2008), and a novel RTE-inspired inference dataset is introduced. On this new dataset our method strongly outperforms several strong baselines. Separately, we obtain state-of-the-art results on the original Sportscaster semantic parsing task.
Transactions of the Association for Computational Linguistics
4
10.1162/tacl_a_00090
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,617
article
sakaguchi-etal-2016-reassessing
Reassessing the Goals of Grammatical Error Correction: Fluency Instead of Grammaticality
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2016
Cambridge, MA
MIT Press
https://aclanthology.org/Q16-1013/
Sakaguchi, Keisuke and Napoles, Courtney and Post, Matt and Tetreault, Joel
null
169--182
The field of grammatical error correction (GEC) has grown substantially in recent years, with research directed at both evaluation metrics and improved system performance against those metrics. One unvisited assumption, however, is the reliance of GEC evaluation on error-coded corpora, which contain specific labeled corrections. We examine current practices and show that GEC's reliance on such corpora unnaturally constrains annotation and automatic evaluation, resulting in (a) sentences that do not sound acceptable to native speakers and (b) system rankings that do not correlate with human judgments. In light of this, we propose an alternate approach that jettisons costly error coding in favor of unannotated, whole-sentence rewrites. We compare the performance of existing metrics over different gold-standard annotations, and show that automatic evaluation with our new annotation scheme has very strong correlation with expert rankings ({\ensuremath{\rho}} = 0.82). As a result, we advocate for a fundamental and necessary shift in the goal of GEC, from correcting small, labeled error types, to producing text that has native fluency.
Transactions of the Association for Computational Linguistics
4
10.1162/tacl_a_00091
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,618
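The entry above reports a Spearman rank correlation of rho = 0.82 between automatic and expert rankings. A hedged sketch of computing that statistic; the score vectors below are invented placeholders, not the paper's data.

```python
from scipy.stats import spearmanr

metric_scores = [0.41, 0.38, 0.52, 0.47, 0.30]  # hypothetical metric output
expert_scores = [3.1, 2.8, 4.0, 3.6, 2.2]       # hypothetical human ratings

rho, pvalue = spearmanr(metric_scores, expert_scores)
print(f"Spearman rho = {rho:.2f} (p = {pvalue:.3f})")
```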
article
vaswani-sagae-2016-efficient
Efficient Structured Inference for Transition-Based Parsing with Neural Networks and Error States
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2016
Cambridge, MA
MIT Press
https://aclanthology.org/Q16-1014/
Vaswani, Ashish and Sagae, Kenji
null
183--196
Transition-based approaches based on local classification are attractive for dependency parsing due to their simplicity and speed, despite producing results slightly below the state-of-the-art. In this paper, we propose a new approach for approximate structured inference for transition-based parsing that produces scores suitable for global scoring using local models. This is accomplished with the introduction of error states in local training, which add information about incorrect derivation paths typically left out completely in locally-trained models. Using neural networks for our local classifiers, our approach achieves 93.61{\%} accuracy for transition-based dependency parsing in English.
Transactions of the Association for Computational Linguistics
4
10.1162/tacl_a_00092
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,619
article
hartmann-etal-2016-generating
Generating Training Data for Semantic Role Labeling based on Label Transfer from Linked Lexical Resources
Lee, Lillian and Johnson, Mark and Toutanova, Kristina
null
2016
Cambridge, MA
MIT Press
https://aclanthology.org/Q16-1015/
Hartmann, Silvana and Eckle-Kohler, Judith and Gurevych, Iryna
null
197--213
We present a new approach for generating role-labeled training data using Linked Lexical Resources, i.e., integrated lexical resources that combine several resources (e.g., Word-Net, FrameNet, Wiktionary) by linking them on the sense or on the role level. Unlike resource-based supervision in relation extraction, we focus on complex linguistic annotations, more specifically FrameNet senses and roles. The automatically labeled training data (www.ukp.tu-darmstadt.de/knowledge-based-srl/) are evaluated on four corpora from different domains for the tasks of word sense disambiguation and semantic role classification. Results show that classifiers trained on our generated data equal those resulting from a standard supervised setting.
Transactions of the Association for Computational Linguistics
4
10.1162/tacl_a_00093
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
59,620