entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | reichel-lendvai-2016-veracity | Veracity Computing from Lexical Cues and Perceived Certainty Trends | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3907/ | Reichel, Uwe and Lendvai, Piroska | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 33--42 | We present a data-driven method for determining the veracity of a set of rumorous claims on social media data. Tweets from different sources pertaining to a rumor are processed on three levels: first, factuality values are assigned to each tweet based on four textual cue categories relevant for our journalism use case; these amalgamate speaker support in terms of polarity and commitment in terms of certainty and speculation. Next, the proportions of these lexical cues are utilized as predictors for tweet certainty in a generalized linear regression model. Subsequently, lexical cue proportions, predicted certainty, as well as their time course characteristics are used to compute veracity for each rumor in terms of the identity of the rumor-resolving tweet and its binary resolution value judgment. The system operates without access to extralinguistic resources. Evaluated on the data portion for which hand-labeled examples were available, it achieves .74 F1-score on identifying rumor resolving tweets and .76 F1-score on predicting if a rumor is resolved as true or false. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,823 |
inproceedings | van-der-wees-etal-2016-simple | A Simple but Effective Approach to Improve {A}rabizi-to-{E}nglish Statistical Machine Translation | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3908/ | van der Wees, Marlies and Bisazza, Arianna and Monz, Christof | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 43--50 | A major challenge for statistical machine translation (SMT) of Arabic-to-English user-generated text is the prevalence of text written in Arabizi, or Romanized Arabic. When facing such texts, a translation system trained on conventional Arabic-English data will suffer from extremely low model coverage. In addition, Arabizi is not regulated by any official standardization and therefore highly ambiguous, which prevents rule-based approaches from achieving good translation results. In this paper, we improve Arabizi-to-English machine translation by presenting a simple but effective Arabizi-to-Arabic transliteration pipeline that does not require knowledge by experts or native Arabic speakers. We incorporate this pipeline into a phrase-based SMT system, and show that translation quality after automatically transliterating Arabizi to Arabic yields results that are comparable to those achieved after human transliteration. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,824 |
inproceedings | andy-etal-2016-name | Name Variation in Community Question Answering Systems | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3909/ | Andy, Anietie and Sekine, Satoshi and Rwebangira, Mugizi and Dredze, Mark | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 51--60 | Community question answering systems are forums where users can ask and answer questions in various categories. Examples are Yahoo! Answers, Quora, and Stack Overflow. A common challenge with such systems is that a significant percentage of asked questions are left unanswered. In this paper, we propose an algorithm to reduce the number of unanswered questions in Yahoo! Answers by reusing the answer to the most similar past resolved question to the unanswered question, from the site. Semantically similar questions could be worded differently, thereby making it difficult to find questions that have shared needs. For example, {\textquotedblleft}Who is the best player for the Reds?{\textquotedblright} and {\textquotedblleft}Who is currently the biggest star at Manchester United?{\textquotedblright} have a shared need but are worded differently; also, {\textquotedblleft}Reds{\textquotedblright} and {\textquotedblleft}Manchester United{\textquotedblright} are used to refer to the soccer team Manchester United football club. In this research, we focus on question categories that contain a large number of named entities and entity name variations. We show that in these categories, entity linking can be used to identify relevant past resolved questions with shared needs as a given question by disambiguating named entities and matching these questions based on the disambiguated entities, identified entities, and knowledge base information related to these entities. We evaluated our algorithm on a new dataset constructed from Yahoo! Answers. The dataset contains annotated question pairs, (Qgiven, [Qpast, Answer]). We carried out experiments on several question categories and show that an entity-based approach gives good performance when searching for similar questions in entity rich categories. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,825 |
inproceedings | wang-etal-2016-whose | Whose Nickname is This? Recognizing Politicians from Their Aliases | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3910/ | Wang, Wei-Chung and Chen, Hung-Chen and Ji, Zhi-Kai and Hsiao, Hui-I and Chiu, Yu-Shian and Ku, Lun-Wei | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 61--69 | Using aliases to refer to public figures is one way to make fun of people, to express sarcasm, or even to sidestep legal issues when expressing opinions on social media. However, linking an alias back to the real name is difficult, as it entails phonemic, graphemic, and semantic challenges. In this paper, we propose a phonemic-based approach and inject semantic information to align aliases with politicians' Chinese formal names. The proposed approach creates an HMM model for each name to model its phonemes and takes into account document-level pairwise mutual information to capture the semantic relations to the alias. In this work we also introduce two new datasets consisting of 167 phonemic pairs and 279 mixed pairs of aliases and formal names. Experimental results show that the proposed approach models both phonemic and semantic information and outperforms previous work on both the phonemic and mixed datasets with the best top-1 accuracies of 0.78 and 0.59 respectively. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,826 |
inproceedings | jain-etal-2016-towards | Towards Accurate Event Detection in Social Media: A Weakly Supervised Approach for Learning Implicit Event Indicators | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3911/ | Jain, Ajit and Kasiviswanathan, Girish and Huang, Ruihong | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 70--77 | Accurate event detection in social media is very challenging because user generated contents are extremely noisy and sparse in content. Event indicators are generally words or phrases that act as a trigger that help us understand the semantics of the context they occur in. We present a weakly supervised approach that relies on using a single strong event indicator phrase as a seed to acquire a variety of additional event cues. We propose to leverage various types of implicit event indicators, such as props, actors and precursor events, to achieve precise event detection. We experimented with civil unrest events and show that the automatically learnt event indicators are effective in identifying specific types of events. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,827 |
inproceedings | albogamy-ramsay-2016-unsupervised | Unsupervised Stemmer for {A}rabic Tweets | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3912/ | Albogamy, Fahad and Ramsay, Allan | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 78--84 | Stemming is an essential processing step in a wide range of high level text processing applications such as information extraction, machine translation and sentiment analysis. It is used to reduce words to their stems. Many stemming algorithms have been developed for Modern Standard Arabic (MSA). Although Arabic tweets and MSA are closely related and share many characteristics, there are substantial differences between them in lexicon and syntax. In this paper, we introduce a light Arabic stemmer for Arabic tweets. Our results show improvements over the performance of a number of well-known stemmers for Arabic. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,828 |
inproceedings | su-etal-2016-topic | Topic Stability over Noisy Sources | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3913/ | Su, Jing and Greene, Derek and Boydell, Ois{\'i}n | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 85--93 | Topic modelling techniques such as LDA have recently been applied to speech transcripts and OCR output. These corpora may contain noisy or erroneous texts which may undermine topic stability. Therefore, it is important to know how well a topic modelling algorithm will perform when applied to noisy data. In this paper we show that different types of textual noise can have diverse effects on the stability of topic models. On the other hand, topic model stability is not consistent with the same type but different levels of noise. We introduce a dictionary filtering approach to address this challenge, with the result that a topic model with the correct number of topics is always identified across different levels of noise. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,829 |
inproceedings | pain-etal-2016-analysis | Analysis of {T}witter Data for Postmarketing Surveillance in Pharmacovigilance | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3914/ | Pain, Julie and Levacher, Jessie and Quinquenel, Adam and Belz, Anja | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 94--101 | Postmarketing surveillance (PMS) has the vital aim to monitor effects of drugs after release for use by the general population, but suffers from under-reporting and limited coverage. Automatic methods for detecting drug effect reports, especially for social media, could vastly increase the scope of PMS. Very few automatic PMS methods are currently available, in particular for the messy text types encountered on Twitter. In this paper we describe first results for developing PMS methods specifically for tweets. We describe the corpus of 125,669 tweets we have created and annotated to train and test the tools. We find that generic tools perform well for tweet-level language identification and tweet-level sentiment analysis (both 0.94 F1-Score). For detection of effect mentions we are able to achieve 0.87 F1-Score, while effect-level adverse-vs.-beneficial analysis proves harder with an F1-Score of 0.64. Among other things, our results indicate that MetaMap semantic types provide a very promising basis for identifying drug effect mentions in tweets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,830 |
inproceedings | belainine-etal-2016-named | Named Entity Recognition and Hashtag Decomposition to Improve the Classification of Tweets | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3915/ | Belainine, Billal and Fonseca, Alexsandro and Sadat, Fatiha | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 102--111 | In social networks services like Twitter, users are overwhelmed with huge amount of social data, most of which are short, unstructured and highly noisy. Identifying accurate information from this huge amount of data is indeed a hard task. Classification of tweets into organized form will help the user to easily access these required information. Our first contribution relates to filtering parts of speech and preprocessing this kind of highly noisy and short data. Our second contribution concerns the named entity recognition (NER) in tweets. Thus, the adaptation of existing language tools for natural languages, noisy and not accurate language tweets, is necessary. Our third contribution involves segmentation of hashtags and a semantic enrichment using a combination of relations from WordNet, which helps the performance of our classification system, including disambiguation of named entities, abbreviations and acronyms. Graph theory is used to cluster the words extracted from WordNet and tweets, based on the idea of connected components. We test our automatic classification system with four categories: politics, economy, sports and the medical field. We evaluate and compare several automatic classification systems using part or all of the items described in our contributions and found that filtering by part of speech and named entity recognition dramatically increase the classification precision to 77.3 {\%}. Moreover, a classification system incorporating segmentation of hashtags and semantic enrichment by two relations from WordNet, synonymy and hyperonymy, increase classification precision up to 83.4 {\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,831 |
inproceedings | costa-bertaglia-volpe-nunes-2016-exploring | Exploring Word Embeddings for Unsupervised Textual User-Generated Content Normalization | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3916/ | Costa Bertaglia, Thales Felipe and Volpe Nunes, Maria das Gra{\c{c}}as | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 112--120 | Text normalization techniques based on rules, lexicons or supervised training requiring large corpora are not scalable nor domain interchangeable, and this makes them unsuitable for normalizing user-generated content (UGC). Current tools available for Brazilian Portuguese make use of such techniques. In this work we propose a technique based on distributed representation of words (or word embeddings). It generates continuous numeric vectors of high-dimensionality to represent words. The vectors explicitly encode many linguistic regularities and patterns, as well as syntactic and semantic word relationships. Words that share semantic similarity are represented by similar vectors. Based on these features, we present a totally unsupervised, expandable and language and domain independent method for learning normalization lexicons from word embeddings. Our approach obtains high correction rate of orthographic errors and internet slang in product reviews, outperforming the current available tools for Brazilian Portuguese. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,832 |
inproceedings | boudin-etal-2016-document | How Document Pre-processing affects Keyphrase Extraction Performance | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3917/ | Boudin, Florian and Mougard, Hugo and Cram, Damien | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 121--128 | The SemEval-2010 benchmark dataset has brought renewed attention to the task of automatic keyphrase extraction. This dataset is made up of scientific articles that were automatically converted from PDF format to plain text and thus require careful preprocessing so that irrevelant spans of text do not negatively affect keyphrase extraction performance. In previous work, a wide range of document preprocessing techniques were described but their impact on the overall performance of keyphrase extraction models is still unexplored. Here, we re-assess the performance of several keyphrase extraction models and measure their robustness against increasingly sophisticated levels of document preprocessing. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,833 |
inproceedings | ikeda-etal-2016-japanese | {J}apanese Text Normalization with Encoder-Decoder Model | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3918/ | Ikeda, Taishi and Shindo, Hiroyuki and Matsumoto, Yuji | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 129--137 | Text normalization is the task of transforming lexical variants to their canonical forms. We model the problem of text normalization as a character-level sequence to sequence learning problem and present a neural encoder-decoder model for solving it. To train the encoder-decoder model, many sentences pairs are generally required. However, Japanese non-standard canonical pairs are scarce in the form of parallel corpora. To address this issue, we propose a method of data augmentation to increase data size by converting existing resources into synthesized non-standard forms using handcrafted rules. We conducted an experiment to demonstrate that the synthesized corpus contributes to stably train an encoder-decoder model and improve the performance of Japanese text normalization. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,834 |
inproceedings | limsopatham-collier-2016-bidirectional | Bidirectional {LSTM} for Named Entity Recognition in {T}witter Messages | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3920/ | Limsopatham, Nut and Collier, Nigel | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 145--152 | In this paper, we present our approach for named entity recognition in Twitter messages that we used in our participation in the Named Entity Recognition in Twitter shared task at the COLING 2016 Workshop on Noisy User-generated text (WNUT). The main challenge that we aim to tackle in our participation is the short, noisy and colloquial nature of tweets, which makes named entity recognition in Twitter message a challenging task. In particular, we investigate an approach for dealing with this problem by enabling bidirectional long short-term memory (LSTM) to automatically learn orthographic features without requiring feature engineering. In comparison with other systems participating in the shared task, our system achieved the most effective performance on both the {\textquoteleft}segmentation and categorisation' and the {\textquoteleft}segmentation only' sub-tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,836 |
inproceedings | espinosa-etal-2016-learning | Learning to recognise named entities in tweets by exploiting weakly labelled data | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3921/ | Espinosa, Kurt Junshean and Batista-Navarro, Riza Theresa and Ananiadou, Sophia | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 153--163 | Named entity recognition (NER) in social media (e.g., Twitter) is a challenging task due to the noisy nature of text. As part of our participation in the W-NUT 2016 Named Entity Recognition Shared Task, we proposed an unsupervised learning approach using deep neural networks and leverage a knowledge base (i.e., DBpedia) to bootstrap sparse entity types with weakly labelled data. To further boost the performance, we employed a more sophisticated tagging scheme and applied dropout as a regularisation technique in order to reduce overfitting. Even without hand-crafting linguistic features nor leveraging any of the W-NUT-provided gazetteers, we obtained robust performance with our approach, which ranked third amongst all shared task participants according to the official evaluation on a gold standard named entity-annotated corpus of 3,856 tweets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,837 |
inproceedings | sikdar-gamback-2016-feature | Feature-Rich {T}witter Named Entity Recognition and Classification | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3922/ | Sikdar, Utpal Kumar and Gamb{\"a}ck, Bj{\"o}rn | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 164--170 | Twitter named entity recognition is the process of identifying proper names and classifying them into some predefined labels/categories. The paper introduces a Twitter named entity system using a supervised machine learning approach, namely Conditional Random Fields. A large set of different features was developed and the system was trained using these. The Twitter named entity task can be divided into two parts: i) Named entity extraction from tweets and ii) Twitter name classification into ten different types. For Twitter named entity recognition on unseen test data, our system obtained the second highest F1 score in the shared task: 63.22{\%}. The system performance on the classification task was worse, with an F1 measure of 40.06{\%} on unseen test data, which was the fourth best of the ten systems participating in the shared task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,838 |
inproceedings | partalas-etal-2016-learning | Learning to Search for Recognizing Named Entities in {T}witter | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3923/ | Partalas, Ioannis and Lopez, C{\'e}dric and Derbas, Nadia and Kalitvianski, Ruslan | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 171--177 | We presented in this work our participation in the 2nd Named Entity Recognition for Twitter shared task. The task has been cast as a sequence labeling one and we employed a learning to search approach in order to tackle it. We also leveraged LOD for extracting rich contextual features for the named-entities. Our submission achieved F-scores of 46.16 and 60.24 for the classification and the segmentation tasks and ranked 2nd and 3rd respectively. The post-analysis showed that LOD features improved substantially the performance of our system as they counter-balance the lack of context in tweets. The shared task gave us the opportunity to test the performance of NER systems in short and noisy textual data. The results of the participated systems shows that the task is far to be considered as a solved one and methods with stellar performance in normal texts need to be revised. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,839 |
inproceedings | dugas-nichols-2016-deepnnner | {D}eep{NNNER}: Applying {BLSTM}-{CNN}s and Extended Lexicons to Named Entity Recognition in Tweets | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3924/ | Dugas, Fabrice and Nichols, Eric | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 178--187 | In this paper, we describe the DeepNNNER entry to The 2nd Workshop on Noisy User-generated Text (WNUT) Shared Task {\#}2: Named Entity Recognition in Twitter. Our shared task submission adopts the bidirectional LSTM-CNN model of Chiu and Nichols (2016), as it has been shown to perform well on both newswire and Web texts. It uses word embeddings trained on large-scale Web text collections together with text normalization to cope with the diversity in Web texts, and lexicons for target named entity classes constructed from publicly-available sources. Extended evaluation comparing the effectiveness of various word embeddings, text normalization, and lexicon settings shows that our system achieves a maximum F1-score of 47.24, performance surpassing that of the shared task`s second-ranked system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,840 |
inproceedings | gerguis-etal-2016-asu | {ASU}: An Experimental Study on Applying Deep Learning in {T}witter Named Entity Recognition. | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3925/ | Gerguis, Michel Naim and Salama, Cherif and El-Kharashi, M. Watheq | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 188--196 | This paper describes the ASU system submitted in the COLING W-NUT 2016 Twitter Named Entity Recognition (NER) task. We present an experimental study on applying deep learning to extracting named entities (NEs) from tweets. We built two Long Short-Term Memory (LSTM) models for the task. The first model was built to extract named entities without types while the second model was built to extract and then classify them into 10 fine-grained entity classes. In this effort, we show detailed experimentation results on the effectiveness of word embeddings, brown clusters, part-of-speech (POS) tags, shape features, gazetteers, and local context for the tweet input vector representation to the LSTM model. Also, we present a set of experiments, to better design the network parameters for the Twitter NER task. Our system was ranked the fifth out of ten participants with a final f1-score for the typed classes of 39{\%} and 55{\%} for the non typed ones. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,841 |
inproceedings | le-etal-2016-uqam | {UQAM}-{NTL}: Named entity recognition in {T}witter messages | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3926/ | Le, Ngoc Tan and Mallek, Fatma and Sadat, Fatiha | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 197--202 | This paper describes our system used in the 2nd Workshop on Noisy User-generated Text (WNUT) shared task for Named Entity Recognition (NER) in Twitter, in conjunction with Coling 2016. Our system is based on supervised machine learning by applying Conditional Random Fields (CRF) to train two classifiers for two evaluations. The first evaluation aims at predicting the 10 fine-grained types of named entities; while the second evaluation aims at predicting no type of named entities. The experimental results show that our method has significantly improved Twitter NER performance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,842 |
inproceedings | mishra-diesner-2016-semi | Semi-supervised Named Entity Recognition in noisy-text | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3927/ | Mishra, Shubhanshu and Diesner, Jana | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 203--212 | Many of the existing Named Entity Recognition (NER) solutions are built based on news corpus data with proper syntax. These solutions might not lead to highly accurate results when being applied to noisy, user generated data, e.g., tweets, which can feature sloppy spelling, concept drift, and limited contextualization of terms and concepts due to length constraints. The models described in this paper are based on linear chain conditional random fields (CRFs), use the BIEOU encoding scheme, and leverage random feature dropout for up-sampling the training data. The considered features include word clusters and pre-trained distributed word representations, updated gazetteer features, and global context predictions. The latter feature allows for ingesting the meaning of new or rare tokens into the system via unsupervised learning and for alleviating the need to learn lexicon based features, which usually tend to be high dimensional. In this paper, we report on the solution [ST] we submitted to the WNUT 2016 NER shared task. We also present an improvement over our original submission [SI], which we built by using semi-supervised learning on labelled training data and pre-trained resourced constructed from unlabelled tweet data. Our ST solution achieved an F1 score of 1.2{\%} higher than the baseline (35.1{\%} F1) for the task of extracting 10 entity types. The SI resulted in an increase of 8.2{\%} in F1 score over the base-line (7.08{\%} over ST). Finally, the SI model`s evaluation on the test data achieved a F1 score of 47.3{\%} ({\textasciitilde}1.15{\%} increase over the 2nd best submitted solution). Our experimental setup and results are available as a standalone twitter NER tool at \url{https://github.com/napsternxg/TwitterNER}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,843 |
inproceedings | jayasinghe-etal-2016-csiro | {CSIRO} {D}ata61 at the {WNUT} Geo Shared Task | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3929/ | Jayasinghe, Gaya and Jin, Brian and Mchugh, James and Robinson, Bella and Wan, Stephen | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 218--226 | In this paper, we describe CSIRO Data61`s participation in the Geolocation shared task at the Workshop for Noisy User-generated Text. Our approach was to use ensemble methods to capitalise on four component methods: heuristics based on metadata, a label propagation method, timezone text classifiers, and an information retrieval approach. The ensembles we explored focused on examining the role of language technologies in geolocation prediction and also in examining the use of hard voting and cascading ensemble methods. Based on the accuracy of city-level predictions, our systems were the best performing submissions at this year`s shared task. Furthermore, when estimating the latitude and longitude of a user, our median error distance was accurate to within 30 kilometers. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,845 |
inproceedings | chi-etal-2016-geolocation | Geolocation Prediction in {T}witter Using Location Indicative Words and Textual Features | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3930/ | Chi, Lianhua and Lim, Kwan Hui and Alam, Nebula and Butler, Christopher J. | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 227--234 | Knowing the location of a social media user and their posts is important for various purposes, such as the recommendation of location-based items/services, and locality detection of crisis/disasters. This paper describes our submission to the shared task {\textquotedblleft}Geolocation Prediction in Twitter{\textquotedblright} of the 2nd Workshop on Noisy User-generated Text. In this shared task, we propose an algorithm to predict the location of Twitter users and tweets using a multinomial Naive Bayes classifier trained on Location Indicative Words and various textual features (such as city/country names, {\#}hashtags and @mentions). We compared our approach against various baselines based on Location Indicative Words, city/country names, {\#}hashtags and @mentions as individual feature sets, and experimental results show that our approach outperforms these baselines in terms of classification accuracy, mean and median error distance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,846 |
inproceedings | miura-etal-2016-simple | A Simple Scalable Neural Networks based Model for Geolocation Prediction in {T}witter | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3931/ | Miura, Yasuhide and Taniguchi, Motoki and Taniguchi, Tomoki and Ohkuma, Tomoko | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 235--239 | This paper describes a model that we submitted to W-NUT 2016 Shared task {\#}1: Geolocation Prediction in Twitter. Our model classifies a tweet or a user to a city using a simple neural networks structure with fully-connected layers and average pooling processes. From the findings of previous geolocation prediction approaches, we integrated various user metadata along with message texts and trained the model with them. In the test run of the task, the model achieved the accuracy of 40.91{\%} and the median distance error of 69.50 km in message-level prediction and the accuracy of 47.55{\%} and the median distance error of 16.13 km in user-level prediction. These results are moderate performances in terms of accuracy and best performances in terms of distance. The results show a promising extension of neural networks based models for geolocation prediction where recent advances in neural networks can be added to enhance our current simple model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,847 |
inproceedings | tjong-kim-sang-2016-finding | Finding Rising and Falling Words | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4002/ | Tjong Kim Sang, Erik | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 2--9 | We examine two different methods for finding rising words (among which neologisms) and falling words (among which archaisms) in decades of magazine texts (millions of words) and in years of tweets (billions of words): one based on correlation coefficients of relative frequencies and time, and one based on comparing initial and final word frequencies of time intervals. We find that smoothing frequency scores improves the precision scores of both methods and that the correlation coefficients perform better on magazine text but worse on tweets. Since the two ranking methods find different words they can be used in side-by-side to study the behavior of words over time. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,850 |
inproceedings | sheng-etal-2016-dataset | A Dataset for Multimodal Question Answering in the Cultural Heritage Domain | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4003/ | Sheng, Shurong and Van Gool, Luc and Moens, Marie-Francine | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 10--17 | Multimodal question answering in the cultural heritage domain allows visitors to ask questions in a more natural way and thus provides better user experiences with cultural objects while visiting a museum, landmark or any other historical site. In this paper, we introduce the construction of a golden standard dataset that will aid research of multimodal question answering in the cultural heritage domain. The dataset, which will be soon released to the public, contains multimodal content including images of typical artworks from the fascinating old-Egyptian Amarna period, related image-containing documents of the artworks and over 800 multimodal queries integrating visual and textual questions. The multimodal questions and related documents are all in English. The multimodal questions are linked to relevant paragraphs in the related documents that contain the answer to the multimodal query. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,851 |
inproceedings | wohlgenannt-etal-2016-extracting | Extracting Social Networks from Literary Text with Word Embedding Tools | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4004/ | Wohlgenannt, Gerhard and Chernyak, Ekaterina and Ilvovsky, Dmitry | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 18--25 | In this paper a social network is extracted from a literary text. The social network shows, how frequent the characters interact and how similar their social behavior is. Two types of similarity measures are used: the first applies co-occurrence statistics, while the second exploits cosine similarity on different types of word embedding vectors. The results are evaluated by a paid micro-task crowdsourcing survey. The experiments suggest that specific types of word embeddings like word2vec are well-suited for the task at hand and the specific circumstances of literary fiction text. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,852 |
inproceedings | kutuzov-etal-2016-exploration | Exploration of register-dependent lexical semantics using word embeddings | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4005/ | Kutuzov, Andrey and Kuzmenko, Elizaveta and Marakasova, Anna | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 26--34 | We present an approach to detect differences in lexical semantics across English language registers, using word embedding models from distributional semantics paradigm. Models trained on register-specific subcorpora of the BNC corpus are employed to compare lists of nearest associates for particular words and draw conclusions about their semantic shifts depending on register in which they are used. The models are evaluated on the task of register classification with the help of the deep inverse regression approach. Additionally, we present a demo web service featuring most of the described models and allowing to explore word meanings in different English registers and to detect register affiliation for arbitrary texts. The code for the service can be easily adapted to any set of underlying models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,853 |
inproceedings | oka-kono-2016-original | Original-Transcribed Text Alignment for {M}anyosyu Written by {O}ld {J}apanese Language | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4006/ | Oka, Teruaki and Kono, Tomoaki | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 35--44 | We are constructing an annotated diachronic corpora of the Japanese language. In part of this work, we construct a corpus of Manyosyu, which is an old Japanese poetry anthology. In this paper, we describe how to align the transcribed text and its original text semiautomatically to be able to cross-reference them in our Manyosyu corpus. Although we align the original characters to the transcribed words manually, we preliminarily align the transcribed and original characters by using an unsupervised automatic alignment technique of statistical machine translation to alleviate the work. We found that automatic alignment achieves an F1-measure of 0.83; thus, each poem has 1{--}2 alignment errors. However, finding these errors and modifying them are less work-intensive and more efficient than fully manual annotation. The alignment probabilities can be utilized in this modification. Moreover, we found that we can locate the uncertain transcriptions in our corpus and compare them to other transcriptions, by using the alignment probabilities. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,854 |
inproceedings | belinkov-etal-2016-shamela | {S}hamela: A Large-Scale Historical {A}rabic Corpus | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4007/ | Belinkov, Yonatan and Magidow, Alexander and Romanov, Maxim and Shmidman, Avi and Koppel, Moshe | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 45--53 | Arabic is a widely-spoken language with a rich and long history spanning more than fourteen centuries. Yet existing Arabic corpora largely focus on the modern period or lack sufficient diachronic information. We develop a large-scale, historical corpus of Arabic of about 1 billion words from diverse periods of time. We clean this corpus, process it with a morphological analyzer, and enhance it by detecting parallel passages and automatically dating undated texts. We demonstrate its utility with selected case-studies in which we show its application to the digital humanities. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,855 |
inproceedings | buechel-etal-2016-feelings | Feelings from the {P}ast{---}{A}dapting Affective Lexicons for Historical Emotion Analysis | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4008/ | Buechel, Sven and Hellrich, Johannes and Hahn, Udo | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 54--61 | We here describe a novel methodology for measuring affective language in historical text by expanding an affective lexicon and jointly adapting it to prior language stages. We automatically construct a lexicon for word-emotion association of 18th and 19th century German which is then validated against expert ratings. Subsequently, this resource is used to identify distinct emotional patterns and trace long-term emotional trends in different genres of writing spanning several centuries. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,856 |
inproceedings | eckhoff-berdicevskis-2016-automatic | Automatic parsing as an efficient pre-annotation tool for historical texts | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4009/ | Eckhoff, Hanne Martine and Berdi{\v{c}}evskis, Aleksandrs | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 62--70 | Historical treebanks tend to be manually annotated, which is not surprising, since state-of-the-art parsers are not accurate enough to ensure high-quality annotation for historical texts. We test whether automatic parsing can be an efficient pre-annotation tool for Old East Slavic texts. We use the TOROT treebank from the PROIEL treebank family. We convert the PROIEL format to the CONLL format and use MaltParser to create syntactic pre-annotation. Using the most conservative evaluation method, which takes into account PROIEL-specific features, MaltParser by itself yields 0.845 unlabelled attachment score, 0.779 labelled attachment score and 0.741 secondary dependency accuracy (note, though, that the test set comes from a relatively simple genre and contains rather short sentences). Experiments with human annotators show that preparsing, if limited to sentences where no changes to word or sentence boundaries are required, increases their annotation rate. For experienced annotators, the speed gain varies from 5.80{\%} to 16.57{\%}, for inexperienced annotators from 14.61{\%} to 32.17{\%} (using conservative estimates). There are no strong reliable differences in the annotation accuracy, which means that there is no reason to suspect that using preparsing might lower the final annotation quality. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,857 |
inproceedings | bucur-nisioi-2016-visual | A Visual Representation of {W}ittgenstein`s {T}ractatus Logico-Philosophicus | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4010/ | Bucur, Anca and Nisioi, Sergiu | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 71--75 | In this paper we will discuss a method for data visualization together with its potential usefulness in digital humanities and philosophy of language. We compiled a multilingual parallel corpus from different versions of \textit{Wittgenstein`s Tractatus Logico-philosophicus}, including the original in German and translations into English, Spanish, French, and Russian. Using this corpus, we compute a similarity measure between propositions and render a visual network of relations for different languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,858 |
inproceedings | eckart-de-castilho-etal-2016-web | A Web-based Tool for the Integrated Annotation of Semantic and Syntactic Structures | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4011/ | Eckart de Castilho, Richard and M{\'u}jdricza-Maydt, {\'E}va and Yimam, Seid Muhie and Hartmann, Silvana and Gurevych, Iryna and Frank, Anette and Biemann, Chris | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 76--84 | We introduce the third major release of WebAnno, a generic web-based annotation tool for distributed teams. New features in this release focus on semantic annotation tasks (e.g. semantic role labelling or event annotation) and allow the tight integration of semantic annotations with syntactic annotations. In particular, we introduce the concept of slot features, a novel constraint mechanism that allows modelling the interaction between semantic and syntactic annotations, as well as a new annotation user interface. The new features were developed and used in an annotation project for semantic roles on German texts. The paper briefly introduces this project and reports on experiences performing annotations with the new tool. On a comparative evaluation, our tool reaches significant speedups over WebAnno 2 for a semantic annotation task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,859 |
inproceedings | erdmann-etal-2016-challenges | Challenges and Solutions for {L}atin Named Entity Recognition | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4012/ | Erdmann, Alexander and Brown, Christopher and Joseph, Brian and Janse, Mark and Ajaka, Petra and Elsner, Micha and de Marneffe, Marie-Catherine | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 85--93 | Although spanning thousands of years and genres as diverse as liturgy, historiography, lyric and other forms of prose and poetry, the body of Latin texts is still relatively sparse compared to English. Data sparsity in Latin presents a number of challenges for traditional Named Entity Recognition techniques. Solving such challenges and enabling reliable Named Entity Recognition in Latin texts can facilitate many down-stream applications, from machine translation to digital historiography, enabling Classicists, historians, and archaeologists for instance, to track the relationships of historical persons, places, and groups on a large scale. This paper presents the first annotated corpus for evaluating Named Entity Recognition in Latin, as well as a fully supervised model that achieves over 90{\%} F-score on a held-out test set, significantly outperforming a competitive baseline. We also present a novel active learning strategy that predicts how many and which sentences need to be annotated for named entities in order to attain a specified degree of accuracy when recognizing named entities automatically in a given text. This maximizes the productivity of annotators while simultaneously controlling quality. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,860 |
inproceedings | petran-2016-geographical | Geographical Visualization of Search Results in Historical Corpora | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4013/ | Petran, Florian | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 94--100 | We present ANNISVis, a webapp for comparative visualization of geographical distribution of linguistic data, as well as a sample deployment for a corpus of Middle High German texts. Unlike existing geographical visualization solutions, which work with pre-existing data sets, or are bound to specific corpora, ANNISVis allows the user to formulate multiple ad-hoc queries and visualizes them on a map, and it can be configured for any corpus that can be imported into ANNIS. This enables explorative queries of the quantitative aspects of a corpus with geographical features. The tool will be made available to download in open source. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,861 |
inproceedings | jongejan-2016-implementation | Implementation of a Workflow Management System for Non-Expert Users | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4014/ | Jongejan, Bart | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 101--108 | In the Danish CLARIN-DK infrastructure, chaining language technology (LT) tools into a workflow is easy even for a non-expert user, because she only needs to specify the input and the desired output of the workflow. With this information and the registered input and output profiles of the available tools, the CLARIN-DK workflow management system (WMS) computes combinations of tools that will give the desired result. This advanced functionality was originally not envisaged, but came within reach by writing the WMS partly in Java and partly in a programming language for symbolic computation, Bracmat. Handling LT tool profiles, including the computation of workflows, is easier with Bracmat`s language constructs for tree pattern matching and tree construction than with the language constructs offered by mainstream programming languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,862 |
inproceedings | afli-way-2016-integrating | Integrating Optical Character Recognition and Machine Translation of Historical Documents | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4015/ | Afli, Haithem and Way, Andy | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 109--116 | Machine Translation (MT) plays a critical role in expanding capacity in the translation industry. However, many valuable documents, including digital documents, are encoded in non-accessible formats for machine processing (e.g., Historical or Legal documents). Such documents must be passed through a process of Optical Character Recognition (OCR) to render the text suitable for MT. No matter how good the OCR is, this process introduces recognition errors, which often renders MT ineffective. In this paper, we propose a new OCR to MT framework based on adding a new OCR error correction module to enhance the overall quality of translation. Experimentation shows that our new system correction based on the combination of Language Modeling and Translation methods outperforms the baseline system by nearly 30{\%} relative improvement. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,863 |
inproceedings | hunyadi-etal-2016-language | Language technology tools and resources for the analysis of multimodal communication | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4016/ | Hunyadi, L{\'a}szl{\'o} and V{\'a}radi, Tam{\'a}s and Szekr{\'e}nyes, Istv{\'a}n | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 117--124 | In this paper we describe how the complexity of human communication can be analysed with the help of language technology. We present the HuComTech corpus, a multimodal corpus containing 50 hours of videotaped interviews containing a rich annotation of about 2 million items annotated on 33 levels. The corpus serves as a general resource for a wide range of research addressing natural conversation between humans in their full complexity. It can benefit particularly digital humanities researchers working in the field of pragmatics, conversational analysis and discourse analysis. We will present a number of tools and automated methods that can help such enquiries. In particular, we will highlight the tool Theme, which is designed to uncover hidden temporal patterns (called T-patterns) in human interaction, and will show how it can be applied to the study of multimodal communication. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,864
inproceedings | baumann-meyer-sickendiek-2016-large | Large-scale Analysis of Spoken Free-verse Poetry | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4017/ | Baumann, Timo and Meyer-Sickendiek, Burkhard | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 125--130 | Most modern and post-modern poems have developed a post-metrical idea of lyrical prosody that employs rhythmical features of everyday language and prose instead of a strict adherence to rhyme and metrical schemes. This development is subsumed under the term free verse prosody. We present our methodology for the large-scale analysis of modern and post-modern poetry in both their written form and as spoken aloud by the author. We employ language processing tools to align text and speech, to generate a null-model of how the poem would be spoken by a na{\"i}ve reader, and to extract contrastive prosodic features used by the poet. On these, we intend to build our model of free verse prosody, which will help to understand, differentiate and relate the different styles of free verse poetry. We plan to use our processing scheme on large amounts of data to iteratively build models of styles, to validate and guide manual style annotation, to identify further rhythmical categories, and ultimately to broaden our understanding of free verse poetry. In this paper, we report on a proof-of-concept of our methodology using smaller amounts of poems and a limited set of features. We find that our methodology helps to extract differentiating features in the authors' speech that can be explained by philological insight. Thus, our automatic method helps to guide the literary analysis and this in turn helps to improve our computational models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,865
inproceedings | van-der-sluis-etal-2016-pat | {PAT} workbench: Annotation and Evaluation of Text and Pictures in Multimodal Instructions | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4018/ | van der Sluis, Ielka and Kloppenburg, Lennart and Redeker, Gisela | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 131--139 | This paper presents a tool to investigate the design of multimodal instructions (MIs), i.e., instructions that contain both text and pictures. The benefit of including pictures in information presentation has been established, but the characteristics of those pictures and of their textual counterparts and the relation(s) between them have not been researched in a systematic manner. We present the PAT Workbench, a tool to store, annotate and retrieve MIs based on a validated coding scheme with currently 42 categories that describe instructions in terms of textual features, pictorial elements, and relations between text and pictures. We describe how the PAT Workbench facilitates collaborative annotation and inter-annotator agreement calculation. Future work on the tool includes expanding its functionality and usability by (i) making the MI annotation scheme dynamic for adding relevant features based on empirical evaluations of the MIs, (ii) implementing algorithms for automatic tagging of MI features, and (iii) implementing automatic MI evaluation algorithms based on results obtained via e.g. crowdsourced assessments of MIs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,866
inproceedings | raganato-etal-2016-semantic | Semantic Indexing of Multilingual Corpora and its Application on the History Domain | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4019/ | Raganato, Alessandro and Camacho-Collados, Jose and Raganato, Antonio and Joung, Yunseo | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 140--147 | The increasing amount of multilingual text collections available in different domains makes its automatic processing essential for the development of a given field. However, standard processing techniques based on statistical clues and keyword searches have clear limitations. Instead, we propose a knowledge-based processing pipeline which overcomes most of the limitations of these techniques. This, in turn, enables direct comparison across texts in different languages without the need of translation. In this paper we show the potential of this approach for semantically indexing multilingual text collections in the history domain. In our experiments we used a version of the Bible translated in four different languages, evaluating the precision of our semantic indexing pipeline and showing its reliability on the cross-lingual text retrieval task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,867 |
inproceedings | tiedemann-etal-2016-tagging | Tagging {I}ngush - Language Technology For Low-Resource Languages Using Resources From Linguistic Field Work | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4020/ | Tiedemann, J{\"o}rg and Nichols, Johanna and Sprouse, Ronald | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 148--155 | This paper presents on-going work on creating NLP tools for under-resourced languages from very sparse training data coming from linguistic field work. In this work, we focus on Ingush, a Nakh-Daghestanian language spoken by about 300,000 people in the Russian republics Ingushetia and Chechnya. We present work on morphosyntactic taggers trained on transcribed and linguistically analyzed recordings and dependency parsers using English glosses to project annotation for creating synthetic treebanks. Our preliminary results are promising, supporting the goal of bootstrapping efficient NLP tools with limited or no task-specific annotated data resources available. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,868
inproceedings | sadoun-etal-2016-multital | The {M}ulti{T}al {NLP} tool infrastructure | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4021/ | Sadoun, Driss and Mkhitaryan, Satenik and Nouvel, Damien and Valette, Mathieu | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 156--163 | This paper gives an overview of the MultiTal project, which aims to create a research infrastructure that ensures long-term distribution of NLP tools descriptions. The goal is to make NLP tools more accessible and usable to end-users of different disciplines. The infrastructure is built on a meta-data scheme modelling and standardising multilingual NLP tools documentation. The model is conceptualised using an OWL ontology. The formal representation of the ontology allows us to automatically generate organised and structured documentation in different languages for each represented tool. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,869 |
inproceedings | khan-etal-2016-tools | Tools and Instruments for Building and Querying Diachronic Computational Lexica | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4022/ | Khan, Fahad and Bellandi, Andrea and Monachini, Monica | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 164--171 | This article describes work on enabling the addition of temporal information to senses of words in linguistic linked open data lexica based on the lemonDia model. Our contribution in this article is twofold. On the one hand, we demonstrate how lemonDia enables the querying of diachronic lexical datasets using OWL-oriented Semantic Web based technologies. On the other hand, we present a preliminary version of an interactive interface intended to help users in creating lexical datasets that model meaning change over time. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,870 |
inproceedings | liu-luo-2016-tracking | Tracking Words in {C}hinese Poetry of {T}ang and {S}ong Dynasties with the {C}hina {B}iographical {D}atabase | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4023/ | Liu, Chao-Lin and Luo, Kuo-Feng | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 172--180 | Large-scale comparisons between the poetry of Tang and Song dynasties shed light on how words and expressions were used and shared among the poets. That some words were used only in the Tang poetry and some only in the Song poetry could lead to interesting research in linguistics. That the most frequent colors are different in the Tang and Song poetry provides a trace of the changing social circumstances in the dynasties. Results of the current work link to research topics of lexicography, semantics, and social transitions. We discuss our findings and present our algorithms for efficient comparisons among the poems, which are crucial for completing billions of comparisons within acceptable time. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,871
inproceedings | stahn-etal-2016-using | Using {TEI} for textbook research | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4024/ | Stahn, Lena-Luise and Hennicke, Steffen and De Luca, Ernesto William | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 181--186 | The following paper describes the first steps in the development of an ontology for the textbook research discipline. The aim of the project WorldViews is to establish a digital edition focussing on views of the world depicted in textbooks. For this purpose an initial TEI profile has been formalised and tested as a use case to enable the semantical encoding of the resource {\textquoteleft}textbook'. This profile shall provide a basic data model describing major facets of the textbook's structure relevant to historians. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,872
inproceedings | ogrodniczuk-2016-web | Web services and data mining: combining linguistic tools for {P}olish with an analytical platform | Hinrichs, Erhard and Hinrichs, Marie and Trippel, Thorsten | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4025/ | Ogrodniczuk, Maciej | Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities ({LT}4{DH}) | 187--195 | In this paper we present a new combination of existing language tools for Polish with a popular data mining platform intended to help researchers from digital humanities perform computational analyses without any programming. The toolset includes RapidMiner Studio, a software solution offering graphical setup of integrated analytical processes and Multiservice, a Web service offering access to several state-of-the-art linguistic tools for Polish. The setting is verified in a simple task of counting frequencies of unknown words in a small corpus. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,873 |
inproceedings | jimenez-lopez-becerra-bonache-2016-machine | Could Machine Learning Shed Light on Natural Language Complexity? | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4101/ | Jim{\'e}nez-L{\'o}pez, Maria Dolores and Becerra-Bonache, Leonor | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 1--11 | In this paper, we propose to use a subfield of machine learning {--}grammatical inference{--} to measure linguistic complexity from a developmental point of view. We focus on relative complexity by considering a child learner in the process of first language acquisition. The relevance of grammatical inference models for measuring linguistic complexity from a developmental point of view is based on the fact that algorithms proposed in this area can be considered computational models for studying first language acquisition. Even though it will be possible to use different techniques from the field of machine learning as computational models for dealing with linguistic complexity {--}since in any model we have algorithms that can learn from data{--}, we claim that grammatical inference models offer some advantages over other tools. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,875
inproceedings | chersoni-etal-2016-towards | Towards a Distributional Model of Semantic Complexity | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4102/ | Chersoni, Emmanuele and Blache, Philippe and Lenci, Alessandro | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 12--22 | In this paper, we introduce for the first time a Distributional Model for computing semantic complexity, inspired by the general principles of the Memory, Unification and Control framework (Hagoort, 2013; Hagoort, 2016). We argue that sentence comprehension is an incremental process driven by the goal of constructing a coherent representation of the event represented by the sentence. The composition cost of a sentence depends on the semantic coherence of the event being constructed and on the activation degree of the linguistic constructions. We also report the results of a first evaluation of the model on the Bicknell dataset (Bicknell et al., 2010). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,876
inproceedings | marcus-etal-2016-cocogen | {C}o{C}o{G}en - Complexity Contour Generator: Automatic Assessment of Linguistic Complexity Using a Sliding-Window Technique | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4103/ | Marcus, Str{\"o}bel and Kerz, Elma and Wiechmann, Daniel and Neumann, Stella | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 23--31 | We present a novel approach to the automatic assessment of text complexity based on a sliding-window technique that tracks the distribution of complexity within a text. Such distribution is captured by what we term {\textquotedblleft}complexity contours{\textquotedblright} derived from a series of measurements for a given linguistic complexity measure. This approach is implemented in an automatic computational tool, CoCoGen {--} Complexity Contour Generator, which in its current version supports 32 indices of linguistic complexity. The goal of the paper is twofold: (1) to introduce the design of our computational tool based on a sliding-window technique and (2) to showcase this approach in the area of second language (L2) learning, i.e. more specifically, in the area of L2 writing. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,877
inproceedings | van-schijndel-schuler-2016-addressing | Addressing surprisal deficiencies in reading time models | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4104/ | van Schijndel, Marten and Schuler, William | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 32--37 | This study demonstrates a weakness in how n-gram and PCFG surprisal are used to predict reading times in eye-tracking data. In particular, the information conveyed by words skipped during saccades is not usually included in the surprisal measures. This study shows that correcting the surprisal calculation improves n-gram surprisal and that upcoming n-grams affect reading times, replicating previous findings of how lexical frequencies affect reading times. In contrast, the predictivity of PCFG surprisal does not benefit from the surprisal correction despite the fact that lexical sequences skipped by saccades are processed by readers, as demonstrated by the corrected n-gram measure. These results raise questions about the formulation of information-theoretic measures of syntactic processing such as PCFG surprisal and entropy reduction when applied to reading times. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,878 |
inproceedings | vajjala-etal-2016-towards | Towards grounding computational linguistic approaches to readability: Modeling reader-text interaction for easy and difficult texts | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4105/ | Vajjala, Sowmya and Meurers, Detmar and Eitel, Alexander and Scheiter, Katharina | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 38--48 | Computational approaches to readability assessment are generally built and evaluated using gold standard corpora labeled by publishers or teachers rather than being grounded in observations about human performance. Considering that both the reading process and the outcome can be observed, there is an empirical wealth that could be used to ground computational analysis of text readability. This will also support explicit readability models connecting text complexity and the reader's language proficiency to the reading process and outcomes. This paper takes a step in this direction by reporting on an experiment to study how the relation between text complexity and the reader's language proficiency affects the reading process and performance outcomes of readers after reading. We modeled the reading process using three eye tracking variables: fixation count, average fixation count, and second pass reading duration. Our models for these variables explained 78.9{\%}, 74{\%} and 67.4{\%} variance, respectively. Performance outcome was modeled through recall and comprehension questions, and these models explained 58.9{\%} and 27.6{\%} of the variance, respectively. While the online models give us a better understanding of the cognitive correlates of reading with text complexity and language proficiency, modeling of the offline measures can be particularly relevant for incorporating user aspects into readability models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,879
inproceedings | shain-etal-2016-memory-access | Memory access during incremental sentence processing causes reading time latency | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4106/ | Shain, Cory and van Schijndel, Marten and Futrell, Richard and Gibson, Edward and Schuler, William | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 49--58 | Studies on the role of memory as a predictor of reading time latencies (1) differ in their predictions about when memory effects should occur in processing and (2) have had mixed results, with strong positive effects emerging from isolated constructed stimuli and weak or even negative effects emerging from naturally-occurring stimuli. Our study addresses these concerns by comparing several implementations of prominent sentence processing theories on an exploratory corpus and evaluating the most successful of these on a confirmatory corpus, using a new self-paced reading corpus of seemingly natural narratives constructed to contain an unusually high proportion of memory-intensive constructions. We show highly significant and complementary broad-coverage latency effects both for predictors based on the Dependency Locality Theory and for predictors based on a left-corner parsing model of sentence processing. Our results indicate that memory access during sentence processing does take time, but suggest that stimuli requiring many memory access events may be necessary in order to observe the effect. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,880 |
inproceedings | gala-ziegler-2016-reducing | Reducing lexical complexity as a tool to increase text accessibility for children with dyslexia | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4107/ | Gala, N{\'u}ria and Ziegler, Johannes | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 59--66 | Lexical complexity plays a central role in readability, particularly for dyslexic children and poor readers because of their slow and laborious decoding and word recognition skills. Although some features to aid readability may be common to most languages (e.g., the majority of {\textquoteleft}easy' words are of low frequency), we believe that lexical complexity is mainly language-specific. In this paper, we define lexical complexity for French and we present a pilot study on the effects of text simplification in dyslexic children. The participants were asked to read out loud original and manually simplified versions of a standardized French text corpus and to answer comprehension questions after reading each text. The analysis of the results shows that the simplifications performed were beneficial in terms of reading speed and they reduced the number of reading errors (mainly lexical ones) without a loss in comprehension. Although the number of participants in this study was rather small (N=10), the results are promising and contribute to the development of applications in computational linguistics. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,881 |
inproceedings | delmonte-2016-syntactic | Syntactic and Lexical Complexity in {I}talian Noncanonical Structures | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4108/ | Delmonte, Rodolfo | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 67--78 | In this paper we will be dealing with different levels of complexity in the processing of Italian, a Romance language inheriting many properties from Latin which make it an almost free word order language. The paper is concerned with syntactic complexity as measurable on the basis of the cognitive parser that incrementally builds up a syntactic representation to be used by the semantic component. The underlying theory will be LFG, and parsing preferences will be used to justify one choice both from a principled and a processing point of view. LFG is a transformationless theory in which there is no deep structure separate from surface syntactic structure. This is partially in accordance with constructional theories in which noncanonical structures containing non-argument functions FOCUS/TOPIC are treated as multifunctional constituents. Complexity is computed on a processing basis following suggestions made by Blache and demonstrated by Kluender and Chesi. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,882
inproceedings | shi-etal-2016-real | Real Multi-Sense or Pseudo Multi-Sense: An Approach to Improve Word Representation | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4109/ | Shi, Haoyue and Li, Caihua and Hu, Junfeng | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 79--88 | Previous research has shown that learning multiple representations for polysemous words can improve the performance of word embeddings on many tasks. However, this leads to another problem. Several vectors of a word may actually point to the same meaning, namely pseudo multi-sense. In this paper, we introduce the concept of pseudo multi-sense, and then propose an algorithm to detect such cases. With the consideration of the detected pseudo multi-sense cases, we try to refine the existing word embeddings to eliminate the influence of pseudo multi-sense. Moreover, we applied our algorithm to previously released multi-sense word embeddings and tested it on artificial word similarity tasks and the analogy task. The result of the experiments shows that diminishing pseudo multi-sense can improve the quality of word representations. Thus, our method is actually an efficient way to reduce linguistic complexity. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,883
inproceedings | gonzalez-dios-etal-2016-preliminary | A Preliminary Study of Statistically Predictive Syntactic Complexity Features and Manual Simplifications in {B}asque | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4110/ | Gonzalez-Dios, Itziar and Aranzabe, Mar{\'i}a Jes{\'u}s and D{\'i}az de Ilarraza, Arantza | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 89--97 | In this paper, we present a comparative analysis of statistically predictive syntactic features of complexity and the treatment of these features by humans when simplifying texts. To that end, we have used a list of the five most statistically predictive features obtained automatically and the Corpus of Basque Simplified Texts (CBST) to analyse how the syntactic phenomena in these features have been manually simplified. Our aim is to go beyond the descriptions of operations found in the corpus and relate the multidisciplinary findings to understand text complexity from different points of view. We also present some issues that can be important when analysing linguistic complexity. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,884
inproceedings | heilmann-neumann-2016-dynamic | Dynamic pause assessment of keystroke logged data for the detection of complexity in translation and monolingual text production | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4111/ | Heilmann, Arndt and Neumann, Stella | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 98--103 | Pause analysis of key-stroke logged translations is a hallmark of process based translation studies. However, an exact definition of what a cognitively effortful pause during the translation process is has not been found yet (Saldanha and O'Brien, 2013). This paper investigates the design of a key-stroke and subject dependent identification system of cognitive effort to track complexity in translation with keystroke logging (cf. also (Dragsted, 2005) (Couto-Vale, in preparation)). It is an elastic measure that takes into account idiosyncratic pause duration of translators as well as further confounds such as bi-gram frequency, letter frequency and some motor tasks involved in writing. The method is compared to a common static threshold of 1000 ms in an analysis of cognitive effort during the translation of grammatical functions from English to German. Additionally, the results are triangulated with eye tracking data for further validation. The findings show that at least for smaller sets of data a dynamic pause assessment may lead to more accurate results than a generic static pause threshold of similar duration. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,885
inproceedings | falkenjack-jonsson-2016-implicit | Implicit readability ranking using the latent variable of a {B}ayesian Probit model | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4112/ | Falkenjack, Johan and J{\"o}nsson, Arne | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 104--112 | Data driven approaches to readability analysis for languages other than English have been plagued by a scarcity of suitable corpora. Often, relevant corpora consist only of easy-to-read texts with no rank information or empirical readability scores, making only binary approaches, such as classification, applicable. We propose a Bayesian, latent variable, approach to get the most out of these kinds of corpora. In this paper we present results on using such a model for readability ranking. The model is evaluated on a preliminary corpus of ranked student texts with encouraging results. We also assess the model by showing that it performs readability classification on par with a state of the art classifier while at the same time being transparent enough to allow more sophisticated interpretations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,886
inproceedings | chen-meurers-2016-ctap | {CTAP}: A Web-Based Tool Supporting Automatic Complexity Analysis | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4113/ | Chen, Xiaobin and Meurers, Detmar | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 113--119 | Informed by research on readability and language acquisition, computational linguists have developed sophisticated tools for the analysis of linguistic complexity. While some tools are starting to become accessible on the web, there still is a disconnect between the features that can in principle be identified based on state-of-the-art computational linguistic analysis, and the analyses a second language acquisition researcher, teacher, or textbook writer can readily obtain and visualize for their own collection of texts. This short paper presents a web-based tool development that aims to meet this challenge. The Common Text Analysis Platform (CTAP) is designed to support fully configurable linguistic feature extraction for a wide range of complexity analyses. It features a user-friendly interface, modularized and reusable analysis component integration, and flexible corpus and feature management. Building on the Unstructured Information Management framework (UIMA), CTAP readily supports integration of state-of-the-art NLP and complexity feature extraction maintaining modularization and reusability. CTAP thereby aims at providing a common platform for complexity analysis, encouraging research collaboration and sharing of feature extraction components{---}to jointly advance the state-of-the-art in complexity analysis in a form that readily supports real-life use by ordinary users. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,887 |
inproceedings | pilan-etal-2016-coursebook | Coursebook Texts as a Helping Hand for Classifying Linguistic Complexity in Language Learners' Writings | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4114/ | Pil{\'a}n, Ildik{\'o} and Alfter, David and Volodina, Elena | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 120--126 | We bring together knowledge from two different types of language learning data, texts learners read and texts they write, to improve linguistic complexity classification in the latter. Linguistic complexity in the foreign and second language learning context can be expressed in terms of proficiency levels. We show that incorporating features capturing lexical complexity information from reading passages can boost significantly the machine learning based classification of learner-written texts into proficiency levels. With an F1 score of .8 our system rivals state-of-the-art results reported for other languages for this task. Finally, we present a freely available web-based tool for proficiency level classification and lexical complexity visualization for both learner writings and reading texts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,888 |
inproceedings | zaghouani-etal-2016-using | Using Ambiguity Detection to Streamline Linguistic Annotation | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4115/ | Zaghouani, Wajdi and Hawwari, Abdelati and Alqahtani, Sawsan and Bouamor, Houda and Ghoneim, Mahmoud and Diab, Mona and Oflazer, Kemal | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 127--136 | Arabic writing is typically underspecified for short vowels and other markups, referred to as diacritics. In addition to the lexical ambiguity exhibited in most languages, the lack of diacritics in written Arabic adds another layer of ambiguity which is an artifact of the orthography. In this paper, we present the details of three experimental annotation conditions designed to study the impact of automatic ambiguity detection on annotation speed and quality in a large scale annotation project. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,889
inproceedings | bjerva-borstell-2016-morphological | Morphological Complexity Influences Verb-Object Order in {S}wedish {S}ign {L}anguage | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4116/ | Bjerva, Johannes and B{\"o}rstell, Carl | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 137--141 | Computational linguistic approaches to sign languages could benefit from investigating how complexity influences structure. We investigate whether morphological complexity has an effect on the order of Verb (V) and Object (O) in Swedish Sign Language (SSL), on the basis of elicited data from five Deaf signers. We find a significant difference in the distribution of the orderings OV vs. VO, based on an analysis of morphological weight. While morphologically heavy verbs exhibit a general preference for OV, humanness seems to affect the ordering in the opposite direction, with [+human] Objects pushing towards a preference for VO. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,890
inproceedings | bentz-etal-2016-comparison | A Comparison Between Morphological Complexity Measures: Typological Data vs. Language Corpora | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4117/ | Bentz, Christian and Ruzsics, Tatyana and Koplenig, Alexander and Samard{\v{z}}i{\'c}, Tanja | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 142--153 | Language complexity is an intriguing phenomenon argued to play an important role in both language learning and processing. The need to compare languages with regard to their complexity resulted in a multitude of approaches and methods, ranging from accounts targeting specific structural features to global quantification of variation more generally. In this paper, we investigate the degree to which morphological complexity measures are mutually correlated in a sample of more than 500 languages of 101 language families. We use human expert judgements from the World Atlas of Language Structures (WALS), and compare them to four quantitative measures automatically calculated from language corpora. These consist of three previously defined corpus-derived measures, which are all monolingual, and one new measure based on automatic word-alignment across pairs of languages. We find strong correlations between all the measures, illustrating that both expert judgements and automated approaches converge to similar complexity ratings, and can be used interchangeably. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,891 |
inproceedings | albertsson-etal-2016-similarity | Similarity-Based Alignment of Monolingual Corpora for Text Simplification Purposes | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4118/ | Albertsson, Sarah and Rennes, Evelina and J{\"o}nsson, Arne | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 154--163 | Comparable or parallel corpora are beneficial for many NLP tasks. The automatic collection of corpora enables large-scale resources, even for less-resourced languages, which in turn can be useful for deducing rules and patterns for text rewriting algorithms, a subtask of automatic text simplification. We present two methods for the alignment of Swedish easy-to-read text segments to text segments from a reference corpus. The first method (M1) was originally developed for the task of text reuse detection, measuring sentence similarity by a modified version of a TF-IDF vector space model. A second method (M2), also accounting for part-of-speech tags, was developed, and the methods were compared. For evaluation, a crowdsourcing platform was built for human judgement data collection, and preliminary results showed that cosine similarity relates better to human ranks than the Dice coefficient. We also saw a tendency that including syntactic context in the TF-IDF vector space model is beneficial for this kind of paraphrase alignment task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,892
inproceedings | wagner-filho-etal-2016-automatic | Automatic Construction of Large Readability Corpora | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4119/ | Wagner Filho, Jorge Alberto and Wilkens, Rodrigo and Villavicencio, Aline | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 164--173 | This work presents a framework for the automatic construction of large Web corpora classified by readability level. We compare different Machine Learning classifiers for the task of readability assessment focusing on Portuguese and English texts, analysing the impact of variables like the feature inventory used in the resulting corpus. In a comparison between shallow and deeper features, the former already produce F-measures of over 0.75 for Portuguese texts, but the use of additional features results in even better results, in most cases. For English, shallow features also perform well as do classic readability formulas. Comparing different classifiers for the task, logistic regression obtained, in general, the best results, but with considerable differences between the results for two classes and those for three classes, especially regarding the intermediary class. Given the large scale of the resulting corpus, for evaluation we adopt the agreement between different classifiers as an indication of readability assessment certainty. As a result of this work, a large corpus for Brazilian Portuguese was built, including 1.7 million documents and about 1.6 billion tokens, already parsed and annotated with 134 different textual attributes, along with the agreement among the various classifiers. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,893
inproceedings | bloem-2016-testing | Testing the Processing Hypothesis of word order variation using a probabilistic language model | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4120/ | Bloem, Jelke | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 174--185 | This work investigates the application of a measure of surprisal to modeling a grammatical variation phenomenon between near-synonymous constructions. We investigate a particular variation phenomenon, word order variation in Dutch two-verb clusters, where it has been established that word order choice is affected by processing cost. Several multifactorial corpus studies of Dutch verb clusters have used other measures of processing complexity to show that this factor affects word order choice. This previous work allows us to compare the surprisal measure, which is based on constraint satisfaction theories of language modeling, to those previously used measures, which are more directly linked to empirical observations of processing complexity. Our results show that surprisal does not predict the word order choice by itself, but is a significant predictor when used in a measure of uniform information density (UID). This lends support to the view that human language processing is facilitated not so much by predictable sequences of words but more by sequences of words in which information is spread evenly. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,894 |
inproceedings | li-etal-2016-temporal | Temporal Lobes as Combinatory Engines for both Form and Meaning | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4121/ | Li, Jixing and Brennan, Jonathan and Mahar, Adam and Hale, John | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 186--191 | The relative contributions of meaning and form to sentence processing remain an outstanding issue across the language sciences. We examine this issue by formalizing four incremental complexity metrics and comparing them against freely-available ROI timecourses. Syntax-related metrics based on top-down parsing and structural dependency-distance turn out to significantly improve a regression model, compared to a simpler model that formalizes only conceptual combination using a distributional vector-space model. This confirms the view of the anterior temporal lobes as combinatory engines that deal in both form (see e.g. Brennan et al., 2012; Mazoyer, 1993) and meaning (see e.g., Patterson et al., 2007). This same characterization applies to a posterior temporal region in roughly {\textquotedblleft}Wernicke's Area.{\textquotedblright} | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,895
inproceedings | mirzaei-etal-2016-automatic | Automatic Speech Recognition Errors as a Predictor of {L}2 Listening Difficulties | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4122/ | Mirzaei, Maryam Sadat and Meshgi, Kourosh and Kawahara, Tatsuya | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 192--201 | This paper investigates the use of automatic speech recognition (ASR) errors as indicators of the second language (L2) learners' listening difficulties and in doing so strives to overcome the shortcomings of the Partial and Synchronized Caption (PSC) system. PSC is a system that generates a partial caption including difficult words detected based on high speech rate, low frequency, and specificity. To improve the choice of words in this system, and explore a better method to detect speech challenges, ASR errors were investigated as a model of the L2 listener, hypothesizing that some of these errors are similar to those of language learners when transcribing the videos. To investigate this hypothesis, ASR errors in the transcription of several TED talks were analyzed and compared with PSC's selected words. Both the overlapping and mismatching cases were analyzed to investigate possible improvement for the PSC system. Those ASR errors that were not detected by PSC as cases of learners' difficulties were further analyzed and classified into four categories: homophones, minimal pairs, breached boundaries and negatives. These errors were embedded into the baseline PSC to make the enhanced version and were evaluated in an experiment with L2 learners. The results indicated that the enhanced version, which encompasses the ASR errors, addresses most of the L2 learners' difficulties and better assists them in comprehending challenging video segments as compared with the baseline. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,896
inproceedings | singh-etal-2016-quantifying | Quantifying sentence complexity based on eye-tracking measures | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4123/ | Singh, Abhinav Deep and Mehta, Poojan and Husain, Samar and Rajakrishnan, Rajkumar | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 202--212 | Eye-tracking reading times have been attested to reflect cognitive processes underlying sentence comprehension. However, the use of reading times in NLP applications is an underexplored area of research. In this initial work we build an automatic system to assess sentence complexity using automatically predicted eye-tracking reading time measures and demonstrate the efficacy of these reading times for a well known NLP task, namely, readability assessment. We use a machine learning model and a set of features known to be significant predictors of reading times in order to learn per-word reading times from a corpus of English text having reading times of human readers. Subsequently, we use the model to predict reading times for novel text in the context of the aforementioned task. A model based only on reading times gave competitive results compared to the systems that use extensive syntactic features to compute linguistic complexity. Our work, to the best of our knowledge, is the first study to show that automatically predicted reading times can successfully model the difficulty of a text and can be deployed in practical text processing applications. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,897 |
inproceedings | takahira-etal-2016-upper | Upper Bound of Entropy Rate Revisited {---}{A} New Extrapolation of Compressed Large-Scale Corpora{---} | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4124/ | Takahira, Ryosuke and Tanaka-Ishii, Kumiko and D{\k{e}}bowski, {\L}ukasz | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 213--221 | The article presents results of entropy rate estimation for human languages across six languages by using large, state-of-the-art corpora of up to 7.8 gigabytes. To obtain the estimates for data length tending to infinity, we use an extrapolation function given by an ansatz. Whereas some ansatzes of this kind were proposed in previous research papers, here we introduce a stretched exponential extrapolation function that has a smaller error of fit. In this way, we uncover a possibility that the entropy rates of human languages are positive but 20{\%} smaller than previously reported. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,898 |
inproceedings | bentz-berdicevskis-2016-learning | Learning pressures reduce morphological complexity: Linking corpus, computational and experimental evidence | Brunato, Dominique and Dell{'}Orletta, Felice and Venturi, Giulia and Fran{\c{c}}ois, Thomas and Blache, Philippe | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4125/ | Bentz, Christian and Berdicevskis, Aleksandrs | Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity ({CL}4{LC}) | 222--232 | The morphological complexity of languages differs widely and changes over time. Pathways of change are often driven by the interplay of multiple competing factors, and are hard to disentangle. We here focus on a paradigmatic scenario of language change: the reduction of morphological complexity from Latin towards the Romance languages. To establish a causal explanation for this phenomenon, we employ three lines of evidence: 1) analyses of parallel corpora to measure the complexity of words in actual language production, 2) applications of NLP tools to further tease apart the contribution of inflectional morphology to word complexity, and 3) experimental data from artificial language learning, which illustrate the learning pressures at play when morphology simplifies. These three lines of evidence converge to show that pressures associated with imperfect language learning are good candidates to causally explain the reduction in morphological complexity in the Latin-to-Romance scenario. More generally, we argue that combining corpus, computational and experimental evidence is the way forward in historical linguistics and linguistic typology. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,899 |
inproceedings | weegar-etal-2016-impact | The impact of simple feature engineering in multilingual medical {NER} | Rumshisky, Anna and Roberts, Kirk and Bethard, Steven and Naumann, Tristan | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4201/ | Weegar, Rebecka and Casillas, Arantza and Diaz de Ilarraza, Arantza and Oronoz, Maite and P{\'e}rez, Alicia and Gojenola, Koldo | Proceedings of the Clinical Natural Language Processing Workshop ({C}linical{NLP}) | 1--6 | The goal of this paper is to examine the impact of simple feature engineering mechanisms before applying more sophisticated techniques to the task of medical NER. Sometimes papers using scientifically sound techniques present raw baselines that could be improved by adding simple and cheap features. This work focuses on entity recognition for the clinical domain for three languages: English, Swedish and Spanish. The task is tackled using simple features, starting from the window size, capitalization, prefixes, and moving to POS and semantic tags. This work demonstrates that a simple initial step of feature engineering can improve the baseline results significantly. Hence, the contributions of this paper are: first, a short list of guidelines well supported with experimental results on three languages and, second, a detailed description of the relevance of these features for medical NER. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,901
inproceedings | chalapathy-etal-2016-bidirectional | Bidirectional {LSTM}-{CRF} for Clinical Concept Extraction | Rumshisky, Anna and Roberts, Kirk and Bethard, Steven and Naumann, Tristan | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4202/ | Chalapathy, Raghavendra and Zare Borzeshi, Ehsan and Piccardi, Massimo | Proceedings of the Clinical Natural Language Processing Workshop ({C}linical{NLP}) | 7--12 | Automated extraction of concepts from patient clinical records is an essential facilitator of clinical research. For this reason, the 2010 i2b2/VA Natural Language Processing Challenges for Clinical Records introduced a concept extraction task aimed at identifying and classifying concepts into predefined categories (i.e., treatments, tests and problems). State-of-the-art concept extraction approaches heavily rely on handcrafted features and domain-specific resources which are hard to collect and define. For this reason, this paper proposes an alternative, streamlined approach: a recurrent neural network (the bidirectional LSTM with CRF decoding) initialized with general-purpose, off-the-shelf word embeddings. The experimental results achieved on the 2010 i2b2/VA reference corpora using the proposed framework outperform all recent methods and ranks closely to the best submission from the original 2010 i2b2/VA challenge. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,902 |
inproceedings | aramaki-etal-2016-mednlpdoc | {M}ed{NLPD}oc: {J}apanese Shared Task for Clinical {NLP} | Rumshisky, Anna and Roberts, Kirk and Bethard, Steven and Naumann, Tristan | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4203/ | Aramaki, Eiji and Kano, Yoshinobu and Ohkuma, Tomoko and Morita, Mizuki | Proceedings of the Clinical Natural Language Processing Workshop ({C}linical{NLP}) | 13--16 | Due to the recent replacements of physical documents with electronic medical records (EMR), the importance of information processing in medical fields has been increased. We have been organizing the MedNLP task series in NTCIR-10 and 11. These workshops were the first shared tasks which attempt to evaluate technologies that retrieve important information from medical reports written in Japanese. In this report, we describe the NTCIR-12 MedNLPDoc task which is designed for more advanced and practical use for the medical fields. This task is considered as a multi-labeling task to a patient record. This report presents results of the shared task, discusses and illustrates remained issues in the medical natural language processing field. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,903 |
inproceedings | lee-etal-2016-feature | Feature-Augmented Neural Networks for Patient Note De-identification | Rumshisky, Anna and Roberts, Kirk and Bethard, Steven and Naumann, Tristan | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4204/ | Lee, Ji Young and Dernoncourt, Franck and Uzuner, {\"O}zlem and Szolovits, Peter | Proceedings of the Clinical Natural Language Processing Workshop ({C}linical{NLP}) | 17--22 | Patient notes contain a wealth of information of potentially great interest to medical investigators. However, to protect patients' privacy, Protected Health Information (PHI) must be removed from the patient notes before they can be legally released, a process known as patient note de-identification. The main objective for a de-identification system is to have the highest possible recall. Recently, the first neural-network-based de-identification system has been proposed, yielding state-of-the-art results. Unlike other systems, it does not rely on human-engineered features, which allows it to be quickly deployed, but does not leverage knowledge from human experts or from electronic health records (EHRs). In this work, we explore a method to incorporate human-engineered features as well as features derived from EHRs to a neural-network-based de-identification system. Our results show that the addition of features, especially the EHR-derived features, further improves the state-of-the-art in patient note de-identification, including for some of the most sensitive PHI types such as patient names. Since in a real-life setting patient notes typically come with EHRs, we recommend developers of de-identification systems to leverage the information EHRs contain. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,904
inproceedings | sahoo-etal-2016-semi | Semi-supervised Clustering of Medical Text | Rumshisky, Anna and Roberts, Kirk and Bethard, Steven and Naumann, Tristan | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4205/ | Sahoo, Pracheta and Ekbal, Asif and Saha, Sriparna and Moll{\'a}, Diego and Nandan, Kaushik | Proceedings of the Clinical Natural Language Processing Workshop ({C}linical{NLP}) | 23--31 | Semi-supervised clustering is an attractive alternative for traditional (unsupervised) clustering in targeted applications. By using the information of a small annotated dataset, semi-supervised clustering can produce clusters that are customized to the application domain. In this paper, we present a semi-supervised clustering technique based on a multi-objective evolutionary algorithm (NSGA-II-clus). We apply this technique to the task of clustering medical publications for Evidence Based Medicine (EBM) and observe an improvement of the results against unsupervised and other semi-supervised clustering techniques. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,905 |
inproceedings | yadav-etal-2016-deep | Deep Learning Architecture for Patient Data De-identification in Clinical Records | Rumshisky, Anna and Roberts, Kirk and Bethard, Steven and Naumann, Tristan | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4206/ | Yadav, Shweta and Ekbal, Asif and Saha, Sriparna and Bhattacharyya, Pushpak | Proceedings of the Clinical Natural Language Processing Workshop ({C}linical{NLP}) | 32--41 | Rapid growth in Electronic Medical Records (EMR) has emerged to an expansion of data in the clinical domain. The majority of the available health care information is sealed in the form of narrative documents which form the rich source of clinical information. Text mining of such clinical records has gained huge attention in various medical applications like treatment and decision making. However, medical records enclose patient Private Health Information (PHI) which can reveal the identities of the patients. In order to retain the privacy of patients, it is mandatory to remove all the PHI information prior to making it publicly available. The aim is to de-identify or encrypt the PHI from the patient medical records. In this paper, we propose an algorithm based on deep learning architecture to solve this problem. We perform de-identification of seven PHI terms from the clinical records. Experiments on benchmark datasets show that our proposed approach achieves encouraging performance, which is better than the baseline model developed with Conditional Random Field. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,906 |
inproceedings | hasan-etal-2016-neural | Neural Clinical Paraphrase Generation with Attention | Rumshisky, Anna and Roberts, Kirk and Bethard, Steven and Naumann, Tristan | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4207/ | Hasan, Sadid A. and Liu, Bo and Liu, Joey and Qadir, Ashequl and Lee, Kathy and Datla, Vivek and Prakash, Aaditya and Farri, Oladimeji | Proceedings of the Clinical Natural Language Processing Workshop ({C}linical{NLP}) | 42--53 | Paraphrase generation is important in various applications such as search, summarization, and question answering due to its ability to generate textual alternatives while keeping the overall meaning intact. Clinical paraphrase generation is especially vital in building patient-centric clinical decision support (CDS) applications where users are able to understand complex clinical jargons via easily comprehensible alternative paraphrases. This paper presents Neural Clinical Paraphrase Generation (NCPG), a novel approach that casts the task as a monolingual neural machine translation (NMT) problem. We propose an end-to-end neural network built on an attention-based bidirectional Recurrent Neural Network (RNN) architecture with an encoder-decoder framework to perform the task. Conventional bilingual NMT models mostly rely on word-level modeling and are often limited by out-of-vocabulary (OOV) issues. In contrast, we represent the source and target paraphrase pairs as character sequences to address this limitation. To the best of our knowledge, this is the first work that uses attention-based RNNs for clinical paraphrase generation and also proposes an end-to-end character-level modeling for this task. Extensive experiments on a large curated clinical paraphrase corpus show that the attention-based NCPG models achieve improvements of up to 5.2 BLEU points and 0.5 METEOR points over a non-attention based strong baseline for word-level modeling, whereas further gains of up to 6.1 BLEU points and 1.3 METEOR points are obtained by the character-level NCPG models over their word-level counterparts. Overall, our models demonstrate comparable performance relative to the state-of-the-art phrase-based non-neural models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,907 |
inproceedings | roberts-2016-assessing | Assessing the Corpus Size vs. Similarity Trade-off for Word Embeddings in Clinical {NLP} | Rumshisky, Anna and Roberts, Kirk and Bethard, Steven and Naumann, Tristan | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4208/ | Roberts, Kirk | Proceedings of the Clinical Natural Language Processing Workshop ({C}linical{NLP}) | 54--63 | The proliferation of deep learning methods in natural language processing (NLP) and the large amounts of data they often require stands in stark contrast to the relatively data-poor clinical NLP domain. In particular, large text corpora are necessary to build high-quality word embeddings, yet often large corpora that are suitably representative of the target clinical data are unavailable. This forces a choice between building embeddings from small clinical corpora and less representative, larger corpora. This paper explores this trade-off, as well as intermediate compromise solutions. Two standard clinical NLP tasks (the i2b2 2010 concept and assertion tasks) are evaluated with commonly used deep learning models (recurrent neural networks and convolutional neural networks) using a set of six corpora ranging from the target i2b2 data to large open-domain datasets. While combinations of corpora are generally found to work best, the single-best corpus is generally task-dependent. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,908 |
inproceedings | sakishita-kano-2016-inference | Inference of {ICD} Codes from {J}apanese Medical Records by Searching Disease Names | Rumshisky, Anna and Roberts, Kirk and Bethard, Steven and Naumann, Tristan | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4209/ | Sakishita, Masahito and Kano, Yoshinobu | Proceedings of the Clinical Natural Language Processing Workshop ({C}linical{NLP}) | 64--68 | Importance of utilizing medical information is getting increased as electronic health records (EHRs) are widely used nowadays. We aim to assign international standardized disease codes, ICD-10, to Japanese textual information in EHRs for users to reuse the information accurately. In this paper, we propose methods to automatically extract diagnosis and to assign ICD codes to Japanese medical records. Due to the lack of available training data, we dare employed rule-based methods rather than machine learning. We observed characteristics of medical records carefully, writing rules to make effective methods by hand. We applied our system to the NTCIR-12 MedNLPDoc shared task data where participants are required to assign ICD-10 codes of possible diagnosis in given EHRs. In this shared task, our system achieved the highest F-measure score among all participants in the most severe evaluation criteria. Through comparison with other approaches, we show that our approach could be a useful milestone for the future development of Japanese medical record processing. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,909 |
inproceedings | roller-etal-2016-fine | A fine-grained corpus annotation schema of {G}erman nephrology records | Rumshisky, Anna and Roberts, Kirk and Bethard, Steven and Naumann, Tristan | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4210/ | Roller, Roland and Uszkoreit, Hans and Xu, Feiyu and Seiffe, Laura and Mikhailov, Michael and Staeck, Oliver and Budde, Klemens and Halleck, Fabian and Schmidt, Danilo | Proceedings of the Clinical Natural Language Processing Workshop ({C}linical{NLP}) | 69--77 | In this work we present a fine-grained annotation schema to detect named entities in German clinical data of chronically ill patients with kidney diseases. The annotation schema is driven by the needs of our clinical partners and the linguistic aspects of German language. In order to generate annotations within a short period, the work also presents a semi-automatic annotation which uses additional sources of knowledge such as UMLS, to pre-annotate concepts in advance. The presented schema will be used to apply novel techniques from natural language processing and machine learning to support doctors treating their patients by improved information access from unstructured German texts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,910 |
inproceedings | shibata-etal-2016-detecting | Detecting {J}apanese Patients with {A}lzheimer`s Disease based on Word Category Frequencies | Rumshisky, Anna and Roberts, Kirk and Bethard, Steven and Naumann, Tristan | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4211/ | Shibata, Daisaku and Wakamiya, Shoko and Kinoshita, Ayae and Aramaki, Eiji | Proceedings of the Clinical Natural Language Processing Workshop ({C}linical{NLP}) | 78--85 | In recent years, detecting Alzheimer disease (AD) in early stages based on natural language processing (NLP) has drawn much attention. To date, vocabulary size, grammatical complexity, and fluency have been studied using NLP metrics. However, the content analysis of AD narratives is still unreachable for NLP. This study investigates features of the words that AD patients use in their spoken language. After recruiting 18 examinees of 53{--}90 years old (mean: 76.89), they were divided into two groups based on MMSE scores. The AD group comprised 9 examinees with scores of 21 or lower. The healthy control group comprised 9 examinees with a score of 22 or higher. Linguistic Inquiry and Word Count (LIWC) classified words were used to categorize the words that the examinees used. The word frequency was found from observation. Significant differences were confirmed for the usage of impersonal pronouns in the AD group. This result demonstrated the basic feasibility of the proposed NLP-based detection approach. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,911 |
inproceedings | yamashita-etal-2016-prediction | Prediction of Key Patient Outcome from Sentence and Word of Medical Text Records | Rumshisky, Anna and Roberts, Kirk and Bethard, Steven and Naumann, Tristan | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4212/ | Yamashita, Takanori and Wakata, Yoshifumi and Soejima, Hidehisa and Nakashima, Naoki and Hirokawa, Sachio | Proceedings of the Clinical Natural Language Processing Workshop ({C}linical{NLP}) | 86--90 | The number of unstructured medical records kept in hospital information systems is increasing. The conditions of patients are formulated as outcomes in clinical pathway. A variance of an outcome describes deviations from standards of care like a patient`s bad condition. The present paper applied text mining to extract feature words and phrases of the variance from admission records. We report the cases the variances of {\textquotedblleft}pain control{\textquotedblright} and {\textquotedblleft}no neuropathy worsening{\textquotedblright} in cerebral infarction. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,912 |
inproceedings | kreuzthaler-etal-2016-unsupervised | Unsupervised Abbreviation Detection in Clinical Narratives | Rumshisky, Anna and Roberts, Kirk and Bethard, Steven and Naumann, Tristan | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4213/ | Kreuzthaler, Markus and Oleynik, Michel and Avian, Alexander and Schulz, Stefan | Proceedings of the Clinical Natural Language Processing Workshop ({C}linical{NLP}) | 91--98 | Clinical narratives in electronic health record systems are a rich resource of patient-based information. They constitute an ongoing challenge for natural language processing, due to their high compactness and abundance of short forms. German medical texts exhibit numerous ad-hoc abbreviations that terminate with a period character. The disambiguation of period characters is therefore an important task for sentence and abbreviation detection. This task is addressed by a combination of co-occurrence information of word types with trailing period characters, a large domain dictionary, and a simple rule engine, thus merging statistical and dictionary-based disambiguation strategies. An F-measure of 0.95 could be reached by using the unsupervised approach presented in this paper. The results are promising for a domain-independent abbreviation detection strategy, because our approach avoids retraining of models or use case specific feature engineering efforts required for supervised machine learning approaches. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,913 |
inproceedings | yuwono-etal-2016-automated | Automated Anonymization as Spelling Variant Detection | Rumshisky, Anna and Roberts, Kirk and Bethard, Steven and Naumann, Tristan | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4214/ | Yuwono, Steven Kester and Ng, Hwee Tou and Ngiam, Kee Yuan | Proceedings of the Clinical Natural Language Processing Workshop ({C}linical{NLP}) | 99--103 | The issue of privacy has always been a concern when clinical texts are used for research purposes. Personal health information (PHI) (such as name and identification number) needs to be removed so that patients cannot be identified. Manual anonymization is not feasible due to the large number of clinical texts to be anonymized. In this paper, we tackle the task of anonymizing clinical texts written in sentence fragments and which frequently contain symbols, abbreviations, and misspelled words. Our clinical texts therefore differ from those in the i2b2 shared tasks which are in prose form with complete sentences. Our clinical texts are also part of a structured database which contains patient name and identification number in structured fields. As such, we formulate our anonymization task as spelling variant detection, exploiting patients' personal information in the structured fields to detect their spelling variants in clinical texts. We successfully anonymized clinical texts consisting of more than 200 million words, using minimum edit distance and regular expression patterns. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,914 |
inproceedings | garimella-mihalcea-2016-zooming | Zooming in on Gender Differences in Social Media | Nissim, Malvina and Patti, Viviana and Plank, Barbara | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4301/ | Garimella, Aparna and Mihalcea, Rada | Proceedings of the Workshop on Computational Modeling of People`s Opinions, Personality, and Emotions in Social Media ({PEOPLES}) | 1--10 | Men are from Mars and women are from Venus - or so the genre of relationship literature would have us believe. But there is some truth in this idea, and researchers in fields as diverse as psychology, sociology, and linguistics have explored ways to better understand the differences between genders. In this paper, we take another look at the problem of gender discrimination and attempt to move beyond the typical surface-level text classification approach, by (1) identifying semantic and psycholinguistic word classes that reflect systematic differences between men and women and (2) finding differences between genders in the ways they use the same words. We describe several experiments and report results on a large collection of blogs authored by men and women. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,916 |
inproceedings | schneevogt-paggio-2016-effect | The Effect of Gender and Age Differences on the Recognition of Emotions from Facial Expressions | Nissim, Malvina and Patti, Viviana and Plank, Barbara | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4302/ | Schneevogt, Daniela and Paggio, Patrizia | Proceedings of the Workshop on Computational Modeling of People`s Opinions, Personality, and Emotions in Social Media ({PEOPLES}) | 11--19 | Recent studies have demonstrated gender and cultural differences in the recognition of emotions in facial expressions. However, most studies were conducted on American subjects. In this paper, we explore the generalizability of several findings to a non-American culture in the form of Danish subjects. We conduct an emotion recognition task followed by two stereotype questionnaires with different genders and age groups. While recent findings (Krems et al., 2015) suggest that women are biased to see anger in neutral facial expressions posed by females, in our sample both genders assign higher ratings of anger to all emotions expressed by females. Furthermore, we demonstrate an effect of gender on the fear-surprise-confusion observed by Tomkins and McCarter (1964); females overpredict fear, while males overpredict surprise. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,917 |
inproceedings | liu-etal-2016-recurrent | A Recurrent and Compositional Model for Personality Trait Recognition from Short Texts | Nissim, Malvina and Patti, Viviana and Plank, Barbara | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4303/ | Liu, Fei and Perez, Julien and Nowson, Scott | Proceedings of the Workshop on Computational Modeling of People`s Opinions, Personality, and Emotions in Social Media ({PEOPLES}) | 20--29 | Many methods have been used to recognise author personality traits from text, typically combining linguistic feature engineering with shallow learning models, e.g. linear regression or Support Vector Machines. This work uses deep-learning-based models and atomic features of text, the characters, to build hierarchical, vectorial word and sentence representations for trait inference. This method, applied to a corpus of tweets, shows state-of-the-art performance across five traits compared with prior work. The results, supported by preliminary visualisation work, are encouraging for the ability to detect complex human traits. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,918 |
inproceedings | pool-nissim-2016-distant | Distant supervision for emotion detection using {F}acebook reactions | Nissim, Malvina and Patti, Viviana and Plank, Barbara | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4304/ | Pool, Chris and Nissim, Malvina | Proceedings of the Workshop on Computational Modeling of People`s Opinions, Personality, and Emotions in Social Media ({PEOPLES}) | 30--39 | We exploit the Facebook reaction feature in a distant supervised fashion to train a support vector machine classifier for emotion detection, using several feature combinations and combining different Facebook pages. We test our models on existing benchmarks for emotion detection and show that employing only information that is derived completely automatically, thus without relying on any handcrafted lexicon as it`s usually done, we can achieve competitive results. The results also show that there is large room for improvement, especially by gearing the collection of Facebook pages, with a view to the target domain. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,919 |
inproceedings | mullick-etal-2016-graphical | A graphical framework to detect and categorize diverse opinions from online news | Nissim, Malvina and Patti, Viviana and Plank, Barbara | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4305/ | Mullick, Ankan and Goyal, Pawan and Ganguly, Niloy | Proceedings of the Workshop on Computational Modeling of People`s Opinions, Personality, and Emotions in Social Media ({PEOPLES}) | 40--49 | This paper proposes a graphical framework to extract opinionated sentences which highlight different contexts within a given news article by introducing the concept of diversity in a graphical model for opinion detection. We conduct extensive evaluations and find that the proposed modification leads to impressive improvement in performance and makes the final results of the model much more usable. The proposed method (OP-D) not only performs much better than the other techniques used for opinion detection as well as introducing diversity, but is also able to select opinions from different categories (Asher et al. 2009). By developing a classification model which categorizes the identified sentences into various opinion categories, we find that OP-D is able to push opinions from different categories uniformly among the top opinions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,920 |
inproceedings | skeppstedt-etal-2016-active | Active learning for detection of stance components | Nissim, Malvina and Patti, Viviana and Plank, Barbara | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4306/ | Skeppstedt, Maria and Sahlgren, Magnus and Paradis, Carita and Kerren, Andreas | Proceedings of the Workshop on Computational Modeling of People`s Opinions, Personality, and Emotions in Social Media ({PEOPLES}) | 50--59 | Automatic detection of five language components, which are all relevant for expressing opinions and for stance taking, was studied: positive sentiment, negative sentiment, speculation, contrast and condition. A resource-aware approach was taken, which included manual annotation of 500 training samples and the use of limited lexical resources. Active learning was compared to random selection of training data, as well as to a lexicon-based method. Active learning was successful for the categories speculation, contrast and condition, but not for the two sentiment categories, for which results achieved when using active learning were similar to those achieved when applying a random selection of training data. This difference is likely due to a larger variation in how sentiment is expressed than in how speakers express the other three categories. This larger variation was also shown by the lower recall results achieved by the lexicon-based approach for sentiment than for the categories speculation, contrast and condition. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,921 |
inproceedings | kaljahi-foster-2016-detecting | Detecting Opinion Polarities using Kernel Methods | Nissim, Malvina and Patti, Viviana and Plank, Barbara | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4307/ | Kaljahi, Rasoul and Foster, Jennifer | Proceedings of the Workshop on Computational Modeling of People`s Opinions, Personality, and Emotions in Social Media ({PEOPLES}) | 60--69 | We investigate the application of kernel methods to representing both structural and lexical knowledge for predicting polarity of opinions in consumer product review. We introduce any-gram kernels which model lexical information in a significantly faster way than the traditional n-gram features, while capturing all possible orders of n-grams n in a sequence without the need to explicitly present a pre-specified set of such orders. We also present an effective format to represent constituency and dependency structure together with aspect terms and sentiment polarity scores. Furthermore, we modify the traditional tree kernel function to compute the similarity based on word embedding vectors instead of exact string match and present experiments using the new models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,922 |
inproceedings | cattle-ma-2016-effects | Effects of Semantic Relatedness between Setups and Punchlines in {T}witter Hashtag Games | Nissim, Malvina and Patti, Viviana and Plank, Barbara | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4308/ | Cattle, Andrew and Ma, Xiaojuan | Proceedings of the Workshop on Computational Modeling of People`s Opinions, Personality, and Emotions in Social Media ({PEOPLES}) | 70--79 | This paper explores humour recognition for Twitter-based hashtag games. Given their popularity, frequency, and relatively formulaic nature, these games make a good target for computational humour research and can leverage Twitter likes and retweets as humour judgments. In this work, we use pair-wise relative humour judgments to examine several measures of semantic relatedness between setups and punchlines on a hashtag game corpus we collected and annotated. Results show that perplexity, Normalized Google Distance, and free-word association-based features are all useful in identifying {\textquotedblleft}funnier{\textquotedblright} hashtag game responses. In fact, we provide empirical evidence that funnier punchlines tend to be more obscure, although more obscure punchlines are not necessarily rated funnier. Furthermore, the asymmetric nature of free-word association features allows us to see that while punchlines should be harder to predict given a setup, they should also be relatively easy to understand in context. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,923 |
inproceedings | sidarenka-stede-2016-generating | Generating Sentiment Lexicons for {G}erman {T}witter | Nissim, Malvina and Patti, Viviana and Plank, Barbara | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4309/ | Sidarenka, Uladzimir and Stede, Manfred | Proceedings of the Workshop on Computational Modeling of People`s Opinions, Personality, and Emotions in Social Media ({PEOPLES}) | 80--90 | Despite a substantial progress made in developing new sentiment lexicon generation (SLG) methods for English, the task of transferring these approaches to other languages and domains in a sound way still remains open. In this paper, we contribute to the solution of this problem by systematically comparing semi-automatic translations of common English polarity lists with the results of the original automatic SLG algorithms, which were applied directly to German data. We evaluate these lexicons on a corpus of 7,992 manually annotated tweets. In addition to that, we also collate the results of dictionary- and corpus-based SLG methods in order to find out which of these paradigms is better suited for the inherently noisy domain of social media. Our experiments show that semi-automatic translations notably outperform automatic systems (reaching a macro-averaged F1-score of 0.589), and that dictionary-based techniques produce much better polarity lists as compared to corpus-based approaches (whose best F1-scores run up to 0.479 and 0.419 respectively) even for the non-standard Twitter genre. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,924 |
inproceedings | canales-etal-2016-innovative | Innovative Semi-Automatic Methodology to Annotate Emotional Corpora | Nissim, Malvina and Patti, Viviana and Plank, Barbara | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4310/ | Canales, Lea and Strapparava, Carlo and Boldrini, Ester and Mart{\'i}nez-Barco, Patricio | Proceedings of the Workshop on Computational Modeling of People`s Opinions, Personality, and Emotions in Social Media ({PEOPLES}) | 91--100 | Detecting depression or personality traits, tutoring and student behaviour systems, or identifying cases of cyber-bulling are a few of the wide range of the applications, in which the automatic detection of emotion is a crucial element. Emotion detection has the potential of high impact by contributing the benefit of business, society, politics or education. Given this context, the main objective of our research is to contribute to the resolution of one of the most important challenges in textual emotion detection task: the problems of emotional corpora annotation. This will be tackled by proposing of a new semi-automatic methodology. Our innovative methodology consists in two main phases: (1) an automatic process to pre-annotate the unlabelled sentences with a reduced number of emotional categories; and (2) a refinement manual process where human annotators will determine which is the predominant emotion between the emotional categories selected in the phase 1. Our proposal in this paper is to show and evaluate the pre-annotation process to analyse the feasibility and the benefits by the methodology proposed. The results obtained are promising and allow obtaining a substantial improvement of annotation time and cost and confirm the usefulness of our pre-annotation process to improve the annotation task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,925 |
inproceedings | kamijo-etal-2016-personality | Personality Estimation from {J}apanese Text | Nissim, Malvina and Patti, Viviana and Plank, Barbara | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4311/ | Kamijo, Koichi and Nasukawa, Tetsuya and Kitamura, Hideya | Proceedings of the Workshop on Computational Modeling of People`s Opinions, Personality, and Emotions in Social Media ({PEOPLES}) | 101--109 | We created a model to estimate personality trait from authors' text written in Japanese and measured its performance by conducting surveys and analyzing the Twitter data of 1,630 users. We used the Big Five personality traits for personality trait estimation. Our approach is a combination of category- and Word2Vec-based approaches. For the category-based element, we added several unique Japanese categories along with the ones regularly used in the English model, and for the Word2Vec-based element, we used a model called GloVe. We found that some of the newly added categories have a stronger correlation with personality traits than other categories do and that the combination of the category- and Word2Vec-based approaches improves the accuracy of the personality trait estimation compared with the case of using just one of them. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,926 |
inproceedings | celli-etal-2016-predicting | Predicting {B}rexit: Classifying Agreement is Better than Sentiment and Pollsters | Nissim, Malvina and Patti, Viviana and Plank, Barbara | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4312/ | Celli, Fabio and Stepanov, Evgeny and Poesio, Massimo and Riccardi, Giuseppe | Proceedings of the Workshop on Computational Modeling of People`s Opinions, Personality, and Emotions in Social Media ({PEOPLES}) | 110--118 | On June 23rd 2016, UK held the referendum which ratified the exit from the EU. While most of the traditional pollsters failed to forecast the final vote, there were online systems that hit the result with high accuracy using opinion mining techniques and big data. Starting one month before, we collected and monitored millions of posts about the referendum from social media conversations, and exploited Natural Language Processing techniques to predict the referendum outcome. In this paper we discuss the methods used by traditional pollsters and compare it to the predictions based on different opinion mining techniques. We find that opinion mining based on agreement/disagreement classification works better than opinion mining based on polarity classification in the forecast of the referendum outcome. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,927 |
inproceedings | bali-singh-2016-sarcasm | Sarcasm Detection : Building a Contextual Hierarchy | Nissim, Malvina and Patti, Viviana and Plank, Barbara | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4313/ | Bali, Taradheesh and Singh, Navjyoti | Proceedings of the Workshop on Computational Modeling of People`s Opinions, Personality, and Emotions in Social Media ({PEOPLES}) | 119--127 | The conundrum of understanding and classifying sarcasm has been dealt with by the traditional theorists as an analysis of a sarcastic utterance and the ironic situation that surrounds it. The problem with such an approach is that it is too narrow, as it is unable to sufficiently utilize the two indispensable agents in making such an utterance, viz. the speaker and the listener. It undermines the necessary context required to comprehend a sarcastic utterance. In this paper, we propose a novel approach towards understanding sarcasm in terms of the existing knowledge hierarchy between the two participants, which forms the basis of the context that both agents share. The difference in relationship of the speaker of the sarcastic utterance and the disparate audience found on social media, such as Twitter, is also captured. We then apply our model on a corpus of tweets to achieve significant results and consequently, shed light on subjective nature of context, which is contingent on the relation between the speaker and the listener. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,928 |
inproceedings | litvak-etal-2016-social | Social and linguistic behavior and its correlation to trait empathy | Nissim, Malvina and Patti, Viviana and Plank, Barbara | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-4314/ | Litvak, Marina and Otterbacher, Jahna and Ang, Chee Siang and Atkins, David | Proceedings of the Workshop on Computational Modeling of People`s Opinions, Personality, and Emotions in Social Media ({PEOPLES}) | 128--137 | A growing body of research exploits social media behaviors to gauge psychological characteristics, though trait empathy has received little attention. Because of its intimate link to the ability to relate to others, our research aims to predict participants' levels of empathy, given their textual and friending behaviors on Facebook. Using Poisson regression, we compared the variance explained in Davis' Interpersonal Reactivity Index (IRI) scores on four constructs (empathic concern, personal distress, fantasy, perspective taking), by two classes of variables: 1) post content and 2) linguistic style. Our study lays the groundwork for a greater understanding of empathy`s role in facilitating interactions on social media. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,929