Dataset schema (one row per bibliographic entry; per-column value statistics):

  entry_type          string, 4 distinct values
  citation_key        string, 10–110 characters
  title               string, 6–276 characters
  editor              string, 723 distinct values
  month               string, 69 distinct values
  year                date, 1963-01-01 to 2022-01-01
  address             string, 202 distinct values
  publisher           string, 41 distinct values
  url                 string, 34–62 characters
  author              string, 6–2.07k characters
  booktitle           string, 861 distinct values
  pages               string, 1–12 characters
  abstract            string, 302–2.4k characters
  journal             string, 5 distinct values
  volume              string, 24 distinct values
  doi                 string, 20–39 characters
  n                   string, 3 distinct values
  wer                 string, 1 distinct value
  uas                 always null
  language            string, 3 distinct values
  isbn                string, 34 distinct values
  recall              always null
  number              string, 8 distinct values
  a                   always null
  b                   always null
  c                   always null
  k                   always null
  f1                  string, 4 distinct values
  r                   string, 2 distinct values
  mci                 string, 1 distinct value
  p                   string, 2 distinct values
  sd                  string, 1 distinct value
  female              string, 0 distinct values
  m                   string, 0 distinct values
  food                string, 1 distinct value
  f                   string, 1 distinct value
  note                string, 20 distinct values
  __index_level_0__   int64, 22k–106k
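To work with the rows behind these statistics, something like the following sketch applies. It assumes the table is published as a Hugging Face dataset loadable with the `datasets` library; the repository id `user/nlp-bibliography` is a placeholder, not the dataset's real name, and `year` is treated here as a plain date-like string.

```python
# Minimal sketch of loading and querying this table with Hugging Face `datasets`.
# The repository id below is a placeholder; substitute the dataset's real name.
from datasets import load_dataset

ds = load_dataset("user/nlp-bibliography", split="train")  # hypothetical repo id

# Keep only 2016 journal articles; `year` is stored as a date-like string,
# so a prefix check suffices here.
articles_2016 = ds.filter(
    lambda row: row["entry_type"] == "article" and str(row["year"]).startswith("2016")
)
print(articles_2016[0]["citation_key"], articles_2016[0]["doi"])
```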
@article{nguyen-etal-2016-j,
    title = "{J}-{NERD}: Joint Named Entity Recognition and Disambiguation with Rich Linguistic Features",
    author = "Nguyen, Dat Ba and Theobald, Martin and Weikum, Gerhard",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1016/",
    pages = "215--229",
    abstract = "
Methods for Named Entity Recognition and Disambiguation (NERD) perform NER and NED in two separate stages. Therefore, NED may be penalized with respect to precision by NER false positives, and suffers in recall from NER false negatives. Conversely, NED does not fully exploit information computed by NER such as types of mentions. This paper presents J-NERD, a new approach to perform NER and NED jointly, by means of a probabilistic graphical model that captures mention spans, mention types, and the mapping of mentions to entities in a knowledge base. We present experiments with different kinds of texts from the CoNLL`03, ACE`05, and ClueWeb`09-FACC1 corpora. J-NERD consistently outperforms state-of-the-art competitors in end-to-end NERD precision, recall, and F1.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00094"
}
% __index_level_0__: 59,621
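Each row maps mechanically back to a BibTeX entry like the one above: non-null fields become key/value pairs and the always-null metric columns are dropped. Below is a minimal sketch of that conversion; the helper name, field order, and quoting style are my own choices, not part of the dataset.

```python
# Sketch: serialize one row (a dict keyed by the schema's column names) back
# into a BibTeX entry. Null fields -- including the many always-null metric
# columns -- are skipped; values are assumed not to contain double quotes.
BIB_FIELDS = [
    "title", "author", "editor", "booktitle", "journal", "volume", "number",
    "month", "year", "address", "publisher", "url", "doi", "pages", "isbn",
    "note", "abstract",
]

def row_to_bibtex(row: dict) -> str:
    lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
    for field in BIB_FIELDS:
        value = row.get(field)
        if value is not None:
            lines.append(f'    {field} = "{value}",')
    lines.append("}")
    return "\n".join(lines)
```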
@article{marcheggiani-titov-2016-discrete,
    title = "Discrete-State Variational Autoencoders for Joint Discovery and Factorization of Relations",
    author = "Marcheggiani, Diego and Titov, Ivan",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1017/",
    pages = "231--244",
    abstract = "
We present a method for unsupervised open-domain relation discovery. In contrast to previous (mostly generative and agglomerative clustering) approaches, our model relies on rich contextual features and makes minimal independence assumptions. The model is composed of two parts: a feature-rich relation extractor, which predicts a semantic relation between two entities, and a factorization model, which reconstructs arguments (i.e., the entities) relying on the predicted relation. The two components are estimated jointly so as to minimize errors in recovering arguments. We study factorization models inspired by previous work in relation factorization and selectional preference modeling. Our models substantially outperform the generative and agglomerative-clustering counterparts and achieve state-of-the-art performance.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00095"
}
% __index_level_0__: 59,622
@article{stratos-etal-2016-unsupervised,
    title = "Unsupervised Part-Of-Speech Tagging with Anchor Hidden {M}arkov Models",
    author = "Stratos, Karl and Collins, Michael and Hsu, Daniel",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1018/",
    pages = "245--257",
    abstract = "
We tackle unsupervised part-of-speech (POS) tagging by learning hidden Markov models (HMMs) that are particularly well-suited for the problem. These HMMs, which we call anchor HMMs, assume that each tag is associated with at least one word that can have no other tag, which is a relatively benign condition for POS tagging (e.g., {\textquotedblleft}the{\textquotedblright} is a word that appears only under the determiner tag). We exploit this assumption and extend the non-negative matrix factorization framework of Arora et al. (2013) to design a consistent estimator for anchor HMMs. In experiments, our algorithm is competitive with strong baselines such as the clustering method of Brown et al. (1992) and the log-linear model of Berg-Kirkpatrick et al. (2010). Furthermore, it produces an interpretable model in which hidden states are automatically lexicalized by words.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00096"
}
% __index_level_0__: 59,623
@article{yin-etal-2016-abcnn,
    title = "{ABCNN}: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs",
    author = "Yin, Wenpeng and Sch{\"u}tze, Hinrich and Xiang, Bing and Zhou, Bowen",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1019/",
    pages = "259--272",
    abstract = "
How to model a pair of sentences is a critical issue in many NLP tasks such as answer selection (AS), paraphrase identification (PI) and textual entailment (TE). Most prior work (i) deals with one individual task by fine-tuning a specific system; (ii) models each sentence`s representation separately, rarely considering the impact of the other sentence; or (iii) relies fully on manually designed, task-specific linguistic features. This work presents a general Attention Based Convolutional Neural Network (ABCNN) for modeling a pair of sentences. We make three contributions. (i) The ABCNN can be applied to a wide variety of tasks that require modeling of sentence pairs. (ii) We propose three attention schemes that integrate mutual influence between sentences into CNNs; thus, the representation of each sentence takes into consideration its counterpart. These interdependent sentence pair representations are more powerful than isolated sentence representations. (iii) ABCNNs achieve state-of-the-art performance on AS, PI and TE tasks. We release code at: \url{https://github.com/yinwenpeng/Answer_Selection}.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00097"
}
% __index_level_0__: 59,624
@article{hashimoto-etal-2016-word,
    title = "Word Embeddings as Metric Recovery in Semantic Spaces",
    author = "Hashimoto, Tatsunori B. and Alvarez-Melis, David and Jaakkola, Tommi S.",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1020/",
    pages = "273--286",
    abstract = "
Continuous word representations have been remarkably useful across NLP tasks but remain poorly understood. We ground word embeddings in semantic spaces studied in the cognitive-psychometric literature, taking these spaces as the primary objects to recover. To this end, we relate log co-occurrences of words in large corpora to semantic similarity assessments and show that co-occurrences are indeed consistent with an Euclidean semantic space hypothesis. Framing word embedding as metric recovery of a semantic space unifies existing word embedding algorithms, ties them to manifold learning, and demonstrates that existing algorithms are consistent metric recovery methods given co-occurrence counts from random walks. Furthermore, we propose a simple, principled, direct metric recovery algorithm that performs on par with the state-of-the-art word embedding and manifold learning methods. Finally, we complement recent focus on analogies by constructing two new inductive reasoning datasets{---}series completion and classification{---}and demonstrate that word embeddings can be used to solve them as well.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00098"
}
% __index_level_0__: 59,625
@article{schofield-mimno-2016-comparing,
    title = "Comparing Apples to Apple: The Effects of Stemmers on Topic Models",
    author = "Schofield, Alexandra and Mimno, David",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1021/",
    pages = "287--300",
    abstract = "
Rule-based stemmers such as the Porter stemmer are frequently used to preprocess English corpora for topic modeling. In this work, we train and evaluate topic models on a variety of corpora using several different stemming algorithms. We examine several different quantitative measures of the resulting models, including likelihood, coherence, model stability, and entropy. Despite their frequent use in topic modeling, we find that stemmers produce no meaningful improvement in likelihood and coherence and in fact can degrade topic stability.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00099"
}
% __index_level_0__: 59,626
@article{agic-etal-2016-multilingual,
    title = "Multilingual Projection for Parsing Truly Low-Resource Languages",
    author = "Agi{\'c}, {\v{Z}}eljko and Johannsen, Anders and Plank, Barbara and Mart{\'i}nez Alonso, H{\'e}ctor and Schluter, Natalie and S{\o}gaard, Anders",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1022/",
    pages = "301--312",
    abstract = "
We propose a novel approach to cross-lingual part-of-speech tagging and dependency parsing for truly low-resource languages. Our annotation projection-based approach yields tagging and parsing models for over 100 languages. All that is needed are freely available parallel texts, and taggers and parsers for resource-rich languages. The empirical evaluation across 30 test languages shows that our method consistently provides top-level accuracies, close to established upper bounds, and outperforms several competitive baselines.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00100"
}
% __index_level_0__: 59,627
@article{kiperwasser-goldberg-2016-simple,
    title = "Simple and Accurate Dependency Parsing Using Bidirectional {LSTM} Feature Representations",
    author = "Kiperwasser, Eliyahu and Goldberg, Yoav",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1023/",
    pages = "313--327",
    abstract = "
We present a simple and effective scheme for dependency parsing which is based on bidirectional-LSTMs (BiLSTMs). Each sentence token is associated with a BiLSTM vector representing the token in its sentential context, and feature vectors are constructed by concatenating a few BiLSTM vectors. The BiLSTM is trained jointly with the parser objective, resulting in very effective feature extractors for parsing. We demonstrate the effectiveness of the approach by applying it to a greedy transition-based parser as well as to a globally optimized graph-based parser. The resulting parsers have very simple architectures, and match or surpass the state-of-the-art accuracies on English and Chinese.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00101"
}
% __index_level_0__: 59,628
@article{pelemans-etal-2016-sparse,
    title = "Sparse Non-negative Matrix Language Modeling",
    author = "Pelemans, Joris and Shazeer, Noam and Chelba, Ciprian",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1024/",
    pages = "329--342",
    abstract = "
We present Sparse Non-negative Matrix (SNM) estimation, a novel probability estimation technique for language modeling that can efficiently incorporate arbitrary features. We evaluate SNM language models on two corpora: the One Billion Word Benchmark and a subset of the LDC English Gigaword corpus. Results show that SNM language models trained with n-gram features are a close match for the well-established Kneser-Ney models. The addition of skip-gram features yields a model that is in the same league as the state-of-the-art recurrent neural network language models, as well as complementary: combining the two modeling techniques yields the best known result on the One Billion Word Benchmark. On the Gigaword corpus further improvements are observed using features that cross sentence boundaries. The computational advantages of SNM estimation over both maximum entropy and neural network estimation are probably its main strength, promising an approach that has large flexibility in combining arbitrary features and yet scales gracefully to large amounts of data.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00102"
}
% __index_level_0__: 59,629
@article{gulordava-merlo-2016-multi,
    title = "Multi-lingual Dependency Parsing Evaluation: a Large-scale Analysis of Word Order Properties using Artificial Data",
    author = "Gulordava, Kristina and Merlo, Paola",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1025/",
    pages = "343--356",
    abstract = "
The growing work in multi-lingual parsing faces the challenge of fair comparative evaluation and performance analysis across languages and their treebanks. The difficulty lies in teasing apart the properties of treebanks, such as their size or average sentence length, from those of the annotation scheme, and from the linguistic properties of languages. We propose a method to evaluate the effects of word order of a language on dependency parsing performance, while controlling for confounding treebank properties. The method uses artificially-generated treebanks that are minimal permutations of actual treebanks with respect to two word order properties: word order variation and dependency lengths. Based on these artificial data on twelve languages, we show that longer dependencies and higher word order variability degrade parsing performance. Our method also extends to minimal pairs of individual sentences, leading to a finer-grained understanding of parsing errors.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00103"
}
% __index_level_0__: 59,630
@article{chiu-nichols-2016-named,
    title = "Named Entity Recognition with Bidirectional {LSTM}-{CNN}s",
    author = "Chiu, Jason P.C. and Nichols, Eric",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1026/",
    pages = "357--370",
    abstract = "
Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00104"
}
% __index_level_0__: 59,631
@article{zhou-etal-2016-deep,
    title = "Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation",
    author = "Zhou, Jie and Cao, Ying and Wang, Xuguang and Li, Peng and Xu, Wei",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1027/",
    pages = "371--383",
    abstract = "
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT`14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT`14 English-to-German task.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00105"
}
% __index_level_0__: 59,632
@article{arora-etal-2016-latent,
    title = "A Latent Variable Model Approach to {PMI}-based Word Embeddings",
    author = "Arora, Sanjeev and Li, Yuanzhi and Liang, Yingyu and Ma, Tengyu and Risteski, Andrej",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1028/",
    pages = "385--399",
    abstract = "
Semantic word embeddings represent the meaning of a word via a vector, and are created by diverse methods. Many use nonlinear operations on co-occurrence statistics, and have hand-tuned hyperparameters and reweighting methods. This paper proposes a new generative model, a dynamic version of the log-linear topic model of Mnih and Hinton (2007). The methodological novelty is to use the prior to compute closed form expressions for word statistics. This provides a theoretical justification for nonlinear models like PMI, word2vec, and GloVe, as well as some hyperparameter choices. It also helps explain why low-dimensional semantic embeddings contain linear algebraic structure that allows solution of word analogies, as shown by Mikolov et al. (2013a) and many subsequent papers. Experimental support is provided for the generative model assumptions, the most important of which is that latent word vectors are fairly uniformly dispersed in space.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00106"
}
% __index_level_0__: 59,633
@article{xu-etal-2016-optimizing,
    title = "Optimizing Statistical Machine Translation for Text Simplification",
    author = "Xu, Wei and Napoles, Courtney and Pavlick, Ellie and Chen, Quanze and Callison-Burch, Chris",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1029/",
    pages = "401--415",
    abstract = "
Most recent sentence simplification systems use basic machine translation models to learn lexical and syntactic paraphrases from a manually simplified parallel corpus. These methods are limited by the quality and quantity of manually simplified corpora, which are expensive to build. In this paper, we conduct an in-depth adaptation of statistical machine translation to perform text simplification, taking advantage of large-scale paraphrases learned from bilingual texts and a small amount of manual simplifications with multiple references. Our work is the first to design automatic metrics that are effective for tuning and evaluating simplification systems, which will facilitate iterative development for this task.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00107"
}
% __index_level_0__: 59,634
@article{osborne-etal-2016-encoding,
    title = "Encoding Prior Knowledge with Eigenword Embeddings",
    author = "Osborne, Dominique and Narayan, Shashi and Cohen, Shay B.",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1030/",
    pages = "417--430",
    abstract = "
Canonical correlation analysis (CCA) is a method for reducing the dimension of data represented using two views. It has been previously used to derive word embeddings, where one view indicates a word, and the other view indicates its context. We describe a way to incorporate prior knowledge into CCA, give a theoretical justification for it, and test it by deriving word embeddings and evaluating them on a myriad of datasets.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00108"
}
% __index_level_0__: 59,635
@article{ammar-etal-2016-many,
    title = "Many Languages, One Parser",
    author = "Ammar, Waleed and Mulcaire, George and Ballesteros, Miguel and Dyer, Chris and Smith, Noah A.",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1031/",
    pages = "431--444",
    abstract = "
We train one multilingual model for dependency parsing and use it to parse sentences in several languages. The parsing model uses (i) multilingual word clusters and embeddings; (ii) token-level language information; and (iii) language-specific features (fine-grained POS tags). This input representation enables the parser not only to parse effectively in multiple languages, but also to generalize across languages based on linguistic universals and typological similarities, making it more effective to learn from limited annotations. Our parser`s performance compares favorably to strong baselines in a range of data scenarios, including when the target language has a large treebank, a small treebank, or no treebank for training.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00109"
}
% __index_level_0__: 59,636
@article{kiperwasser-goldberg-2016-easy,
    title = "Easy-First Dependency Parsing with Hierarchical Tree {LSTM}s",
    author = "Kiperwasser, Eliyahu and Goldberg, Yoav",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1032/",
    pages = "445--461",
    abstract = "
We suggest a compositional vector representation of parse trees that relies on a recursive combination of recurrent-neural network encoders. To demonstrate its effectiveness, we use the representation as the backbone of a greedy, bottom-up dependency parser, achieving very strong accuracies for English and Chinese, without relying on external word embeddings. The parser`s implementation is available for download at the first author`s webpage.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00110"
}
% __index_level_0__: 59,637
@article{althoff-etal-2016-large,
    title = "Large-scale Analysis of Counseling Conversations: An Application of Natural Language Processing to Mental Health",
    author = "Althoff, Tim and Clark, Kevin and Leskovec, Jure",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1033/",
    pages = "463--476",
    abstract = "
Mental illness is one of the most pressing public health issues of our time. While counseling and psychotherapy can be effective treatments, our knowledge about how to conduct successful counseling conversations has been limited due to lack of large-scale data with labeled outcomes of the conversations. In this paper, we present a large-scale, quantitative study on the discourse of text-message-based counseling conversations. We develop a set of novel computational discourse analysis methods to measure how various linguistic aspects of conversations are correlated with conversation outcomes. Applying techniques such as sequence-based conversation models, language model comparisons, message clustering, and psycholinguistics-inspired word frequency analyses, we discover actionable conversation strategies that are associated with better conversation outcomes.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00111"
}
% __index_level_0__: 59,638
@article{shareghi-etal-2016-fast,
    title = "Fast, Small and Exact: Infinite-order Language Modelling with Compressed Suffix Trees",
    author = "Shareghi, Ehsan and Petri, Matthias and Haffari, Gholamreza and Cohn, Trevor",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1034/",
    pages = "477--490",
    abstract = "
Efficient methods for storing and querying are critical for scaling high-order m-gram language models to large corpora. We propose a language model based on compressed suffix trees, a representation that is highly compact and can be easily held in memory, while supporting queries needed in computing language model probabilities on-the-fly. We present several optimisations which improve query runtimes up to 2500{\texttimes}, despite only incurring a modest increase in construction time and memory usage. For large corpora and high Markov orders, our method is highly competitive with the state-of-the-art KenLM package. It imposes much lower memory requirements, often by orders of magnitude, and has runtimes that are either similar (for training) or comparable (for querying).
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00112"
}
% __index_level_0__: 59,639
@article{wang-eisner-2016-galactic,
    title = "The Galactic Dependencies Treebanks: Getting More Data by Synthesizing New Languages",
    author = "Wang, Dingquan and Eisner, Jason",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1035/",
    pages = "491--505",
    abstract = "
We release Galactic Dependencies 1.0{---}a large set of synthetic languages not found on Earth, but annotated in Universal Dependencies format. This new resource aims to provide training and development data for NLP methods that aim to adapt to unfamiliar languages. Each synthetic treebank is produced from a real treebank by stochastically permuting the dependents of nouns and/or verbs to match the word order of other real languages. We discuss the usefulness, realism, parsability, perplexity, and diversity of the synthetic languages. As a simple demonstration of the use of Galactic Dependencies, we consider single-source transfer, which attempts to parse a real target language using a parser trained on a {\textquotedblleft}nearby{\textquotedblright} source language. We find that including synthetic source languages somewhat increases the diversity of the source pool, which significantly improves results for most target languages.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00113"
}
% __index_level_0__: 59,640
@article{gorman-sproat-2016-minimally,
    title = "Minimally Supervised Number Normalization",
    author = "Gorman, Kyle and Sproat, Richard",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1036/",
    pages = "507--519",
    abstract = "
We propose two models for verbalizing numbers, a key component in speech recognition and synthesis systems. The first model uses an end-to-end recurrent neural network. The second model, drawing inspiration from the linguistics literature, uses finite-state transducers constructed with a minimal amount of training data. While both models achieve near-perfect performance, the latter model can be trained using several orders of magnitude less data than the former, making it particularly useful for low-resource languages.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00114"
}
% __index_level_0__: 59,641
@article{linzen-etal-2016-assessing,
    title = "Assessing the Ability of {LSTM}s to Learn Syntax-Sensitive Dependencies",
    author = "Linzen, Tal and Dupoux, Emmanuel and Goldberg, Yoav",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1037/",
    pages = "521--535",
    abstract = "
The success of long short-term memory (LSTM) neural networks in language processing is typically attributed to their ability to capture long-distance statistical regularities. Linguistic regularities are often sensitive to syntactic structure; can such dependencies be captured by LSTMs, which do not have explicit structural representations? We begin addressing this question using number agreement in English subject-verb dependencies. We probe the architecture`s grammatical competence both using training objectives with an explicit grammatical target (number prediction, grammaticality judgments) and using language models. In the strongly supervised settings, the LSTM achieved very high overall accuracy (less than 1{\%} errors), but errors increased when sequential and structural information conflicted. The frequency of such errors rose sharply in the language-modeling setting. We conclude that LSTMs can capture a non-trivial amount of grammatical structure given targeted supervision, but stronger architectures may be required to further reduce errors; furthermore, the language modeling signal is insufficient for capturing syntax-sensitive dependencies, and should be supplemented with more direct supervision if such dependencies need to be captured.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00115"
}
% __index_level_0__: 59,642
@article{goldwasser-zhang-2016-understanding,
    title = "Understanding Satirical Articles Using Common-Sense",
    author = "Goldwasser, Dan and Zhang, Xiao",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1038/",
    pages = "537--549",
    abstract = "
Automatic satire detection is a subtle text classification task, for machines and at times, even for humans. In this paper we argue that satire detection should be approached using common-sense inferences, rather than traditional text classification methods. We present a highly structured latent variable model capturing the required inferences. The model abstracts over the specific entities appearing in the articles, grouping them into generalized categories, thus allowing the model to adapt to previously unseen situations.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00116"
}
% __index_level_0__: 59,643
@article{tuan-etal-2016-utilizing,
    title = "Utilizing Temporal Information for Taxonomy Construction",
    author = "Tuan, Luu Anh and Hui, Siu Cheung and Ng, See Kiong",
    editor = "Lee, Lillian and Johnson, Mark and Toutanova, Kristina",
    year = "2016",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q16-1039/",
    pages = "551--564",
    abstract = "
Taxonomies play an important role in many applications by organizing domain knowledge into a hierarchy of {\textquoteleft}is-a' relations between terms. Previous work on automatic construction of taxonomies from text documents either ignored temporal information or used fixed time periods to discretize the time series of documents. In this paper, we propose a time-aware method to automatically construct and effectively maintain a taxonomy from a given series of documents preclustered for a domain of interest. The method extracts temporal information from the documents and uses a timestamp contribution function to score the temporal relevance of the evidence from source texts when identifying the taxonomic relations for constructing the taxonomy. Experimental results show that our proposed method outperforms the state-of-the-art methods by increasing F-measure up to 7{\%}{--}20{\%}. Furthermore, the proposed method can incrementally update the taxonomy by adding fresh relations from new data and removing outdated relations using an information decay function. It thus avoids rebuilding the whole taxonomy from scratch for every update and keeps the taxonomy effectively up-to-date in order to track the latest information trends in the rapidly evolving domain.
    ",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "4",
    doi = "10.1162/tacl_a_00117"
}
% __index_level_0__: 59,644
@inproceedings{elliott-etal-2016-multimodal,
    title = "Multimodal Learning and Reasoning",
    author = "Elliott, Desmond and Kiela, Douwe and Lazaridou, Angeliki",
    editor = "Birch, Alexandra and Zuidema, Willem",
    booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts",
    month = aug,
    year = "2016",
    address = "Berlin, Germany",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P16-5001/",
    abstract = "
Natural Language Processing has broadened in scope to tackle more and more challenging language understanding and reasoning tasks. The core NLP tasks remain predominantly unimodal, focusing on linguistic input, despite the fact that we, humans, acquire and use language while communicating in perceptually rich environments. Moving towards human-level AI will require the integration and modeling of multiple modalities beyond language. With this tutorial, our aim is to introduce researchers to the areas of NLP that have dealt with multimodal signals. The key advantage of using multimodal signals in NLP tasks is the complementarity of the data in different modalities. For example, we are less likely to nd descriptions of yellow bananas or wooden chairs in text corpora, but these visual attributes can be readily extracted directly from images. Multimodal signals, such as visual, auditory or olfactory data, have proven useful for models of word similarity and relatedness, automatic image and video description, and even predicting the associated smells of words. Finally, multimodality offers a practical opportunity to study and apply multitask learning, a general machine learning paradigm that improves generalization performance of a task by using training signals of other related tasks.All material associated to the tutorial will be available at \url{http://multimodalnlp.github.io/}
    "
}
% __index_level_0__: 60,027
@inproceedings{koehn-2016-computer,
    title = "Computer Aided Translation",
    author = "Koehn, Philipp",
    editor = "Birch, Alexandra and Zuidema, Willem",
    booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts",
    month = aug,
    year = "2016",
    address = "Berlin, Germany",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P16-5003/",
    abstract = "
Moving beyond post-editing machine translation, a number of recent research efforts have advanced computer aided translation methods that allow for more interactivity, richer information such as confidence scores, and the completed feedback loop of instant adaptation of machine translation models to user translations.This tutorial will explain the main techniques for several aspects of computer aided translation: confidence measures;interactive machine translation (interactive translation prediction);bilingual concordancers;translation option display;paraphrasing (alternative translation suggestions);visualization of word alignment;online adaptation;automatic reviewing;integration of translation memory;eye tracking, logging, and cognitive user models;For each of these, the state of the art and open challenges are presented. The tutorial will also look under the hood of the open source CASMACAT toolkit that is based on MATECAT, and available as a ``Home Edition'' to be installed on a desktop machine. The target audience of this tutorials are researchers interested in computer aided machine translation and practitioners who want to use or deploy advanced CAT technology.
    "
}
% __index_level_0__: 60,029
@inproceedings{camacho-collados-etal-2016-semantic,
    title = "Semantic Representations of Word Senses and Concepts",
    author = "Camacho-Collados, Jos{\'e} and Iacobacci, Ignacio and Navigli, Roberto and Taher Pilehvar, Mohammad",
    editor = "Birch, Alexandra and Zuidema, Willem",
    booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts",
    month = aug,
    year = "2016",
    address = "Berlin, Germany",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P16-5004/",
    abstract = "
Representing the semantics of linguistic items in a machine {\-}interpretable form has been a major goal of Natural Language Processing since its earliest days. Among the range of different linguistic items, words have attracted the most research attention. However, word representations have an important limitation: they conflate different meanings of a word into a single vector. Representations of word senses have the potential to overcome this inherent limitation. Indeed, the representation of individual word senses and concepts has recently gained in popularity with several experimental results showing that a considerable performance improvement can be achieved across different NLP applications upon moving from word level to the deeper sense and concept levels. Another interesting point regarding the representation of concepts and word senses is that these models can be seamlessly applied to other linguistic items, such as words, phrases, sentences, etc.This tutorial will first provide a brief overview of the recent literature concerning word representation (both count based and neural network based). It will then describe the advantages of moving from the word level to the deeper level of word senses and concepts, providing an extensive review of state {\-}of {\-}the {\-}art systems. Approaches covered will not only include those which draw upon knowledge resources such as WordNet, Wikipedia, BabelNet or FreeBase as reference, but also the so {\-}called multi {\-}prototype approaches which learn sense distinctions by using different clustering techniques. Our tutorial will discuss the advantages and potential limitations of all approaches, showing their most successful applications to date. We will conclude by presenting current open problems and lines of future work.
    "
}
% __index_level_0__: 60,030
@inproceedings{wang-wang-2016-understanding,
    title = "Understanding Short Texts",
    author = "Wang, Zhongyuan and Wang, Haixun",
    editor = "Birch, Alexandra and Zuidema, Willem",
    booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts",
    month = aug,
    year = "2016",
    address = "Berlin, Germany",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P16-5007/",
    abstract = "
Billions of short texts are produced every day, in the form of search queries, ad keywords, tags, tweets, messenger conversations, social network posts, etc. Unlike documents, short texts have some unique characteristics which make them difficult to handle. First, short texts, especially search queries, do not always observe the syntax of a written language. This means traditional NLP techniques, such as syntactic parsing, do not always apply to short texts. Second, short texts contain limited context. The majority of search queries contain less than 5 words, and tweets can have no more than 140 characters. Because of the above reasons, short texts give rise to a significant amount of ambiguity, which makes them extremely difficult to handle. On the other hand, many applications, including search engines, ads, automatic question answering, online advertising, recommendation systems, etc., rely on short text understanding. In all these applications, the necessary first step is to transform an input text into a machine-interpretable representation, namely to ``understand'' the short text. A growing number of approaches leverage external knowledge to address the issue of inadequate contextual information that accompanies the short texts. These approaches can be classified into two categories: Explicit Representation Model (ERM) and Implicit Representation Model (IRM). In this tutorial, we will present a comprehensive overview of short text understanding based on explicit semantics (knowledge graph representation, acquisition, and reasoning) and implicit semantics (embedding and deep learning). Specifically, we will go over various techniques in knowledge acquisition, representation, and inferencing has been proposed for text understanding, and we will describe massive structured and semi-structured data that have been made available in the recent decade that directly or indirectly encode human knowledge, turning the knowledge representation problems into a computational grand challenge with feasible solutions insight.
    "
}
% __index_level_0__: 60,033
@inproceedings{petruck-dodge-2016-metanet,
    title = "{M}eta{N}et: Repository, Identification System, and Applications",
    author = "Petruck, Miriam R L and Dodge, Ellen K",
    editor = "Birch, Alexandra and Zuidema, Willem",
    booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts",
    month = aug,
    year = "2016",
    address = "Berlin, Germany",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P16-5008/",
    abstract = "
The ubiquity of metaphor in language (Lakoff and Johnson 1980) has served as impetus for cognitive linguistic approaches to the study of language, mind, and the study of mind (e.g. Thibodeau {\&} Boroditsky 2011). While native speakers use metaphor naturally and easily, the treatment and interpretation of metaphor in computational systems remains challenging because such systems have not succeeded in developing ways to recognize the semantic elements that define metaphor. This tutorial demonstrates MetaNet`s frame-based semantic analyses, and their informing of MetaNet`s automatic metaphor identification system. Participants will gain a complete understanding of the theoretical basis and the practical workings of MetaNet, and acquire relevant information about the Frame Semantics basis of that knowledge base and the way that FrameNet handles the widespread phenomenon of metaphor in language. The tutorial is geared to researchers and practitioners of language technology, not necessarily experts in metaphor analysis or knowledgeable about either FrameNet or MetaNet, but who are interested in natural language processing tasks that involve automatic metaphor processing, or could benefit from exposure to tools and resources that support frame-based deep semantic, analyses of language, including metaphor as a widespread phenomenon in human language.
    "
}
% __index_level_0__: 60,034
@inproceedings{gaudio-etal-2016-evaluating,
    title = "Evaluating Machine Translation in a Usage Scenario",
    author = "Gaudio, Rosa and Burchardt, Aljoscha and Branco, Ant{\'o}nio",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1001/",
    pages = "1--8",
    abstract = "
In this document we report on a user-scenario-based evaluation aiming at assessing the performance of machine translation (MT) systems in a real context of use. We describe a sequel of experiments that has been performed to estimate the usefulness of MT and to test if improvements of MT technology lead to better performance in the usage scenario. One goal is to find the best methodology for evaluating the eventual benefit of a machine translation system in an application. The evaluation is based on the QTLeap corpus, a novel multilingual language resource that was collected through a real-life support service via chat. It is composed of naturally occurring utterances produced by users while interacting with a human technician providing answers. The corpus is available in eight different languages: Basque, Bulgarian, Czech, Dutch, English, German, Portuguese and Spanish.
    "
}
% __index_level_0__: 60,311
@inproceedings{du-etal-2016-using,
    title = "Using {B}abel{N}et to Improve {OOV} Coverage in {SMT}",
    author = "Du, Jinhua and Way, Andy and Zydron, Andrzej",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1002/",
    pages = "9--15",
    abstract = "
Out-of-vocabulary words (OOVs) are a ubiquitous and difficult problem in statistical machine translation (SMT). This paper studies different strategies of using BabelNet to alleviate the negative impact brought about by OOVs. BabelNet is a multilingual encyclopedic dictionary and a semantic network, which not only includes lexicographic and encyclopedic terms, but connects concepts and named entities in a very large network of semantic relations. By taking advantage of the knowledge in BabelNet, three different methods {\textemdash} using direct training data, domain-adaptation techniques and the BabelNet API {\textemdash} are proposed in this paper to obtain translations for OOVs to improve system performance. Experimental results on English{\textemdash}Polish and English{\textemdash}Chinese language pairs show that domain adaptation can better utilize BabelNet knowledge and performs better than other methods. The results also demonstrate that BabelNet is a really useful tool for improving translation performance of SMT systems.
    "
}
% __index_level_0__: 60,312
@inproceedings{kordoni-etal-2016-enhancing,
    title = "Enhancing Access to Online Education: Quality Machine Translation of {MOOC} Content",
    author = "Kordoni, Valia and van den Bosch, Antal and Kermanidis, Katia Lida and Sosoni, Vilelmini and Cholakov, Kostadin and Hendrickx, Iris and Huck, Matthias and Way, Andy",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1003/",
    pages = "16--22",
    abstract = "
The present work is an overview of the TraMOOC (Translation for Massive Open Online Courses) research and innovation project, a machine translation approach for online educational content. More specifically, videolectures, assignments, and MOOC forum text is automatically translated from English into eleven European and BRIC languages. Unlike previous approaches to machine translation, the output quality in TraMOOC relies on a multimodal evaluation schema that involves crowdsourcing, error type markup, an error taxonomy for translation model comparison, and implicit evaluation via text mining, i.e. entity recognition and its performance comparison between the source and the translated text, and sentiment analysis on the students' forum posts. Finally, the evaluation output will result in more and better quality in-domain parallel data that will be fed back to the translation engine for higher quality output. The translation service will be incorporated into the Iversity MOOC platform and into the VideoLectures.net digital library portal.
    "
}
% __index_level_0__: 60,313
@inproceedings{marg-2016-trials,
    title = "The Trials and Tribulations of Predicting Post-Editing Productivity",
    author = "Marg, Lena",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1004/",
    pages = "23--26",
    abstract = "
While an increasing number of (automatic) metrics is available to assess the linguistic quality of machine translations, their interpretation remains cryptic to many users, specifically in the translation community. They are clearly useful for indicating certain overarching trends, but say little about actual improvements for translation buyers or post-editors. However, these metrics are commonly referenced when discussing pricing and models, both with translation buyers and service providers. With the aim of focusing on automatic metrics that are easier to understand for non-research users, we identified Edit Distance (or Post-Edit Distance) as a good fit. While Edit Distance as such does not express cognitive effort or time spent editing machine translation suggestions, we found that it correlates strongly with the productivity tests we performed, for various language pairs and domains. This paper aims to analyse Edit Distance and productivity data on a segment level based on data gathered over some years. Drawing from these findings, we want to then explore how Edit Distance could help in predicting productivity on new content. Some further analysis is proposed, with findings to be presented at the conference.
    "
}
% __index_level_0__: 60,314
@inproceedings{popovic-arcan-2016-pe2rr,
    title = "{PE}2rr Corpus: Manual Error Annotation of Automatically Pre-annotated {MT} Post-edits",
    author = "Popovi{\'c}, Maja and Ar{\v{c}}an, Mihael",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1005/",
    pages = "27--32",
    abstract = "
We present a freely available corpus containing source language texts from different domains along with their automatically generated translations into several distinct morphologically rich languages, their post-edited versions, and error annotations of the performed post-edit operations. We believe that the corpus will be useful for many different applications. The main advantage of the approach used for creation of the corpus is the fusion of post-editing and error classification tasks, which have usually been seen as two independent tasks, although naturally they are not. We also show benefits of coupling automatic and manual error classification which facilitates the complex manual error annotation task as well as the development of automatic error classification tools. In addition, the approach facilitates annotation of language pair related issues.
    "
}
% __index_level_0__: 60,315
@inproceedings{mohammad-etal-2016-sentiment,
    title = "Sentiment Lexicons for {A}rabic Social Media",
    author = "Mohammad, Saif and Salameh, Mohammad and Kiritchenko, Svetlana",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1006/",
    pages = "33--37",
    abstract = "
Existing Arabic sentiment lexicons have low coverage{\textemdash}with only a few thousand entries. In this paper, we present several large sentiment lexicons that were automatically generated using two different methods: (1) by using distant supervision techniques on Arabic tweets, and (2) by translating English sentiment lexicons into Arabic using a freely available statistical machine translation system. We compare the usefulness of new and old sentiment lexicons in the downstream application of sentence-level sentiment analysis. Our baseline sentiment analysis system uses numerous surface form features. Nonetheless, the system benefits from using additional features drawn from sentiment lexicons. The best result is obtained using the automatically generated Dialectal Hashtag Lexicon and the Arabic translations of the NRC Emotion Lexicon (accuracy of 66.6{\%}). Finally, we describe a qualitative study of the automatic translations of English sentiment lexicons into Arabic, which shows that about 88{\%} of the automatically translated entries are valid for English as well. Close to 10{\%} of the invalid entries are caused by gross mistranslations, close to 40{\%} by translations into a related word, and about 50{\%} by differences in how the word is used in Arabic.
    "
}
% __index_level_0__: 60,316
@inproceedings{castellucci-etal-2016-language,
    title = "A Language Independent Method for Generating Large Scale Polarity Lexicons",
    author = "Castellucci, Giuseppe and Croce, Danilo and Basili, Roberto",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1007/",
    pages = "38--45",
    abstract = "
Sentiment Analysis systems aims at detecting opinions and sentiments that are expressed in texts. Many approaches in literature are based on resources that model the prior polarity of words or multi-word expressions, i.e. a polarity lexicon. Such resources are defined by teams of annotators, i.e. a manual annotation is provided to associate emotional or sentiment facets to the lexicon entries. The development of such lexicons is an expensive and language dependent process, making them often not covering all the linguistic sentiment phenomena. Moreover, once a lexicon is defined it can hardly be adopted in a different language or even a different domain. In this paper, we present several Distributional Polarity Lexicons (DPLs), i.e. large-scale polarity lexicons acquired with an unsupervised methodology based on Distributional Models of Lexical Semantics. Given a set of heuristically annotated sentences from Twitter, we transfer the sentiment information from sentences to words. The approach is mostly unsupervised, and experimental evaluations on Sentiment Analysis tasks in two languages show the benefits of the generated resources. The generated DPLs are publicly available in English and Italian.
    "
}
% __index_level_0__: 60,317
@inproceedings{naskar-etal-2016-sentiment,
    title = "Sentiment Analysis in Social Networks through Topic modeling",
    author = "Naskar, Debashis and Mokaddem, Sidahmed and Rebollo, Miguel and Onaindia, Eva",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1008/",
    pages = "46--53",
    abstract = "
In this paper, we analyze the sentiments derived from the conversations that occur in social networks. Our goal is to identify the sentiments of the users in the social network through their conversations. We conduct a study to determine whether users of social networks (Twitter in particular) tend to gather together according to the likeness of their sentiments. In our proposed framework, (1) we use ANEW, a lexical dictionary, to identify the affective emotional feelings associated with a message according to Russell`s model of affect; (2) we design a topic modeling mechanism called Sent{\_}LDA, based on the Latent Dirichlet Allocation (LDA) generative model, which allows us to find the topic distribution in a general conversation, and we associate topics with emotions; (3) we detect communities in the network according to the density and frequency of the messages among the users; and (4) we compare the sentiments of the communities by using Russell`s model of affect versus polarity, and we measure the extent to which topic distribution strengthens likeness in the sentiments of the users of a community. This work contributes a topic modeling methodology to analyze the sentiments in conversations that take place in social networks.
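As a rough approximation of steps (1) and (2), one can combine standard LDA with a valence lexicon: fit topics over the conversation texts, then score each topic by the lexicon valence of its top words. The sketch below uses scikit-learn's LDA and made-up valence values standing in for ANEW; Sent_LDA itself is a modified generative model, so this only illustrates the pipeline:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["the match was thrilling and joyful",
        "terrible service ruined the evening",
        "joyful crowd at the stadium",
        "the evening news was depressing"]

# Illustrative valence scores standing in for the ANEW lexicon (assumption).
valence = {"thrilling": 0.8, "joyful": 0.9, "terrible": 0.1, "depressing": 0.2}

vec = CountVectorizer()
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Associate each topic with an emotion score: lexicon valence of its words,
# weighted by the topic-word distribution (0.5 = neutral for unknown words).
vocab = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    weights = topic / topic.sum()
    score = sum(weights[i] * valence.get(w, 0.5) for i, w in enumerate(vocab))
    top = [vocab[i] for i in topic.argsort()[-3:]]
    print(f"topic {k}: top words {top}, valence {score:.2f}")
```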
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,318
inproceedings
garcia-pablos-etal-2016-comparison
A Comparison of Domain-based Word Polarity Estimation using different Word Embeddings
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1009/
Garc{\'i}a Pablos, Aitor and Cuadros, Montse and Rigau, German
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
54--60
A key point in Sentiment Analysis is to determine the polarity of the sentiment implied by a certain word or expression. In basic Sentiment Analysis systems, the sentiment polarity of words is accounted for and weighted in different ways to provide a degree of positivity/negativity. Currently, words are also modelled as continuous dense vectors, known as word embeddings, which seem to encode interesting semantic knowledge. With regard to Sentiment Analysis, word embeddings are used as features in more complex supervised classification systems to obtain sentiment classifiers. In this paper we compare a set of existing sentiment lexicons and sentiment lexicon generation techniques. We also show a simple but effective technique to calculate a word polarity value for each word in a domain using existing continuous word embedding generation methods. Further, we show that word embeddings calculated on an in-domain corpus capture the polarity better than ones calculated on a general-domain corpus.
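The "simple but effective" polarity technique is not detailed in the abstract; one standard embedding-based formulation scores a word by its similarity to positive seed words minus its similarity to negative ones. A minimal sketch, with random toy vectors standing in for real in-domain embeddings (an assumption, not necessarily the authors' exact method):

```python
import numpy as np

rng = np.random.default_rng(1)
# Assumed in-domain embeddings; in practice these would be trained on a
# domain corpus (e.g. with word2vec), not sampled at random.
emb = {w: rng.normal(size=100) for w in
       ["good", "great", "bad", "awful", "battery", "screen"]}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

POS_SEEDS, NEG_SEEDS = ["good", "great"], ["bad", "awful"]

def polarity(word):
    """Polarity = mean similarity to positive seeds minus negative seeds."""
    v = emb[word]
    pos = np.mean([cos(v, emb[s]) for s in POS_SEEDS])
    neg = np.mean([cos(v, emb[s]) for s in NEG_SEEDS])
    return pos - neg

for w in ["battery", "screen"]:
    print(w, round(polarity(w), 3))
```

Because the embeddings here are random, the output is meaningless; with embeddings trained on, say, product reviews, domain-specific polarity (e.g. of "battery") emerges.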
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,319
inproceedings
sidorov-etal-2016-speaker
Could Speaker, Gender or Age Awareness be beneficial in Speech-based Emotion Recognition?
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1010/
Sidorov, Maxim and Schmitt, Alexander and Semenkin, Eugene and Minker, Wolfgang
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
61--68
Emotion Recognition (ER) is an important part of dialogue analysis which can be used in order to improve the quality of Spoken Dialogue Systems (SDSs). The emotional hypothesis for the current response of an end-user might be utilised by the dialogue manager component in order to change the SDS strategy, which could result in a quality enhancement. In this study, additional speaker-related information is used to improve the performance of the speech-based ER process. The analysed information comprises the speaker identity, gender and age of a user. Two schemes are described here, namely, using additional information as an independent variable within the feature vector and creating separate emotional models for each speaker, gender or age cluster independently. The performances of the proposed approaches were compared against the baseline ER system, where no additional information is used, on a number of emotional speech corpora of German, English, Japanese and Russian. The study revealed that for some of the corpora the proposed approach significantly outperforms the baseline methods, with a relative difference of up to 11.9{\%}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,320
inproceedings
takamura-etal-2016-discriminative
Discriminative Analysis of Linguistic Features for Typological Study
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1011/
Takamura, Hiroya and Nagata, Ryo and Kawasaki, Yoshifumi
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
69--76
We address the task of automatically estimating the missing values of linguistic features by making use of the fact that some linguistic features in typological databases are informative to each other. The questions to address in this work are (i) how much predictive power do features have on the value of another feature? (ii) to what extent can we attribute this predictive power to genealogical or areal factors, as opposed to being provided by tendencies or implicational universals? To address these questions, we conduct a discriminative or predictive analysis on the typological database. Specifically, we use a machine-learning classifier to estimate the value of each feature of each language using the values of the other features, under different choices of training data: all the other languages, or all the other languages except for the ones having the same origin or area with the target language.
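The predictive analysis can be pictured as a leave-one-language-out classification loop: mask one feature of one language, train on the remaining languages, and predict the masked value. A toy sketch with an assumed, made-up feature matrix (the paper uses a real typological database and additionally filters training languages by genealogy or area):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for a typological database: rows = languages, columns =
# categorical feature values (WALS-style); -1 marks a missing value.
feats = np.array([
    [0, 1, 1, 0],   # lang A
    [0, 1, 1, 1],   # lang B
    [1, 0, 0, 1],   # lang C
    [1, 0, -1, 1],  # lang D: feature 2 missing
])

target_lang, target_feat = 3, 2
train = [i for i in range(len(feats)) if i != target_lang]

# Predict the missing feature of one language from its other features,
# training only on the remaining languages (genealogical or areal filters
# could further restrict `train`, as the paper investigates).
X = np.delete(feats[train], target_feat, axis=1)
y = feats[train][:, target_feat]
clf = RandomForestClassifier(random_state=0).fit(X, y)

x_new = np.delete(feats[target_lang], target_feat).reshape(1, -1)
print("predicted value:", clf.predict(x_new)[0])
```

Comparing accuracy under the different training-set restrictions is what separates genealogical/areal predictive power from universal tendencies.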
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,321
inproceedings
hupkes-bod-2016-pos
{POS}-tagging of Historical {D}utch
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1012/
Hupkes, Dieuwke and Bod, Rens
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
77--82
We present a study of the adequacy of current methods that are used for POS-tagging historical Dutch texts, as well as an exploration of the influence of employing different techniques to improve upon the current practice. The main focus of this paper is on (unsupervised) methods that are easily adaptable for different domains without requiring extensive manual input. It was found that modernising the spelling of corpora prior to tagging them with a tagger trained on contemporary Dutch results in a large increase in accuracy, but that spelling normalisation alone is not sufficient to obtain state-of-the-art results. The best results were achieved by training a POS-tagger on a corpus automatically annotated by projecting (automatically assigned) POS-tags via word alignments from a contemporary corpus. This result is promising, as it was reached without including any domain knowledge or context dependencies. We argue that the insights of this study combined with semi-supervised learning techniques for domain adaptation can be used to develop a general-purpose diachronic tagger for Dutch.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,322
inproceedings
rauschenberger-etal-2016-language
A Language Resource of {G}erman Errors Written by Children with Dyslexia
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1013/
Rauschenberger, Maria and Rello, Luz and F{\"u}chsel, Silke and Thomaschewski, J{\"o}rg
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
83--87
In this paper we present a language resource for German, composed of a list of 1,021 unique errors extracted from a collection of texts written by people with dyslexia. The errors were annotated with a set of linguistic characteristics as well as visual and phonetic features. We present the compilation and the annotation criteria for the different types of dyslexic errors. This language resource has many potential uses since errors written by people with dyslexia reflect their difficulties. For instance, it has already been used to design language exercises to treat dyslexia in German. To the best of our knowledge, this is the first resource of its kind for German.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,323
inproceedings
barbagli-etal-2016-cita
{CI}t{A}: an {L}1 {I}talian Learners Corpus to Study the Development of Writing Competence
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1014/
Barbagli, Alessia and Lucisano, Pietro and Dell{'}Orletta, Felice and Montemagni, Simonetta and Venturi, Giulia
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
88--95
In this paper, we present the CItA corpus (Corpus Italiano di Apprendenti L1), a collection of essays written by Italian L1 learners collected during the first and second year of lower secondary school. The corpus was built in the framework of an interdisciplinary study jointly carried out by computational linguists and experimental pedagogists, aimed at tracking the development of written language competence over the years in relation to students' background information.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,324
inproceedings
yu-etal-2016-even
If You {E}ven Don`t Have a Bit of {B}ible: Learning Delexicalized {POS} Taggers
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1015/
Yu, Zhiwei and Mare{\v{c}}ek, David and {\v{Z}}abokrtsk{\'y}, Zden{\v{e}}k and Zeman, Daniel
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
96--103
Part-of-speech (POS) induction is one of the most popular tasks in research on unsupervised NLP. Various unsupervised and semi-supervised methods have been proposed to tag an unseen language. However, many of them require some partial understanding of the target language because they rely on dictionaries or parallel corpora such as the Bible. In this paper, we propose a different method named delexicalized tagging, for which we only need a raw corpus of the target language. We transfer tagging models trained on annotated corpora of one or more resource-rich languages. We employ language-independent features such as word length, frequency, neighborhood entropy, character classes (alphabetic vs. numeric vs. punctuation), etc. We demonstrate that such features can, to a certain extent, serve as predictors of the part of speech, represented by the universal POS tag.
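The feature set lends itself to a compact extractor; the sketch below computes word length, log frequency, a coarse character class and right-neighbour entropy from a raw token stream. The exact feature inventory and binning are assumptions based on the abstract's list:

```python
import math
from collections import Counter

def char_class(tok):
    if tok.isalpha():
        return "alpha"
    if tok.isdigit():
        return "num"
    return "punct" if not tok.isalnum() else "mixed"

def delex_features(corpus_tokens):
    """Language-independent features per token type: length, log frequency,
    character class and right-neighbour entropy - no lexical identity."""
    freq = Counter(corpus_tokens)
    right = {}
    for w, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        right.setdefault(w, Counter())[nxt] += 1
    feats = {}
    for w in freq:
        neigh = right.get(w, Counter())
        total = sum(neigh.values())
        ent = -sum((c / total) * math.log2(c / total)
                   for c in neigh.values()) if total else 0.0
        feats[w] = {"len": len(w), "logfreq": math.log(freq[w]),
                    "class": char_class(w), "right_entropy": ent}
    return feats

toks = "the dog saw the cat . the dog ran .".split()
for w, f in delex_features(toks).items():
    print(w, f)
```

Because none of these features mention the word form itself, a tagger trained on them in one language can be applied unchanged to another.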
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,325
inproceedings
lopes-etal-2016-spedial
The {S}pe{D}ial datasets: datasets for Spoken Dialogue Systems analytics
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1016/
Lopes, Jos{\'e} and Chorianopoulou, Arodami and Palogiannidi, Elisavet and Moniz, Helena and Abad, Alberto and Louka, Katerina and Iosif, Elias and Potamianos, Alexandros
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
104--110
The SpeDial consortium is sharing two datasets that were used during the SpeDial project. By sharing them with the community we are providing a resource to reduce the duration of the development cycle of new Spoken Dialogue Systems (SDSs). The datasets include audio and several manual annotations, i.e., miscommunication, anger, satisfaction, repetition, gender and task success. The datasets were created with data from real users and cover two different languages: English and Greek. Detectors for miscommunication, anger and gender were trained for both systems. The detectors were particularly accurate in tasks where humans have high annotator agreement, such as miscommunication and gender. As expected, due to the subjectivity of the task, the anger detector had a less satisfactory performance. Nevertheless, we proved that the automatic detection of situations that can lead to problems in SDSs is possible and can be a promising direction to reduce the duration of the SDS development cycle.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,326
inproceedings
amanova-etal-2016-creating
Creating Annotated Dialogue Resources: Cross-domain Dialogue Act Classification
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1017/
Amanova, Dilafruz and Petukhova, Volha and Klakow, Dietrich
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
111--117
This paper describes a method to automatically create dialogue resources annotated with dialogue act information by reusing existing dialogue corpora. Numerous dialogue corpora are available for research purposes and many of them are annotated with dialogue act information that captures the intentions encoded in user utterances. Annotated dialogue resources, however, differ in various respects: data collection settings and modalities used, dialogue task domains and scenarios (if any) underlying the collection, number and roles of dialogue participants involved and dialogue act annotation schemes applied. The presented study encompasses three phases of data-driven investigation. We, first, assess the importance of various types of features and their combinations for effective cross-domain dialogue act classification. Second, we establish the best predictive model comparing various cross-corpora training settings. Finally, we specify models adaptation procedures and explore late fusion approaches to optimize the overall classification decision taking process. The proposed methodology accounts for empirically motivated and technically sound classification procedures that may reduce annotation and training costs significantly.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,327
inproceedings
collins-traum-2016-towards
Towards a Multi-dimensional Taxonomy of Stories in Dialogue
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1018/
Collins, Kathryn J. and Traum, David
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
118--124
In this paper, we present a taxonomy of stories told in dialogue. We based our scheme on prior work analyzing narrative structure and method of telling, relation to storyteller identity, as well as some categories particular to dialogue, such as how the story gets introduced. Our taxonomy currently has 5 major dimensions, with most having sub-dimensions - each dimension has an associated set of dimension-specific labels. We adapted an annotation tool for this taxonomy and have annotated portions of two different dialogue corpora, Switchboard and the Distress Analysis Interview Corpus. We present examples of some of the tags and concepts with stories from Switchboard, and some initial statistics of frequencies of the tags.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,328
inproceedings
zarriess-etal-2016-pentoref
{P}ento{R}ef: A Corpus of Spoken References in Task-oriented Dialogues
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1019/
Zarrie{\ss}, Sina and Hough, Julian and Kennington, Casey and Manuvinakurike, Ramesh and DeVault, David and Fern{\'a}ndez, Raquel and Schlangen, David
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
125--131
PentoRef is a corpus of task-oriented dialogues collected in systematically manipulated settings. The corpus is multilingual, with English and German sections, and overall comprises more than 20,000 utterances. The dialogues are fully transcribed and annotated with referring expressions mapped to objects in corresponding visual scenes, which makes the corpus a rich resource for research on spoken referring expressions in generation and resolution. The corpus includes several sub-corpora that correspond to different dialogue situations where parameters related to interactivity, visual access, and verbal channel have been manipulated in systematic ways. The corpus thus lends itself to very targeted studies of reference in spontaneous dialogue.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,329
inproceedings
chowdhury-etal-2016-transfer
Transfer of Corpus-Specific Dialogue Act Annotation to {ISO} Standard: Is it worth it?
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1020/
Chowdhury, Shammur Absar and Stepanov, Evgeny and Riccardi, Giuseppe
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
132--135
Spoken conversation corpora often adapt existing Dialogue Act (DA) annotation specifications, such as DAMSL, DIT++, etc., to task-specific needs, yielding incompatible annotations and thus limiting corpora re-usability. The recently accepted ISO standard for DA annotation {--} Dialogue Act Markup Language (DiAML) {--} is designed to be domain- and application-independent. Moreover, the clear separation of dialogue dimensions and communicative functions, coupled with the hierarchical organization of the latter, allows for classification at different levels of granularity. However, re-annotating existing corpora with the new scheme might require significant effort. In this paper we test the utility of the ISO standard through comparative evaluation of the corpus-specific legacy and the semi-automatically transferred DiAML DA annotations on a supervised dialogue act classification task. To test the domain independence of the resulting annotations, we perform cross-domain and data-aggregation evaluation. Compared to the legacy annotation scheme, on the Italian LUNA Human-Human corpus, the DiAML annotation scheme exhibits better cross-domain and data-aggregation classification performance, while maintaining comparable in-domain performance.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,330
inproceedings
ghaddar-langlais-2016-wikicoref
{W}iki{C}oref: An {E}nglish Coreference-annotated Corpus of {W}ikipedia Articles
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1021/
Ghaddar, Abbas and Langlais, Phillippe
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
136--142
This paper presents WikiCoref, an English corpus annotated for anaphoric relations, where all documents are from the English version of Wikipedia. Our annotation scheme follows the one of OntoNotes with a few disparities. We annotated each markable with coreference type, mention type and the equivalent Freebase topic. Since most similar annotation efforts concentrate on very specific types of written text, mainly newswire, there is a lack of resources for otherwise over-used Wikipedia texts. The corpus described in this paper addresses this issue. We present a freely available resource we initially devised for improving coreference resolution algorithms dedicated to Wikipedia texts. Our corpus has no restriction on the topics of the documents being annotated, and documents of various sizes have been considered for annotation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,331
inproceedings
schlechtweg-2016-exploitation
Exploitation of Co-reference in Distributional Semantics
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1022/
Schlechtweg, Dominik
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
143--149
The aim of distributional semantics is to model the similarity of the meaning of words via the words they occur with. Thereby, it relies on the distributional hypothesis, implying that similar words have similar contexts. Deducing meaning from the distribution of words is interesting as it can be done automatically on large amounts of freely available raw text. It is because of this convenience that most current state-of-the-art models of distributional semantics operate on raw text, although there have been successful attempts to integrate other kinds of{\textemdash}e.g., syntactic{\textemdash}information to improve distributional semantic models. In contrast, less attention has been paid to semantic information in the research community. One reason for this is that the extraction of semantic information from raw text is a complex, elaborate matter and in large part not yet satisfactorily solved. Recently, however, there have been successful attempts to integrate a certain kind of semantic information, i.e., co-reference. Two basically different kinds of information contributed by co-reference with respect to the distribution of words will be identified. We will then focus on one of these and examine its general potential to improve distributional semantic models, as well as certain more specific hypotheses.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,332
inproceedings
roesiger-kuhn-2016-ims
{IMS} {H}ot{C}oref {DE}: A Data-driven Co-reference Resolver for {G}erman
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1024/
Roesiger, Ina and Kuhn, Jonas
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
155--160
This paper presents a data-driven co-reference resolution system for German that has been adapted from IMS HotCoref, a co-reference resolver for English. It describes the difficulties when resolving co-reference in German text, the adaptation process and the features designed to address linguistic challenges brought forth by German. We report performance on the reference dataset T{\"u}Ba-D/Z and include a post-task SemEval 2010 evaluation, showing that the resolver achieves state-of-the-art performance. We also include ablation experiments that indicate that integrating linguistic features improves results. The paper also describes the steps and the format necessary to use the resolver on new texts. The tool is freely available for download.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,334
inproceedings
mujadia-etal-2016-coreference
Coreference Annotation Scheme and Relation Types for {H}indi
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1025/
Mujadia, Vandan and Gupta, Palash and Sharma, Dipti Misra
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
161--168
This paper describes a coreference annotation scheme for Hindi, issues specific to coreference annotation, and their solutions through our proposed annotation scheme. We introduce different coreference relation types between continuous mentions of the same coreference chain, such as {\textquotedblleft}Part-of{\textquotedblright}, {\textquotedblleft}Function-value pair{\textquotedblright}, etc. We used Jaccard-similarity-based Krippendorff`s alpha to demonstrate consistency in the annotation scheme, the annotation and the corpora. To ease the coreference annotation process, we built a semi-automatic Coreference Annotation Tool (CAT). We also provide statistics of coreference annotation on the Hindi Dependency Treebank (HDTB).
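The agreement computation can be illustrated directly: represent each annotator's markables as a set of spans and take their Jaccard overlap. The sketch below shows the pairwise similarity only; plugging the corresponding distance into Krippendorff's alpha is a further step not reproduced here:

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of annotated mention spans."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Mentions as (start, end) character offsets from two annotators (toy data).
ann1 = {(0, 4), (10, 18), (25, 31)}
ann2 = {(0, 4), (10, 18), (40, 45)}

# The paper plugs a Jaccard-based distance (1 - similarity) into
# Krippendorff's alpha; here we just report the pairwise similarity.
print("pairwise Jaccard:", jaccard(ann1, ann2))
```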
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,335
inproceedings
nedoluzhko-etal-2016-coreference
Coreference in {P}rague {C}zech-{E}nglish {D}ependency {T}reebank
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1026/
Nedoluzhko, Anna and Nov{\'a}k, Michal and Cinkov{\'a}, Silvie and Mikulov{\'a}, Marie and M{\'i}rovsk{\'y}, Ji{\v{r}}{\'i}
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
169--176
We present coreference annotation on parallel Czech-English texts of the Prague Czech-English Dependency Treebank (PCEDT). The paper describes innovations made to PCEDT 2.0 concerning coreference, as well as coreference information already present there. We characterize the coreference annotation scheme, give the statistics and compare our annotation with the coreference annotation in Ontonotes and Prague Dependency Treebank for Czech. We also present the experiments made using this corpus to improve the alignment of coreferential expressions, which helps us to collect better statistics of correspondences between types of coreferential relations in Czech and English. The corpus released as PCEDT 2.0 Coref is publicly available.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,336
inproceedings
bell-etal-2016-sieve
Sieve-based Coreference Resolution in the Biomedical Domain
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1027/
Bell, Dane and Hahn-Powell, Gus and Valenzuela-Esc{\'a}rcega, Marco A. and Surdeanu, Mihai
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
177--183
We describe challenges and advantages unique to coreference resolution in the biomedical domain, and a sieve-based architecture that leverages domain knowledge for both entity and event coreference resolution. Domain-general coreference resolution algorithms perform poorly on biomedical documents, because the cues they rely on, such as gender, are largely absent in this domain, and because they do not encode domain-specific knowledge such as the number and type of participants required in chemical reactions. Moreover, it is difficult to directly encode this knowledge into most coreference resolution algorithms because they are not rule-based. Our rule-based architecture uses sequentially applied hand-designed {\textquotedblleft}sieves{\textquotedblright}, with the output of each sieve informing and constraining subsequent sieves. This architecture provides a 3.2{\%} increase in throughput to our Reach event extraction system with precision parallel to that of the stricter system that relies solely on syntactic patterns for extraction.
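The sieve architecture itself is easy to picture: an ordered list of increasingly permissive linking rules, each operating on the chains left by its predecessors. A skeletal sketch (the sieve contents here are placeholders, not the paper's actual rules):

```python
def exact_match_sieve(mentions, chains):
    """High-precision sieve: link mentions whose surface strings match."""
    for m in mentions:
        for chain in chains:
            if any(m["text"] == c["text"] for c in chain):
                chain.append(m)
                break
        else:
            chains.append([m])
    return chains

def participant_count_sieve(mentions, chains):
    # Placeholder for a domain-specific sieve, e.g. one enforcing the
    # number and type of participants a biochemical reaction requires.
    return chains

SIEVES = [exact_match_sieve, participant_count_sieve]  # precision-ordered

def resolve(mentions):
    """Apply hand-designed sieves in sequence; each sieve sees (and is
    constrained by) the partial chains built by earlier, stricter sieves."""
    chains = []
    for sieve in SIEVES:
        chains = sieve(mentions, chains)
    return chains

mentions = [{"text": "the kinase"}, {"text": "it"}, {"text": "the kinase"}]
print(resolve(mentions))
```

Ordering sieves from most to least precise is the key design choice: early decisions are reliable and constrain the riskier rules that follow.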
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,337
inproceedings
vala-etal-2016-annotating
Annotating Characters in Literary Corpora: A Scheme, the {CHARLES} Tool, and an Annotated Novel
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1028/
Vala, Hardik and Dimitrov, Stefan and Jurgens, David and Piper, Andrew and Ruths, Derek
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
184--189
Characters form the focus of various studies of literary works, including social network analysis, archetype induction, and plot comparison. The recent rise in the computational modelling of literary works has produced a proportional rise in the demand for character-annotated literary corpora. However, automatically identifying characters is an open problem and there is low availability of literary texts with manually labelled characters. To address the latter problem, this work presents three contributions: (1) a comprehensive scheme for manually resolving mentions to characters in texts; (2) a novel collaborative annotation tool, CHARLES (CHAracter Resolution Label-Entry System), for character annotation and similar cross-document tagging tasks; and (3) the character annotations resulting from a pilot study on the novel Pride and Prejudice, demonstrating that the scheme and tool facilitate the efficient production of high-quality annotations. We expect this work to motivate the further production of annotated literary corpora to help meet the demand of the community.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,338
inproceedings
garnier-saint-dizier-2016-error
Error Typology and Remediation Strategies for Requirements Written in {E}nglish by Non-Native Speakers
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1029/
Garnier, Marie and Saint-Dizier, Patrick
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
190--197
In most international industries, English is the main language of communication for technical documents. These documents are designed to be as unambiguous as possible for their users. For international industries based in non-English speaking countries, the professionals in charge of writing requirements are often non-native speakers of English, who rarely receive adequate training in the use of English for this task. As a result, requirements can contain a relatively large diversity of lexical and grammatical errors, which are not eliminated by the use of guidelines from controlled languages. This article investigates the distribution of errors in a corpus of requirements written in English by native speakers of French. Errors are defined on the basis of grammaticality and acceptability principles, and classified using comparable categories. Results show a high proportion of errors in the Noun Phrase, notably through modifier stacking, and errors consistent with simplification strategies. Comparisons with similar corpora in other genres reveal the specificity of the distribution of errors in requirements. This research also introduces possible applied uses, in the form of strategies for the automatic detection of errors, and in-person training provided by certification boards in requirements authoring.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,339
inproceedings
keiper-etal-2016-improving
Improving {POS} Tagging of {G}erman Learner Language in a Reading Comprehension Scenario
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1030/
Keiper, Lena and Horbach, Andrea and Thater, Stefan
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
198--205
We present a novel method to automatically improve the accuracy of part-of-speech taggers on learner language. The key idea underlying our approach is to exploit the structure of a typical language learner task and automatically induce POS information for out-of-vocabulary (OOV) words. To evaluate the effectiveness of our approach, we add manual POS and normalization information to an existing language learner corpus. Our evaluation shows an increase in accuracy from 72.4{\%} to 81.5{\%} on OOV words.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,340
inproceedings
volodina-etal-2016-swell
{S}we{LL} on the rise: {S}wedish Learner Language corpus for {E}uropean Reference Level studies
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1031/
Volodina, Elena and Pil{\'a}n, Ildik{\'o} and Enstr{\"o}m, Ingegerd and Llozhi, Lorena and Lundkvist, Peter and Sundberg, Gunl{\"o}g and Sandell, Monica
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
206--212
We present a new resource for Swedish, SweLL, a corpus of Swedish learner essays linked to learners' performance according to the Common European Framework of Reference (CEFR). SweLL consists of three subcorpora {\textemdash} SpIn, SW1203 and Tisus {\textemdash} collected from three different educational establishments. The common metadata for all subcorpora includes age, gender, native languages, time of residence in Sweden, and type of written task. Depending on the subcorpus, learner texts may contain additional information, such as text genres, topics and grades. Five of the six CEFR levels are represented in the corpus {\textemdash} A1, A2, B1, B2 and C1 {\textemdash} comprising 339 essays in total. The C2 level is not included since courses at the C2 level are not offered. The workflow consists of the collection of essays and permits, essay digitization and registration, metadata annotation, and automatic linguistic annotation. Inter-rater agreement is presented on the basis of the SW1203 subcorpus. The work on SweLL is still ongoing, with more than 100 essays waiting in the pipeline. This article both describes the resource and the {\textquotedblleft}how-to{\textquotedblright} behind the compilation of SweLL.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,341
inproceedings
francois-etal-2016-svalex
{SVAL}ex: a {CEFR}-graded Lexical Resource for {S}wedish Foreign and Second Language Learners
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1032/
Fran{\c{c}}ois, Thomas and Volodina, Elena and Pil{\'a}n, Ildik{\'o} and Tack, Ana{\"i}s
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
213--219
The paper introduces SVALex, a lexical resource primarily aimed at learners and teachers of Swedish as a foreign and second language that describes the distribution of 15,681 words and expressions across the Common European Framework of Reference (CEFR). The resource is based on a corpus of coursebook texts, and thus describes the receptive vocabulary learners are exposed to during reading activities, as opposed to the productive vocabulary they use when speaking or writing. The paper describes the methodology applied to create the list and to estimate the frequency distribution. It also discusses some characteristics of the resulting resource and compares it to other lexical resources for Swedish. An interesting feature of this resource is the possibility to separate the wheat from the chaff: distinguishing the core vocabulary at each level, i.e. vocabulary shared by several coursebook writers at that level, from peripheral vocabulary used by only a minority of the coursebook writers.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,342
inproceedings
shiue-chen-2016-detecting
Detecting Word Usage Errors in {C}hinese Sentences for Learning {C}hinese as a Foreign Language
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1033/
Shiue, Yow-Ting and Chen, Hsin-Hsi
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
220--224
Automated grammatical error detection, which helps users improve their writing, is an important application in NLP. Recently more and more people are learning Chinese, and an automated error detection system can be helpful for these learners. This paper proposes n-gram features, dependency count features, dependency bigram features, and single-character features to determine whether a Chinese sentence contains word usage errors, in which a word is written in a wrong form or the word selection is inappropriate. By marking potential errors at the level of sentence segments, typically delimited by punctuation marks, the learner can try to correct the problems without the assistance of a language teacher. Experiments on the HSK corpus show that the classifier combining all sets of features achieves an accuracy of 0.8423. By utilizing certain combinations of the sets of features, we can construct a system that favors precision or recall. The best precision we achieve is 0.9536, indicating that our system is reliable and seldom produces misleading results.
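The feature families named above can be sketched as simple extractor functions over a sentence segment; the example below covers the character n-gram and single-character features and notes where the parser-dependent ones would go (the segment and the single-character word list are made up):

```python
def ngram_features(chars, n=2):
    """Character n-gram presence features for a sentence segment."""
    return {f"ng:{chars[i:i + n]}": 1 for i in range(len(chars) - n + 1)}

def single_char_features(chars, single_char_words):
    """Flag characters that also occur as single-character words: wrongly
    split or wrongly chosen words often surface this way."""
    return {f"sc:{c}": 1 for c in chars if c in single_char_words}

def extract(segment, single_char_words):
    feats = {}
    feats.update(ngram_features(segment))
    feats.update(single_char_features(segment, single_char_words))
    # Dependency-count and dependency-bigram features would require a
    # dependency parser and are omitted from this sketch.
    return feats

# A hypothetical erroneous segment and a tiny single-character word list.
print(extract("我非常喜翻你", {"我", "你"}))
```

The resulting sparse feature dictionaries can be fed to any standard binary classifier over segments labelled correct/erroneous.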
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,343
inproceedings
zhang-etal-2016-libn3l
{L}ib{N}3{L}:A Lightweight Package for Neural {NLP}
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1034/
Zhang, Meishan and Yang, Jie and Teng, Zhiyang and Zhang, Yue
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
225--229
We present a lightweight machine learning tool for NLP research. The package supports operations on both discrete and dense vectors, facilitating implementation of linear models as well as neural models. It provides several basic layers which mainly aim at single-layer linear and non-linear transformations. By using these layers, we can conveniently implement linear models and simple neural models. Besides, this package also integrates several complex layers composed from those basic layers, such as RNN, Attention Pooling, LSTM and gated RNN. Those complex layers can be used to implement deep neural models directly.
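The layer-composition idea, building complex layers such as an RNN out of basic linear and non-linear ones, can be illustrated in plain numpy. This is explicitly not LibN3L's API (which the abstract does not show), just a sketch of the design it describes:

```python
import numpy as np

rng = np.random.default_rng(0)

class Linear:
    """Basic layer: a single linear transformation."""
    def __init__(self, d_in, d_out):
        self.W = rng.normal(scale=0.1, size=(d_out, d_in))
        self.b = np.zeros(d_out)
    def __call__(self, x):
        return self.W @ x + self.b

class Tanh:
    """Basic layer: an elementwise non-linear transformation."""
    def __call__(self, x):
        return np.tanh(x)

class SimpleRNN:
    """A 'complex' layer composed from the basic ones: at each step the
    input and previous hidden state pass through linear + non-linearity."""
    def __init__(self, d_in, d_hid):
        self.cell = Linear(d_in + d_hid, d_hid)
        self.act = Tanh()
        self.d_hid = d_hid
    def __call__(self, xs):
        h = np.zeros(self.d_hid)
        for x in xs:
            h = self.act(self.cell(np.concatenate([x, h])))
        return h

rnn = SimpleRNN(d_in=4, d_hid=8)
sentence = [rng.normal(size=4) for _ in range(5)]  # 5 token embeddings
print(rnn(sentence).shape)
```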
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,344
inproceedings
tack-etal-2016-evaluating
Evaluating Lexical Simplification and Vocabulary Knowledge for Learners of {F}rench: Possibilities of Using the {FLEL}ex Resource
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1035/
Tack, Ana{\"is and Fran{\c{cois, Thomas and Ligozat, Anne-Laure and Fairon, C{\'edrick
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
230--236
This study examines two possibilities of using the FLELex graded lexicon for the automated assessment of text complexity in the learning of French as a foreign language. From the lexical frequency distributions described in FLELex, we derive a single level of difficulty for each word in a parallel corpus of original and simplified texts. We then use this data to automatically assess the lexical complexity of texts in two ways. On the one hand, we evaluate the degree of lexical simplification in manually simplified texts with respect to their original version. Our results show a significant simplification effect, both in the case of French narratives simplified for non-native readers and in the case of simplified Wikipedia texts. On the other hand, we define a predictive model which identifies the number of words in a text that are expected to be known at a particular learning level. We assess the accuracy with which these predictions are able to capture actual word knowledge as reported by Dutch-speaking learners of French. Our study shows that although the predictions seem relatively accurate in general (87.4{\%} to 92.3{\%}), they do not yet seem to cover the learners' lack of knowledge very well.
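Deriving "a single level of difficulty for each word" from a CEFR frequency distribution is commonly done by taking the first level at which the word occurs; the sketch below assumes that rule and uses made-up per-level frequencies, since the abstract does not give the exact collapsing procedure:

```python
CEFR = ["A1", "A2", "B1", "B2", "C1", "C2"]

def first_level(freq_by_level, threshold=0.0):
    """Collapse a word's frequency distribution over CEFR levels into a
    single difficulty: the first level at which it occurs. Frequencies
    are per-million counts from graded texts; values are illustrative."""
    for level in CEFR:
        if freq_by_level.get(level, 0.0) > threshold:
            return level
    return None

flelex_like = {
    "maison":    {"A1": 512.3, "A2": 430.1, "B1": 401.7},
    "toutefois": {"B2": 12.4, "C1": 25.0},
}
for word, dist in flelex_like.items():
    print(word, "->", first_level(dist))
```

Raising the threshold trades coverage for robustness against noise in low-frequency level counts.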
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,345
inproceedings
baur-etal-2016-shared
A Shared Task for Spoken {CALL}?
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1036/
Baur, Claudia and Gerlach, Johanna and Rayner, Manny and Russell, Martin and Strik, Helmer
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
237--244
We argue that the field of spoken CALL needs a shared task in order to facilitate comparisons between different groups and methodologies, and describe a concrete example of such a task, based on data collected from a speech-enabled online tool which has been used to help young Swiss German teens practise skills in English conversation. Items are prompt-response pairs, where the prompt is a piece of German text and the response is a recorded English audio file. The task is to label pairs as {\textquotedblleft}accept{\textquotedblright} or {\textquotedblleft}reject{\textquotedblright}, accepting responses which are grammatically and linguistically correct, so as to match a set of hidden gold-standard answers as closely as possible. Initial resources are provided so that a scratch system can be constructed with a minimal investment of effort, and in particular without necessarily using a speech recogniser. Training data for the task will be released in June 2016, and test data in January 2017.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,346
inproceedings
khalifa-etal-2016-joining
Joining-in-type Humanoid Robot Assisted Language Learning System
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1037/
Khalifa, AlBara and Kato, Tsuneo and Yamamoto, Seiichi
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
245--249
Dialogue robots are attractive to people, and in language learning systems they motivate learners and let them practice conversational skills in a more realistic environment. However, automatic speech recognition (ASR) of second language (L2) learners is still a challenge, because their speech contains not just pronunciation, lexical and grammatical errors, but is sometimes totally disordered. Hence, we propose a novel robot-assisted language learning (RALL) system using two robots, one as a teacher and the other as an advanced learner. The system is designed to simulate multiparty conversation, expecting implicit learning and enhancement of the predictability of learners' utterances through an alignment similar to the {\textquotedblleft}interactive alignment{\textquotedblright} observed in human-human conversation. We collected a database with the prototypes and, in an initial analysis, measured to what extent the alignment phenomenon is observed in the database.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,347
inproceedings
el-haj-rayson-2016-osman
{OSMAN} {\textemdash} A Novel {A}rabic Readability Metric
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1038/
El-Haj, Mahmoud and Rayson, Paul
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
250--255
We present OSMAN (Open Source Metric for Measuring Arabic Narratives){\textemdash}a novel open-source Arabic readability metric and tool. It allows researchers to calculate readability for Arabic text with and without diacritics. OSMAN is a modified version of conventional readability formulas such as Flesch and Fog. In our work we introduce a novel approach to counting short, long and stress syllables in Arabic, which is essential for judging the readability of Arabic narratives. We also introduce an additional factor called {\textquotedblleft}Faseeh{\textquotedblright} which considers aspects of script usually dropped in informal Arabic writing. To evaluate our methods we used Spearman`s correlation metric to compare text readability for 73,000 parallel sentences from English and Arabic UN documents. The Arabic sentences were written without diacritics; in order to count the number of syllables, we added the diacritics using an open-source tool called Mishkal. The results show that the OSMAN readability formula correlates well with the English ones, making it a useful tool for researchers and educators working with Arabic text.
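The structure of such a metric, a Flesch-style linear combination of sentence length and syllable density, can be sketched as follows. The constants shown are the English Flesch Reading Ease coefficients used purely as placeholders, and the syllable counter is a crude stand-in for OSMAN's diacritics-aware short/long/stress-syllable and Faseeh counts:

```python
import re

def count_sentences(text):
    return max(1, len(re.findall(r"[.!?؟]", text)))

def count_syllables(word):
    # Crude stand-in: with diacritised Arabic one would count short vowels
    # (harakat), long vowels and stress syllables; here we count vowel-ish
    # characters so the sketch stays self-contained.
    return max(1, len(re.findall(r"[aeiouاويى]", word)))

def flesch_like(text, a=206.835, b=1.015, c=84.6):
    # Flesch-style structure; a, b, c are the *English* Flesch constants,
    # used only as placeholders - OSMAN modifies both the constants and
    # the syllable/Faseeh counting.
    words = text.split()
    sents = count_sentences(text)
    syll = sum(count_syllables(w) for w in words)
    return a - b * (len(words) / sents) - c * (syll / len(words))

print(round(flesch_like("This is a short sentence. It is easy to read."), 1))
```

Higher scores mean easier text; the tool itself should be consulted for the actual OSMAN coefficients and syllable rules.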
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,348
inproceedings
geoffrois-2016-evaluating
Evaluating Interactive System Adaptation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1039/
Geoffrois, Edouard
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
256--260
Enabling users of intelligent systems to enhance the system performance by providing feedback on their errors is an important need. However, the ability of systems to learn from user feedback is difficult to evaluate in an objective and comparative way. Indeed, the involvement of real users in the adaptation process is an impediment to objective evaluation. This issue can be solved by using an oracle approach, where users are simulated by oracles having access to the reference test data. Another difficulty is to find a meaningful metric despite the fact that system improvements depend on the feedback provided and on the system itself. A solution is to measure the minimal amount of information needed to correct all system errors. It can be shown that for any well defined non interactive task, the interactively supervised version of the task can be evaluated by combining such an oracle-based approach and a minimum supervision rate metric. This new evaluation protocol for adaptive systems is not only expected to drive progress for such systems, but also to pave the way for a specialisation of actors along the value chain of their technological development.
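The oracle protocol can be simulated in a few lines: an oracle with access to the references flags errors one at a time, and the supervision rate is the fraction of items that needed feedback. The toy below substitutes memorisation for real adaptation and counts corrections rather than information-theoretic bits, so it only illustrates the shape of the metric:

```python
def oracle_session(system_outputs, references):
    """Simulate interactive adaptation with an oracle user: at each round
    the oracle flags one remaining error; the 'system' here trivially
    memorises the correction. The supervision rate is the fraction of
    items that needed feedback before the output matched the reference."""
    outputs = list(system_outputs)
    feedback = 0
    while outputs != references:
        i = next(i for i, (o, r) in enumerate(zip(outputs, references))
                 if o != r)
        outputs[i] = references[i]   # a real system would generalise here
        feedback += 1
    return feedback / len(references)

hyp = ["DET", "NOUN", "VERB", "NOUN"]
ref = ["DET", "NOUN", "VERB", "ADJ"]
print("supervision rate:", oracle_session(hyp, ref))
```

A system that generalises from each correction fixes several errors per feedback item and thus achieves a lower supervision rate, which is exactly what the protocol rewards.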
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,349
inproceedings
derczynski-2016-complementarity
Complementarity, {F}-score, and {NLP} Evaluation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1040/
Derczynski, Leon
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
261--266
This paper addresses the problem of quantifying the differences between entity extraction systems, where in general only a small proportion of a document should be selected. Comparing overall accuracy is not very useful in these cases, as small differences in accuracy may correspond to huge differences in selections over the target minority class. Conventionally, one may use per-token complementarity to describe these differences, but it is not very useful when the set is heavily skewed. In such situations, which are common in information retrieval and entity recognition, metrics like precision and recall are typically used to describe performance. However, precision and recall fail to describe the differences between sets of objects selected by different decision strategies, instead just describing the proportional amount of correct and incorrect objects selected. This paper presents a method for measuring complementarity for precision, recall and F-score, quantifying the difference between entity extraction approaches.
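A concrete reading of complementarity: compare the error sets of two systems rather than their scores. The sketch below computes precision/recall/F over extracted spans together with a Brill-and-Wu-style complementarity over the systems' errors; the paper's per-metric formulation may differ in detail:

```python
def prf(predicted, gold):
    tp = len(predicted & gold)
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def complementarity(pred_a, pred_b, gold):
    """Brill & Wu-style complementarity over the two systems' error sets:
    the fraction of A's errors that B does not also make. (The paper
    derives analogous quantities separately for P, R and F.)"""
    err_a = pred_a ^ gold  # false positives and false negatives of A
    err_b = pred_b ^ gold
    return 1 - len(err_a & err_b) / len(err_a) if err_a else 0.0

# Entities as (doc, start, end) spans - toy data.
gold = {("doc1", 3, 5), ("doc1", 9, 11), ("doc2", 0, 2)}
sys_a = {("doc1", 3, 5), ("doc2", 4, 6)}
sys_b = {("doc1", 3, 5), ("doc1", 9, 11)}
print("A:", prf(sys_a, gold), "B:", prf(sys_b, gold))
print("comp(A,B):", complementarity(sys_a, sys_b, gold))
```

Two systems with identical F-scores can still have high complementarity, which is the paper's motivation for measuring it directly.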
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,350
inproceedings
dragoni-etal-2016-dranziera
{DRANZIERA}: An Evaluation Protocol For Multi-Domain Opinion Mining
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1041/
Dragoni, Mauro and Tettamanzi, Andrea and da Costa Pereira, C{\'e}lia
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
267--272
Opinion Mining is a topic which has attracted a lot of interest in recent years. In the literature, it is often hard to replicate a system evaluation due to the unavailability of the data used for the evaluation or to the lack of details about the protocol used in the campaign. In this paper, we propose an evaluation protocol, called DRANZIERA, composed of a multi-domain dataset and guidelines allowing both to evaluate opinion mining systems in different contexts (Closed, Semi-Open, and Open) and to compare them to each other and to a number of baselines.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,351
inproceedings
fothergill-etal-2016-evaluating
Evaluating a Topic Modelling Approach to Measuring Corpus Similarity
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1042/
Fothergill, Richard and Cook, Paul and Baldwin, Timothy
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
273--279
Web corpora are often constructed automatically, and their contents are therefore often not well understood. One technique for assessing the composition of such a web corpus is to empirically measure its similarity to a reference corpus whose composition is known. In this paper we evaluate a number of measures of corpus similarity, including a method based on topic modelling which has not been previously evaluated for this task. To evaluate these methods we use known-similarity corpora that have been previously used for this purpose, as well as a number of newly-constructed known-similarity corpora targeting differences in genre, topic, time, and region. Our findings indicate that, overall, the topic modelling approach did not improve on a chi-square method that had previously been found to work well for measuring corpus similarity.
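The chi-square method that the topic-modelling approach is compared against can be stated compactly: over the most frequent words of the joint corpus, accumulate (observed - expected)^2 / expected for each corpus. A self-contained sketch in the style of Kilgarriff's corpus-comparison measure (the word-list size and tokenisation are assumptions):

```python
from collections import Counter

def chi_square_similarity(tokens_a, tokens_b, top_n=500):
    """Chi-square corpus comparison: sum, over the most frequent words of
    the joint corpus, of (O - E)^2 / E for each corpus. Lower values mean
    more similar corpora."""
    fa, fb = Counter(tokens_a), Counter(tokens_b)
    na, nb = len(tokens_a), len(tokens_b)
    common = [w for w, _ in (fa + fb).most_common(top_n)]
    chi2 = 0.0
    for w in common:
        o_a, o_b = fa[w], fb[w]
        total = o_a + o_b
        e_a = total * na / (na + nb)   # expected count in corpus A
        e_b = total * nb / (na + nb)
        chi2 += (o_a - e_a) ** 2 / e_a + (o_b - e_b) ** 2 / e_b
    return chi2

a = "the cat sat on the mat and the dog slept".split()
b = "the dog ran after the cat across the yard".split()
print(round(chi_square_similarity(a, b), 3))
```

Known-similarity corpora let one check that such a statistic orders corpus pairs by their designed degree of similarity.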
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,352
inproceedings
fandrych-etal-2016-user
User, who art thou? User Profiling for Oral Corpus Platforms
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1043/
Fandrych, Christian and Frick, Elena and Hedeland, Hanna and Iliash, Anna and Jettka, Daniel and Mei{\ss}ner, Cordula and Schmidt, Thomas and Wallner, Franziska and Weigert, Kathrin and Westpfahl, Swantje
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
280--287
This contribution presents the background, design and results of a study of users of three oral corpus platforms in Germany. Roughly 5,000 registered users of the Database for Spoken German (DGD), the GeWiss corpus and the corpora of the Hamburg Centre for Language Corpora (HZSK) were asked to participate in a user survey. This quantitative approach was complemented by qualitative interviews with selected users. We briefly introduce the corpus resources involved in the study in section 2. Section 3 describes the methods employed in the user studies. Section 4 summarizes the results of the studies, focusing on selected key topics. Section 5 attempts a generalization of these results to larger contexts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,353
inproceedings
costa-etal-2016-building
Building a Corpus of Errors and Quality in Machine Translation: Experiments on Error Impact
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1044/
Costa, {\^A}ngela and Correia, Rui and Coheur, Lu{\'i}sa
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
288--292
In this paper we describe a corpus of automatic translations annotated with both error type and quality. The 300 sentences that we selected were generated by Google Translate, Systran and two in-house Machine Translation systems that use Moses technology. The errors present in the translations were annotated with an error taxonomy that divides errors into five main linguistic categories (Orthography, Lexis, Grammar, Semantics and Discourse), reflecting the language level where the error is located. After the error annotation process, we assessed the translation quality of each sentence using a comprehension scale from 1 to 5. Both tasks of error and quality annotation were performed by two different annotators, achieving good levels of inter-annotator agreement. The creation of this corpus allowed us to use it as training data for a translation quality classifier. We drew conclusions on error severity by observing the outputs of two machine learning classifiers: a decision tree and a regression model.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,354
inproceedings
yaneva-etal-2016-evaluating
Evaluating the Readability of Text Simplification Output for Readers with Cognitive Disabilities
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1045/
Yaneva, Victoria and Temnikova, Irina and Mitkov, Ruslan
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
293--299
This paper presents an approach for automatic evaluation of the readability of text simplification output for readers with cognitive disabilities. First, we present our work towards the development of the EasyRead corpus, which contains easy-to-read documents created especially for people with cognitive disabilities. We then compare the EasyRead corpus to the simplified output contained in the LocalNews corpus (Feng, 2009), the accessibility of which has been evaluated through reading comprehension experiments including 20 adults with mild intellectual disability. This comparison is made on the basis of 13 disability-specific linguistic features. The comparison reveals that there are no major differences between the two corpora, which shows that the EasyRead corpus is at a similar reading level to the user-evaluated texts. We also discuss the role of Simple Wikipedia (Zhu et al., 2010) as a widely-used accessibility benchmark, in light of our finding that it is significantly more complex than both the EasyRead and the LocalNews corpora.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,355
inproceedings
ghannay-etal-2016-word
Word Embedding Evaluation and Combination
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1046/
Ghannay, Sahar and Favre, Benoit and Est{\`e}ve, Yannick and Camelin, Nathalie
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
300--305
Word embeddings have been successfully used in several natural language processing (NLP) and speech processing tasks. Different approaches have been introduced to calculate word embeddings through neural networks. In the literature, many studies have focused on word embedding evaluation, but, to our knowledge, some gaps remain. This paper presents a study focusing on a rigorous comparison of the performances of different kinds of word embeddings. These performances are evaluated on different NLP and linguistic tasks, while all the word embeddings are estimated on the same training data using the same vocabulary, the same number of dimensions, and other similar characteristics. The evaluation results reported in this paper match those in the literature, since they point out that the improvements achieved by a word embedding in one task are not consistently observed across all tasks. For that reason, this paper investigates and evaluates approaches to combine word embeddings in order to take advantage of their complementarity, and to look for effective word embeddings that can achieve good performances on all tasks. As a conclusion, this paper provides new insights into the intrinsic qualities of the well-known word embedding families, which can differ from those reported in previously published work.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,356
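As a toy companion to the embedding-combination study above: one simple way to combine two embedding families is to concatenate the vectors per word and project them to a common dimensionality. The sketch below is an assumed illustration, not the paper's method; random vectors stand in for real skip-gram/GloVe embeddings, and PCA is done via an SVD on the centered matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["cat", "dog", "car"]
emb_a = {w: rng.normal(size=100) for w in vocab}  # stand-in for, e.g., skip-gram
emb_b = {w: rng.normal(size=50) for w in vocab}   # stand-in for, e.g., GloVe

# Concatenate per word, center, then reduce dimensionality with PCA (via SVD).
X = np.stack([np.concatenate([emb_a[w], emb_b[w]]) for w in vocab])
X = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
combined = {w: (X @ Vt[:2].T)[i] for i, w in enumerate(vocab)}  # 2 PCs (toy size)
print({w: v.shape for w, v in combined.items()})
```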
inproceedings
poignant-etal-2016-benchmarking
Benchmarking multimedia technologies with the {CAMOMILE} platform: the case of Multimodal Person Discovery at {M}edia{E}val 2015
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1047/
Poignant, Johann and Bredin, Herv{\'e} and Barras, Claude and Stefas, Mickael and Bruneau, Pierrick and Tamisier, Thomas
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
306--309
In this paper, we claim that the CAMOMILE collaborative annotation platform (developed in the framework of the eponymous CHIST-ERA project) eases the organization of multimedia technology benchmarks, automating most of the campaign technical workflow and enabling collaborative (hence faster and cheaper) annotation of the evaluation data. This is demonstrated through the successful organization of a new multimedia task at MediaEval 2015, Multimodal Person Discovery in Broadcast TV.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,357
inproceedings
castilho-obrien-2016-evaluating
Evaluating the Impact of Light Post-Editing on Usability
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1048/
Castilho, Sheila and O{'}Brien, Sharon
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
310--316
This paper discusses a methodology to measure the usability of machine translated content by end users, comparing lightly post-edited content with raw output and with the usability of source language content. The content selected consists of Online Help articles from a software company for a spreadsheet application, translated from English into German. Three groups of five users each used the source text (the English version, EN), the raw MT version (DE{\_}MT), or the light PE version (DE{\_}PE), and were asked to carry out six tasks. Usability was measured using an eye tracker and cognitive, temporal and pragmatic measures of usability. Satisfaction was measured via a post-task questionnaire presented after the participants had completed the tasks.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,358
inproceedings
salesky-etal-2016-operational
Operational Assessment of Keyword Search on Oral History
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1049/
Salesky, Elizabeth and Ray, Jessica and Shen, Wade
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
317--321
This project assesses the resources necessary to make oral history searchable by means of automatic speech recognition (ASR). There are many inherent challenges in applying ASR to conversational speech: smaller training set sizes and varying demographics, among others. We assess the impact of dataset size, word error rate and term-weighted value on human search capability through an information retrieval task on Mechanical Turk. We use English oral history data collected by StoryCorps, a national organization that provides all people with the opportunity to record, share and preserve their stories, and control for a variety of demographics including age, gender, birthplace, and dialect on four different training set sizes. We show that search performance using a standard speech recognition system is comparable to that using hand-transcribed data, which is promising for increased accessibility of conversational speech and oral history archives.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,359
inproceedings
valenzuela-escarcega-etal-2016-odins
Odin`s Runes: A Rule Language for Information Extraction
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1050/
Valenzuela-Esc{\'a}rcega, Marco A. and Hahn-Powell, Gus and Surdeanu, Mihai
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
322--329
Odin is an information extraction framework that applies cascades of finite state automata over both surface text and syntactic dependency graphs. Support for syntactic patterns allows us to concisely define relations that are otherwise difficult to express in languages such as the Common Pattern Specification Language (CPSL), which are currently limited to shallow linguistic features. The interaction of lexical and syntactic automata provides robustness and flexibility when writing extraction rules. This paper describes Odin`s declarative language for writing these cascaded automata.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,360
inproceedings
lefever-hoste-2016-classification
A Classification-based Approach to Economic Event Detection in {D}utch News Text
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1051/
Lefever, Els and Hoste, V{\'e}ronique
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
330--335
Breaking news on economic events such as stock splits or mergers and acquisitions has been shown to have a substantial impact on the financial markets. As it is important to be able to automatically identify events in news items accurately and in a timely manner, we present in this paper proof-of-concept experiments for a supervised machine learning approach to economic event detection in newswire text. For this purpose, we created a corpus of Dutch financial news articles in which 10 types of company-specific economic events were annotated. We trained classifiers using various lexical, syntactic and semantic features. We obtain good results based on a basic set of shallow features, thus showing that this method is a viable approach for economic event detection in news text.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,361
inproceedings
francopoulo-etal-2016-predictive
Predictive Modeling: Guessing the {NLP} Terms of Tomorrow
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1052/
Francopoulo, Gil and Mariani, Joseph and Paroubek, Patrick
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
336--343
Predictive modeling, often called {\textquotedblleft}predictive analytics{\textquotedblright} in a commercial context, encompasses a variety of statistical techniques that analyze historical and present facts to make predictions about unknown events. Often the unknown events are in the future, but prediction can be applied to any type of unknown whether it be in the past or future. In our case, we present some experiments applying predictive modeling to the usage of technical terms within the NLP domain.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,362
inproceedings
sahlgren-etal-2016-gavagai
The Gavagai Living Lexicon
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1053/
Sahlgren, Magnus and Gyllensten, Amaru Cuba and Espinoza, Fredrik and Hamfors, Ola and Karlgren, Jussi and Olsson, Fredrik and Persson, Per and Viswanathan, Akshay and Holst, Anders
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
344--350
This paper presents the Gavagai Living Lexicon, which is an online distributional semantic model currently available in 20 different languages. We describe the underlying distributional semantic model, and how we have solved some of the challenges in applying such a model to large amounts of streaming data. We also describe the architecture of our implementation, and discuss how we deal with continuous quality assurance of the lexicon.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,363
inproceedings
mubarak-abdelali-2016-arabic
{A}rabic to {E}nglish Person Name Transliteration using {T}witter
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1054/
Mubarak, Hamdy and Abdelali, Ahmed
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
351--355
Social media outlets are providing new opportunities for harvesting valuable resources. We present a novel approach for mining data from Twitter for the purpose of building transliteration resources and systems. Such resources are crucial in translation and retrieval tasks. We demonstrate the benefits of the approach on Arabic to English transliteration. The contributions of this approach include the amount of data that can be collected and exploited within a limited time span; its generality, which allows it to be adapted to other languages; and its ability to cope with new transliteration phenomena and trends. A statistical transliteration system built using this data improved on a comparable system built from Wikipedia wikilinks data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,364
inproceedings
jeong-etal-2016-korean
{K}orean {T}ime{ML} and {K}orean {T}ime{B}ank
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1055/
Jeong, Young-Seob and Joo, Won-Tae and Do, Hyun-Woo and Lim, Chae-Gyun and Choi, Key-Sun and Choi, Ho-Jin
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
356--359
Many emerging documents contain temporal information. Because this temporal information is useful for various applications, it has become important to develop systems that extract it from documents. Before developing such a system, it is first necessary to define or design the structure of temporal information, in other words, to design a language which defines how to annotate the temporal information. There have been some studies of such annotation languages, but most of them are applicable to only a specific target language (e.g., English). Thus, it is necessary to design an individual annotation language for each language. In this paper, we propose a revised version of the Korean Time Mark-up Language (K-TimeML), and also introduce a dataset, named Korean TimeBank, that is constructed based on the K-TimeML. We believe that the new K-TimeML and Korean TimeBank will be used in much further research on the extraction of temporal information.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,365
inproceedings
seitner-etal-2016-large
A Large {D}ata{B}ase of Hypernymy Relations Extracted from the Web.
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1056/
Seitner, Julian and Bizer, Christian and Eckert, Kai and Faralli, Stefano and Meusel, Robert and Paulheim, Heiko and Ponzetto, Simone Paolo
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
360--367
Hypernymy relations (those where a hyponym term shares an {\textquotedblleft}isa{\textquotedblright} relationship with its hypernym) play a key role in many Natural Language Processing (NLP) tasks, e.g. ontology learning, automatically building or extending knowledge bases, or word sense disambiguation and induction. In fact, such relations may provide the basis for the construction of more complex structures such as taxonomies, or be used as effective background knowledge for many word understanding applications. We present a publicly available database containing more than 400 million hypernymy relations we extracted from the CommonCrawl web corpus. We describe the infrastructure we developed to iterate over the web corpus to extract the hypernymy relations and store them effectively in a large database. This collection of relations represents a rich source of knowledge and may be useful to many researchers. We offer the tuple dataset for public download and an Application Programming Interface (API) to help other researchers programmatically query the database.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,366
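Hypernymy databases like the one above are typically populated with Hearst-style lexico-syntactic patterns; the extraction infrastructure the paper describes is far more elaborate than this. A minimal, assumption-laden Python matcher for a single "X such as Y" pattern, with invented example text:

```python
import re

# One Hearst-style pattern: "<hypernym> such as <hyponym>(, <hyponym>)*( and <hyponym>)?"
PATTERN = re.compile(r"(\w+) such as (\w+(?:, \w+)*(?:,? and \w+)?)")

def extract(text):
    """Return (hyponym, hypernym) pairs found by the pattern, lowercased."""
    pairs = []
    for m in PATTERN.finditer(text):
        hypernym = m.group(1).lower()
        hyponyms = re.split(r", and |, | and ", m.group(2))
        pairs.extend((h.strip().lower(), hypernym) for h in hyponyms)
    return pairs

print(extract("They studied languages such as Vietnamese, Czech and English."))
# [('vietnamese', 'languages'), ('czech', 'languages'), ('english', 'languages')]
```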
inproceedings
katris-etal-2016-using
Using a Cross-Language Information Retrieval System based on {OHSUMED} to Evaluate the {M}oses and {K}antan{MT} Statistical Machine Translation Systems
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1057/
Katris, Nikolaos and Sutcliffe, Richard and Kalamboukis, Theodore
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
368--372
The objective of this paper was to evaluate the performance of two statistical machine translation (SMT) systems within a cross-language information retrieval (CLIR) architecture and to examine whether there is a correlation between translation quality and CLIR performance. The SMT systems were KantanMT, a cloud-based machine translation (MT) platform, and Moses, an open-source MT application. First we trained both systems using the same language resources: the EMEA corpus for the translation model and language model, and the QTLP corpus for tuning. Then we translated the 63 queries of the OHSUMED test collection from Greek into English using both MT systems. Next, we ran the queries on the document collection using Apache Solr to get a list of the top ten matches. The results were compared to the OHSUMED gold standard. KantanMT achieved higher average precision and F-measure than Moses, while both systems produced the same recall score. We also calculated the BLEU score for each system using the ECDC corpus. Moses achieved a higher BLEU score than KantanMT. Finally, we also tested the IR performance of the original English queries. Overall, this work showed that CLIR performance can be better even when the BLEU score is worse.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,367
inproceedings
pardelli-etal-2016-two
Two Decades of Terminology: {E}uropean Framework Programmes Titles
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1058/
Pardelli, Gabriella and Goggi, Sara and Giannini, Silvia and Biagioni, Stefania
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
373--378
This work analyses a corpus made of the titles of research projects belonging to the last four European Commission Framework Programmes (FP4, FP5, FP6, FP7) over a time span of nearly two decades (1994-2012). The starting point is the idea of creating a corpus of titles which would constitute a terminological niche, a sort of {\textquotedblleft}cluster map{\textquotedblright} offering an overall vision of the terms used and the links between them. Moreover, by performing a terminological comparison over a period of time it is possible to trace the presence of obsolete words in outdated research areas as well as of neologisms in the most recent fields. Within this scenario, the minimal purpose is to build a corpus of titles of European projects belonging to the several Framework Programmes in order to obtain a terminological mapping of relevant words in the various research areas: particularly significant would be those terms spread across different domains or those extremely tied to a specific domain. A term could actually be found in many fields, and being able to acknowledge and retrieve this cross-presence means being able to link those different domains by means of a process of terminological mapping.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,368
inproceedings
peters-wyner-2016-legal
Legal Text Interpretation: Identifying Hohfeldian Relations from Text
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1059/
Peters, Wim and Wyner, Adam
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
379--384
The paper investigates the extent of the support that semi-automatic analysis can provide for the specific task of assigning Hohfeldian relations of Duty, using the General Architecture for Text Engineering tool for the automated extraction of Duty instances and the bearers of associated roles. The outcome of the analysis supports scholars in identifying Hohfeldian structures in legal text when performing close reading of the texts. A cyclic workflow involving automated annotation and expert feedback will incrementally increase the quality and coverage of the automatic extraction process, and increasingly reduce the amount of manual work required of the scholar.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,369
inproceedings
tachibana-komachi-2016-analysis
Analysis of {E}nglish Spelling Errors in a Word-Typing Game
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1060/
Tachibana, Ryuichi and Komachi, Mamoru
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
385--390
The emergence of the web has created the need to detect and correct noisy consumer-generated texts. Most previous studies on English spelling-error extraction collected English spelling errors from web services such as Twitter by using the edit distance, or from input logs utilizing crowdsourcing. However, in the former approach, it is not clear which word corresponds to the spelling error, and the latter approach requires an annotation cost for the crowdsourcing. One notable exception is Rodrigues and Rytting (2012), who proposed to extract English spelling errors by using a word-typing game. Their approach saves the cost of crowdsourcing, and guarantees an exact alignment between the word and the spelling error. However, they did not establish whether the extracted spelling error corpora reflect the usual writing process, such as writing a document. Therefore, we propose a new correctable word-typing game that is more similar to the actual writing process. Experimental results showed that we can regard typing-game logs as a source of spelling errors.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,370
inproceedings
kovar-etal-2016-finding
Finding Definitions in Large Corpora with {S}ketch {E}ngine
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1061/
Kov{\'a}{\v{r}}, Vojt{\v{e}}ch and Mo{\v{c}}iarikov{\'a}, Monika and Rychl{\'y}, Pavel
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
391--394
The paper describes automatic definition finding implemented within the leading corpus query and management tool, Sketch Engine. The implementation exploits complex pattern-matching queries in the corpus query language (CQL) and the indexing mechanism of word sketches for finding and storing definition candidates throughout the corpus. The approach is evaluated for Czech and English corpora, showing that the results are usable in practice: precision of the tool ranges between 30 and 75 percent (depending on the major corpus text types), and we were able to extract nearly 2 million definition candidates from an English corpus with 1.4 billion words. The feature is embedded into the interface as a concordance filter, so that users can search for definitions of any query to the corpus, including very specific multi-word queries. The results also indicate that ordinary texts (unlike explanatory texts) contain a rather low number of definitions, which is perhaps the most important problem with automatic definition finding in general.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,371
inproceedings
rodriguez-ferreira-etal-2016-improving
Improving Information Extraction from {W}ikipedia Texts using Basic {E}nglish
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1062/
Rodr{\'i}guez-Ferreira, Teresa and Rabad{\'a}n, Adri{\'a}n and Herv{\'a}s, Raquel and D{\'i}az, Alberto
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
395--400
The aim of this paper is to study the effect that the use of Basic English versus common English has on information extraction from online resources. The amount of online information available to the public grows exponentially, and is potentially an excellent resource for information extraction. The problem is that this information often comes in an unstructured format, such as plain text. In order to retrieve knowledge from this type of text, it must first be analysed to find the relevant details, and the nature of the language used can greatly impact the quality of the extracted information. In this paper, we compare triplets that represent definitions or properties of concepts obtained from three online collaborative resources (English Wikipedia, Simple English Wikipedia and Simple English Wiktionary) and study the differences in the results when Basic English is used instead of common English. The results show that resources written in Basic English produce fewer triplets, but of higher quality.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,372
inproceedings
caselli-etal-2016-nlp
{NLP} and Public Engagement: The Case of the {I}talian School Reform
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1063/
Caselli, Tommaso and Moretti, Giovanni and Sprugnoli, Rachele and Tonelli, Sara and Lanfrey, Damien and Kutzmann, Donatella Solda
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
401--406
In this paper we present PIERINO (PIattaforma per l`Estrazione e il Recupero di INformazione Online), a system that was implemented in collaboration with the Italian Ministry of Education, University and Research to analyse the citizens' comments given in the {\#}labuonascuola survey. The platform includes various levels of automatic analysis such as key-concept extraction and word co-occurrences. Each analysis is displayed through an intuitive view using different types of visualizations, for example radar charts and sunbursts. PIERINO was effectively used to support the shaping of the recent Italian school reform, proving the potential of NLP in the context of policy making.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,373
inproceedings
saralegi-etal-2016-evaluating
Evaluating Translation Quality and {CLIR} Performance of Query Sessions
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1064/
Saralegi, Xabier and Agirre, Eneko and Alegria, I{\~n}aki
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
407--411
This paper presents an evaluation of translation quality and Cross-Lingual Information Retrieval (CLIR) performance when using session information as the context of queries. The hypothesis is that previous queries provide context that helps to resolve ambiguous translations in the current query. We tested several strategies on the TREC 2010 Session track dataset, which includes query reformulations grouped into generalization, specification, and drifting types. We study the Basque to English direction, evaluating both translation quality and CLIR performance, with positive results in both cases. Using session information reduced the translation error rate by 12{\%} (HTER) and improved CLIR results by 5{\%} (nDCG). We also provide an analysis of the improvements across the three kinds of sessions: translation quality improved in all three types, while CLIR improved for generalization and specification sessions and preserved its performance in drifting sessions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,374
inproceedings
le-quasthoff-2016-construction
Construction and Analysis of a Large {V}ietnamese Text Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1065/
Le, Dieu-Thu and Quasthoff, Uwe
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
412--416
This paper presents a new Vietnamese text corpus which contains around 4.05 billion words. It is a collection of Wikipedia texts, newspaper articles and random web texts. The paper describes the process of collecting, cleaning and creating the corpus. Processing Vietnamese texts faces several challenges; for example, unlike many languages written in the Latin script, Vietnamese does not use blanks to separate words, so common tokenization approaches, such as treating blanks as word boundaries, do not work. A short review of different approaches to Vietnamese tokenization is presented, together with how the corpus has been processed and created. After that, some statistical analyses of the data are reported, including the number of syllables, average word length, sentence length, and topic analysis. The corpus is integrated into a framework which allows searching and browsing. Using this web interface, users can find out how many times a particular word appears in the corpus, sample sentences where this word occurs, and its left and right neighbors.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,375
inproceedings
asooja-etal-2016-forecasting
Forecasting Emerging Trends from Scientific Literature
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1066/
Asooja, Kartik and Bordea, Georgeta and Vulcu, Gabriela and Buitelaar, Paul
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
417--420
Text analysis methods for the automatic identification of emerging technologies from scientific publications are gaining attention because of their socio-economic impact. The approaches so far have mainly focused on retrospective analysis, mapping scientific topic evolution over time. We propose regression-based approaches to predict future keyword distributions. The prediction is based on historical data for the keywords, which in our case come from LREC conference proceedings. Considering the insufficient number of data points available from LREC proceedings, we do not employ standard time series forecasting methods. We form a dataset by extracting the keywords from previous years' proceedings and quantify their yearly relevance using tf-idf scores. This dataset additionally contains ranked lists of related keywords and experts for each keyword.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,376
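The regression idea in the forecasting abstract above can be sketched in a few lines. Under the assumption that each keyword has one tf-idf relevance score per proceedings year (the keyword names and scores below are invented for illustration), a degree-1 least-squares fit extrapolates the next year's score:

```python
import numpy as np

# Hypothetical per-year tf-idf relevance scores for two keywords.
history = {
    "word embeddings": {2010: 0.02, 2012: 0.05, 2014: 0.11},
    "statistical mt":  {2010: 0.30, 2012: 0.26, 2014: 0.21},
}

def forecast(series, target_year):
    """Fit a least-squares line to (year, score) points and extrapolate."""
    years = np.array(sorted(series))
    scores = np.array([series[y] for y in years])
    slope, intercept = np.polyfit(years, scores, 1)  # degree-1 fit
    return slope * target_year + intercept

for kw, series in history.items():
    print(f"{kw}: predicted 2016 relevance = {forecast(series, 2016):.3f}")
```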
inproceedings
choi-etal-2016-extracting
Extracting Structured Scholarly Information from the Machine Translation Literature
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1067/
Choi, Eunsol and Horvat, Matic and May, Jonathan and Knight, Kevin and Marcu, Daniel
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
421--425
Understanding the experimental results of a scientific paper is crucial to understanding its contribution and to comparing it with related work. We introduce a structured, queryable representation for experimental results and a baseline system that automatically populates this representation. The representation can answer compositional questions such as: {\textquotedblleft}Which are the best published results reported on the NIST 09 Chinese to English dataset?{\textquotedblright} and {\textquotedblleft}What are the most important methods for speeding up phrase-based decoding?{\textquotedblright} Answering such questions usually involves lengthy literature surveys. Current machine reading for academic papers does not usually consider the actual experiments, but mostly focuses on understanding abstracts. We describe annotation work to create an initial ⟨scientific paper; experimental results representation⟩ corpus. The corpus is composed of 67 papers which were manually annotated with a structured representation of experimental results by domain experts. Additionally, we present a baseline algorithm that characterizes the difficulty of the inference task.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,377
inproceedings
wu-etal-2016-staggered
Staggered {NLP}-assisted refinement for Clinical Annotations of Chronic Disease Events
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1068/
Wu, Stephen and Wi, Chung-Il and Sohn, Sunghwan and Liu, Hongfang and Juhn, Young
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
426--429
Domain-specific annotations for NLP are often centered on real-world applications of text, and incorrect annotations may be particularly unacceptable. In medical text, the process of manual chart review (of a patient`s medical record) is error-prone due to its complexity. We propose a staggered NLP-assisted approach to the refinement of clinical annotations, an interactive process that allows initial human judgments to be verified or falsified by means of comparison with an improving NLP system. We show on our internal Asthma Timelines dataset that this approach improves the quality of the human-produced clinical annotations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,378
inproceedings
menini-etal-2016-pietro
{\textquotedblleft}Who was Pietro Badoglio?{\textquotedblright} Towards a {QA} system for {I}talian History
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1069/
Menini, Stefano and Sprugnoli, Rachele and Uva, Antonio
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
430--435
This paper presents QUANDHO (QUestion ANswering Data for italian HistOry), an Italian question answering dataset created to cover a specific domain, i.e. the history of Italy in the first half of the XX century. The dataset includes questions manually classified and annotated with Lexical Answer Types, and a set of question-answer pairs. This resource, freely available for research purposes, has been used to retrain a domain-independent question answering system so as to improve its performance in the domain of interest. Ongoing experiments on the development of a question classifier and an automatic tagger of Lexical Answer Types are also presented.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,379
inproceedings
funk-etal-2016-document
A Document Repository for Social Media and Speech Conversations
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1070/
Funk, Adam and Gaizauskas, Robert and Favre, Benoit
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
436--440
We present a successfully implemented document repository REST service for flexible SCRUD (search, create, read, update, delete) storage of social media conversations, using a GATE/TIPSTER-like document object model and providing a query language for document features. This software is currently being used in the SENSEI research project and will be published as open-source software before the project ends. It is, to the best of our knowledge, the first freely available, general-purpose data repository to support large-scale multimodal (i.e., speech or text) conversation analytics.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,380
inproceedings
parvizi-etal-2016-towards
Towards a Linguistic Ontology with an Emphasis on Reasoning and Knowledge Reuse
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1071/
Parvizi, Artemis and Kohl, Matt and Gonz{\`a}lez, Meritxell and Saur{\'i}, Roser
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
441--448
The Dictionaries division at Oxford University Press (OUP) is aiming to model, integrate, and publish lexical content for 100 languages focussing on digitally under-represented languages. While there are multiple ontologies designed for linguistic resources, none had adequate features for meeting our requirements, chief of which was the capability to losslessly capture diverse features of many different languages in a dictionary format, while supplying a framework for inferring relations like translation, derivation, etc., between the data. Building on valuable features of existing models, and working with OUP monolingual and bilingual dictionary datasets, we have designed and implemented a new linguistic ontology. The ontology has been reviewed by a number of computational linguists, and we are working to move more dictionary data into it. We have also developed APIs to surface the linked data to dictionary websites.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,381
inproceedings
maegaard-etal-2016-providing
Providing a Catalogue of Language Resources for Commercial Users
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1072/
Maegaard, Bente and Henriksen, Lina and Joscelyne, Andrew and Lusicky, Vesna and Mazura, Margaretha and Olsen, Sussi and Povlsen, Claus and Wacker, Philippe
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
449--456
Language resources (LRs) are indispensable for the development of tools for machine translation (MT) and various kinds of computer-assisted translation (CAT). In particular, language corpora, both parallel and monolingual, are considered most important, for instance for MT, not only SMT but also hybrid MT. The Language Technology Observatory will provide easy access to information about LRs deemed to be useful for MT and other translation tools through its LR Catalogue. In order to determine what aspects of an LR are useful for MT practitioners, a user study was made, providing a guide to the most relevant metadata and the most relevant quality criteria. We have seen that many resources exist which are useful for MT and similar work, but the majority are for (academic) research or educational use only, and as such not available for commercial use. Our work has revealed a list of gaps: a coverage gap, an awareness gap, a quality gap, and a quantity gap. The paper ends with recommendations for a forward-looking strategy.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,382