Field               Type            Stats
entry_type          stringclasses   4 values
citation_key        stringlengths   10–110
title               stringlengths   6–276
editor              stringclasses   723 values
month               stringclasses   69 values
year                stringdate      1963-01-01 – 2022-01-01
address             stringclasses   202 values
publisher           stringclasses   41 values
url                 stringlengths   34–62
author              stringlengths   6–2.07k
booktitle           stringclasses   861 values
pages               stringlengths   1–12
abstract            stringlengths   302–2.4k
journal             stringclasses   5 values
volume              stringclasses   24 values
doi                 stringlengths   20–39
n                   stringclasses   3 values
wer                 stringclasses   1 value
uas                 null
language            stringclasses   3 values
isbn                stringclasses   34 values
recall              null
number              stringclasses   8 values
a                   null
b                   null
c                   null
k                   null
f1                  stringclasses   4 values
r                   stringclasses   2 values
mci                 stringclasses   1 value
p                   stringclasses   2 values
sd                  stringclasses   1 value
female              stringclasses   0 values
m                   stringclasses   0 values
food                stringclasses   1 value
f                   stringclasses   1 value
note                stringclasses   20 values
__index_level_0__   int64           22k–106k
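The rows below follow this schema; the metric-style columns (wer, uas, recall, f1, and the single-letter fields) are null for ordinary bibliography rows. A minimal sketch of loading and inspecting such a dataset with the Hugging Face `datasets` library follows; the dataset path is a placeholder, not the real identifier.

```python
# Minimal sketch, assuming the data is published as a Hugging Face dataset.
# "user/acl-anthology-bib" is a placeholder path, not the real identifier.
from datasets import load_dataset

ds = load_dataset("user/acl-anthology-bib", split="train")

print(ds.features)  # field names and dtypes, matching the schema table above
print(len(ds))      # number of rows

# Inspect one row, keeping only the populated fields: the metric-style
# columns (wer, uas, recall, f1, ...) are null for most bibliography rows.
row = ds[0]
populated = {k: v for k, v in row.items() if v is not None}
for field, value in populated.items():
    print(f"{field}: {value}")
```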
@inproceedings{wilkens-etal-2016-b2sg,
    title = "{B}2{SG}: a {TOEFL}-like Task for {P}ortuguese",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1580/",
    author = "Wilkens, Rodrigo and Zilio, Leonardo and Ferreira, Eduardo and Villavicencio, Aline",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3659--3662",
    abstract = "Resources such as WordNet are useful for NLP applications, but their manual construction consumes time and personnel, and frequently results in low coverage. One alternative is the automatic construction of large resources from corpora like distributional thesauri, containing semantically associated words. However, as they may contain noise, there is a strong need for automatic ways of evaluating the quality of the resulting resource. This paper introduces a gold standard that can aid in this task. The BabelNet-Based Semantic Gold Standard (B2SG) was automatically constructed based on BabelNet and partly evaluated by human judges. It consists of sets of tests that present one target word, one related word and three unrelated words. B2SG contains 2,875 validated relations: 800 for verbs and 2,075 for nouns; these relations are divided among synonymy, antonymy and hypernymy. They can be used as the basis for evaluating the accuracy of the similarity relations on distributional thesauri by comparing the proximity of the target word with the related and unrelated options and observing if the related word has the highest similarity value among them. As a case study two distributional thesauri were also developed: one using surface forms from a large (1.5 billion word) corpus and the other using lemmatized forms from a smaller (409 million word) corpus. Both distributional thesauri were then evaluated against B2SG, and the one using lemmatized forms performed slightly better.",
}
% __index_level_0__: 60,890 (all other fields null)
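Since each row is a flat set of BibTeX fields plus an index column, it can be rendered back into a BibTeX entry by skipping the null columns. A sketch under that assumption; the helper name `row_to_bibtex` and the truncated `example` dict are illustrative, not part of the dataset.

```python
# Illustrative sketch: render one dataset row back into a BibTeX entry.
# Field names follow the schema above; `row_to_bibtex` and the truncated
# `example` dict are made up for this example.
BIB_FIELDS = [
    "title", "editor", "month", "year", "address",
    "publisher", "url", "author", "booktitle", "pages", "abstract",
]

def row_to_bibtex(row: dict) -> str:
    lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
    for field in BIB_FIELDS:
        value = row.get(field)
        if value is None:
            continue  # null columns (metrics, unused fields) are omitted
        # Real BibTeX would leave month macros like `may` unquoted;
        # quoting everything keeps the sketch simple.
        lines.append(f'    {field} = "{value}",')
    lines.append("}")
    return "\n".join(lines)

example = {
    "entry_type": "inproceedings",
    "citation_key": "wilkens-etal-2016-b2sg",
    "title": "{B}2{SG}: a {TOEFL}-like Task for {P}ortuguese",
    "year": "2016",
}
print(row_to_bibtex(example))
```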
@inproceedings{yuan-etal-2016-mobil,
    title = "{M}o{B}i{L}: A Hybrid Feature Set for Automatic Human Translation Quality Assessment",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1581/",
    author = "Yuan, Yu and Sharoff, Serge and Babych, Bogdan",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3663--3670",
    abstract = "In this paper we introduce MoBiL, a hybrid Monolingual, Bilingual and Language modelling feature set and feature selection and evaluation framework. The set includes translation quality indicators that can be utilized to automatically predict the quality of human translations in terms of content adequacy and language fluency. We compare MoBiL with the QuEst baseline set by using them in classifiers trained with support vector machine and relevance vector machine learning algorithms on the same data set. We also report an experiment on feature selection to opt for fewer but more informative features from MoBiL. Our experiments show that classifiers trained on our feature set perform consistently better in predicting both adequacy and fluency than the classifiers trained on the baseline feature set. MoBiL also performs well when used with both support vector machine and relevance vector machine algorithms.",
}
% __index_level_0__: 60,891 (all other fields null)
@inproceedings{logacheva-etal-2016-marmot,
    title = "{MARMOT}: A Toolkit for Translation Quality Estimation at the Word Level",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1582/",
    author = "Logacheva, Varvara and Hokamp, Chris and Specia, Lucia",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3671--3674",
    abstract = "We present Marmot{\textasciitilde}{\textemdash} a new toolkit for quality estimation (QE) of machine translation output. Marmot contains utilities targeted at quality estimation at the word and phrase level. However, due to its flexibility and modularity, it can also be extended to work at the sentence level. In addition, it can be used as a framework for extracting features and learning models for many common natural language processing tasks. The tool has a set of state-of-the-art features for QE, and new features can easily be added. The tool is open-source and can be downloaded from \url{https://github.com/qe-team/marmot/}",
}
% __index_level_0__: 60,892 (all other fields null)
@inproceedings{katerenchuk-rosenberg-2016-rankdcg,
    title = "{R}ank{DCG}: Rank-Ordering Evaluation Measure",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1583/",
    author = "Katerenchuk, Denys and Rosenberg, Andrew",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3675--3680",
    abstract = "Ranking is used for a wide array of problems, most notably information retrieval (search). Kendall's {\ensuremath{\tau}}, Average Precision, and nDCG are a few popular approaches to the evaluation of ranking. When dealing with problems such as user ranking or recommendation systems, all these measures suffer from various problems, including the inability to deal with elements of the same rank, inconsistent and ambiguous lower bound scores, and an inappropriate cost function. We propose a new measure, a modification of the popular nDCG algorithm, named rankDCG, that addresses these problems. We provide a number of criteria for any effective ranking algorithm and show that only rankDCG satisfies them all. Results are presented on constructed and real data sets. We release a publicly available rankDCG evaluation package.",
}
% __index_level_0__: 60,893 (all other fields null)
@inproceedings{etcheverry-wonsever-2016-spanish,
    title = "{S}panish Word Vectors from {W}ikipedia",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1584/",
    author = "Etcheverry, Mathias and Wonsever, Dina",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3681--3685",
    abstract = "Content analysis from text data requires semantic representations that are difficult to obtain automatically, as they may require large handcrafted knowledge bases or manually annotated examples. Unsupervised autonomous methods for generating semantic representations are of greatest interest in the face of huge volumes of text to be exploited in all kinds of applications. In this work we describe the generation and validation of semantic representations in the vector space paradigm for Spanish. The method used is GloVe (Pennington, 2014), one of the best-performing reported methods, and vectors were trained over Spanish Wikipedia. The learned vectors are evaluated on word analogy and similarity tasks (Pennington, 2014; Baroni, 2014; Mikolov, 2013a). The vector set and a Spanish version of some widely used semantic relatedness tests are made publicly available.",
}
% __index_level_0__: 60,894 (all other fields null)
@inproceedings{humayoun-yu-2016-analyzing,
    title = "Analyzing Pre-processing Settings for {U}rdu Single-document Extractive Summarization",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1585/",
    author = "Humayoun, Muhammad and Yu, Hwanjo",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3686--3693",
    abstract = "Preprocessing is a preliminary step in many fields including IR and NLP. The effect of basic preprocessing settings on text summarization is well studied for English. However, to the best of our knowledge, no such effort exists for the Urdu language. In this study, we analyze the effect of basic preprocessing settings for single-document text summarization for Urdu on a benchmark corpus using various experiments. The analysis is performed using state-of-the-art algorithms for extractive summarization, and the effect of stopword removal, lemmatization, and stemming is analyzed. Results show that these preprocessing settings improve the results.",
}
% __index_level_0__: 60,895 (all other fields null)
@inproceedings{gabor-etal-2016-semantic,
    title = "Semantic Annotation of the {ACL} {A}nthology Corpus for the Automatic Analysis of Scientific Literature",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1586/",
    author = "G{\'a}bor, Kata and Zargayouna, Ha{\"i}fa and Buscaldi, Davide and Tellier, Isabelle and Charnois, Thierry",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3694--3701",
    abstract = "This paper describes the process of creating a corpus annotated for concepts and semantic relations in the scientific domain. A part of the ACL Anthology Corpus was selected for annotation, but the annotation process itself is not specific to the computational linguistics domain and could be applied to any scientific corpora. Concepts were identified and annotated fully automatically, based on a combination of terminology extraction and available ontological resources. A typology of semantic relations between concepts is also proposed. This typology, consisting of 18 domain-specific and 3 generic relations, is the result of a corpus-based investigation of the text sequences occurring between concepts in sentences. A sample of 500 abstracts from the corpus is currently being manually annotated with these semantic relations. Only explicit relations are taken into account, so that the data could serve to train or evaluate pattern-based semantic relation classification systems.",
}
% __index_level_0__: 60,896 (all other fields null)
@inproceedings{derczynski-etal-2016-gate,
    title = "{GATE}-Time: Extraction of Temporal Expressions and Events",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1587/",
    author = "Derczynski, Leon and Str{\"o}tgen, Jannik and Maynard, Diana and Greenwood, Mark A. and Jung, Manuel",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3702--3708",
    abstract = "GATE is a widely used open-source solution for text processing with a large user community. It contains components for several natural language processing tasks. However, temporal information extraction functionality within GATE has been rather limited so far, despite being a prerequisite for many application scenarios in the areas of natural language processing and information retrieval. This paper presents an integrated approach to temporal information processing. We take state-of-the-art tools in temporal expression and event recognition and bring them together to form an openly-available resource within the GATE infrastructure. GATE-Time provides annotation in the form of TimeML events and temporal expressions complying with this mature ISO standard for temporal semantic annotation of documents. Major advantages of GATE-Time are (i) that it relies on HeidelTime for temporal tagging, so that temporal expressions can be extracted and normalized in multiple languages and across different domains, (ii) that it includes a modern, fast event recognition and classification tool, and (iii) that it can be combined with different linguistic pre-processing annotations, and is thus not bound to license-restricted preprocessing components.",
}
% __index_level_0__: 60,897 (all other fields null)
@inproceedings{claveau-kijak-2016-distributional,
    title = "Distributional Thesauri for Information Retrieval and vice versa",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1588/",
    author = "Claveau, Vincent and Kijak, Ewa",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3709--3716",
    abstract = "Distributional thesauri are useful in many tasks of Natural Language Processing. In this paper, we address the problem of building and evaluating such thesauri with the help of Information Retrieval (IR) concepts. Two main contributions are proposed. First, following the work of [8], we show how IR tools and concepts can be successfully used to build a thesaurus. Through several experiments and by evaluating the results directly against reference lexicons, we show that some IR models outperform state-of-the-art systems. Secondly, we use IR as an applicative framework to indirectly evaluate the generated thesaurus. Here again, this task-based evaluation validates the IR approach used to build the thesaurus. Moreover, it allows us to compare these results with those from the direct evaluation framework used in the literature. The observed differences bring these evaluation habits into question.",
}
% __index_level_0__: 60,898 (all other fields null)
@inproceedings{mott-etal-2016-parallel,
    title = "Parallel {C}hinese-{E}nglish Entities, Relations and Events Corpora",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1589/",
    author = "Mott, Justin and Bies, Ann and Song, Zhiyi and Strassel, Stephanie",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3717--3722",
    abstract = "This paper introduces the parallel Chinese-English Entities, Relations and Events (ERE) corpora developed by Linguistic Data Consortium under the DARPA Deep Exploration and Filtering of Text (DEFT) Program. Original Chinese newswire and discussion forum documents are annotated for two versions of the ERE task. The texts are manually translated into English and then annotated for the same ERE tasks on the English translation, resulting in a rich parallel resource that has utility for performers within the DEFT program, for participants in NIST's Knowledge Base Population evaluations, and for cross-language projection research more generally.",
}
% __index_level_0__: 60,899 (all other fields null)
@inproceedings{ellendorff-etal-2016-psymine,
    title = "The {P}sy{M}ine Corpus - A Corpus annotated with Psychiatric Disorders and their Etiological Factors",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1590/",
    author = "Ellendorff, Tilia and Foster, Simon and Rinaldi, Fabio",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3723--3729",
    abstract = "We present the first version of a corpus annotated for psychiatric disorders and their etiological factors. The paper describes the choice of text, annotated entities and events/relations, as well as the annotation scheme and procedure applied. The corpus features a selection of focus psychiatric disorders including depressive disorder, anxiety disorder, obsessive-compulsive disorder, phobic disorders and panic disorder. Etiological factors for these focus disorders are widespread and include genetic, physiological, sociological and environmental factors, among others. Etiological events, including annotated evidence text, represent the interactions between the focus disorders and their etiological factors. In addition to these core events, symptomatic and treatment events have been annotated. The current version of the corpus includes 175 scientific abstracts. All entities and events/relations have been manually annotated by domain experts, and scores of inter-annotator agreement are presented. The aim of the corpus is to provide a first gold standard to support the development of biomedical text mining applications for the specific area of mental disorders, which are among the main contributors to the contemporary burden of disease.",
}
% __index_level_0__: 60,900 (all other fields null)
@inproceedings{fulgoni-etal-2016-empirical,
    title = "An Empirical Exploration of Moral Foundations Theory in Partisan News Sources",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1591/",
    author = "Fulgoni, Dean and Carpenter, Jordan and Ungar, Lyle and Preo{\c{t}}iuc-Pietro, Daniel",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3730--3736",
    abstract = "News sources frame issues in different ways in order to appeal to their readers or to control their perception. We present a large-scale study of news articles from partisan sources in the US across a variety of different issues. We first highlight that differences between sides exist by predicting the political leaning of articles of unseen political bias. Framing can be driven by the different types of morality that each group values. We highlight differences in the framing of news by building on moral foundations theory, quantified using hand-crafted lexicons. Our results show that partisan sources frame political issues differently both in terms of word usage and through the moral foundations they relate to.",
}
% __index_level_0__: 60,901 (all other fields null)
@inproceedings{banea-etal-2016-building,
    title = "Building a Dataset for Possessions Identification in Text",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1592/",
    author = "Banea, Carmen and Chen, Xi and Mihalcea, Rada",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3737--3740",
    abstract = "Just as industrialization matured from mass production to customization and personalization, so has the Web migrated from generic content to public disclosures of one's most intimately held thoughts, opinions and beliefs. This relatively new type of data is able to represent finer and more narrowly defined demographic slices. If until now researchers have primarily focused on leveraging personalized content to identify latent information such as gender, nationality, location, or age of the author, this study seeks to establish a structured way of extracting possessions, or items that people own or are entitled to, as a way to ultimately provide insights into people's behaviors and characteristics. In order to promote more research in this area, we are releasing a set of 798 possessions extracted from the blog genre, where possessions are marked at different confidence levels, as well as a detailed set of guidelines to help in future annotation studies.",
}
% __index_level_0__: 60,902 (all other fields null)
@inproceedings{griffitt-strassel-2016-query,
    title = "The Query of Everything: Developing Open-Domain, Natural-Language Queries for {BOLT} Information Retrieval",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1593/",
    author = "Griffitt, Kira and Strassel, Stephanie",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3741--3747",
    abstract = "The DARPA BOLT Information Retrieval evaluations target open-domain natural-language queries over a large corpus of informal text in English, Chinese and Egyptian Arabic. We outline the goals of BOLT IR, comparing it with the prior GALE Distillation task. After discussing the properties of the BOLT IR corpus, we provide a detailed description of the query creation process, contrasting the summary query format presented to systems at run time with the full query format created by annotators. We describe the relevance criteria used to assess BOLT system responses, highlighting the evolution of the procedures used over the three evaluation phases. We provide a detailed review of the decision points model for relevance assessment introduced during Phase 2, and conclude with information about inter-assessor consistency achieved with the decision points assessment model.",
}
% __index_level_0__: 60,903 (all other fields null)
@inproceedings{liu-etal-2016-validation,
    title = "The Validation of {MRCPD} Cross-language Expansions on Imageability Ratings",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1594/",
    author = "Liu, Ting and Cho, Kit and Strzalkowski, Tomek and Shaikh, Samira and Mirzaei, Mehrdad",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3748--3751",
    abstract = "In this article, we present a method to validate a multi-lingual (English, Spanish, Russian, and Farsi) corpus of imageability ratings automatically expanded from MRCPD (Liu et al., 2014). We employed the concreteness-ratings corpus of Brysbaert et al. (2014) for our English MRCPD+ validation, because human-assessed imageability ratings are lacking and concreteness ratings correlate highly with imageability ratings (e.g. r = .83). For the same reason, we built a small corpus with human imageability assessments to validate the corpora for the other languages. The results show that the automatically expanded imageability ratings are highly correlated with human assessment in all four languages, which demonstrates that our automatic expansion method is valid and robust. We believe these new resources can be of significant interest to the research community, particularly in natural language processing and computational sociolinguistics.",
}
% __index_level_0__: 60,904 (all other fields null)
@inproceedings{pawar-etal-2016-building,
    title = "Building Tempo-{H}indi{W}ord{N}et: A resource for effective temporal information access in {H}indi",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1595/",
    author = "Pawar, Dipawesh and Hasanuzzaman, Mohammed and Ekbal, Asif",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3752--3759",
    abstract = "In this paper, we put forward a strategy that supplements Hindi WordNet entries with information on the temporality of their word senses. Each synset of Hindi WordNet is automatically annotated with one of five dimensions: past, present, future, neutral and atemporal. We use a semi-supervised learning strategy to build temporal classifiers over the glosses of manually selected initial seed synsets. The classification process is iterated, repeatedly expanding the initial seed list based on confidence, until cross-validation accuracy drops. The resource is unique in its nature as, to the best of our knowledge, no such resource is yet available for Hindi.",
}
% __index_level_0__: 60,905 (all other fields null)
@inproceedings{grabar-eshkol-taravela-2016-detection,
    title = "Detection of Reformulations in Spoken {F}rench",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1596/",
    author = "Grabar, Natalia and Eshkol-Taravela, Iris",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3760--3767",
    abstract = "Our work addresses automatic detection of enunciations and segments with reformulations in French spoken corpora. The proposed approach is syntagmatic. It is based on reformulation markers and specificities of spoken language. The reference data are built manually and have gone through consensus. Automatic methods, based on rules and CRF machine learning, are proposed in order to detect the enunciations and segments that contain reformulations. With the CRF models, different features are exploited within a window of various sizes. Detection of enunciations with reformulations shows up to 0.66 precision. The tests performed for the detection of reformulated segments indicate that the task remains difficult. The best average performance values reach up to 0.65 F-measure, 0.75 precision, and 0.63 recall. We outline several perspectives on this work for improving the detection of reformulated segments and for studying the data from other points of view.",
}
% __index_level_0__: 60,906 (all other fields null)
@inproceedings{banjade-rus-2016-dt,
    title = "{DT}-Neg: Tutorial Dialogues Annotated for Negation Scope and Focus in Context",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1597/",
    author = "Banjade, Rajendra and Rus, Vasile",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3768--3771",
    abstract = "Negation is often more frequent in dialogue than in commonly written texts, such as literary texts. Furthermore, the scope and focus of negation depend more on context in dialogues than in other forms of text. Existing negation datasets have focused on non-dialogue texts, such as literary texts, where the scope and focus of negation normally lie within the same sentence as the negation itself; they are therefore not the most appropriate for informing the development of negation handling algorithms for dialogue-based systems. In this paper, we present the DT-Neg corpus (DeepTutor Negation corpus), which contains texts extracted from tutorial dialogues where students interacted with an Intelligent Tutoring System (ITS) to solve conceptual physics problems. The DT-Neg corpus contains annotated negations in student responses, with scope and focus marked based on the context of the dialogue. Our dataset contains 1,088 instances and is available for research purposes at \url{http://language.memphis.edu/dt-neg}.",
}
% __index_level_0__: 60,907 (all other fields null)
@inproceedings{roberts-demner-fushman-2016-annotating,
    title = "Annotating Logical Forms for {EHR} Questions",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1598/",
    author = "Roberts, Kirk and Demner-Fushman, Dina",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3772--3778",
    abstract = "This paper discusses the creation of a semantically annotated corpus of questions about patient data in electronic health records (EHRs). The goal is to provide the training data necessary for semantic parsers to automatically convert EHR questions into a structured query. A layered annotation strategy is used which mirrors a typical natural language processing (NLP) pipeline. First, questions are syntactically analyzed to identify multi-part questions. Second, medical concepts are recognized and normalized to a clinical ontology. Finally, logical forms are created using a lambda calculus representation. We use a corpus of 446 questions asking for patient-specific information. From these, 468 specific questions are found containing 259 unique medical concepts and requiring 53 unique predicates to represent the logical forms. We further present detailed characteristics of the corpus, including inter-annotator agreement results, and describe the challenges automatic NLP systems will face on this task.",
}
% __index_level_0__: 60,908 (all other fields null)
@inproceedings{bethard-parker-2016-semantically,
    title = "A Semantically Compositional Annotation Scheme for Time Normalization",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1599/",
    author = "Bethard, Steven and Parker, Jonathan",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3779--3786",
    abstract = "We present a new annotation scheme for normalizing time expressions, such as {\textquotedblleft}three days ago{\textquotedblright}, to computer-readable forms, such as 2016-03-07. The annotation scheme addresses several weaknesses of the existing TimeML standard, allowing the representation of time expressions that align to more than one calendar unit (e.g., {\textquotedblleft}the past three summers{\textquotedblright}), that are defined relative to events (e.g., {\textquotedblleft}three weeks postoperative{\textquotedblright}), and that are unions or intersections of smaller time expressions (e.g., {\textquotedblleft}Tuesdays and Thursdays{\textquotedblright}). It achieves this by modeling time expression interpretation as the semantic composition of temporal operators like UNION, NEXT, and AFTER. We have applied the annotation scheme to 34 documents so far, producing 1104 annotations, and achieving inter-annotator agreement of 0.821.",
}
% __index_level_0__: 60,909 (all other fields null)
@inproceedings{ozbal-etal-2016-prometheus,
    title = "{PROMETHEUS}: A Corpus of Proverbs Annotated with Metaphors",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1600/",
    author = "{\"O}zbal, G{\"o}zde and Strapparava, Carlo and Tekiro{\u{g}}lu, Serra Sinem",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3787--3793",
    abstract = "Proverbs are commonly metaphoric in nature, and the mapping across domains is commonly established in proverbs. The abundance of proverbs in terms of metaphors makes them an extremely valuable linguistic resource, since they can be utilized as a gold standard for various metaphor-related linguistic tasks such as metaphor identification or interpretation. Besides, a collection of proverbs from various languages annotated with metaphors would also be essential for social scientists to explore the cultural differences between those languages. In this paper, we introduce PROMETHEUS, a dataset consisting of English proverbs and their equivalents in Italian. In addition to the word-level metaphor annotations for each proverb, PROMETHEUS contains other types of information such as the metaphoricity degree of the overall proverb, its meaning, the century in which it was first recorded, and a pair of subjective questions answered by the annotators. To the best of our knowledge, this is the first multi-lingual and open-domain corpus of proverbs annotated with word-level metaphors.",
}
% __index_level_0__: 60,910 (all other fields null)
@inproceedings{djemaa-etal-2016-corpus,
    title = "Corpus Annotation within the {F}rench {F}rame{N}et: a Domain-by-domain Methodology",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1601/",
    author = "Djemaa, Marianne and Candito, Marie and Muller, Philippe and Vieu, Laure",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3794--3801",
    abstract = "This paper reports on the development of a French FrameNet, within the ASFALDA project. While the first phase of the project focused on the development of a French set of frames and a corresponding lexicon (Candito et al., 2014), this paper concentrates on the subsequent corpus annotation phase, which focused on four notional domains (commercial transactions, cognitive stances, causality and verbal communication). Given that full coverage is not reachable for a relatively {\textquotedblleft}new{\textquotedblright} FrameNet project, we advocate that focusing on specific notional domains allowed us to obtain full lexical coverage for the frames of these domains, while partially reflecting word sense ambiguities. Furthermore, as frames and roles were annotated on two French treebanks (the French Treebank (Abeill{\'e} and Barrier, 2004) and the Sequoia Treebank (Candito and Seddah, 2012)), we were able to extract a syntactico-semantic lexicon from the annotated frames. In the resource's current state, there are 98 frames, 662 frame-evoking words, 872 senses, and about 13000 annotated frames, with their semantic roles assigned to portions of text. The French FrameNet is freely available at alpage.inria.fr/asfalda.",
}
% __index_level_0__: 60,911 (all other fields null)
@inproceedings{lefeuvre-halftermeyer-etal-2016-covering,
    title = "Covering various Needs in Temporal Annotation: a Proposal of Extension of {ISO} {T}ime{ML} that Preserves Upward Compatibility",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1602/",
    author = "Lefeuvre-Halftermeyer, Ana{\"i}s and Antoine, Jean-Yves and Couillault, Alain and Schang, Emmanuel and Abouda, Lotfi and Savary, Agata and Maurel, Denis and Eshkol, Iris and Battistelli, Delphine",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3802--3806",
    abstract = "This paper reports a critical analysis of the ISO TimeML standard, in the light of several experiences of temporal annotation that were conducted on spoken French. It shows that the norm suffers from weaknesses that should be corrected to fit a larger variety of needs in NLP and in corpus linguistics. We present our proposal of some improvements of the norm before it is revised by the ISO Committee in 2017. These modifications concern mainly (1) enrichments of well-identified features of the norm: the temporal function of TIMEX time expressions, additional types for TLINK temporal relations; (2) deeper modifications concerning the units or features annotated: clarification between time and tense for EVENT units, coherence of representation between temporal signals (the SIGNAL unit) and TIMEX modifiers (the MOD feature); (3) a recommendation to perform temporal annotation on top of a syntactic (rather than lexical) layer (temporal annotation on a treebank).",
}
% __index_level_0__: 60,912 (all other fields null)
@inproceedings{vieu-etal-2016-general,
    title = "A General Framework for the Annotation of Causality Based on {F}rame{N}et",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1603/",
    author = "Vieu, Laure and Muller, Philippe and Candito, Marie and Djemaa, Marianne",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3807--3813",
    abstract = "We present here a general set of semantic frames to annotate causal expressions, with a rich lexicon in French and an annotated corpus of about 5000 instances of causal lexical items with their corresponding semantic frames. The aim of our project is both to have the largest possible coverage of causal phenomena in French, across all parts of speech, and to have it linked to a general semantic framework such as FN, to benefit in particular from the relations between other semantic frames, e.g., temporal ones or intentional ones, and from the underlying upper lexical ontology that enables some forms of reasoning. This is part of the larger ASFALDA French FrameNet project, which focuses on a few different notional domains that are interesting in their own right (Djemaa et al., 2016), including cognitive positions and communication frames. In the process of building the French lexicon and preparing the annotation of the corpus, we had to remodel some of the frames proposed in FN based on English data, with hopefully more precise frame definitions to facilitate human annotation. This includes semantic clarifications of frames and frame elements, redundancy elimination, and added coverage. The result is arguably a significant improvement of the treatment of causality in FN itself.",
}
% __index_level_0__: 60,913 (all other fields null)
@inproceedings{vempala-blanco-2016-annotating,
    title = "Annotating Temporally-Anchored Spatial Knowledge on Top of {O}nto{N}otes Semantic Roles",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1604/",
    author = "Vempala, Alakananda and Blanco, Eduardo",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3814--3821",
    abstract = "This paper presents a two-step methodology to annotate spatial knowledge on top of OntoNotes semantic roles. First, we manipulate semantic roles to automatically generate potential additional spatial knowledge. Second, we crowdsource annotations with Amazon Mechanical Turk to either validate or discard the potential additional spatial knowledge. The resulting annotations indicate whether entities are or are not located somewhere with a degree of certainty, and temporally anchor this spatial information. Crowdsourcing experiments show that the additional spatial knowledge is ubiquitous and intuitive to humans, and experimental results show that it can be inferred automatically using standard supervised machine learning techniques.",
}
% __index_level_0__: 60,914 (all other fields null)
@inproceedings{gotze-boye-2016-spaceref,
    title = "{S}pace{R}ef: A corpus of street-level geographic descriptions",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1605/",
    author = "G{\"o}tze, Jana and Boye, Johan",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3822--3827",
    abstract = "This article describes SPACEREF, a corpus of street-level geographic descriptions. Pedestrians are walking a route in a (real) urban environment, describing their actions. Their position is automatically logged, their speech is manually transcribed, and their references to objects are manually annotated with respect to a crowdsourced geographic database. We describe how the data was collected and annotated, and how it has been used in the context of creating resources for an automatic pedestrian navigation system.",
}
% __index_level_0__: 60,915 (all other fields null)
@inproceedings{mirzaei-moloodi-2016-persian,
    title = "{P}ersian {P}roposition {B}ank",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1606/",
    author = "Mirzaei, Azadeh and Moloodi, Amirsaeid",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3828--3835",
    abstract = "This paper describes the procedure of semantic role labeling and the development of the first manually annotated Persian Proposition Bank (PerPB), which added a layer of predicate-argument information to the syntactic structures of the Persian Dependency Treebank (known as PerDT). Throughout the annotation process, the annotators could see the syntactic information of all the sentences, and they annotated 29,982 sentences with more than 9,200 unique verbs. In the annotation procedure, the direct syntactic dependents of the verbs were the first candidates for being annotated, so we did not annotate the other, indirect dependents unless their phrasal heads were propositional and had their own arguments or adjuncts. Hence, besides the semantic role labeling of verbs, the argument structure of 1,300 unique propositional nouns and 300 unique propositional adjectives was annotated in the sentences, too. The accuracy of the annotation process was measured by double annotation of the data at two separate stages, and finally the data was prepared in the CoNLL dependency format.",
}
% __index_level_0__: 60,916 (all other fields null)
@inproceedings{tateisi-etal-2016-typed,
    title = "Typed Entity and Relation Annotation on Computer Science Papers",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1607/",
    author = "Tateisi, Yuka and Ohta, Tomoko and Pyysalo, Sampo and Miyao, Yusuke and Aizawa, Akiko",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3836--3843",
    abstract = "We describe our ongoing effort to establish an annotation scheme for describing the semantic structures of research articles in the computer science domain, with the intended use of developing search systems that can refine their results by the roles of the entities denoted by the query keys. In our scheme, mentions of entities are annotated with ontology-based types, and the roles of the entities are annotated as relations with other entities described in the text. So far, we have annotated 400 abstracts from the ACL anthology and the ACM digital library. In this paper, the scheme and the annotated dataset are described, along with the problems found in the course of annotation. We also show the results of automatic annotation and evaluate the corpus in a practical setting in application to topic extraction.",
}
% __index_level_0__: 60,917 (all other fields null)
@inproceedings{gast-etal-2016-enriching,
    title = "Enriching {T}ime{B}ank: Towards a more precise annotation of temporal relations in a text",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1608/",
    author = "Gast, Volker and Bierkandt, Lennart and Druskat, Stephan and Rzymski, Christoph",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3844--3850",
    abstract = "We propose a way of enriching the TimeML annotations of TimeBank by adding information about the Topic Time in terms of Klein (1994). The annotations are partly automatic, partly inferential and partly manual. The corpus was converted into the native format of the annotation software GraphAnno and POS-tagged using the Stanford bidirectional dependency network tagger. On top of each finite verb, a FIN-node with tense information was created, and on top of any FIN-node, a TOPICTIME-node, in accordance with Klein's (1994) treatment of finiteness as the linguistic correlate of the Topic Time. Each TOPICTIME-node is linked to a MAKEINSTANCE-node representing an (instantiated) event in TimeML (Pustejovsky et al. 2005), the markup language used for the annotation of TimeBank. For such links we introduce a new category, ELINK. ELINKs capture the relationship between the Topic Time (TT) and the Time of Situation (TSit) and have an aspectual interpretation in Klein's (1994) theory. In addition to these automatic and inferential annotations, some TLINKs were added manually. Using an example from the corpus, we show that the inclusion of the Topic Time in the annotations allows for a richer representation of the temporal structure than does TimeML. A way of representing this structure in a diagrammatic form similar to the T-Box format (Verhagen, 2007) is proposed.",
}
% __index_level_0__: 60,918 (all other fields null)
@inproceedings{sheikh-etal-2016-diachronic,
    title = "How Diachronic Text Corpora Affect Context based Retrieval of {OOV} Proper Names for Audio News",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1609/",
    author = "Sheikh, Imran and Illina, Irina and Fohr, Dominique",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3851--3855",
    abstract = "Out-Of-Vocabulary (OOV) words missed by Large Vocabulary Continuous Speech Recognition (LVCSR) systems can be recovered with the help of the topic and semantic context of the OOV words captured from a diachronic text corpus. In this paper we investigate how the choice of documents for the diachronic text corpora affects the retrieval of OOV Proper Names (PNs) relevant to an audio document. We first present our diachronic French broadcast news datasets, which highlight the motivation of our study on OOV PNs. Then the effect of using diachronic text data from different sources and a different time span is analysed. With OOV PN retrieval experiments on French broadcast news videos, we conclude that a diachronic corpus with text from different sources leads to better retrieval performance than one relying on text from a single source or from a longer time span.",
}
% __index_level_0__: 60,919 (all other fields null)
@inproceedings{wong-etal-2016-syllable,
    title = "Syllable based {DNN}-{HMM} {C}antonese Speech to Text System",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1610/",
    author = "Wong, Timothy and Li, Claire and Lam, Sam and Chiu, Billy and Lu, Qin and Li, Minglei and Xiong, Dan and Yu, Roy Shing and Ng, Vincent T.Y.",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3856--3862",
    abstract = "This paper reports our work on building a Cantonese Speech-to-Text (STT) system with a syllable-based acoustic model. This is part of an effort to build an STT system to aid dyslexic students who have cognitive deficiency in writing skills but have no problem expressing their ideas through speech. For Cantonese speech recognition, the basic unit of acoustic models can be either the conventional Initial-Final (IF) syllables, or the Onset-Nucleus-Coda (ONC) syllables, where finals are further split into nucleus and coda to reflect the intra-syllable variations in Cantonese. By using the Kaldi toolkit, our system is trained using the stochastic gradient descent optimization model with the aid of GPUs for the hybrid Deep Neural Network and Hidden Markov Model (DNN-HMM), with and without the I-vector based speaker adaptive training technique. The input features of the same Gaussian Mixture Model with speaker adaptive training (GMM-SAT) to DNN are used in all cases. Experiments show that the ONC-based syllable acoustic modeling with I-vector based DNN-HMM achieves the best performance, with a word error rate (WER) of 9.66{\%} and a real time factor (RTF) of 1.38812.",
}
% __index_level_0__: 60,920 (all other fields null)
@inproceedings{gauthier-etal-2016-collecting,
    title = "Collecting Resources in Sub-{S}aharan {A}frican Languages for Automatic Speech Recognition: a Case Study of {W}olof",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1611/",
    author = "Gauthier, Elodie and Besacier, Laurent and Voisin, Sylvie and Melese, Michael and Elingui, Uriel Pascal",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3863--3867",
    abstract = "This article presents the data collected and the ASR systems developed for four sub-Saharan African languages (Swahili, Hausa, Amharic and Wolof). To illustrate our methodology, the focus is placed on Wolof (a very under-resourced language), for which we designed the first ASR system ever built for this language. All data and scripts are available online in our GitHub repository.",
}
% __index_level_0__: 60,921 (all other fields null)
@inproceedings{pelemans-etal-2016-scale,
    title = "{SCALE}: A Scalable Language Engineering Toolkit",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1612/",
    author = "Pelemans, Joris and Verwimp, Lyan and Demuynck, Kris and Van hamme, Hugo and Wambacq, Patrick",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3868--3871",
    abstract = "In this paper we present SCALE, a new Python toolkit that contains two extensions to n-gram language models. The first extension is a novel technique to model compound words called Semantic Head Mapping (SHM). The second extension, Bag-of-Words Language Modeling (BagLM), bundles popular models such as Latent Semantic Analysis and Continuous Skip-grams. Both extensions scale to large data and allow the integration into first-pass ASR decoding. The toolkit is open source, includes working examples and can be found on \url{http://github.com/jorispelemans/scale}.",
}
% __index_level_0__: 60,922 (all other fields null)
@inproceedings{brognaux-etal-2016-combining,
    title = "Combining Manual and Automatic Prosodic Annotation for Expressive Speech Synthesis",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1613/",
    author = "Brognaux, Sandrine and Fran{\c{c}}ois, Thomas and Saerens, Marco",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    pages = "3872--3879",
    abstract = "Text-to-speech has long been centered on the production of an intelligible message of good quality. More recently, interest has shifted to the generation of more natural and expressive speech. A major issue of existing approaches is that they usually rely on a manual annotation in expressive styles, which tends to be rather subjective. A typical related issue is that the annotation is strongly influenced {\textemdash} and possibly biased {\textemdash} by the semantic content of the text (e.g. a shot or a fault may incite the annotator to tag that sequence as expressing a high degree of excitation, independently of its acoustic realization). This paper investigates the assumption that human annotation of basketball commentaries in excitation levels can be automatically improved on the basis of acoustic features. It presents two techniques for label correction exploiting a Gaussian mixture and a proportional-odds logistic regression. The automatically re-annotated corpus is then used to train HMM-based expressive speech synthesizers, the performance of which is assessed through subjective evaluations. The results indicate that the automatic correction of the annotation with Gaussian mixture helps to synthesize more contrasted excitation levels, while preserving naturalness.",
}
% __index_level_0__: 60,923 (all other fields null)
inproceedings
kisler-etal-2016-bas
{BAS} Speech Science Web Services - an Update of Current Developments
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1614/
Kisler, Thomas and Reichel, Uwe and Schiel, Florian and Draxler, Christoph and Jackl, Bernhard and P{\"o}rner, Nina
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3880--3885
In 2012 the Bavarian Archive for Speech Signals started providing some of its tools from the field of spoken language in the form of Software as a Service (SaaS). This means users access the processing functionality over a web browser and therefore do not have to install complex software packages on a local computer. Amongst others, these tools include segmentation {\&} labeling, grapheme-to-phoneme conversion, text alignment, syllabification and metadata generation, where all but the last are available for a variety of languages. Since its creation the number of available services and the web interface have changed considerably. We give an overview and a detailed description of the system architecture, the available web services and their functionality. Furthermore, we show how the number of files processed by the system has developed over the last four years.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,924
inproceedings
batista-etal-2016-spa
{SPA}: Web-based Platform for easy Access to Speech Processing Modules
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1615/
Batista, Fernando and Curto, Pedro and Trancoso, Isabel and Abad, Alberto and Ferreira, Jaime and Ribeiro, Eug{\'e}nio and Moniz, Helena and de Matos, David Martins and Ribeiro, Ricardo
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3886--3892
This paper presents SPA, a web-based Speech Analytics platform that integrates several speech processing modules and makes it possible to use them through the web. It was developed with the aim of facilitating the usage of the modules, without the need to know about software dependencies and specific configurations. Apart from being accessed by a web browser, the platform also provides a REST API for easy integration with other applications. The platform is flexible, scalable, provides authentication for access restrictions, and was developed taking into consideration the time and effort required to provide new services. The platform is still being improved, but it already integrates a considerable number of audio and text processing modules, including: automatic transcription, speech disfluency classification, emotion detection, dialog act recognition, age and gender classification, non-nativeness detection, hyper-articulation detection, and two external modules for feature extraction and DTMF detection. This paper describes the SPA architecture, presents the already integrated modules, and provides a detailed description of the ones most recently integrated.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,925
inproceedings
seara-etal-2016-enhanced
Enhanced {CORILGA}: Introducing the Automatic Phonetic Alignment Tool for Continuous Speech
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1616/
Seara, Roberto and Martinez, Marta and Varela, Roc{\'i}o and Mateo, Carmen Garc{\'i}a and Rei, Elisa Fernandez and Regueira, Xos{\'e} Luis
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3893--3898
The {\textquotedblleft}Corpus Oral Informatizado da Lingua Galega (CORILGA){\textquotedblright} project aims at building a corpus of oral language for Galician, primarily designed to study linguistic variation and change. This project is currently under development and is periodically enriched with new contributions. The long-term goal is that all the speech recordings will be enriched with phonetic, syllabic, morphosyntactic, lexical and sentence-level ELAN-compliant annotations. One way to speed up the annotation process is to use automatic speech-recognition-based tools tailored to the application. Therefore, the CORILGA repository has been enhanced with an automatic alignment tool, available to the administrator of the repository, that aligns speech with an orthographic transcription. In the event that no transcription, or only a partial one, is available, a speech recognizer for Galician is used to generate word and phonetic segmentations. These recognized outputs may contain errors that have to be manually corrected by the administrator. To assist in this task, the tool also provides an ELAN tier with the confidence measure of each recognized word. In this paper, after a description of the main facts of the CORILGA corpus, the speech alignment and recognition tools are described. Both have been developed using the Kaldi toolkit.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,926
inproceedings
konat-etal-2016-corpus
A Corpus of Argument Networks: Using Graph Properties to Analyse Divisive Issues
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1617/
Konat, Barbara and Lawrence, John and Park, Joonsuk and Budzynska, Katarzyna and Reed, Chris
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3899--3906
Governments are increasingly utilising online platforms in order to engage with, and ascertain the opinions of, their citizens. Whilst policy makers could potentially benefit from such enormous feedback from society, they first face the challenge of making sense out of the large volumes of data produced. This creates a demand for tools and technologies which will enable governments to quickly and thoroughly digest the points being made and to respond accordingly. By determining the argumentative and dialogical structures contained within a debate, we are able to determine the issues which are divisive and those which attract agreement. This paper proposes a method of graph-based analytics which uses properties of graphs representing networks of arguments pro- {\&} con- in order to automatically analyse issues which divide citizens about new regulations. By future application of the most recent advances in argument mining, the results reported here will have a chance to scale up to enable sense-making of the vast amount of feedback received from citizens on directions that policy should take.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,927
inproceedings
correia-etal-2016-metated
meta{TED}: a Corpus of Metadiscourse for Spoken Language
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1618/
Correia, Rui and Mamede, Nuno and Baptista, Jorge and Eskenazi, Maxine
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3907--3913
This paper describes metaTED {\textemdash} a freely available corpus of metadiscursive acts in spoken language collected via crowdsourcing. Metadiscursive acts were annotated on a set of 180 randomly chosen TED talks in English, spanning different speakers and topics. The taxonomy used for annotation is composed of 16 categories, adapted from Adel (2010). This adaptation takes into account both the material to annotate and the setting in which the annotation task is performed. The crowdsourcing setup is described, including considerations regarding training and quality control. The collected data is evaluated in terms of quantity of occurrences, inter-annotator agreement, and annotation-related measures (such as average time on task and self-reported confidence). Results show different levels of agreement among metadiscourse acts ({\ensuremath{\alpha}} {\ensuremath{\in}} [0.15; 0.49]). To further assess the collected material, a subset of the annotations was submitted to experts for appraisal, who validated which of the marked occurrences truly correspond to instances of the metadiscursive act at hand. Similarly to what happened with the crowd, the experts revealed different levels of agreement between categories ({\ensuremath{\alpha}} {\ensuremath{\in}} [0.18; 0.72]). The paper concludes with a discussion on the applicability of metaTED with respect to each of the 16 categories of metadiscourse.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,928
inproceedings
pareti-2016-parc
{PARC} 3.0: A Corpus of Attribution Relations
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1619/
Pareti, Silvia
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3914--3920
Quotation and opinion extraction, discourse and factuality have all partly addressed the annotation and identification of Attribution Relations. However, disjoint efforts have provided a partial and partly inaccurate picture of attribution and generated small or incomplete resources, thus limiting the applicability of machine learning approaches. This paper presents PARC 3.0, a large corpus fully annotated with Attribution Relations (ARs). The annotation scheme was tested with an inter-annotator agreement study showing satisfactory results for the identification of ARs and high agreement on the selection of the text spans corresponding to its constitutive elements: source, cue and content. The corpus, which comprises around 20k ARs, was used to investigate the range of structures that can express attribution. The results show a complex and varied relation of which the literature has addressed only a portion. PARC 3.0 is available for research use and can be used in a range of different studies to analyse attribution and validate assumptions as well as to develop supervised attribution extraction models.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,929
inproceedings
li-etal-2016-improving
Improving the Annotation of Sentence Specificity
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1620/
Li, Junyi Jessy and O{'}Daniel, Bridget and Wu, Yi and Zhao, Wenli and Nenkova, Ani
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3921--3927
We introduce improved guidelines for the annotation of sentence specificity, addressing the issues encountered in prior work. Our annotation provides judgements of sentences in context. Rather than binary judgements, we introduce a specificity scale which accommodates nuanced judgements. Our augmented annotation procedure also allows us to define where in the discourse context the lack of specificity can be resolved. In addition, the cause of the underspecification is annotated in the form of free text questions. We present results from a pilot annotation with this new scheme and demonstrate good inter-annotator agreement. We found that the lack of specificity is distributed evenly among immediate prior context, long distance prior context and no prior context. We find that missing details that are not resolved in the prior context are more likely to trigger questions about the reason behind events, {\textquotedblleft}why{\textquotedblright} and {\textquotedblleft}how{\textquotedblright}. Our data is accessible at \url{http://www.cis.upenn.edu/~nlp/corpora/lrec16spec.html}
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,930
inproceedings
de-kuthy-etal-2016-focus-annotation
Focus Annotation of Task-based Data: A Comparison of Expert and Crowd-Sourced Annotation in a Reading Comprehension Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1621/
De Kuthy, Kordula and Ziai, Ramon and Meurers, Detmar
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3928--3935
While the formal pragmatic concepts in information structure, such as the focus of an utterance, are precisely defined in theoretical linguistics and potentially very useful in conceptual and practical terms, it has turned out to be difficult to reliably annotate such notions in corpus data. We present a large-scale focus annotation effort designed to overcome this problem. Our annotation study is based on the task-based corpus CREG, which consists of answers to explicitly given reading comprehension questions. We compare focus annotation by trained annotators with a crowd-sourcing setup making use of untrained native speakers. Given the task context and an annotation process incrementally making the question form and answer type explicit, the trained annotators reach substantial agreement for focus annotation. Interestingly, the crowd-sourcing setup also supports high-quality annotation {\textemdash} for specific subtypes of data. Finally, we turn to the question whether the relevance of focus annotation can be extrinsically evaluated. We show that automatic short-answer assessment significantly improves for focus annotated data. The focus annotated CREG corpus is freely available and constitutes the largest such resource for German.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,931
inproceedings
alex-etal-2016-homing
Homing in on {T}witter Users: Evaluating an Enhanced Geoparser for User Profile Locations
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1622/
Alex, Beatrice and Llewellyn, Clare and Grover, Claire and Oberlander, Jon and Tobin, Richard
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3936--3944
Twitter-related studies often need to geo-locate Tweets or Twitter users, identifying their real-world geographic locations. As tweet-level geotagging remains rare, most prior work exploited tweet content, timezone and network information to inform geolocation, or else relied on off-the-shelf tools to geolocate users from location information in their user profiles. However, such user location metadata is not consistently structured, causing such tools to fail regularly, especially if a string contains multiple locations, or if locations are very fine-grained. We argue that user profile location (UPL) and tweet location need to be treated as distinct types of information from which differing inferences can be drawn. Here, we apply geoparsing to UPLs, and demonstrate how task performance can be improved by adapting our Edinburgh Geoparser, which was originally developed for processing English text. We present a detailed evaluation method and results, including inter-coder agreement. We demonstrate that the optimised geoparser can effectively extract and geo-reference multiple locations at different levels of granularity with an F1-score of around 0.90. We also illustrate how geoparsed UPLs can be exploited for international information trade studies and country-level sentiment analysis.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,932
inproceedings
mohammad-etal-2016-dataset
A Dataset for Detecting Stance in Tweets
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1623/
Mohammad, Saif and Kiritchenko, Svetlana and Sobhani, Parinaz and Zhu, Xiaodan and Cherry, Colin
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3945--3952
We can often detect from a person's utterances whether he/she is in favor of or against a given target entity (a product, topic, another person, etc.). Here for the first time we present a dataset of tweets annotated for whether the tweeter is in favor of or against pre-chosen targets of interest{\textemdash}their stance. The targets of interest may or may not be referred to in the tweets, and they may or may not be the target of opinion in the tweets. The data pertains to six targets of interest commonly known and debated in the United States. Apart from stance, the tweets are also annotated for whether the target of interest is the target of opinion in the tweet. The annotations were performed by crowdsourcing. Several techniques were employed to encourage high-quality annotations (for example, providing clear and simple instructions) and to identify and discard poor annotations (for example, using a small set of check questions annotated by the authors). This Stance Dataset, which was subsequently also annotated for sentiment, can be used to better understand the relationship between stance, sentiment, entity relationships, and textual inference.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,933
inproceedings
dini-bittar-2016-emotion
Emotion Analysis on {T}witter: The Hidden Challenge
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1624/
Dini, Luca and Bittar, Andr{\'e}
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3953--3958
In this paper, we present an experiment to detect emotions in tweets. Unlike much previous research, we draw the important distinction between the tasks of emotion detection in a closed world assumption (i.e. every tweet is emotional) and the complicated task of identifying emotional versus non-emotional tweets. Given an apparent lack of appropriately annotated data, we created two corpora for these tasks. We describe two systems, one symbolic and one based on machine learning, which we evaluated on our datasets. Our evaluation shows that a machine learning classifier performs best on emotion detection, while a symbolic approach is better for identifying relevant (i.e. emotional) tweets.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,934
inproceedings
inel-etal-2016-crowdsourcing
Crowdsourcing Salient Information from News and Tweets
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1625/
Inel, Oana and Caselli, Tommaso and Aroyo, Lora
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3959--3966
The increasing streams of information pose challenges to both humans and machines. On the one hand, humans need to identify relevant information and consume only the information that lies within their interests. On the other hand, machines need to understand the information that is published in online data streams and generate concise and meaningful overviews. We consider events as prime factors to query for information and generate meaningful context. The focus of this paper is to acquire empirical insights for identifying salience features in tweets and news about a target event, i.e., the event of {\textquotedblleft}whaling{\textquotedblright}. We first derive a methodology to identify such features by building up a knowledge space of the event enriched with relevant phrases, sentiments and ranked by their novelty. We applied this methodology to tweets and we have performed preliminary work towards adapting it to news articles. Our results show that crowdsourcing text relevance, sentiments and novelty (1) can be a main step in identifying salient information, and (2) provides a deeper and more precise understanding of the data at hand compared to state-of-the-art approaches.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,935
inproceedings
barbieri-etal-2016-emoji
What does this Emoji Mean? A Vector Space Skip-Gram Model for {T}witter Emojis
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1626/
Barbieri, Francesco and Ronzano, Francesco and Saggion, Horacio
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3967--3972
Emojis allow us to describe objects, situations and even feelings with small images, providing a visual and quick way to communicate. In this paper, we analyse emojis used on Twitter with distributional semantic models. We retrieve 10 million tweets posted by US users, and we build several skip-gram word embedding models by mapping both words and emojis into the same vector space. We test our models with semantic similarity experiments, comparing the output of our models with human assessment. We also carry out an exhaustive qualitative evaluation, showing interesting results.
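As a rough illustration of the approach the abstract describes, the following sketch trains a skip-gram model over tweets in which emojis are kept as standalone tokens, using the gensim library. The toy tweets and the tokenization are assumptions for illustration; this is not the authors' pipeline.

```python
# Minimal sketch: mapping words and emojis into one skip-gram vector space.
# Uses gensim's Word2Vec (sg=1 selects skip-gram). Tokenization that keeps
# emojis as standalone tokens is an assumption, not the authors' pipeline.
from gensim.models import Word2Vec

tweets = [
    ["happy", "birthday", "🎂", "🎉"],
    ["love", "this", "song", "❤"],
    ["so", "tired", "😴"],
]  # in practice: millions of tokenized tweets

model = Word2Vec(sentences=tweets, vector_size=100, window=5,
                 min_count=1, sg=1)  # min_count=1 only for this toy corpus
# Words and emojis now live in the same space, so we can ask for neighbours:
print(model.wv.most_similar("🎂", topn=3))
```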
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,936
inproceedings
iosif-potamianos-2016-crossmodal
Crossmodal Network-Based Distributional Semantic Models
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1627/
Iosif, Elias and Potamianos, Alexandros
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3973--3979
Despite the recent success of distributional semantic models (DSMs) in various semantic tasks, they remain disconnected from real-world perceptual cues since they typically rely on linguistic features. Text data constitute the dominant source of features for the majority of such models, although there is evidence from cognitive science that cues from other modalities contribute to the acquisition and representation of semantic knowledge. In this work, we propose the crossmodal extension of a two-tier text-based model, where semantic representations are encoded in the first layer, while the second layer is used for computing similarity between words. We exploit text- and image-derived features for performing computations at each layer, as well as various approaches for their crossmodal fusion. It is shown that the crossmodal model performs better (from 0.68 to 0.71 correlation coefficient) than the unimodal one for the task of similarity computation between words.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,937
inproceedings
bonial-palmer-2016-comprehensive
Comprehensive and Consistent {P}rop{B}ank Light Verb Annotation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1628/
Bonial, Claire and Palmer, Martha
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3980--3985
Recent efforts have focused on expanding the annotation coverage of PropBank from verb relations to adjective and noun relations, as well as light verb constructions (e.g., make an offer, take a bath). While each new relation type has presented unique annotation challenges, ensuring consistent and comprehensive annotation of light verb constructions has proved particularly challenging, given that light verb constructions are semi-productive, difficult to define, and there are often borderline cases. This research describes the iterative process of developing PropBank annotation guidelines for light verb constructions, the current guidelines, and a comparison to related resources.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,938
inproceedings
hollenstein-etal-2016-inconsistency
Inconsistency Detection in Semantic Annotation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1629/
Hollenstein, Nora and Schneider, Nathan and Webber, Bonnie
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3986--3990
Inconsistencies are part of any manually annotated corpus. Automatically finding these inconsistencies and correcting them (even manually) can increase the quality of the data. Past research has focused mainly on detecting inconsistency in syntactic annotation. This work explores new approaches to detecting inconsistency in semantic annotation. Two ranking methods are presented in this paper: a discrepancy ranking and an entropy ranking. Those methods are then tested and evaluated on multiple corpora annotated with multiword expressions and supersense labels. The results show considerable improvements in detecting inconsistency candidates over a random baseline. Possible applications of methods for inconsistency detection are improving the annotation procedure as well as the guidelines and correcting errors in completed annotations.
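To make the entropy ranking concrete, here is a minimal sketch under the assumption that an inconsistency candidate is a word type whose distribution over assigned labels has high entropy. The data and function names are illustrative, not the paper's exact formulation.

```python
# Minimal sketch of an entropy ranking for annotation inconsistency:
# word types whose label distribution is high-entropy are candidates for
# inconsistent annotation. Illustrative only, not the paper's exact method.
import math
from collections import Counter, defaultdict

def entropy_ranking(annotations):
    """annotations: iterable of (word_type, label) pairs."""
    labels_by_type = defaultdict(list)
    for word, label in annotations:
        labels_by_type[word].append(label)
    scores = {}
    for word, labels in labels_by_type.items():
        counts, total = Counter(labels), len(labels)
        scores[word] = -sum((c / total) * math.log2(c / total)
                            for c in counts.values())
    return sorted(scores.items(), key=lambda kv: -kv[1])

data = [("bank", "n.location"), ("bank", "n.possession"), ("dog", "n.animal"),
        ("bank", "n.location"), ("dog", "n.animal")]
print(entropy_ranking(data))  # "bank" ranks above "dog"
```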
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,939
inproceedings
oepen-etal-2016-towards
Towards Comparability of Linguistic Graph {B}anks for Semantic Parsing
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1630/
Oepen, Stephan and Kuhlmann, Marco and Miyao, Yusuke and Zeman, Daniel and Cinkov{\'a}, Silvie and Flickinger, Dan and Haji{\v{c}}, Jan and Ivanova, Angelina and Ure{\v{s}}ov{\'a}, Zde{\v{n}}ka
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3991--3995
We announce a new language resource for research on semantic parsing, a large, carefully curated collection of semantic dependency graphs representing multiple linguistic traditions. This resource is called SDP 2016 and provides an update and extension to previous versions used as Semantic Dependency Parsing target representations in the 2014 and 2015 Semantic Evaluation Exercises. For a common core of English text, this third edition comprises semantic dependency graphs from four distinct frameworks, packaged in a unified abstract format and aligned at the sentence and token levels. SDP 2016 is the first general release of this resource and is available for licensing from the Linguistic Data Consortium in May 2016. The data is accompanied by an open-source SDP utility toolkit and system results from previous contrastive parsing evaluations against these target representations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,940
inproceedings
lu-ng-2016-event
Event Coreference Resolution with Multi-Pass Sieves
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1631/
Lu, Jing and Ng, Vincent
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3996--4003
Multi-pass sieve approaches have been successfully applied to entity coreference resolution and many other tasks in natural language processing (NLP), owing in part to the ease of designing high-precision rules for these tasks. However, the same is not true for event coreference resolution: typically lying towards the end of the standard information extraction pipeline, an event coreference resolver assumes as input the noisy outputs of its upstream components such as the trigger identification component and the entity coreference resolution component. The difficulty in designing high-precision rules makes it challenging to successfully apply a multi-pass sieve approach to event coreference resolution. In this paper, we investigate this challenge, proposing the first multi-pass sieve approach to event coreference resolution. When evaluated on the version of the KBP 2015 corpus available to the participants of EN Task 2 (Event Nugget Detection and Coreference), our approach achieves an Avg F-score of 40.32{\%}, outperforming the best participating system by 0.67{\%} in Avg F-score.
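For readers unfamiliar with the architecture, the following is a minimal sketch of a generic multi-pass sieve: ordered passes, most precise first, each merging only the clusters it is confident about. The concrete sieves and mention fields below are invented placeholders, not the paper's rules.

```python
# Minimal sketch of a multi-pass sieve: high-precision passes run first,
# each pass merging only the event-mention clusters it licenses.
# The concrete sieves below are placeholders, not the paper's rules.
def same_trigger_sieve(c1, c2):
    return any(m["trigger"] == n["trigger"] for m in c1 for n in c2)

def same_type_and_arg_sieve(c1, c2):
    return any(m["type"] == n["type"] and m["arg"] == n["arg"]
               for m in c1 for n in c2)

def resolve(mentions, sieves):
    clusters = [[m] for m in mentions]  # start from singletons
    for sieve in sieves:                # ordered, most precise first
        merged = True
        while merged:                   # rescan after every merge
            merged = False
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    if sieve(clusters[i], clusters[j]):
                        clusters[i] += clusters.pop(j)
                        merged = True
                        break
                if merged:
                    break
    return clusters

mentions = [{"trigger": "attack", "type": "Conflict", "arg": "city"},
            {"trigger": "attack", "type": "Conflict", "arg": "city"},
            {"trigger": "meet",   "type": "Contact",  "arg": "leaders"}]
print(resolve(mentions, [same_trigger_sieve, same_type_and_arg_sieve]))
```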
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,941
inproceedings
cavar-etal-2016-endangered
Endangered Language Documentation: Bootstrapping a Chatino Speech Corpus, Forced Aligner, {ASR}
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1632/
{\'C}avar, Malgorzata and {\'C}avar, Damir and Cruz, Hilaria
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4004--4011
This project approaches the problem of language documentation and revitalization from a rather untraditional angle. To improve and facilitate the documentation of endangered languages, we attempt to use corpus linguistic methods and speech and language technologies to reduce the time needed for the transcription and annotation of audio and video language recordings. The paper demonstrates this approach on the example of the endangered and seriously under-resourced variety of Eastern Chatino (CTP). We show how initial speech corpora can be created that facilitate the development of speech and language technologies for under-resourced languages, by utilizing Forced Alignment tools to time-align transcriptions. Time-aligned transcriptions can be used to train speech recognition models, which in turn support the transcription and annotation of untranscribed data. Speech technologies can be used to reduce the time and effort necessary for the transcription and annotation of large collections of audio and video recordings in digital language archives, addressing the transcription bottleneck problem that most language archives and many under-documented languages are confronted with. This approach can increase the availability of language resources from low-resourced and endangered languages to speech and language technology research and development.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,942
inproceedings
matos-etal-2016-dirha
The {DIRHA} {P}ortuguese Corpus: A Comparison of Home Automation Command Detection and Recognition in Simulated and Real Data.
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1633/
Matos, Miguel and Abad, Alberto and Serralheiro, Ant{\'o}nio
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4012--4018
In this paper, we describe a new corpus, named DIRHA-L2F RealCorpus, composed of typical home automation speech interactions in European Portuguese that has been recorded by INESC-ID's Spoken Language Systems Laboratory (L2F) to support the activities of the Distant-speech Interaction for Robust Home Applications (DIRHA) EU-funded project. The corpus is a multi-microphone and multi-room database of real continuous audio sequences containing read phonetically rich sentences, read and spontaneous keyword activation sentences, and read and spontaneous home automation commands. The background noise conditions are controlled and randomly recreated with noises typically found in home environments. Experimental validation on this corpus is reported in comparison with the results obtained on a simulated corpus, using a fully automated speech processing pipeline for two fundamental automatic speech recognition tasks of typical {\textquoteleft}always-listening{\textquoteright} home-automation scenarios: system activation and voice command recognition. Considering the results on both corpora, the presence of overlapping voice-like noise emerges as the main problem: simulated sequences contain concurrent speakers, which makes them in general a more challenging corpus, while performance on real sequences drops drastically when the TV or radio is on.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,943
inproceedings
mori-etal-2016-accuracy
Accuracy of Automatic Cross-Corpus Emotion Labeling for Conversational Speech Corpus Commonization
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1634/
Mori, Hiroki and Nagaoka, Atsushi and Arimoto, Yoshiko
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4019--4023
There exists a major incompatibility in emotion labeling frameworks among emotional speech corpora, namely between category-based and dimension-based schemes. Commonizing these requires inter-corpus emotion labeling according to both frameworks, but doing this with human annotators is too costly in most cases. This paper examines the possibility of automatic cross-corpus emotion labeling. In order to evaluate the effectiveness of the automatic labeling, a comprehensive emotion annotation of two conversational corpora, UUDB and OGVC, was performed. With a state-of-the-art machine learning technique, dimensional and categorical emotion estimation models were trained and tested against the two corpora. For emotion dimension estimation, automatic cross-corpus emotion labeling of the different corpus was effective for the dimensions of aroused-sleepy, dominant-submissive and interested-indifferent, showing only slight performance degradation relative to the result for the same corpus. On the other hand, the performance for emotion category estimation was not sufficient.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,944
inproceedings
carl-etal-2016-english
{E}nglish-to-{J}apanese Translation vs. Dictation vs. Post-editing: Comparing Translation Modes in a Multilingual Setting
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1635/
Carl, Michael and Aizawa, Akiko and Yamada, Masaru
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4024--4031
Speech-enabled interfaces have the potential to become one of the most efficient and ergonomic environments for human-computer interaction and for text production. However, not much research has been carried out to investigate in detail the processes and strategies involved in the different modes of text production. This paper introduces and evaluates a corpus of more than 55 hours of English-to-Japanese user activity data that were collected within the ENJA15 project, in which translators were observed while writing and speaking translations (translation dictation) and during machine translation post-editing. The transcription of the spoken data, keyboard logging and eye-tracking data were recorded with Translog-II, post-processed and integrated into the CRITT Translation Process Research-DB (TPR-DB), which is publicly available under a creative commons license. The paper presents the ENJA15 data as part of a large multilingual Chinese, Danish, German, Hindi and Spanish translation process data collection of more than 760 translation sessions. It compares the ENJA15 data with the other language pairs and reviews some of its particularities.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,945
inproceedings
neergaard-etal-2016-database
Database of {M}andarin Neighborhood Statistics
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1636/
Neergaard, Karl and Xu, Hongzhi and Huang, Chu-Ren
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4032--4036
In the design of controlled experiments with language stimuli, researchers from psycholinguistics, neurolinguistics, and related fields require language resources that isolate variables known to affect language processing. This article describes a freely available database that provides word-level statistics for words and nonwords of Mandarin Chinese. The featured lexical statistics include subtitle corpus frequency, phonological neighborhood density, neighborhood frequency, and homophone density. The accompanying word descriptors include pinyin, ASCII phonetic transcription (SAMPA), lexical tone, syllable structure, dominant PoS, and syllable, segment and pinyin lengths for each phonological word. It is designed for researchers particularly concerned with language processing of isolated words and made to accommodate multiple existing hypotheses concerning the structure of the Mandarin syllable. The database is divided into multiple files according to the desired search criteria: 1) the syllable segmentation schema used to calculate density measures, and 2) whether the search is for words or nonwords. The database is open to the research community at \url{https://github.com/karlneergaard/Mandarin-Neighborhood-Statistics}.
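As an illustration of one of the featured statistics, the sketch below computes phonological neighborhood density as the number of lexicon entries exactly one segment edit (substitution, insertion, or deletion) away from a target. This is a common operationalization; the toy lexicon of segment tuples is an assumption, not the database's contents.

```python
# Minimal sketch: phonological neighborhood density as the number of
# lexicon entries one segment edit away from the target. A common
# operationalization; the toy lexicon of segment tuples is illustrative.
def is_neighbor(a, b):
    """True iff b differs from a by exactly one segment edit."""
    if a == b:
        return False
    if len(a) == len(b):  # one substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) == 1:  # one insertion/deletion
        short, long_ = (a, b) if len(a) < len(b) else (b, a)
        return any(long_[:i] + long_[i + 1:] == short
                   for i in range(len(long_)))
    return False

def neighborhood_density(target, lexicon):
    return sum(is_neighbor(target, w) for w in lexicon)

lexicon = [("m", "a"), ("m", "a", "n"), ("p", "a"), ("m", "i")]
print(neighborhood_density(("m", "a"), lexicon))  # 3 neighbors
```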
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,946
inproceedings
janssen-2016-teitok
{TEITOK}: Text-Faithful Annotated Corpora
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1637/
Janssen, Maarten
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4037--4043
TEITOK is a web-based framework for corpus creation, annotation, and distribution, that combines textual and linguistic annotation within a single TEI based XML document. TEITOK provides several built-in NLP tools to automatically (pre)process texts, and is highly customizable. It features multiple orthographic transcription layers, and a wide range of user-defined token-based annotations. For searching, TEITOK interfaces with a local CQP server. TEITOK can handle various types of additional resources including Facsimile images and linked audio files, making it possible to have a combined written/spoken corpus. It also has additional modules for PSDX syntactic annotation and several types of stand-off annotation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,947
inproceedings
schenner-nordhoff-2016-extracting
Extracting Interlinear Glossed Text from {L}a{T}e{X} Documents
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1638/
Schenner, Mathias and Nordhoff, Sebastian
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4044--4048
We present texigt, a command-line tool for the extraction of structured linguistic data from LaTeX source documents, and a language resource that has been generated using this tool: a corpus of interlinear glossed text (IGT) extracted from open access books published by Language Science Press. Extracted examples are represented in a simple XML format that is easy to process and can be used to validate certain aspects of interlinear glossed text. The main challenge involved is the parsing of TeX and LaTeX documents. We review why this task is impossible in general and how the texhs Haskell library uses a layered architecture and selective early evaluation (expansion) during lexing and parsing in order to provide access to structured representations of LaTeX documents at several levels. In particular, its parsing modules generate an abstract syntax tree for LaTeX documents after expansion of all user-defined macros and lexer-level commands that serves as an ideal interface for the extraction of interlinear glossed text by texigt. This architecture can easily be adapted to extract other types of linguistic data structures from LaTeX source documents.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,948
inproceedings
carlotto-etal-2016-interoperability
Interoperability of Annotation Schemes: Using the Pepper Framework to Display {AWA} Documents in the {ANNIS} Interface
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1639/
Carlotto, Talvany and Beloki, Zuhaitz and Artola, Xabier and Soroa, Aitor
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4049--4054
Natural language processing applications are frequently integrated to solve complex linguistic problems, but the lack of interoperability between these tools tends to be one of the main issues in that process. This is often caused by the different linguistic formats used across applications, which has led to attempts both to establish standard formats for representing linguistic information and to create conversion tools that facilitate integration. Pepper is an example of the latter, a framework that helps convert between different linguistic annotation formats. In this paper, we describe the use of Pepper to convert a corpus linguistically annotated with the annotation scheme AWA into the relANNIS format, with the ultimate goal of interacting with AWA documents through the ANNIS interface. The experiment converted 40 megabytes of AWA documents, allowed their use on the ANNIS interface, and involved making architectural decisions during the mapping from AWA into relANNIS using Pepper. The main issues faced during this process were technical, mainly caused by the integration of the different systems and projects, namely AWA, Pepper and ANNIS.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,949
inproceedings
al-badrashiny-etal-2016-split
{SPLIT}: Smart Preprocessing (Quasi) Language Independent Tool
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1640/
Al-Badrashiny, Mohamed and Pasha, Arfath and Diab, Mona and Habash, Nizar and Rambow, Owen and Salloum, Wael and Eskander, Ramy
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4055--4060
Text preprocessing is an important and necessary task for all NLP applications. A simple variation in any preprocessing step may drastically affect the final results. Moreover, replicability and comparability are, as much as feasible, among the goals of our scientific enterprise, so building systems that can ensure consistency across our various pipelines would contribute significantly to those goals. The problem has become quite pronounced with the abundance of NLP tools that are becoming available, yet with different levels of specification. In this paper, we present a dynamic unified preprocessing framework and tool, SPLIT, that is highly configurable based on user requirements and serves as a preprocessing tool for several tools at once. SPLIT aims to standardize the implementations of the most important preprocessing steps by providing a unified API that can be exchanged across different researchers to ensure complete transparency in replication. The user is able to select the required preprocessing tasks from a long list of preprocessing steps. The user is also able to specify the order of execution, which in turn affects the final preprocessing output.
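The configurable, ordered pipeline idea can be illustrated with a short sketch: the user picks preprocessing steps and their execution order, and the order changes the output. Step names and implementations are illustrative assumptions, not SPLIT's actual API.

```python
# Minimal sketch of a user-configurable, ordered preprocessing pipeline in
# the spirit described above. Step names and implementations are
# illustrative assumptions, not SPLIT's actual API.
import re

STEPS = {
    "lowercase": str.lower,
    "strip_punct": lambda s: re.sub(r"[^\w\s]", "", s),
    "collapse_ws": lambda s: " ".join(s.split()),
}

def preprocess(text, config):
    """config: ordered list of step names; order affects the output."""
    for name in config:
        text = STEPS[name](text)
    return text

print(preprocess("  Hello,   WORLD!! ",
                 ["lowercase", "strip_punct", "collapse_ws"]))
# -> "hello world"
```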
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,950
inproceedings
samardzic-etal-2016-archimob
{A}rchi{M}ob - A Corpus of Spoken {S}wiss {G}erman
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1641/
Samard{\v{z}}i{\'c}, Tanja and Scherrer, Yves and Glaser, Elvira
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4061--4066
Swiss dialects of German are, unlike most dialects of well standardised languages, widely used in everyday communication. Despite this fact, the automatic processing of Swiss German remains a considerable challenge: it is mostly a spoken variety that is rarely recorded, and it is subject to considerable regional variation. This paper presents a freely available general-purpose corpus of spoken Swiss German suitable for linguistic research, but also for training automatic tools. The corpus is the result of a long design process, intensive manual work and specially adapted computational processing. We first describe how the documents were transcribed, segmented and aligned with the sound source, and how inconsistent transcriptions were unified through an additional normalisation layer. We then present a bootstrapping approach to automatic normalisation using different machine-translation-inspired methods. Furthermore, we evaluate the performance of part-of-speech taggers on our data and show how the same bootstrapping approach improves part-of-speech tagging by 10{\%} over four rounds. Finally, we present the modalities of access to the corpus as well as the data format.
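As a minimal illustration of what normalization bootstrapping typically starts from, the sketch below implements a simple memorization baseline: each transcribed form is mapped to its most frequent normalization seen in training, and unseen forms are copied through. The example pairs are invented; the paper's methods are machine-translation-inspired refinements beyond this.

```python
# Minimal sketch of a memorization-based normalization baseline of the
# kind such bootstrapping usually starts from: map each transcribed form
# to its most frequent training normalization, else copy it through.
# Illustrative only; the paper uses MT-inspired methods on top.
from collections import Counter, defaultdict

def train(pairs):
    counts = defaultdict(Counter)
    for raw, norm in pairs:
        counts[raw][norm] += 1
    return {raw: c.most_common(1)[0][0] for raw, c in counts.items()}

def normalize(tokens, table):
    return [table.get(t, t) for t in tokens]

table = train([("huus", "haus"), ("huus", "haus"), ("gsi", "gewesen")])
print(normalize(["huus", "gsi", "und"], table))  # ['haus', 'gewesen', 'und']
```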
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,951
inproceedings
homburg-chiarcos-2016-word
Word Segmentation for {A}kkadian Cuneiform
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1642/
Homburg, Timo and Chiarcos, Christian
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4067--4074
We present experiments on word segmentation for Akkadian cuneiform, an ancient writing system and a language used for about three millennia in the ancient Near East. To the best of our knowledge, this is the first study of this kind applied to either the Akkadian language or the cuneiform writing system. As a logosyllabic writing system, cuneiform structurally resembles East Asian writing systems, so we employ word segmentation algorithms originally developed for Chinese and Japanese. We describe the results of rule-based algorithms, dictionary-based algorithms, and statistical and machine learning approaches. Our results indicate promising directions in cuneiform word segmentation that can help create and improve natural language processing in this area.
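One of the dictionary-based families mentioned above is greedy longest-match (maximum matching) segmentation; a minimal sketch over a toy dictionary of transliterated sign sequences follows. The dictionary entries are invented for illustration.

```python
# Minimal sketch of greedy longest-match (maximum matching) dictionary
# segmentation, one of the dictionary-based families mentioned above.
# The toy dictionary of transliterated sign sequences is invented.
def max_match(signs, dictionary, max_len=4):
    """signs: list of sign transliterations; returns a list of words."""
    words, i = [], 0
    while i < len(signs):
        for span in range(min(max_len, len(signs) - i), 0, -1):
            candidate = tuple(signs[i:i + span])
            # accept a dictionary word, or fall back to a single sign
            if span == 1 or candidate in dictionary:
                words.append(candidate)
                i += span
                break
    return words

dictionary = {("szar", "ra"), ("lugal",), ("kalam", "ma")}
print(max_match(["lugal", "kalam", "ma", "szar", "ra"], dictionary))
# -> [('lugal',), ('kalam', 'ma'), ('szar', 'ra')]
```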
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,952
inproceedings
grouin-2016-controlled
Controlled Propagation of Concept Annotations in Textual Corpora
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1643/
Grouin, Cyril
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4075--4079
In this paper, we present the annotation propagation tool we designed to be used in conjunction with the BRAT rapid annotation tool. We designed two experiments to annotate a corpus of 60 files, first without using our tool, then using our propagation tool. We evaluated the annotation time and the quality of the annotations. We show that using the annotation propagation tool reduces the time spent annotating the corpus by 31.7{\%} while yielding better quality results.
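The underlying idea of annotation propagation can be sketched briefly: once a span is tagged with a concept, identical strings elsewhere in the corpus are pre-annotated for the annotator to confirm or reject. This is the general idea only, assuming simple string matching; it is not the tool's actual BRAT-standoff implementation.

```python
# Minimal sketch of concept-annotation propagation: once a span is tagged,
# identical strings elsewhere in the corpus are pre-annotated for the
# annotator to confirm or reject. The general idea only, not the tool's
# actual BRAT-standoff implementation.
import re

def propagate(text, seed_annotations):
    """seed_annotations: {surface_string: concept_label}."""
    proposed = []
    for surface, label in seed_annotations.items():
        for m in re.finditer(re.escape(surface), text):
            proposed.append((m.start(), m.end(), surface, label))
    return sorted(proposed)  # offsets for an annotator to review

doc = "Aspirin reduces fever. A dose of aspirin was given."
print(propagate(doc, {"aspirin": "DRUG"}))
# -> [(33, 40, 'aspirin', 'DRUG')]  (case-sensitive match; a design choice)
```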
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,953
inproceedings
hasida-2016-graphical
Graphical Annotation for Syntax-Semantics Mapping
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1644/
Hasida, K{\^o}iti
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4080--4084
A potential work item (PWI) for an ISO standard (MAP) on linguistic annotation for syntax-semantics mapping is discussed. MAP is a framework for graphical linguistic annotation that specifies a mapping (a set of combinations) between the possible syntactic and semantic structures of the annotated linguistic data. Just like a UML diagram, a MAP diagram is formal, in the sense that it accurately specifies such a mapping. MAP provides a diagrammatic sort of concrete syntax for linguistic annotation that is far easier to understand than textual concrete syntax such as XML, so it could better facilitate collaboration among people involved in research, standardization, and practical use of linguistic data. MAP deals with syntactic structures including dependencies, coordinations, ellipses, transsentential constructions, and so on. Semantic structures treated by MAP are argument structures, scopes, coreferences, anaphora, discourse relations, dialogue acts, and so forth. In order to simplify explicit annotations, MAP allows partial descriptions, and assumes a few general rules on the correspondence between syntactic and semantic compositions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,954
inproceedings
sammons-etal-2016-edison
{EDISON}: Feature Extraction for {NLP}, Simplified
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1645/
Sammons, Mark and Christodoulopoulos, Christos and Kordjamshidi, Parisa and Khashabi, Daniel and Srikumar, Vivek and Vijayakumar, Paul and Bokhari, Mazin and Wu, Xinbo and Roth, Dan
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4085--4092
When designing Natural Language Processing (NLP) applications that use Machine Learning (ML) techniques, feature extraction becomes a significant part of the development effort, whether developing a new application or attempting to reproduce results reported for existing NLP tasks. We present EDISON, a Java library of feature generation functions used in a suite of state-of-the-art NLP tools, based on a set of generic NLP data structures. These feature extractors populate simple data structures encoding the extracted features, which the package can also serialize to an intuitive JSON file format that can be easily mapped to formats used by ML packages. EDISON can also be used programmatically with JVM-based (Java/Scala) NLP software to provide the feature extractor input. The collection of feature extractors is organised hierarchically and a simple search interface is provided. In this paper we include examples that demonstrate the versatility and ease-of-use of the EDISON feature extraction suite to show that this can significantly reduce the time spent by developers on feature extraction design for NLP systems. The library is publicly hosted at \url{https://github.com/IllinoisCogComp/illinois-cogcomp-nlp/}, and we hope that other NLP researchers will contribute to the set of feature extractors. In this way, the community can help simplify reproduction of published results and the integration of ideas from diverse sources when developing new and improved NLP applications.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,955
inproceedings
al-twairesh-etal-2016-madad
{MADAD}: A Readability Annotation Tool for {A}rabic Text
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1646/
Al-Twairesh, Nora and Al-Dayel, Abeer and Al-Khalifa, Hend and Al-Yahya, Maha and Alageel, Sinaa and Abanmy, Nora and Al-Shenaifi, Nouf
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4093--4097
This paper introduces MADAD, a general-purpose annotation tool for Arabic text with a focus on readability annotation. The tool will help overcome the lack of Arabic readability training data by providing an online environment to collect readability assessments on various kinds of corpora. It also supports a broad range of annotation tasks for various linguistic and semantic phenomena by allowing users to create their own customized annotation schemes. MADAD is a web-based tool, accessible through any web browser; the main features that distinguish MADAD are its flexibility, portability, customizability and its bilingual interface (Arabic/English).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,956
inproceedings
zampieri-etal-2016-modeling
Modeling Language Change in Historical Corpora: The Case of {P}ortuguese
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1647/
Zampieri, Marcos and Malmasi, Shervin and Dras, Mark
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4098--4104
This paper presents a number of experiments to model changes in a historical Portuguese corpus composed of literary texts for the purpose of temporal text classification. Algorithms were trained to classify texts with respect to their publication date taking into account lexical variation represented as word n-grams, and morphosyntactic variation represented by part-of-speech (POS) distribution. We report results of 99.8{\%} accuracy using word unigram features with a Support Vector Machines classifier to predict the publication date of documents in time intervals of both one century and half a century. A feature analysis is performed to investigate the most informative features for this task and how they are linked to language change.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,957
inproceedings
gralinski-etal-2016-said
{\textquotedblleft}He Said She Said{\textquotedblright} {\textemdash} a Male/Female Corpus of {P}olish
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1648/
Grali{\'n}ski, Filip and Borchmann, {\L}ukasz and Wierzcho{\'n}, Piotr
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4105--4110
Gender differences in language use have long been of interest in linguistics. The task of automatic gender attribution has been considered in computational linguistics as well. Most research of this type is done using (usually English) texts with authorship metadata. In this paper, we propose a new method of male/female corpus creation based on gender-specific first-person expressions. The method was applied to the CommonCrawl Web corpus for Polish (a language in which gender-revealing first-person expressions are particularly frequent) to yield a large (780M words) and varied collection of men`s and women`s texts. The whole procedure for building the corpus and filtering out unwanted texts is described in the present paper. The quality check was done on a random sample of the corpus to make sure that the majority (84{\%}) of texts are correctly attributed, natural texts. Some preliminary (socio)linguistic insights (websites and words frequently occurring in male/female fragments) are given as well.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,958
inproceedings
smith-etal-2016-cohere
{C}ohere: A Toolkit for Local Coherence
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1649/
Smith, Karin Sim and Aziz, Wilker and Specia, Lucia
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4111--4114
We describe COHERE, our coherence toolkit which incorporates various complementary models for capturing and measuring different aspects of text coherence. In addition to the traditional entity grid model (Lapata, 2005) and graph-based metric (Guinaudeau and Strube, 2013), we provide an implementation of a state-of-the-art syntax-based model (Louis and Nenkova, 2012), as well as an adaptation of this model which shows significant performance improvements in our experiments. We benchmark these models using the standard setting for text coherence: original documents and versions of the document with sentences in shuffled order.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,959
inproceedings
ravenscroft-etal-2016-multi
Multi-label Annotation in Scientific Articles - The Multi-label Cancer Risk Assessment Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1650/
Ravenscroft, James and Oellrich, Anika and Saha, Shyamasree and Liakata, Maria
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4115--4123
With the constant growth of the scientific literature, automated processes to enable access to its contents are increasingly in demand. Several functional discourse annotation schemes have been proposed to facilitate information extraction and summarisation from scientific articles, the most well known being argumentative zoning. Core Scientific concepts (CoreSC) is a three-layered, fine-grained annotation scheme providing content-based annotations at the sentence level and has been used to index, extract and summarise scientific publications in the biomedical literature. A previously developed CoreSC corpus on which existing automated tools have been trained contains a single annotation for each sentence. However, more than one CoreSC concept can appear in the same sentence. Here, we present the Multi-CoreSC CRA corpus, a text corpus specific to the domain of cancer risk assessment (CRA), consisting of 50 full text papers, each of which contains sentences annotated with one or more CoreSCs. The full text papers have been annotated by three biology experts. We present several inter-annotator agreement measures appropriate for multi-label annotation assessment. Employing these measures, we were able to identify the most reliable annotator, and we built a harmonised consensus (gold standard) from the three different annotators, while also taking concept priority (as specified in the guidelines) into account. We also show that the new Multi-CoreSC CRA corpus allows us to improve performance in the recognition of CoreSCs. The updated guidelines, the multi-label CoreSC CRA corpus and other relevant, related materials are available at the time of publication at \url{http://www.sapientaproject.com/}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,960
inproceedings
orizu-he-2016-detecting
Detecting Expressions of Blame or Praise in Text
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1651/
Orizu, Udochukwu and He, Yulan
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4124--4129
The growth of social networking platforms has drawn a lot of attention to the need for social computing. Social computing utilises human insights for computational tasks as well as the design of systems that support social behaviours and interactions. One of the key aspects of social computing is the ability to attribute responsibility such as blame or praise to social events. This ability helps an intelligent entity account for and understand other intelligent entities' social behaviours, and enriches both the social functionalities and cognitive aspects of intelligent agents. In this paper, we present an approach with a model for blame and praise detection in text. We build our model based on various theories of blame and include in our model features used by humans when determining judgment, such as moral agent causality, foreknowledge, intentionality and coercion. An annotated corpus has been created for the task of blame and praise detection from text. The experimental results show that while our model gives similar results compared to supervised classifiers on classifying text as blame, praise or others, it outperforms supervised classifiers on the finer-grained classification of determining the direction of blame and praise, i.e., self-blame, blame-others, self-praise or praise-others, despite not using labelled training data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,961
inproceedings
tulkens-etal-2016-evaluating
Evaluating Unsupervised {D}utch Word Embeddings as a Linguistic Resource
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1652/
Tulkens, St{\'e}phan and Emmery, Chris and Daelemans, Walter
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4130--4136
Word embeddings have recently seen a strong increase in interest as a result of substantial performance gains on a variety of tasks. However, most of this research also underlined the importance of benchmark datasets, and the difficulty of constructing these for a variety of language-specific tasks. Still, many of the datasets used in these tasks could prove to be fruitful linguistic resources, allowing for unique observations into language use and variability. In this paper we demonstrate the performance of multiple types of embeddings, created with both count- and prediction-based architectures on a variety of corpora, in two language-specific tasks: relation evaluation and dialect identification. For the latter, we compare unsupervised methods with a traditional, hand-crafted dictionary. With this research, we provide the embeddings themselves, the relation evaluation task benchmark for use in further research, and demonstrate how the benchmarked embeddings prove to be a useful unsupervised linguistic resource, effectively used in a downstream task.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,962
inproceedings
frain-wubben-2016-satiriclr
{S}atiric{LR}: a Language Resource of Satirical News Articles
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1653/
Frain, Alice and Wubben, Sander
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4137--4140
In this paper we introduce the Satirical Language Resource: a dataset containing a balanced collection of satirical and non-satirical news texts from various domains. This is the first dataset of this magnitude and scope in the domain of satire. We envision that this dataset will facilitate studies on various aspects of satire in news articles. We test the viability of our data on the task of satire classification.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,963
inproceedings
klang-nugues-2016-wikiparq
{WIKIPARQ}: A Tabulated {W}ikipedia Resource Using the Parquet Format
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1654/
Klang, Marcus and Nugues, Pierre
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4141--4148
Wikipedia has become one of the most popular resources in natural language processing and it is used in a multitude of applications. However, Wikipedia requires a substantial pre-processing step before it can be used. For instance, its set of nonstandardized annotations, referred to as the wiki markup, is language-dependent and needs specific parsers from language to language, for English, French, Italian, etc. In addition, the intricacies of the different Wikipedia resources (main article text, categories, wikidata, infoboxes), scattered throughout the article document or across different files, make it difficult to get a global view of this outstanding resource. In this paper, we describe WikiParq, a unified format based on the Parquet standard to tabulate and package the Wikipedia corpora. In combination with Spark, a map-reduce computing framework, and the SQL query language, WikiParq makes it much easier to write database queries to extract specific information or subcorpora from Wikipedia, such as all the first paragraphs of the articles in French, or all the articles on persons in Spanish, or all the articles on persons that have versions in French, English, and Spanish. WikiParq is available in six language versions and is potentially extendible to all the languages of Wikipedia. The WikiParq files are downloadable as tarball archives from this location: \url{http://semantica.cs.lth.se/wikiparq/}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,964
inproceedings
vilares-etal-2016-en
{EN}-{ES}-{CS}: An {E}nglish-{S}panish Code-Switching {T}witter Corpus for Multilingual Sentiment Analysis
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1655/
Vilares, David and Alonso, Miguel A. and G{\'o}mez-Rodr{\'i}guez, Carlos
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4149--4153
Code-switching texts are those that contain terms in two or more different languages, and they appear increasingly often in social media. The aim of this paper is to provide a resource to the research community to evaluate the performance of sentiment classification techniques in this complex multilingual environment, proposing an English-Spanish corpus of tweets with code-switching (EN-ES-CS CORPUS). The tweets are labeled according to two well-known criteria used for this purpose: SentiStrength and a trinary scale (positive, neutral and negative categories). Preliminary work on the resource has already been done, providing a set of baselines for the research community.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,965
inproceedings
benikova-biemann-2016-semreldata
{S}em{R}el{D}ata {\textemdash} Multilingual Contextual Annotation of Semantic Relations between Nominals: Dataset and Guidelines
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1656/
Benikova, Darina and Biemann, Chris
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4154--4161
Semantic relations play an important role in linguistic knowledge representation. Although their role is relevant in the context of written text, there is no approach or dataset that makes use of contextuality of classic semantic relations beyond the boundary of one sentence. We present the SemRelData dataset that contains annotations of semantic relations between nominals in the context of one paragraph. To be able to analyse the universality of this context notion, the annotation was performed on a multi-lingual and multi-genre corpus. To evaluate the dataset, it is compared to large, manually created knowledge resources in the respective languages. The comparison shows that knowledge bases not only have coverage gaps; they also do not account for semantic relations that are manifested in particular contexts only, yet still play an important role for text cohesion.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,966
inproceedings
ferrero-etal-2016-multilingual
A Multilingual, Multi-style and Multi-granularity Dataset for Cross-language Textual Similarity Detection
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1657/
Ferrero, J{\'e}r{\'e}my and Agn{\`e}s, Fr{\'e}d{\'e}ric and Besacier, Laurent and Schwab, Didier
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4162--4169
In this paper we describe our effort to create a dataset for the evaluation of cross-language textual similarity detection. We present preexisting corpora and their limits, and we explain the various resources we gathered to overcome these limits and build our enriched dataset. The proposed dataset is multilingual, includes cross-language alignment for different granularities (from chunk to document), is based on both parallel and comparable corpora and contains human and machine translated texts. Moreover, it includes texts written by multiple types of authors (from average writers to professionals). With the obtained dataset, we conduct a systematic and rigorous evaluation of several state-of-the-art cross-language textual similarity detection methods. The evaluation results are reviewed and discussed. Finally, the dataset and scripts are made publicly available on GitHub: \url{http://github.com/FerreroJeremy/Cross-Language-Dataset}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,967
inproceedings
rekabsaz-etal-2016-standard
Standard Test Collection for {E}nglish-{P}ersian Cross-Lingual Word Sense Disambiguation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1659/
Rekabsaz, Navid and Sabetghadam, Serwah and Lupu, Mihai and Andersson, Linda and Hanbury, Allan
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4176--4179
In this paper, we address the shortage of evaluation benchmarks for the Persian (Farsi) language by creating and making available a new benchmark for English to Persian Cross Lingual Word Sense Disambiguation (CL-WSD). In creating the benchmark, we follow the format of the SemEval 2013 CL-WSD task, such that the tools introduced for that task can also be applied to the benchmark. In fact, the new benchmark extends the SemEval-2013 CL-WSD task to the Persian language.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,969
inproceedings
dojchinovski-etal-2016-freme
{FREME}: Multilingual Semantic Enrichment with Linked Data and Language Technologies
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1660/
Dojchinovski, Milan and Sasaki, Felix and Gornostaja, Tatjana and Hellmann, Sebastian and Mannens, Erik and Salliau, Frank and Osella, Michele and Ritchie, Phil and Stoitsis, Giannis and Koidl, Kevin and Ackermann, Markus and Chakraborty, Nilesh
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4180--4183
In recent years, Linked Data and Language Technology solutions have gained popularity. Nevertheless, their coupling in real-world business is limited due to several issues: existing products and services are developed for a particular domain, can be used only in combination with already integrated datasets, or their language coverage is limited. In this paper, we present an innovative solution, FREME - an open framework of e-Services for multilingual and semantic enrichment of digital content. The framework integrates six interoperable e-Services. We describe the core features of each e-Service and illustrate their usage in the context of four business cases: i) authoring and publishing; ii) translation and localisation; iii) cross-lingual access to data; and iv) personalised Web content recommendations. Business cases drive the design and development of the framework.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,970
inproceedings
hazem-morin-2016-improving
Improving Bilingual Terminology Extraction from Comparable Corpora via Multiple Word-Space Models
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1661/
Hazem, Amir and Morin, Emmanuel
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4184--4187
There is a rich flora of word space models that have proven their efficiency in many different applications including information retrieval (Dumais, 1988), word sense disambiguation (Schutze, 1992), various semantic knowledge tests (Lund et al., 1995; Karlgren, 2001), and text categorization (Sahlgren, 2005). Based on the assumption that each model captures some aspects of word meanings and provides its own empirical evidence, we present in this paper a systematic exploration of the principal corpus-based word space models for bilingual terminology extraction from comparable corpora. We find that, once we have identified the best procedures, a very simple combination approach leads to significant improvements compared to individual models.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,971
inproceedings
berard-etal-2016-multivec
{M}ulti{V}ec: a Multilingual and Multilevel Representation Learning Toolkit for {NLP}
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1662/
B{\'e}rard, Alexandre and Servan, Christophe and Pietquin, Olivier and Besacier, Laurent
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4188--4192
We present MultiVec, a new toolkit for computing continuous representations for text at different granularity levels (word-level or sequences of words). MultiVec includes word2vec`s features, paragraph vector (batch and online) and bivec for bilingual distributed representations. MultiVec also includes different distance measures between words and sequences of words. The toolkit is written in C++ and is aimed at being fast (in the same order of magnitude as word2vec), easy to use, and easy to extend. It has been evaluated on several NLP tasks: the analogical reasoning task, sentiment analysis, and crosslingual document classification.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,972
inproceedings
abouammoh-etal-2016-creation
Creation of comparable corpora for {E}nglish-{Urdu, Arabic, Persian}
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1663/
Abouammoh, Murad and Shah, Kashif and Aker, Ahmet
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4193--4196
Statistical Machine Translation (SMT) relies on the availability of rich parallel corpora. However, in the case of under-resourced languages or some specific domains, parallel corpora are not readily available. This leads to under-performing machine translation systems in those sparse data settings. To overcome the low availability of parallel resources, the machine translation community has recognized the potential of using comparable resources as training data. However, most efforts have been devoted to European languages and fewer to Middle Eastern languages. In this study, we report comparable corpora created from news articles for the English{\textemdash}{\{}Arabic, Persian, Urdu{\}} language pairs. The data has been collected over a period of one year and covers the Arabic, Persian and Urdu languages. Furthermore, using English as a pivot language, comparable corpora that involve more than one language can be created, e.g. English{\textemdash}Arabic{\textemdash}Persian, English{\textemdash}Arabic{\textemdash}Urdu, English{\textemdash}Urdu{\textemdash}Persian, etc. Upon request the data can be provided for research purposes.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,973
inproceedings
nisioi-etal-2016-corpus
A Corpus of Native, Non-native and Translated Texts
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1664/
Nisioi, Sergiu and Rabinovich, Ella and Dinu, Liviu P. and Wintner, Shuly
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4197--4201
We describe a monolingual English corpus of original and (human) translated texts, with an accurate annotation of speaker properties, including the original language of the utterances and the speaker`s country of origin. We thus obtain three sub-corpora of texts reflecting native English, non-native English, and English translated from a variety of European languages. This dataset will facilitate the investigation of similarities and differences between these kinds of sub-languages. Moreover, it will facilitate a unified comparative study of translations and language produced by (highly fluent) non-native speakers, two closely-related phenomena that have only been studied in isolation so far.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,974
inproceedings
fischer-etal-2016-orthographic
Orthographic and Morphological Correspondences between Related {S}lavic Languages as a Base for Modeling of Mutual Intelligibility
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1665/
Fischer, Andrea and J{\'a}grov{\'a}, Kl{\'a}ra and Stenger, Irina and Avgustinova, Tania and Klakow, Dietrich and Marti, Roland
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4202--4209
In an intercomprehension scenario, typically a native speaker of language L1 is confronted with output from an unknown, but related language L2. In this setting, the degree to which the receiver recognizes the unfamiliar words greatly determines communicative success. Despite exhibiting great string-level differences, cognates may be recognized very successfully if the receiver is aware of regular correspondences which allow the receiver to transform the unknown word into its familiar form. Modeling L1-L2 intercomprehension then requires the identification of all the regular correspondences between languages L1 and L2. We here present a set of linguistic orthographic correspondences manually compiled from the comparative linguistics literature along with a set of statistically-inferred suggestions for correspondence rules. In order to do statistical inference, we followed the Minimum Description Length principle, which proposes to choose those rules which are most effective at describing the data. Our statistical model was able to reproduce most of our linguistic correspondences (88.5{\%} for Czech-Polish and 75.7{\%} for Bulgarian-Russian) and furthermore allowed us to easily identify many more non-trivial correspondences which also cover aspects of morphology.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,975
inproceedings
gutierrez-vasques-etal-2016-axolotl
{A}xolotl: a Web Accessible Parallel Corpus for {S}panish-{N}ahuatl
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1666/
Gutierrez-Vasques, Ximena and Sierra, Gerardo and Pompa, Isaac Hernandez
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4210--4214
This paper describes the Axolotl project, which comprises a Spanish-Nahuatl parallel corpus and its search interface. Spanish and Nahuatl are distant languages spoken in the same country. Due to the scarcity of digital resources, several problems arose when compiling this corpus: most of our sources were non-digital books, we faced errors when digitizing the sources, and there were difficulties in the sentence alignment process, to mention just a few. The documents of the parallel corpus are not homogeneous: they were extracted from different sources, and there is dialectal, diachronic, and orthographic variation. Additionally, we present a web search interface that allows queries through the whole parallel corpus; the system is capable of retrieving the parallel fragments that contain a word or phrase searched by a user in either of the languages. To our knowledge, this is the first publicly available Spanish-Nahuatl digital parallel corpus. We think that this resource can be useful for developing language technologies and linguistic studies for this language pair.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,976
inproceedings
cetinoglu-2016-turkish
A {T}urkish-{G}erman Code-Switching Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1667/
{\c{C}}etino{\u{g}}lu, {\"O}zlem
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4215--4220
Bilingual communities often alternate between languages in both spoken and written communication. One such community, Turkish-origin residents of Germany, produces Turkish-German code-switching, heavily mixing the two languages at the discourse, sentence, or word level. Code-switching in general, and Turkish-German code-switching in particular, has been studied for a long time from a linguistic perspective. Yet resources to study it from a more computational perspective are limited due to either small size or licence issues. In this work we contribute a solution to this problem with a corpus. We present a Turkish-German code-switching corpus which consists of 1029 tweets, with a majority of intra-sentential switches. We describe the different types of code-switching we have observed in our collection as well as our processing steps. The first step is data collection and filtering. This is followed by manual tokenisation and normalisation. Finally, we annotate the data with word-level language identification information. The resulting corpus is available for research purposes.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,977
inproceedings
mohler-etal-2016-introducing
Introducing the {LCC} Metaphor Datasets
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1668/
Mohler, Michael and Brunson, Mary and Rink, Bryan and Tomlinson, Marc
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4221--4227
In this work, we present the Language Computer Corporation (LCC) annotated metaphor datasets, which represent the largest and most comprehensive resource for metaphor research to date. These datasets were produced over the course of three years by a staff of nine annotators working in four languages (English, Spanish, Russian, and Farsi). As part of these datasets, we provide (1) metaphoricity ratings for within-sentence word pairs on a four-point scale, (2) scored links to our repository of 114 source concept domains and 32 target concept domains, and (3) ratings for the affective polarity and intensity of each pair. Altogether, we provide 188,741 annotations in English (for 80,100 pairs), 159,915 annotations in Spanish (for 63,188 pairs), 99,740 annotations in Russian (for 44,632 pairs), and 137,186 annotations in Farsi (for 57,239 pairs). In addition, we are providing a large set of likely metaphors which have been independently extracted by our two state-of-the-art metaphor detection systems but which have not been analyzed by our team of annotators.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,978
inproceedings
diab-etal-2016-creating
Creating a Large Multi-Layered Representational Repository of Linguistic Code Switched {A}rabic Data
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1669/
Diab, Mona and Ghoneim, Mahmoud and Hawwari, Abdelati and AlGhamdi, Fahad and AlMarwani, Nada and Al-Badrashiny, Mohamed
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4228--4235
We present our effort to create a large Multi-Layered representational repository of Linguistic Code-Switched Arabic data. The process involves developing clear annotation standards and Guidelines, streamlining the annotation process, and implementing quality control measures. We used two main protocols for annotation: in-lab gold annotations and crowd sourcing annotations. We developed a web-based annotation tool to facilitate the management of the annotation process. The current version of the repository contains a total of 886,252 tokens that are tagged into one of sixteen code-switching tags. The data exhibits code switching between Modern Standard Arabic and Egyptian Dialectal Arabic representing three data genres: Tweets, commentaries, and discussion fora. The overall Inter-Annotator Agreement is 93.1{\%}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,979
inproceedings
meurant-etal-2016-modelling
Modelling a Parallel Corpus of {F}rench and {F}rench {B}elgian {S}ign {L}anguage
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1670/
Meurant, Laurence and Gobert, Maxime and Cleve, Anthony
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4236--4240
The overarching objective underlying this research is to develop an online tool, based on a parallel corpus of French Belgian Sign Language (LSFB) and written Belgian French. This tool is aimed at assisting a varied set of tasks related to the comparison of LSFB and French, to the benefit of general users, teachers in bilingual schools, translators and interpreters, and linguists. These tasks include (1) the comprehension of LSFB or French texts, (2) the production of LSFB or French texts, (3) the translation between LSFB and French in both directions and (4) the contrastive analysis of these languages. The first step of investigation aims at creating a unidirectional French-LSFB concordancer, able to align a one- or multiple-word expression from the French translated text with its corresponding expressions in the videotaped LSFB productions. We aim at testing the efficiency of this concordancer for the extraction of a dictionary of meanings in context. In this paper, we will present the modelling of the different data sources at our disposal and specifically the way they interact with one another.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,980
inproceedings
cebovic-tadic-2016-building
Building the {M}acedonian-{C}roatian Parallel Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1671/
Cebovi{\'c}, Ines and Tadi{\'c}, Marko
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4241--4244
In this paper we present a newly created parallel corpus of two under-resourced languages, namely the Macedonian-Croatian Parallel Corpus (mk-hr{\_}pcorp), which was collected during 2015 at the Faculty of Humanities and Social Sciences, University of Zagreb. The mk-hr{\_}pcorp is a unidirectional (mk{\textrightarrow}hr) parallel corpus composed of synchronic fictional prose texts received already in digital form, with over 500 thousand words in each language. The corpus was sentence-segmented and provides 39,735 aligned sentences. The alignment was done automatically and then post-corrected manually. The order of the alignments was shuffled, which enabled the corpus to be made available under a CC-BY license through META-SHARE. However, this prevents research on language units above the sentence level.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,981
inproceedings
benko-2016-two
Two Years of Aranea: Increasing Counts and Tuning the Pipeline
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1672/
Benko, Vladim{\'i}r
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4245--4248
The Aranea Project is targeted at the creation of a family of Gigaword web-corpora for a dozen languages that could be used for teaching language- and linguistics-related subjects at Slovak universities, as well as for research purposes in various areas of linguistics. All corpora are being built according to a standard methodology and using the same set of tools for processing and annotation, which {\textemdash} together with their standard size {\textemdash} also makes them a valuable resource for translators and contrastive studies. All our corpora are freely available either via a web interface or in source form in an annotated vertical format.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,982
inproceedings
umata-etal-2016-quantitative
Quantitative Analysis of Gazes and Grounding Acts in {L}1 and {L}2 Conversations
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1673/
Umata, Ichiro and Ijuin, Koki and Ishida, Mitsuru and Takeuchi, Moe and Yamamoto, Seiichi
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4249--4252
The listener`s gazing activities during utterances were analyzed in a face-to-face three-party conversation setting. The function of each utterance was categorized according to the Grounding Acts defined by Traum (Traum, 1994) so that gazes during utterances could be analyzed from the viewpoint of grounding in communication (Clark, 1996). Quantitative analysis showed that the listeners were gazing at the speakers more in the second language (L2) conversation than in the native language (L1) conversation during the utterances that added new pieces of information, suggesting that they are using visual information to compensate for their lack of linguistic proficiency in L2 conversation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,983
inproceedings
jones-etal-2016-multi
Multi-language Speech Collection for {NIST} {LRE}
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1674/
Jones, Karen and Strassel, Stephanie and Walker, Kevin and Graff, David and Wright, Jonathan
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4253--4258
The Multi-language Speech (MLS) Corpus supports NIST`s Language Recognition Evaluation series by providing new conversational telephone speech and broadcast narrowband data in 20 languages/dialects. The corpus was built with the intention of testing system performance in the matter of distinguishing closely related or confusable linguistic varieties, and careful manual auditing of collected data was an important aspect of this work. This paper lists the specific data requirements for the collection and provides both a commentary on the rationale for those requirements as well as an outline of the various steps taken to ensure all goals were met as specified. LDC conducted a large-scale recruitment effort involving the implementation of candidate assessment and interview techniques suitable for hiring a large contingent of telecommuting workers, and this recruitment effort is discussed in detail. We also describe the telephone and broadcast collection infrastructure and protocols, and provide details of the steps taken to pre-process collected data prior to auditing. Finally, annotation training, procedures and outcomes are presented in detail.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,984
inproceedings
zesch-horsmann-2016-flextag
{F}lex{T}ag: A Highly Flexible {P}o{S} Tagging Framework
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1675/
Zesch, Torsten and Horsmann, Tobias
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4259--4263
We present FlexTag, a highly flexible PoS tagging framework. In contrast to monolithic implementations that can only be retrained but not adapted otherwise, FlexTag enables users to modify the feature space and the classification algorithm. Thus, FlexTag makes it easy to quickly develop custom-made taggers exactly fitting the research problem.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,985
inproceedings
ljubesic-etal-2016-new
New Inflectional Lexicons and Training Corpora for Improved Morphosyntactic Annotation of {C}roatian and {S}erbian
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1676/
Ljube{\v{s}}i{\'c}, Nikola and Klubi{\v{c}}ka, Filip and Agi{\'c}, {\v{Z}}eljko and Jazbec, Ivo-Pavao
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4264--4270
In this paper we present newly developed inflectional lexicons and manually annotated corpora of Croatian and Serbian. We introduce hrLex and srLex - two freely available inflectional lexicons of Croatian and Serbian - and describe the process of building these lexicons, supported by supervised machine learning techniques for lemma and paradigm prediction. Furthermore, we introduce hr500k, a manually annotated corpus of Croatian, 500 thousand tokens in size. We showcase the three newly developed resources on the task of morphosyntactic annotation of both languages by using a recently developed CRF tagger. We achieve the best results yet reported on the task for both languages, beating the HunPos baseline trained on the same datasets by a wide margin.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,986
inproceedings
luecking-etal-2016-tgermacorp
{TG}erma{C}orp {--} A (Digital) Humanities Resource for (Computational) Linguistics
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1677/
Luecking, Andy and Hoenen, Armin and Mehler, Alexander
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4271--4277
TGermaCorp is a German text corpus whose primary sources are collected from German literature texts which date from the sixteenth century to the present. The corpus is intended to represent its target language (German) in syntactic, lexical, stylistic and chronological diversity. For this purpose, it is hand-annotated on several linguistic layers, including POS, lemma, named entities, multiword expressions, clauses, sentences and paragraphs. In order to introduce TGermaCorp in comparison to more homogeneous corpora of contemporary everyday language, quantitative assessments of syntactic and lexical diversity are provided. In this respect, TGermaCorp contributes to establishing characterising features for resource descriptions, which is needed for keeping track of a meaningful comparison of the ever-growing number of natural language resources. The assessments confirm the special role of proper names, whose propagation in text may influence lexical and syntactic diversity measures in rather trivial ways. TGermaCorp will be made available via hucompute.org.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,987
inproceedings
pajkossy-zseder-2016-hunvec
The hunvec framework for {NN}-{CRF}-based sequential tagging
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1678/
Pajkossy, Katalin and Zs{\'e}der, Attila
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4278--4281
In this work we present the open source hunvec framework for sequential tagging, built upon Theano and Pylearn2. The underlying statistical model, which connects linear CRFs with neural networks, was used by Collobert and co-workers, and several other researchers. To demonstrate the flexibility of our tool, we describe a set of experiments on part-of-speech and named-entity-recognition tasks, using English and Hungarian datasets, where we modify both model and training parameters, and illustrate the usage of custom features. The model parameters we experiment with affect the vectorial word representations used by the model; we apply different word vector initializations, defined by Word2vec and GloVe embeddings, and enrich the representation of words with vectors assigned to trigram features. We extend the training methods by using their regularized (l2 and dropout) versions. When testing our framework on a Hungarian named entity corpus, we find that its performance reaches the best published results on this dataset, with no need for language-specific feature engineering. Our code is available at \url{http://github.com/zseder/hunvec}
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,988
inproceedings
khalifa-etal-2016-large
A Large Scale Corpus of {G}ulf {A}rabic
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1679/
Khalifa, Salam and Habash, Nizar and Abdulrahim, Dana and Hassan, Sara
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4282--4289
Most Arabic natural language processing tools and resources are developed to serve Modern Standard Arabic (MSA), which is the official written language in the Arab World. Some Dialectal Arabic varieties, notably Egyptian Arabic, have received some attention lately and have a growing collection of resources that include annotated corpora and morphological analyzers and taggers. Gulf Arabic, however, lags behind in that respect. In this paper, we present the Gumar Corpus, a large-scale corpus of Gulf Arabic consisting of 110 million words from 1,200 forum novels. We annotate the corpus for sub-dialect information at the document level. We also present results of a preliminary study in the morphological annotation of Gulf Arabic which includes developing guidelines for a conventional orthography. The text of the corpus is publicly browsable through a web interface we developed for it.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,989
inproceedings
straka-etal-2016-udpipe
{UDP}ipe: Trainable Pipeline for Processing {C}o{NLL}-{U} Files Performing Tokenization, Morphological Analysis, {POS} Tagging and Parsing
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1680/
Straka, Milan and Haji{\v{c}}, Jan and Strakov{\'a}, Jana
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4290--4297
Automatic natural language processing of large texts often presents recurring challenges in multiple languages: even for most advanced tasks, the texts are first processed by basic processing steps {--} from tokenization to parsing. We present an extremely simple-to-use tool consisting of one binary and one model (per language), which performs these tasks for multiple languages without the need for any other external data. UDPipe, a pipeline processing CoNLL-U-formatted files, performs tokenization, morphological analysis, part-of-speech tagging, lemmatization and dependency parsing for nearly all treebanks of Universal Dependencies 1.2 (namely, the whole pipeline is currently available for 32 out of 37 treebanks). In addition, the pipeline is easily trainable with training data in CoNLL-U format (and in some cases also with additional raw corpora) and requires minimal linguistic knowledge on the users' part. The training code is also released.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,990