column               type            values / range
entry_type           stringclasses   4 values
citation_key         stringlengths   10 to 110
title                stringlengths   6 to 276
editor               stringclasses   723 values
month                stringclasses   69 values
year                 stringdate      1963-01-01 to 2022-01-01
address              stringclasses   202 values
publisher            stringclasses   41 values
url                  stringlengths   34 to 62
author               stringlengths   6 to 2.07k
booktitle            stringclasses   861 values
pages                stringlengths   1 to 12
abstract             stringlengths   302 to 2.4k
journal              stringclasses   5 values
volume               stringclasses   24 values
doi                  stringlengths   20 to 39
n                    stringclasses   3 values
wer                  stringclasses   1 value
uas                  null
language             stringclasses   3 values
isbn                 stringclasses   34 values
recall               null
number               stringclasses   8 values
a                    null
b                    null
c                    null
k                    null
f1                   stringclasses   4 values
r                    stringclasses   2 values
mci                  stringclasses   1 value
p                    stringclasses   2 values
sd                   stringclasses   1 value
female               stringclasses   0 values
m                    stringclasses   0 values
food                 stringclasses   1 value
f                    stringclasses   1 value
note                 stringclasses   20 values
__index_level_0__    int64           22k to 106k
@inproceedings{ide-etal-2016-language,
    title = "The Language Application Grid and Galaxy",
    author = "Ide, Nancy and Suderman, Keith and Pustejovsky, James and Verhagen, Marc and Cieri, Christopher",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1073/",
    pages = "457--462",
    abstract = "The NSF-SI2-funded LAPPS Grid project is a collaborative effort among Brandeis University, Vassar College, Carnegie-Mellon University (CMU), and the Linguistic Data Consortium (LDC), which has developed an open, web-based infrastructure through which resources can be easily accessed and within which tailored language services can be efficiently composed, evaluated, disseminated and consumed by researchers, developers, and students across a wide variety of disciplines. The LAPPS Grid project recently adopted Galaxy (Giardine et al., 2005), a robust, well-developed, and well-supported front end for workflow configuration, management, and persistence. Galaxy allows data inputs and processing steps to be selected from graphical menus, and results are displayed in intuitive plots and summaries that encourage interactive workflows and the exploration of hypotheses. The Galaxy workflow engine provides significant advantages for deploying pipelines of LAPPS Grid web services, including not only means to create and deploy locally-run and even customized versions of the LAPPS Grid as well as running the LAPPS Grid in the cloud, but also access to a huge array of statistical and visualization tools that have been developed for use in genomics research.",
}
% __index_level_0__: 60,383
@inproceedings{choukri-etal-2016-elra,
    title = "{ELRA} Activities and Services",
    author = "Choukri, Khalid and Mapelli, Val{\'e}rie and Mazo, H{\'e}l{\`e}ne and Popescu, Vladimir",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1074/",
    pages = "463--468",
    abstract = "After celebrating its 20th anniversary in 2015, ELRA is carrying on its strong involvement in the HLT field. To share ELRA's expertise from those 21 past years, this article begins with a presentation of ELRA's strategic Data and LR Management Plan for a wide use by the language communities. Then, we further report on ELRA's activities and services provided since LREC 2014. When looking at the cataloguing and licensing activities, we can see that ELRA has been active at making the Meta-Share repository move toward new development steps, supporting Europe to obtain accurate LRs within the Connecting Europe Facility programme, promoting the use of LR citation, and creating the ELRA License Wizard web portal. The article further elaborates on the recent LR production activities of various written, speech and video resources, commissioned by public and private customers. In parallel, ELDA has also worked on several EU-funded projects centred on strategic issues related to the European Digital Single Market. The last part gives an overview of the latest dissemination activities, with a special focus on the celebration of its 20th anniversary organised in Dubrovnik (Croatia), the follow-up to LREC, and the launch of the new ELRA portal.",
}
% __index_level_0__: 60,384
@inproceedings{navarretta-2016-mirroring,
    title = "Mirroring Facial Expressions and Emotions in Dyadic Conversations",
    author = "Navarretta, Costanza",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1075/",
    pages = "469--474",
    abstract = "This paper presents an investigation of mirroring facial expressions and the emotions which they convey in dyadic naturally occurring first encounters. Mirroring facial expressions are a common phenomenon in face-to-face interactions, and they are due to the mirror neuron system which has been found in both animals and humans. Researchers have proposed that the mirror neuron system is an important component behind many cognitive processes such as action learning and understanding the emotions of others. Preceding studies of the first encounters have shown that overlapping speech and overlapping facial expressions are very frequent. In this study, we want to determine whether the overlapping facial expressions are mirrored or are otherwise correlated in the encounters, and to what extent mirroring facial expressions convey the same emotion. The results of our study show that the majority of smiles and laughs, and one fifth of the occurrences of raised eyebrows, are mirrored in the data. Moreover, some facial traits in co-occurring expressions co-occur more often than would be expected by chance. Finally, amusement, and to a lesser extent friendliness, are often emotions shared by both participants, while other emotions indicating individual affective states such as uncertainty and hesitancy are never shown by both participants, but co-occur with complementary emotions such as friendliness and support. Whether these tendencies are specific to this type of conversation or are more common should be investigated further.",
}
% __index_level_0__: 60,385
@inproceedings{radev-etal-2016-humor,
    title = "Humor in Collective Discourse: Unsupervised Funniness Detection in the New Yorker Cartoon Caption Contest",
    author = "Radev, Dragomir and Stent, Amanda and Tetreault, Joel and Pappu, Aasish and Iliakopoulou, Aikaterini and Chanfreau, Agustin and de Juan, Paloma and Vallmitjana, Jordi and Jaimes, Alejandro and Jha, Rahul and Mankoff, Robert",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1076/",
    pages = "475--479",
    abstract = "The New Yorker publishes a weekly captionless cartoon. More than 5,000 readers submit captions for it. The editors select three of them and ask the readers to pick the funniest one. We describe an experiment that compares a dozen automatic methods for selecting the funniest caption. We show that negative sentiment, human-centeredness, and lexical centrality most strongly match the funniest captions, followed by positive sentiment. These results are useful for understanding humor and also in the design of more engaging conversational agents in text and multimodal (vision+text) systems. As part of this work, a large set of cartoons and captions is being made available to the community.",
}
% __index_level_0__: 60,386
@inproceedings{yaneva-etal-2016-corpus,
    title = "A Corpus of Text Data and Gaze Fixations from Autistic and Non-Autistic Adults",
    author = "Yaneva, Victoria and Temnikova, Irina and Mitkov, Ruslan",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1077/",
    pages = "480--487",
    abstract = "The paper presents a corpus of text data and its corresponding gaze fixations obtained from autistic and non-autistic readers. The data was elicited through reading comprehension testing combined with eye-tracking recording. The corpus consists of 1034 content words tagged with their POS, syntactic role and three gaze-based measures corresponding to the autistic and control participants. The reading skills of the participants were measured through multiple-choice questions and, based on the answers given, they were divided into groups of skillful and less-skillful readers. This division of the groups informs researchers on whether particular fixations were elicited from skillful or less-skillful readers and allows a fair between-group comparison for two levels of reading ability. In addition to describing the process of data collection and corpus development, we present a study on the effect that word length has on reading in autism. The corpus is intended as a resource for investigating the particular linguistic constructions which pose reading difficulties for people with autism and, hopefully, as a way to inform future text simplification research intended for this population.",
}
% __index_level_0__: 60,387
@inproceedings{chollet-etal-2016-multimodal,
    title = "A Multimodal Corpus for the Assessment of Public Speaking Ability and Anxiety",
    author = "Chollet, Mathieu and W{\"o}rtwein, Torsten and Morency, Louis-Philippe and Scherer, Stefan",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1078/",
    pages = "488--495",
    abstract = "The ability to efficiently speak in public is an essential asset for many professions and is used in everyday life. As such, tools enabling the improvement of public speaking performance and the assessment and mitigation of anxiety related to public speaking would be very useful. Multimodal interaction technologies, such as computer vision and embodied conversational agents, have recently been investigated for the training and assessment of interpersonal skills. One central requirement for these technologies is multimodal corpora for training machine learning models. This paper addresses the need of these technologies by presenting and sharing a multimodal corpus of public speaking presentations. These presentations were collected in an experimental study investigating the potential of interactive virtual audiences for public speaking training. This corpus includes audio-visual data and automatically extracted features, measures of public speaking anxiety and personality, annotations of participants' behaviors and expert ratings of behavioral aspects and overall performance of the presenters. We hope this corpus will help other research teams in developing tools for supporting public speaking training.",
}
% __index_level_0__: 60,388
@inproceedings{bertero-fung-2016-deep,
    title = "Deep Learning of Audio and Language Features for Humor Prediction",
    author = "Bertero, Dario and Fung, Pascale",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1079/",
    pages = "496--501",
    abstract = "We propose a comparison between various supervised machine learning methods to predict and detect humor in dialogues. We retrieve our humorous dialogues from a very popular TV sitcom: {\textquotedblleft}The Big Bang Theory{\textquotedblright}. We build a corpus where punchlines are annotated using the canned laughter embedded in the audio track. Our comparative study involves a linear-chain Conditional Random Field over a Recurrent Neural Network and a Convolutional Neural Network. Using a combination of word-level and audio frame-level features, the CNN outperforms the other methods, obtaining the best F-score of 68.5{\%} over 66.5{\%} by CRF and 52.9{\%} by RNN. Our work is a starting point for developing more effective machine learning and neural network models on the humor prediction task, as well as developing machines capable of understanding humor in general.",
}
% __index_level_0__: 60,389
@inproceedings{alghamdi-etal-2016-empirical,
    title = "An Empirical Study of {A}rabic Formulaic Sequence Extraction Methods",
    author = "Alghamdi, Ayman and Atwell, Eric and Brierley, Claire",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1080/",
    pages = "502--506",
    abstract = "This paper aims to implement what is referred to as the collocation of the Arabic keywords approach for extracting formulaic sequences (FSs) in the form of high frequency but semantically regular formulas that are not restricted to any syntactic construction or semantic domain. The study applies several distributional semantic models in order to automatically extract relevant FSs related to Arabic keywords. The data sets used in this experiment are rendered from a newly developed corpus-based Arabic wordlist consisting of 5,189 lexical items which represent a variety of modern standard Arabic (MSA) genres and regions, the wordlist being based on overlapping frequency drawn from a comprehensive comparison of four large Arabic corpora with a total size of over 8 billion running words. Empirical n-best precision evaluation methods are used to determine the best association measures (AMs) for extracting high frequency and meaningful FSs. The gold standard reference FSs list was developed in previous studies and manually evaluated against well-established quantitative and qualitative criteria. The results demonstrate that the MI.log{\_}f AM achieved the highest results in extracting significant FSs from the large MSA corpus, while the T-score association measure achieved the worst results.",
}
% __index_level_0__: 60,390
@inproceedings{stankovic-etal-2016-rule,
    title = "Rule-based Automatic Multi-word Term Extraction and Lemmatization",
    author = "Stankovi{\'c}, Ranka and Krstev, Cvetana and Obradovi{\'c}, Ivan and Lazi{\'c}, Biljana and Trtovac, Aleksandra",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1081/",
    pages = "507--514",
    abstract = "In this paper we present a rule-based method for multi-word term extraction that relies on extensive lexical resources in the form of electronic dictionaries and finite-state transducers for modelling various syntactic structures of multi-word terms. The same technology is used for lemmatization of extracted multi-word terms, which is unavoidable for highly inflected languages in order to pass extracted data to evaluators and subsequently to terminological e-dictionaries and databases. The approach is illustrated on a corpus of Serbian texts from the mining domain containing more than 600,000 simple word forms. Extracted and lemmatized multi-word terms are filtered in order to reject falsely offered lemmas and then ranked by introducing measures that combine linguistic and statistical information (C-Value, T-Score, LLR, and Keyness). Mean average precision for retrieval of MWU forms ranges from 0.789 to 0.804, while mean average precision of lemma production ranges from 0.956 to 0.960. The evaluation showed that 94{\%} of distinct multi-word forms were evaluated as proper multi-word units, and among them 97{\%} were associated with correct lemmas.",
}
% __index_level_0__: 60,391
@inproceedings{kettnerova-bejcek-2016-distribution,
    title = "Distribution of Valency Complements in {C}zech Complex Predicates: Between Verb and Noun",
    author = "Kettnerov{\'a}, V{\'a}clava and Bej{\v{c}}ek, Eduard",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1082/",
    pages = "515--521",
    abstract = "In this paper, we focus on Czech complex predicates formed by a light verb and a predicative noun expressed as the direct object. Although Czech {\textemdash} as an inflectional language encoding syntactic relations via morphological cases {\textemdash} provides an excellent opportunity to study the distribution of valency complements in the syntactic structure with complex predicates, this distribution has not been described so far. On the basis of a manual analysis of the richly annotated data from the Prague Dependency Treebank, we thus formulate principles governing this distribution. In an automatic experiment, we verify these principles on well-formed syntactic structures from the Prague Dependency Treebank and the Prague Czech-English Dependency Treebank with very satisfactory results: the distribution of 97{\%} of valency complements in the surface structure is governed by the proposed principles. These results corroborate that the surface structure formation of complex predicates is a regular process.",
}
% __index_level_0__: 60,392
@inproceedings{liebeskind-hacohen-kerner-2016-lexical,
    title = "A Lexical Resource of {H}ebrew Verb-Noun Multi-Word Expressions",
    author = "Liebeskind, Chaya and HaCohen-Kerner, Yaakov",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1083/",
    pages = "522--527",
    abstract = "A verb-noun Multi-Word Expression (MWE) is a combination of a verb and a noun with or without other words, in which the combination has a meaning different from the meaning of the words considered separately. In this paper, we present a new lexical resource of Hebrew Verb-Noun MWEs (VN-MWEs). The VN-MWEs of this resource were manually collected and annotated from five different web resources. In addition, we analyze the lexical properties of Hebrew VN-MWEs by classifying them into three types: morphological, syntactic, and semantic. These two contributions are essential for designing algorithms for automatic VN-MWE extraction. The analysis suggests some interesting features of VN-MWEs for exploration. The lexical resource enables sampling a set of positive examples for Hebrew VN-MWEs. This set of examples can either be used for training supervised algorithms or as seeds in unsupervised bootstrapping algorithms. Thus, this resource is a first step towards automatic identification of Hebrew VN-MWEs, which is important for natural language understanding, generation and translation systems.",
}
% __index_level_0__: 60,393
@inproceedings{jacquet-etal-2016-cross,
    title = "Cross-lingual Linking of Multi-word Entities and their corresponding Acronyms",
    author = "Jacquet, Guillaume and Ehrmann, Maud and Steinberger, Ralf and V{\"a}yrynen, Jaakko",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1084/",
    pages = "528--535",
    abstract = "This paper reports on an approach and experiments to automatically build a cross-lingual multi-word entity resource. Starting from a collection of millions of acronym/expansion pairs for 22 languages where expansion variants were grouped into monolingual clusters, we experiment with several aggregation strategies to link these clusters across languages. Aggregation strategies make use of string similarity distances and translation probabilities and they are based on vector space and graph representations. The accuracy of the approach is evaluated against Wikipedia's redirection and cross-lingual linking tables. The resulting multi-word entity resource contains 64,000 multi-word entities with unique identifiers and their 600,000 multilingual lexical variants. We intend to make this new resource publicly available.",
}
% __index_level_0__: 60,394
@inproceedings{meurs-etal-2016-semlinker,
    title = "{S}em{L}inker, a Modular and Open Source Framework for Named Entity Discovery and Linking",
    author = "Meurs, Marie-Jean and Almeida, Hayda and Jean-Louis, Ludovic and Charton, Eric",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1085/",
    pages = "536--540",
    abstract = "This paper presents SemLinker, an open source system that discovers named entities, connects them to a reference knowledge base, and clusters them semantically. SemLinker relies on several modules that perform surface form generation, mutual disambiguation, entity clustering, and make use of two annotation engines. SemLinker was evaluated in the English Entity Discovery and Linking track of the Text Analysis Conference on Knowledge Base Population, organized by the US National Institute of Standards and Technology. Along with the SemLinker source code, we release our annotation files containing the discovered named entities, their types, and position across processed documents.",
}
% __index_level_0__: 60,395
@inproceedings{ilievski-etal-2016-context,
    title = "Context-enhanced Adaptive Entity Linking",
    author = "Ilievski, Filip and Rizzo, Giuseppe and van Erp, Marieke and Plu, Julien and Troncy, Rapha{\"e}l",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1086/",
    pages = "541--548",
    abstract = "More and more knowledge bases are publicly available as linked data. Since these knowledge bases contain structured descriptions of real-world entities, they can be exploited by entity linking systems that anchor entity mentions from text to the most relevant resources describing those entities. In this paper, we investigate adaptation of the entity linking task using contextual knowledge. The key intuition is that entity linking can be customized depending on the textual content, as well as on the application that would make use of the extracted information. We present an adaptive approach that relies on contextual knowledge from text to enhance the performance of ADEL, a hybrid linguistic and graph-based entity linking system. We evaluate our approach on a domain-specific corpus consisting of annotated WikiNews articles.",
}
% __index_level_0__: 60,396
@inproceedings{okur-etal-2016-named,
    title = "Named Entity Recognition on {T}witter for {T}urkish using Semi-supervised Learning with Word Embeddings",
    author = "Okur, Eda and Demir, Hakan and {\"O}zg{\"u}r, Arzucan",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1087/",
    pages = "549--555",
    abstract = "Recently, due to the increasing popularity of social media, the necessity for extracting information from informal text types, such as microblog texts, has gained significant attention. In this study, we focused on the Named Entity Recognition (NER) problem on informal text types for Turkish. We utilized a semi-supervised learning approach based on neural networks. We applied a fast unsupervised method for learning continuous representations of words in vector space. We made use of these obtained word embeddings, together with language independent features that are engineered to work better on informal text types, for generating a Turkish NER system on microblog texts. We evaluated our Turkish NER system on Twitter messages and achieved better F-score performances than the published results of previously proposed NER systems on Turkish tweets. Since we did not employ any language dependent features, we believe that our method can be easily adapted to microblog texts in other morphologically rich languages.",
}
% __index_level_0__: 60,397
@inproceedings{pershina-etal-2016-entity,
    title = "Entity Linking with a Paraphrase Flavor",
    author = "Pershina, Maria and He, Yifan and Grishman, Ralph",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1088/",
    pages = "556--560",
    abstract = "The task of Named Entity Linking is to link entity mentions in the document to their correct entries in a knowledge base and to cluster NIL mentions. Ambiguous, misspelled, and incomplete entity mention names are the main challenges in the linking process. We propose a novel approach that combines two state-of-the-art models {\textemdash} for entity disambiguation and for paraphrase detection {\textemdash} to overcome these challenges. We consider name variations as paraphrases of the same entity mention and adopt a paraphrase model for this task. Our approach utilizes a graph-based disambiguation model based on Personalized Page Rank, and then refines and clusters its output using the paraphrase similarity between entity mention strings. It achieves a competitive performance of 80.5{\%} in B3+F clustering score on diagnostic TAC EDL 2014 data.",
}
% __index_level_0__: 60,398
@inproceedings{tian-etal-2016-domain,
    title = "Domain Adaptation for Named Entity Recognition Using {CRF}s",
    author = "Tian, Tian and Dinarelli, Marco and Tellier, Isabelle and Cardoso, Pedro Dias",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1089/",
    pages = "561--565",
    abstract = "In this paper we explain how we created a labelled corpus in English for a Named Entity Recognition (NER) task from multi-source and multi-domain data, for an industrial partner. We explain the specificities of this corpus with examples and describe some baseline experiments. We present some results of domain adaptation on this corpus using a labelled Twitter corpus (Ritter et al., 2011). We tested a semi-supervised method from (Garcia-Fernandez et al., 2014) combined with a supervised domain adaptation approach proposed in (Raymond and Fayolle, 2010) for machine learning experiments with CRFs (Conditional Random Fields). We use the same technique to improve the NER results on the Twitter corpus (Ritter et al., 2011). Our contributions thus consist in an industrial corpus creation and NER performance improvements.",
}
% __index_level_0__: 60,399
@inproceedings{arcan-etal-2016-iris,
    title = "{IRIS}: {E}nglish-{I}rish Machine Translation System",
    author = "Arcan, Mihael and Lane, Caoilfhionn and Droighne{\'a}in, Eoin {\'O} and Buitelaar, Paul",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1090/",
    pages = "566--572",
    abstract = "We describe IRIS, a statistical machine translation (SMT) system for translating from English into Irish and vice versa. Since Irish is considered an under-resourced language with a limited amount of machine-readable text, building a machine translation system that produces reasonable translations is rather challenging. As translation is a difficult task, current research in SMT focuses on obtaining statistics either from a large amount of parallel, monolingual or other multilingual resources. Nevertheless, we collected available English-Irish data and developed an SMT system aimed at supporting human translators and enabling cross-lingual language technology tasks.",
}
% __index_level_0__: 60,400
inproceedings
tambouratzis-pouli-2016-linguistically
Linguistically Inspired Language Model Augmentation for {MT}
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1091/
Tambouratzis, George and Pouli, Vasiliki
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
573--577
The present article reports on efforts to improve the translation accuracy of a corpus-based Machine Translation (MT) system. In order to achieve that, an error analysis performed on past translation outputs has indicated the likelihood of improving the translation accuracy by augmenting the coverage of the Target-Language (TL) side language model. The method adopted for improving the language model is initially presented, based on the concatenation of consecutive phrases. The algorithmic steps are then described that form the process for augmenting the language model. The key idea is to only augment the language model to cover the most frequent cases of phrase sequences, as counted over a TL-side corpus, in order to maximize the cases covered by the new language model entries. Experiments presented in the article show that substantial improvements in translation accuracy are achieved via the proposed method, when integrating the grown language model to the corpus-based MT system.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,401
inproceedings
abercrombie-2016-rule
A Rule-based Shallow-transfer Machine Translation System for {S}cots and {E}nglish
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1092/
Abercrombie, Gavin
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
578--584
An open-source rule-based machine translation system is developed for Scots, a low-resourced minor language closely related to English and spoken in Scotland and Ireland. By concentrating on translation for assimilation (gist comprehension) from Scots to English, it is proposed that the development of dictionaries designed to be used within the Apertium platform will be sufficient to produce translations that improve non-Scots speakers' understanding of the language. Mono- and bilingual Scots dictionaries are constructed using lexical items gathered from a variety of resources across several domains. Although the primary goal of this project is translation for gisting, the system is evaluated for both assimilation and dissemination (publication-ready translations). A variety of evaluation methods are used, including a cloze test undertaken by human volunteers. While evaluation results are comparable to, and in some cases superior to, those of other language pairs within the Apertium platform, room for improvement is identified in several areas of the system.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,402
inproceedings
rikters-skadina-2016-syntax
Syntax-based Multi-system Machine Translation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1093/
Rikters, Mat{\={i}}ss and Skadi{\c{n}}a, Inguna
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
585--591
This paper describes a hybrid machine translation system that explores a parser to acquire syntactic chunks of a source sentence, translates the chunks with multiple online machine translation (MT) system application program interfaces (APIs) and creates output by combining translated chunks to obtain the best possible translation. The selection of the best translation hypothesis is performed by calculating the perplexity for each translated chunk. The goal of this approach is to enhance the baseline multi-system hybrid translation (MHyT) system that uses only a language model to select best translation from translations obtained with different APIs and to improve overall English {\textemdash} Latvian machine translation quality over each of the individual MT APIs. The presented syntax-based multi-system translation (SyMHyT) system demonstrates an improvement in terms of BLEU and NIST scores compared to the baseline system. Improvements reach from 1.74 up to 2.54 BLEU points.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,403
inproceedings
stajner-etal-2016-use
Use of Domain-Specific Language Resources in Machine Translation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1094/
{\v{S}}tajner, Sanja and Querido, Andreia and Rendeiro, Nuno and Rodrigues, Jo{\~a}o Ant{\'o}nio and Branco, Ant{\'o}nio
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
592--598
In this paper, we address the problem of Machine Translation (MT) for a specialised domain in a language pair for which only a very small domain-specific parallel corpus is available. We conduct a series of experiments using a purely phrase-based SMT (PBSMT) system and a hybrid MT system (TectoMT), testing three different strategies to overcome the problem of the small amount of in-domain training data. Our results show that adding a small size in-domain bilingual terminology to the small in-domain training corpus leads to the best improvements of a hybrid MT system, while the PBSMT system achieves the best results by adding a combination of in-domain bilingual terminology and a larger out-of-domain corpus. We focus on qualitative human evaluation of the output of two best systems (one for each approach) and perform a systematic in-depth error analysis which revealed advantages of the hybrid MT system over the pure PBSMT system for this specific task.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,404
inproceedings
pal-etal-2016-catalog-online
{CAT}a{L}og Online: Porting a Post-editing Tool to the Web
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1095/
Pal, Santanu and Zampieri, Marcos and Naskar, Sudip Kumar and Nayak, Tapas and Vela, Mihaela and van Genabith, Josef
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
599--604
This paper presents CATaLog online, a new web-based MT and TM post-editing tool. CATaLog online is a freeware software that can be used through a web browser and it requires only a simple registration. The tool features a number of editing and log functions similar to the desktop version of CATaLog enhanced with several new features that we describe in detail in this paper. CATaLog online is designed to allow users to post-edit both translation memory segments as well as machine translation output. The tool provides a complete set of log information currently not available in most commercial CAT tools. Log information can be used both for project management purposes as well as for the study of the translation process and translator`s productivity.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,405
inproceedings
hayakawa-etal-2016-ilmt
The {ILMT}-s2s Corpus {\textemdash} A Multimodal Interlingual Map Task Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1096/
Hayakawa, Akira and Luz, Saturnino and Cerrato, Loredana and Campbell, Nick
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
605--612
This paper presents the multimodal Interlingual Map Task Corpus (ILMT-s2s corpus) collected at Trinity College Dublin, and discusses some of the issues related to the collection and analysis of the data. The corpus design is inspired by the HCRC Map Task Corpus which was initially designed to support the investigation of linguistic phenomena, and has been the focus of a variety of studies of communicative behaviour. The simplicity of the task, and the complexity of phenomena it can elicit, make the map task an ideal object of study. Although there are studies that used replications of the map task to investigate communication in computer mediated tasks, this ILMT-s2s corpus is, to the best of our knowledge, the first investigation of communicative behaviour in the presence of three additional {\textquotedblleft}filters{\textquotedblright}: Automatic Speech Recognition (ASR), Machine Translation (MT) and Text To Speech (TTS) synthesis, where the instruction giver and the instruction follower speak different languages. This paper details the data collection setup and completed annotation of the ILMT-s2s corpus, and outlines preliminary results obtained from the data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,406
inproceedings
sadamitsu-etal-2016-name
Name Translation based on Fine-grained Named Entity Recognition in a Single Language
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1097/
Sadamitsu, Kugatsu and Saito, Itsumi and Katayama, Taichi and Asano, Hisako and Matsuo, Yoshihiro
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
613--619
We propose named entity abstraction methods with fine-grained named entity labels for improving statistical machine translation (SMT). The methods are based on a bilingual named entity recognizer that uses a monolingual named entity recognizer with transliteration. Through experiments, we demonstrate that incorporating fine-grained named entities into statistical machine translation improves the accuracy of SMT with more adequate granularity compared with the standard SMT, which is a non-named entity abstraction method.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,407
inproceedings
s-bhattacharyya-2016-lexical
Lexical Resources to Enrich {E}nglish {M}alayalam Machine Translation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1098/
S, Sreelekha and Bhattacharyya, Pushpak
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
620--627
In this paper we present our work on the usage of lexical resources for the Machine Translation English and Malayalam. We describe a comparative performance between different Statistical Machine Translation (SMT) systems on top of phrase based SMT system as baseline. We explore different ways of utilizing lexical resources to improve the quality of English Malayalam statistical machine translation. In order to enrich the training corpus we have augmented the lexical resources in two ways (a) additional vocabulary and (b) inflected verbal forms. Lexical resources include IndoWordnet semantic relation set, lexical words and verb phrases etc. We have described case studies, evaluations and have given detailed error analysis for both Malayalam to English and English to Malayalam machine translation systems. We observed significant improvement in evaluations of translation quality. Lexical resources do help uplift performance when parallel corpora are scanty.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,408
inproceedings
xu-yvon-2016-novel
Novel elicitation and annotation schemes for sentential and sub-sentential alignments of bitexts
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1099/
Xu, Yong and Yvon, Fran{\c{c}}ois
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
628--635
Resources for evaluating sentence-level and word-level alignment algorithms are unsatisfactory. Regarding sentence alignments, the existing data is too scarce, especially when it comes to difficult bitexts, containing instances of non-literal translations. Regarding word-level alignments, most available hand-aligned data provide a complete annotation at the level of words that is difficult to exploit, for lack of a clear semantics for alignment links. In this study, we propose new methodologies for collecting human judgements on alignment links, which have been used to annotate 4 new data sets, at the sentence and at the word level. These will be released online, with the hope that they will prove useful to evaluate alignment software and quality estimation tools for automatic alignment. Keywords: Parallel corpora, Sentence Alignments, Word Alignments, Confidence Estimation
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,409
inproceedings
guillou-hardmeier-2016-protest
{PROTEST}: A Test Suite for Evaluating Pronouns in Machine Translation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1100/
Guillou, Liane and Hardmeier, Christian
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
636--643
We present PROTEST, a test suite for the evaluation of pronoun translation by MT systems. The test suite comprises 250 hand-selected pronoun tokens and an automatic evaluation method which compares the translations of pronouns in MT output with those in the reference translation. Pronoun translations that do not match the reference are referred for manual evaluation. PROTEST is designed to support analysis of system performance at the level of individual pronoun groups, rather than to provide a single aggregate measure over all pronouns. We wish to encourage detailed analyses to highlight issues in the handling of specific linguistic mechanisms by MT systems, thereby contributing to a better understanding of those problems involved in translating pronouns. We present two use cases for PROTEST: a) for measuring improvement/degradation of an incremental system change, and b) for comparing the performance of a group of systems whose design may be largely unrelated. Following the latter use case, we demonstrate the application of PROTEST to the evaluation of the systems submitted to the DiscoMT 2015 shared task on pronoun translation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,410
inproceedings
chu-kurohashi-2016-paraphrasing
Paraphrasing Out-of-Vocabulary Words with Word Embeddings and Semantic Lexicons for Low Resource Statistical Machine Translation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1101/
Chu, Chenhui and Kurohashi, Sadao
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
644--648
Out-of-vocabulary (OOV) word is a crucial problem in statistical machine translation (SMT) with low resources. OOV paraphrasing that augments the translation model for the OOV words by using the translation knowledge of their paraphrases has been proposed to address the OOV problem. In this paper, we propose using word embeddings and semantic lexicons for OOV paraphrasing. Experiments conducted on a low resource setting of the OLYMPICS task of IWSLT 2012 verify the effectiveness of our proposed method.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,411
inproceedings
daiber-van-der-goot-2016-denoised
The Denoised Web Treebank: Evaluating Dependency Parsing under Noisy Input Conditions
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1102/
Daiber, Joachim and van der Goot, Rob
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
649--653
We introduce the Denoised Web Treebank: a treebank including a normalization layer and a corresponding evaluation metric for dependency parsing of noisy text, such as Tweets. This benchmark enables the evaluation of parser robustness as well as text normalization methods, including normalization as machine translation and unsupervised lexical normalization, directly on syntactic trees. Experiments show that text normalization together with a combination of domain-specific and generic part-of-speech taggers can lead to a significant improvement in parsing accuracy on this test set.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,412
inproceedings
che-etal-2016-punctuation
Punctuation Prediction for Unsegmented Transcript Based on Word Vector
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1103/
Che, Xiaoyin and Wang, Cheng and Yang, Haojin and Meinel, Christoph
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
654--658
In this paper we propose an approach to predict punctuation marks for unsegmented speech transcript. The approach is purely lexical, with pre-trained Word Vectors as the only input. A training model of Deep Neural Network (DNN) or Convolutional Neural Network (CNN) is applied to classify whether a punctuation mark should be inserted after the third word of a 5-words sequence and which kind of punctuation mark the inserted one should be. TED talks within IWSLT dataset are used in both training and evaluation phases. The proposed approach shows its effectiveness by achieving better result than the state-of-the-art lexical solution which works with same type of data, especially when predicting punctuation position only.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,413
inproceedings
zhou-etal-2016-evaluating
Evaluating a Deterministic Shift-Reduce Neural Parser for Constituent Parsing
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1104/
Zhou, Hao and Zhang, Yue and Huang, Shujian and Dai, Xin-Yu and Chen, Jiajun
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
659--663
Greedy transition-based parsers are appealing for their very fast speed, with reasonably high accuracies. In this paper, we build a fast shift-reduce neural constituent parser by using a neural network to make local decisions. One challenge to the parsing speed is the large hidden and output layer sizes caused by the number of constituent labels and branching options. We speed up the parser by using a hierarchical output layer, inspired by the hierarchical log-bilinear neural language model. In standard WSJ experiments, the neural parser achieves an almost 2.4 times speed-up (320 sen/sec) compared to a non-hierarchical baseline without significant accuracy loss (89.06 vs 89.13 F-score).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,414
inproceedings
ushiku-etal-2016-language
Language Resource Addition Strategies for Raw Text Parsing
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1105/
Ushiku, Atsushi and Sasada, Tetsuro and Mori, Shinsuke
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
664--671
We focus on the improvement of accuracy of raw text parsing, from the viewpoint of language resource addition. In Japanese, the raw text parsing is divided into three steps: word segmentation, part-of-speech tagging, and dependency parsing. We investigate the contribution of language resource addition in each of three steps to the improvement in accuracy for two domain corpora. The experimental results show that this improvement depends on the target domain. For example, when we handle well-written texts of limited vocabulary, white paper, an effective language resource is a word-POS pair sequence corpus for the parsing accuracy. So we conclude that it is important to check out the characteristics of the target domain and to choose a suitable language resource addition strategy for the parsing accuracy improvement.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,415
inproceedings
marton-toutanova-2016-e
{E}-{TIPSY}: Search Query Corpus Annotated with Entities, Term Importance, {POS} Tags, and Syntactic Parses
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1106/
Marton, Yuval and Toutanova, Kristina
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
672--676
We present E-TIPSY, a search query corpus annotated with named Entities, Term Importance, POS tags, and SYntactic parses. This corpus contains crowdsourced (gold) annotations of the three most important terms in each query. In addition, it contains automatically produced annotations of named entities, part-of-speech tags, and syntactic parses for the same queries. This corpus comes in two formats: (1) Sober Subset: annotations that two or more crowd workers agreed upon, and (2) Full Glass: all annotations. We analyze the strikingly low correlation between term importance and syntactic headedness, which invites research into effective ways of combining these different signals. Our corpus can serve as a benchmark for term importance methods aimed at improving search engine quality and as an initial step toward developing a dataset of gold linguistic analysis of web search queries. In addition, it can be used as a basis for linguistic inquiries into the kind of expressions used in search.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,416
inproceedings
augustinus-etal-2016-afribooms
{A}fri{B}ooms: An Online Treebank for {A}frikaans
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1107/
Augustinus, Liesbeth and Dirix, Peter and van Niekerk, Daniel and Schuurman, Ineke and Vandeghinste, Vincent and Van Eynde, Frank and van Huyssteen, Gerhard
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
677--682
Compared to well-resourced languages such as English and Dutch, natural language processing (NLP) tools for Afrikaans are still not abundant. In the context of the AfriBooms project, KU Leuven and the North-West University collaborated to develop a first, small treebank, a dependency parser, and an easy to use online linguistic search engine for Afrikaans for use by researchers and students in the humanities and social sciences. The search tool is based on a similar development for Dutch, i.e. GrETEL, a user-friendly search engine which allows users to query a treebank by means of a natural language example instead of a formal search instruction.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,417
inproceedings
ponti-passarotti-2016-differentia
Differentia compositionem facit. A Slower-Paced and Reliable Parser for {L}atin
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1108/
Ponti, Edoardo Maria and Passarotti, Marco
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
683--688
The Index Thomisticus Treebank is the largest available treebank for Latin; it contains Medieval Latin texts by Thomas Aquinas. After experimenting on its data with a number of dependency parsers based on different supervised machine learning techniques, we found that DeSR with a multilayer perceptron algorithm, a right-to-left transition, and a tailor-made feature model is the parser providing the highest accuracy rates. We improved the results further by using a technique that combines the output parses of DeSR with those provided by other parsers, outperforming the previous state of the art in parsing the Index Thomisticus Treebank. The key idea behind such improvement is to ensure a sufficient diversity and accuracy of the outputs to be combined; for this reason, we performed an in-depth evaluation of the results provided by the different parsers that we combined. Finally, we assessed that, although the general architecture of the parser is portable to Classical Latin, yet the model trained on Medieval Latin is inadequate for such purpose.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,418
inproceedings
eiselen-2016-south
{S}outh {A}frican Language Resources: Phrase Chunking
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1109/
Eiselen, Roald
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
689--693
Phrase chunking remains an important natural language processing (NLP) technique for intermediate syntactic processing. This paper describes the development of protocols, annotated phrase chunking data sets and automatic phrase chunkers for ten South African languages. Various problems with adapting the existing annotation protocols of English are discussed as well as an overview of the annotated data sets. Based on the annotated sets, CRF-based phrase chunkers are created and tested with a combination of different features, including part of speech tags and character n-grams. The results of the phrase chunking evaluation show that disjunctively written languages can achieve notably better results for phrase chunking with a limited data set than conjunctive languages, but that the addition of character n-grams improve the results for conjunctive languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,419
inproceedings
libovicky-2016-neural
Neural Scoring Function for {MST} Parser
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1110/
Libovick{\'y}, Jind{\v{r}}ich
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
694--698
Continuous word representations have proved to be a useful feature in many natural language processing tasks. Using fixed-dimension pre-trained word embeddings makes it possible to avoid a sparse bag-of-words representation and to train models with fewer parameters. In this paper, we use fixed pre-trained word embeddings as additional features for a neural scoring function in the MST parser. With the multi-layer architecture of the scoring function we can avoid handcrafting feature conjunctions. The continuous word representations on the input also allow us to reduce the number of lexical features, make the parser more robust to out-of-vocabulary words, and reduce the total number of parameters of the model. Although its accuracy stays below the state of the art, the model size is substantially smaller than with the standard feature set. Moreover, it performs well for languages where only a smaller treebank is available, and the results promise to be useful in cross-lingual parsing.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,420
inproceedings
listenmaa-claessen-2016-analysing
Analysing Constraint Grammars with a {SAT}-solver
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1111/
Listenmaa, Inari and Claessen, Koen
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
699--706
We describe a method for analysing Constraint Grammars (CG) that can detect internal conflicts and redundancies in a given grammar, without the need for a corpus. The aim is for grammar writers to be able to automatically diagnose, and then manually improve their grammars. Our method works by translating the given grammar into logical constraints that are analysed by a SAT-solver. We have evaluated our analysis on a number of non-trivial grammars and found inconsistencies.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,421
inproceedings
stein-2016-old
{O}ld {F}rench Dependency Parsing: Results of Two Parsers Analysed from a Linguistic Point of View
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1112/
Stein, Achim
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
707--713
The treatment of medieval texts is a particular challenge for parsers. I compare how two dependency parsers, one graph-based, the other transition-based, perform on Old French, facing some typical problems of medieval texts: graphical variation, relatively free word order, and syntactic variation of several parameters over a diachronic period of about 300 years. Both parsers were trained and evaluated on the {\textquotedblleft}Syntactic Reference Corpus of Medieval French{\textquotedblright} (SRCMF), a manually annotated dependency treebank. I discuss the relation between types of parsers and types of language, as well as the differences of the analyses from a linguistic point of view.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,422
inproceedings
di-buono-2016-semi
Semi-automatic Parsing for Web Knowledge Extraction through Semantic Annotation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1113/
di Buono, Maria Pia
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
714--717
Parsing Web information, namely parsing content to find relevant documents on the basis of a user`s query, represents a crucial step to guarantee fast and accurate Information Retrieval (IR). Generally, an automated approach to such a task is considered faster and cheaper than manual systems. Nevertheless, the results do not seem to have a high level of accuracy; indeed, as Hjorland (2007) also states, using stochastic algorithms entails: {\textbullet} Low precision due to the indexing of common Atomic Linguistic Units (ALUs) or sentences. {\textbullet} Low recall caused by the presence of synonyms. {\textbullet} Generic results arising from the use of too broad or too narrow terms. Usually IR systems are based on an inverted text index, namely an index data structure storing a mapping from content to its locations in a database file, or in a document or a set of documents. In this paper we propose a system by means of which we will develop a search engine able to process online documents, starting from a natural language query, and to return information to users. The proposed approach, based on the Lexicon-Grammar (LG) framework and its language formalization methodologies, aims at integrating a semantic annotation process for both query analysis and document retrieval.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,423
inproceedings
weiner-etal-2016-towards
Towards Automatic Transcription of {ILSE} {\textemdash} an Interdisciplinary Longitudinal Study of Adult Development and Aging
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1114/
Weiner, Jochen and Frankenberg, Claudia and Telaar, Dominic and Wendelstein, Britta and Schr{\"o}der, Johannes and Schultz, Tanja
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
718--725
The Interdisciplinary Longitudinal Study on Adult Development and Aging (ILSE) was created to facilitate the study of challenges posed by rapidly aging societies in developed countries such as Germany. ILSE contains over 8,000 hours of biographic interviews recorded from more than 1,000 participants over the course of 20 years. Investigations on various aspects of aging, such as cognitive decline, often rely on the analysis of linguistic features which can be derived from spoken content like these interviews. However, transcribing speech is a time- and cost-consuming manual process and so far only 380 hours of ILSE interviews have been transcribed. Thus, it is the aim of our work to establish technical systems to fully automatically transcribe the ILSE interview data. The joint occurrence of poor recording quality, long audio segments, erroneous transcriptions, varying speaking styles {\&} crosstalk, and emotional {\&} dialectal speech in these interviews presents challenges for automatic speech recognition (ASR). We describe our ongoing work towards the fully automatic transcription of all ILSE interviews and the steps we implemented in preparing the transcriptions to meet the interviews' challenges. Using a recursive long audio alignment procedure 96 hours of the transcribed data have been made accessible for ASR training.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,424
inproceedings
ajili-etal-2016-fabiole
{FABIOLE}, a Speech Database for Forensic Speaker Comparison
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1115/
Ajili, Moez and Bonastre, Jean-Fran{\c{c}}ois and Kahn, Juliette and Rossato, Solange and Bernard, Guillaume
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
726--733
A speech database has been collected to highlight the importance of the {\textquotedblleft}speaker factor{\textquotedblright} in forensic voice comparison. FABIOLE was created during the FABIOLE project funded by the French Research Agency (ANR) from 2013 to 2016. The corpus consists of more than 3,000 excerpts spoken by 130 French native male speakers. The speakers are divided into two categories: 30 target speakers, each of whom has 100 excerpts, and 100 {\textquotedblleft}impostors{\textquotedblright}, each of whom has only one excerpt. The data were collected from 10 different French radio and television shows, where each utterance turn has a minimum duration of 30 s and good speech quality. The data set is mainly used for investigating the speaker factor in forensic voice comparison and for interpreting unsolved issues such as the relationship between speaker characteristics and system behavior. In this paper, we present the FABIOLE database. Then, preliminary experiments are performed to evaluate the effect of the {\textquotedblleft}speaker factor{\textquotedblright} and of the show on a voice comparison system`s behavior.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,425
inproceedings
halabi-wald-2016-phonetic
Phonetic Inventory for an {A}rabic Speech Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1116/
Halabi, Nawar and Wald, Mike
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
734--738
Corpus design for speech synthesis is a well-researched topic in languages such as English compared to Modern Standard Arabic, and there is a tendency to focus on methods to automatically generate the orthographic transcript to be recorded (usually greedy methods). In this work, a study of Modern Standard Arabic (MSA) phonetics and phonology is conducted in order to create criteria for a greedy method to create a speech corpus transcript for recording. The size of the dataset is reduced a number of times using these optimisation methods with different parameters to yield a much smaller dataset with phonetic coverage identical to that of the original, and this output transcript is chosen for recording. This is part of a larger work to create a completely annotated and segmented speech corpus for MSA.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,426
inproceedings
burkhardt-reichel-2016-taxonomy
A Taxonomy of Specific Problem Classes in Text-to-Speech Synthesis: Comparing Commercial and Open Source Performance
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1118/
Burkhardt, Felix and Reichel, Uwe D.
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
744--749
Current state-of-the-art speech synthesizers for domain-independent systems still struggle with the challenge of generating understandable and natural-sounding speech. This is mainly because the pronunciation of words of foreign origin, inflections and compound words often cannot be handled by rules. Furthermore there are too many of these for inclusion in exception dictionaries. We describe an approach to evaluating text-to-speech synthesizers with a subjective listening experiment. The focus is to differentiate between known problem classes for speech synthesizers. The target language is German but we believe that many of the described phenomena are not language specific. We distinguish the following problem categories: Normalization, Foreign linguistics, Natural writing, Language specific and General. Each of them is divided into three to five problem classes. Word lists for each of the above mentioned categories were compiled and synthesized by both a commercial and an open source synthesizer, both being based on the non-uniform unit-selection approach. The synthesized speech was evaluated by human judges using the Speechalyzer toolkit and the results are discussed. It shows that, as expected, the commercial synthesizer performs much better than the open-source one, and especially words of foreign origin were pronounced badly by both systems.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,428
inproceedings
braunger-etal-2016-comparative
A Comparative Analysis of Crowdsourced Natural Language Corpora for Spoken Dialog Systems
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1119/
Braunger, Patricia and Hofmann, Hansj{\"o}rg and Werner, Steffen and Schmidt, Maria
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
750--755
Recent spoken dialog systems have been able to recognize freely spoken user input in restricted domains thanks to statistical methods in the automatic speech recognition. These methods require a high number of natural language utterances to train the speech recognition engine and to assess the quality of the system. Since human speech offers many variants associated with a single intent, a high number of user utterances have to be elicited. Developers are therefore turning to crowdsourcing to collect this data. This paper compares three different methods to elicit multiple utterances for given semantics via crowdsourcing, namely with pictures, with text and with semantic entities. Specifically, we compare the methods with regard to the number of valid data and linguistic variance, whereby a quantitative and qualitative approach is proposed. In our study, the method with text led to a high variance in the utterances and a relatively low rate of invalid data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,429
inproceedings
sarasola-etal-2016-singing
A Singing Voice Database in {B}asque for Statistical Singing Synthesis of Bertsolaritza
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1120/
Sarasola, Xabier and Navas, Eva and Tavarez, David and Erro, Daniel and Saratxaga, Ibon and Hernaez, Inma
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
756--759
This paper describes the characteristics and structure of a Basque singing voice database of bertsolaritza. Bertsolaritza is a popular singing style from the Basque Country, sung exclusively in Basque, that is improvised and performed a cappella. The database is designed to be used in statistical singing voice synthesis for the bertsolaritza style. Starting from the recordings and transcriptions of numerous singers, diarization and phoneme alignment experiments have been made to extract the singing voice from the recordings and create phoneme alignments. These labelling processes were performed applying standard speech processing techniques, and the results prove that these techniques can be used in this specific singing style.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,430
inproceedings
pessentheiner-etal-2016-amisco
{AMISCO}: The {A}ustrian {G}erman Multi-Sensor Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1121/
Pessentheiner, Hannes and Pichler, Thomas and Hagm{\"u}ller, Martin
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
760--766
We introduce a unique, comprehensive Austrian German multi-sensor corpus with moving and non-moving speakers to facilitate the evaluation of estimators and detectors that jointly detect a speaker`s spatial and temporal parameters. The corpus is suitable for various machine learning and signal processing tasks, linguistic studies, and studies related to a speaker`s fundamental frequency (due to recorded glottograms). Available corpora are limited to (synthetically generated/spatialized) speech data or recordings of musical instruments that lack moving speakers, glottograms, and/or multi-channel distant speech recordings. That is why we recorded 24 spatially non-moving and moving speakers, balanced male and female, to set up a two-room and 43-channel Austrian German multi-sensor speech corpus. It contains 8.2 hours of read speech based on phonetically balanced sentences, commands, and digits. The orthographic transcriptions include around 53,000 word tokens and 2,070 word types. Special features of this corpus are the laryngograph recordings (representing glottograms required to detect a speaker`s instantaneous fundamental frequency and pitch), corresponding clean-speech recordings, and spatial information and video data provided by four Kinects and a camera.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,431
inproceedings
aichinger-etal-2016-database
A Database of Laryngeal High-Speed Videos with Simultaneous High-Quality Audio Recordings of Pathological and Non-Pathological Voices
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1122/
Aichinger, Philipp and Roesner, Immer and Leonhard, Matthias and Denk-Linnert, Doris-Maria and Bigenzahn, Wolfgang and Schneider-Stickler, Berit
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
767--770
Auditory voice quality judgements are used intensively for the clinical assessment of pathological voice. Voice quality concepts are fuzzily defined and poorly standardized however, which hinders scientific and clinical communication. The described database documents a wide variety of pathologies and is used to investigate auditory voice quality concepts with regard to phonation mechanisms. The database contains 375 laryngeal high-speed videos and simultaneous high-quality audio recordings of sustained phonations of 80 pathological and 40 non-pathological subjects. Interval-wise annotations regarding video and audio quality, as well as voice quality ratings, are provided. Video quality is annotated for the visibility of anatomical structures and artefacts such as blurring or reduced contrast. Voice quality annotations include ratings on the presence of dysphonia and diplophonia. The purpose of the database is to aid the formulation of observationally well-founded models of phonation and the development of model-based automatic detectors for distinct types of phonation, especially for clinically relevant nonmodal voice phenomena. Another application is the training of audio-based fundamental frequency extractors on video-based reference fundamental frequencies.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,432
inproceedings
hateva-etal-2016-bulphonc
{B}ul{P}hon{C}: {B}ulgarian Speech Corpus for the Development of {ASR} Technology
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1123/
Hateva, Neli and Mitankin, Petar and Mihov, Stoyan
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
771--774
In this paper we introduce a Bulgarian speech database, which was created for the purpose of ASR technology development. The paper describes the design and the content of the speech database. We present also an empirical evaluation of the performance of a LVCSR system for Bulgarian trained on the BulPhonC data. The resource is available free for scientific usage.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,433
inproceedings
pinnis-etal-2016-designing
Designing a Speech Corpus for the Development and Evaluation of Dictation Systems in {L}atvian
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1124/
Pinnis, M{\={a}}rcis and Salimbajevs, Askars and Auzi{\c{n}}a, Ilze
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
775--780
In this paper the authors present a speech corpus designed and created for the development and evaluation of dictation systems in Latvian. The corpus consists of over nine hours of orthographically annotated speech from 30 different speakers. The corpus features spoken commands that are common for dictation systems for text editors. The corpus is evaluated in an automatic speech recognition scenario. Evaluation results in an ASR dictation scenario show that the addition of the corpus to the acoustic model training data, in combination with language model adaptation, makes it possible to decrease the WER by up to 41.36{\%} relative (or 16.83{\%} absolute) compared to a baseline system without language model adaptation. The contribution of acoustic data augmentation is 12.57{\%} relative (or 3.43{\%} absolute).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,434
inproceedings
proenca-etal-2016-letsread
The {L}ets{R}ead Corpus of {P}ortuguese Children Reading Aloud for Performance Evaluation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1125/
Proen{\c{c}}a, Jorge and Celorico, Dirce and Candeias, Sara and Lopes, Carla and Perdig{\~a}o, Fernando
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
781--785
This paper introduces the LetsRead Corpus of European Portuguese read speech from 6 to 10 years old children. The motivation for the creation of this corpus stems from the lack of databases with recordings of reading tasks of Portuguese children with different performance levels and including all the common reading aloud disfluencies. It is also essential to develop techniques to fulfill the main objective of the LetsRead project: to automatically evaluate the reading performance of children through the analysis of reading tasks. The collected data amounts to 20 hours of speech from 284 children from private and public Portuguese schools, with each child carrying out two tasks: reading sentences and reading a list of pseudowords, both with varying levels of difficulty throughout the school grades. In this paper, the design of the reading tasks presented to children is described, as well as the collection procedure. Manually annotated data is analyzed according to disfluencies and reading performance. The considered word difficulty parameter is also confirmed to be suitable for the pseudoword reading tasks.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,435
inproceedings
reichel-etal-2016-bas
The {BAS} Speech Data Repository
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1126/
Reichel, Uwe and Schiel, Florian and Kisler, Thomas and Draxler, Christoph and P{\"o}rner, Nina
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
786--791
The BAS CLARIN speech data repository is introduced. At the current state it comprises 31 predominantly German corpora of spoken language. It is compliant with the CLARIN-D as well as the OLAC requirements. This enables its embedding into several infrastructures. We give an overview of its structure, its implementation, as well as the corpora it contains.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,436
inproceedings
yilmaz-etal-2016-dutch
A {D}utch Dysarthric Speech Database for Individualized Speech Therapy Research
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1127/
Yilmaz, Emre and Ganzeboom, Mario and Beijer, Lilian and Cucchiarini, Catia and Strik, Helmer
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
792--795
We present a new Dutch dysarthric speech database containing utterances of neurological patients with Parkinson`s disease, traumatic brain injury and cerebrovascular accident. The speech content is phonetically and linguistically diversified by using numerous structured sentence and word lists. Containing more than 6 hours of mildly to moderately dysarthric speech, this database can be used for research on dysarthria and for developing and testing speech-to-text systems designed for medical applications. Current activities aimed at extending this database are also discussed.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,437
inproceedings
humayoun-etal-2016-urdu
{U}rdu Summary Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1128/
Humayoun, Muhammad and Nawab, Rao Muhammad Adeel and Uzair, Muhammad and Aslam, Saba and Farzand, Omer
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
796--800
Language resources, such as corpora, are important for various natural language processing tasks. Urdu has millions of speakers around the world but it is under-resourced in terms of standard evaluation resources. This paper reports the construction of a benchmark corpus for Urdu summaries (abstracts) to facilitate the development and evaluation of single document summarization systems for Urdu language. In Urdu, space does not always mark word boundary. Therefore, we created two versions of the same corpus. In the first version, words are separated by space. In contrast, proper word boundaries are manually tagged in the second version. We further apply normalization, part-of-speech tagging, morphological analysis, lemmatization, and stemming for the articles and their summaries in both versions. In order to apply these annotations, we re-implemented some NLP tools for Urdu. We provide Urdu Summary Corpus, all these annotations and the needed software tools (as open-source) for researchers to run experiments and to evaluate their work including but not limited to single-document summarization task.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,438
inproceedings
koto-2016-publicly
A Publicly Available {I}ndonesian Corpora for Automatic Abstractive and Extractive Chat Summarization
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1129/
Koto, Fajri
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
801--805
In this paper we report our effort to construct the first ever Indonesian corpora for chat summarization. Specifically, we utilized documents of multi-participant chat from a well known online instant messaging application, WhatsApp. We construct the gold standard by asking three native speakers to manually summarize 300 chat sections (152 of them contain images). As a result, three reference summaries in both extractive and abstractive form are produced for each chat section. The corpus is still in its early stage of investigation, yielding exciting possibilities for future work.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,439
inproceedings
cohan-goharian-2016-revisiting
Revisiting Summarization Evaluation for Scientific Articles
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1130/
Cohan, Arman and Goharian, Nazli
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
806--813
Evaluation of text summarization approaches has been mostly based on metrics that measure similarities of system generated summaries with a set of human written gold-standard summaries. The most widely used metric in summarization evaluation has been the ROUGE family. ROUGE solely relies on lexical overlaps between the terms and phrases in the sentences; therefore, in cases of terminology variations and paraphrasing, ROUGE is not as effective. Scientific article summarization is one such case that is different from general domain summarization (e.g. newswire data). We provide an extensive analysis of ROUGE`s effectiveness as an evaluation metric for scientific summarization; we show that, contrary to the common belief, ROUGE is not very reliable in evaluating scientific summaries. We furthermore show how different variants of ROUGE result in very different correlations with the manual Pyramid scores. Finally, we propose an alternative metric for summarization evaluation which is based on the content relevance between a system generated summary and the corresponding human written summaries. We call our metric SERA (Summarization Evaluation by Relevance Analysis). Unlike ROUGE, SERA consistently achieves high correlations with manual scores, which shows its effectiveness in evaluation of scientific article summarization.
60,440
inproceedings
kabadjov-etal-2016-onforums
The {O}n{F}orum{S} corpus from the Shared Task on Online Forum Summarisation at {M}ulti{L}ing 2015
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1131/
Kabadjov, Mijail and Kruschwitz, Udo and Poesio, Massimo and Steinberger, Josef and Valderrama, Jorge and Zaragoza, Hugo
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
814--818
In this paper we present the OnForumS corpus developed for the shared task of the same name on Online Forum Summarisation (OnForumS at MultiLing`15). The corpus consists of a set of news articles with associated readers' comments from The Guardian (English) and La Repubblica (Italian). It comes with four levels of annotation: argument structure, comment-article linking, sentiment and coreference. The former three were produced through crowdsourcing, whereas the latter was produced by an experienced annotator using a mature annotation scheme. Given its annotation breadth, we believe the corpus will prove a useful resource in stimulating and furthering research in the areas of Argumentation Mining, Summarisation, Sentiment, Coreference and the interlinks therein.
60,441
inproceedings
di-caro-boella-2016-automatic
Automatic Enrichment of {W}ord{N}et with Common-Sense Knowledge
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1132/
Di Caro, Luigi and Boella, Guido
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
819--822
WordNet represents a cornerstone in the Computational Linguistics field, linking words to meanings (or senses) through a taxonomical representation of synsets, i.e., clusters of words with an equivalent meaning in a specific context, often described by a few definitions (or glosses) and examples. Most of the approaches to the Word Sense Disambiguation task fully rely on these short texts as a source of contextual information to match with the input text to disambiguate. This paper presents the first attempt to enrich synset data with common-sense definitions, automatically retrieved from ConceptNet 5 and disambiguated according to WordNet. The aim was to exploit the shared- and immediate-thinking nature of common-sense knowledge to extend the short but incredibly useful contextual information of the synsets. A manual evaluation on a subset of the entire result (which counts a total of almost 600K synset enrichments) shows a very high precision with an estimated good recall.
60,442
inproceedings
baisa-etal-2016-vps
{VPS}-{G}rade{U}p: Graded Decisions on Usage Patterns
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1133/
Baisa, V{\'i}t and Cinkov{\'a}, Silvie and Krej{\v{c}}ov{\'a}, Ema and Vernerov{\'a}, Anna
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
823--827
We present VPS-GradeUp {\textemdash} a set of 11,400 graded human decisions on usage patterns of 29 English lexical verbs from the Pattern Dictionary of English Verbs by Patrick Hanks. The annotation contains, for each verb lemma, a batch of 50 concordances with the given lemma as KWIC, and for each of these concordances we provide a graded human decision on how well the individual PDEV patterns for this particular lemma illustrate the given concordance, indicated on a 7-point Likert scale for each PDEV pattern. With our annotation, we were pursuing a pilot investigation of the foundations of human clustering and disambiguation decisions with respect to usage patterns of verbs in context. The data set is publicly available at \url{http://hdl.handle.net/11234/1-1585}.
60,443
inproceedings
miller-etal-2016-sense
Sense-annotating a Lexical Substitution Data Set with Ubyline
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1134/
Miller, Tristan and Khemakhem, Mohamed and de Castilho, Richard Eckart and Gurevych, Iryna
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
828--835
We describe the construction of GLASS, a newly sense-annotated version of the German lexical substitution data set used at the GermEval 2015: LexSub shared task. Using the two annotation layers, we conduct the first known empirical study of the relationship between manually applied word senses and lexical substitutions. We find that synonymy and hypernymy/hyponymy are the only semantic relations directly linking targets to their substitutes, and that substitutes in the target`s hypernymy/hyponymy taxonomy closely align with the synonyms of a single GermaNet synset. Despite this, these substitutes account for a minority of those provided by the annotators. The results of our analysis accord with those of a previous study on English-language data (albeit with automatically induced word senses), leading us to suspect that the sense{\textemdash}substitution relations we discovered may be of a universal nature. We also tentatively conclude that relatively cheap lexical substitution annotations can be used as a knowledge source for automatic WSD. Also introduced in this paper is Ubyline, the web application used to produce the sense annotations. Ubyline presents an intuitive user interface optimized for annotating lexical sample data, and is readily adaptable to sense inventories other than GermaNet.
60,444
inproceedings
horbach-etal-2016-corpus
A Corpus of Literal and Idiomatic Uses of {G}erman Infinitive-Verb Compounds
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1135/
Horbach, Andrea and Hensler, Andrea and Krome, Sabine and Prange, Jakob and Scholze-Stubenrecht, Werner and Steffen, Diana and Thater, Stefan and Wellner, Christian and Pinkal, Manfred
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
836--841
We present an annotation study on a representative dataset of literal and idiomatic uses of German infinitive-verb compounds in newspaper and journal texts. Infinitive-verb compounds form a challenge for writers of German, because spelling regulations are different for literal and idiomatic uses. Through the participation of expert lexicographers we were able to obtain a high-quality corpus resource which offers itself as a testbed for automatic idiomaticity detection and coarse-grained word-sense disambiguation. We trained a classifier on the corpus which was able to distinguish literal and idiomatic uses with an accuracy of 85 {\%}.
60,445
inproceedings
pedersen-etal-2016-semdax
The {S}em{D}a{X} Corpus {\textemdash} Sense Annotations with Scalable Sense Inventories
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1136/
Pedersen, Bolette and Braasch, Anna and Johannsen, Anders and Alonso, H{\'e}ctor Mart{\'i}nez and Nimb, Sanni and Olsen, Sussi and S{\o}gaard, Anders and S{\o}rensen, Nicolai Hartvig
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
842--847
We launch the SemDaX corpus, a recently completed Danish human-annotated corpus available through a CLARIN academic license. The corpus includes approx. 90,000 words, comprises six textual domains, and is annotated with sense inventories of different granularity. The aim of the developed corpus is twofold: i) to assess the reliability of the different sense annotation schemes for Danish, measured by qualitative analyses and annotation agreement scores, and ii) to serve as training and test data for machine learning algorithms with the practical purpose of developing sense taggers for Danish. To these aims, we take a new approach to human-annotated corpus resources by double annotating a much larger part of the corpus than is normally seen: for the all-words task we double annotated 60{\%} of the material and for the lexical sample task 100{\%}. We include in the corpus not only the adjudicated files, but also the diverging annotations. In other words, we do not consider all disagreement to be noise, but rather take it to contain valuable linguistic information that can help us improve our annotation schemes and our learning algorithms.
60,446
inproceedings
cinkova-etal-2016-graded
Graded and Word-Sense-Disambiguation Decisions in Corpus Pattern Analysis: a Pilot Study
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1137/
Cinkov{\'a}, Silvie and Krej{\v{c}}ov{\'a}, Ema and Vernerov{\'a}, Anna and Baisa, V{\'i}t
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
848--854
We present a pilot analysis of a new linguistic resource, VPS-GradeUp (available at \url{http://hdl.handle.net/11234/1-1585}). The resource contains 11,400 graded human decisions on usage patterns of 29 English lexical verbs, randomly selected from the Pattern Dictionary of English Verbs (Hanks, 2000-2014) based on their frequency and the number of senses their lemmas have in PDEV. This data set has been created to observe the interannotator agreement on PDEV patterns produced using Corpus Pattern Analysis (Hanks, 2013). Apart from the graded decisions, the data set also contains traditional Word-Sense-Disambiguation (WSD) labels. We analyze the associations between the graded annotation and the WSD annotation. The results of the respective annotations do not correlate with the size of the usage pattern inventory for the respective verb lemmas, which makes the data set worth further linguistic analysis.
60,447
inproceedings
lu-etal-2016-multi-prototype
Multi-prototype {C}hinese Character Embedding
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1138/
Lu, Yanan and Zhang, Yue and Ji, Donghong
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
855--859
Chinese sentences are written as sequences of characters, which are elementary units of syntax and semantics. Characters are highly polysemous in forming words. We present a position-sensitive skip-gram model to learn multi-prototype Chinese character embeddings, and explore the usefulness of such character embeddings to Chinese NLP tasks. Evaluation on character similarity shows that multi-prototype embeddings are significantly better than a single-prototype baseline. In addition, used as features in the Chinese NER task, the embeddings result in a 1.74{\%} F-score improvement over a state-of-the-art baseline.
60,448
inproceedings
chang-etal-2016-comparison
A comparison of Named-Entity Disambiguation and Word Sense Disambiguation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1139/
Chang, Angel and Spitkovsky, Valentin I. and Manning, Christopher D. and Agirre, Eneko
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
860--867
Named Entity Disambiguation (NED) is the task of linking a named-entity mention to an instance in a knowledge-base, typically Wikipedia-derived resources like DBpedia. This task is closely related to word-sense disambiguation (WSD), where the mention of an open-class word is linked to a concept in a knowledge-base, typically WordNet. This paper analyzes the relation between two annotated datasets on NED and WSD, highlighting the commonalities and differences. We detail the methods to construct a NED system following the WSD word-expert approach, where we need a dictionary and one classifier is built for each target entity mention string. Constructing a dictionary for NED proved challenging, and although similarity and ambiguity are higher for NED, the results are also higher due to the larger amount of training data and the crisper, more skewed meaning differences.
60,449
inproceedings
villegas-etal-2016-leveraging
Leveraging {RDF} Graphs for Crossing Multiple Bilingual Dictionaries
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1140/
Villegas, Marta and Melero, Maite and Bel, N{\'u}ria and Gracia, Jorge
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
868--876
The experiments presented here exploit the properties of the Apertium RDF Graph, principally cycle density and node degree, to automatically generate new translation relations between words, and therefore to enrich existing bilingual dictionaries with new entries. Currently, the Apertium RDF Graph includes data from 22 Apertium bilingual dictionaries and constitutes a large unified array of linked lexical entries and translations that are available and accessible on the Web (\url{http://linguistic.linkeddata.es/apertium/}). In particular, its graph structure allows for interesting exploitation opportunities, some of which are addressed in this paper. Two {\textquoteleft}massive' experiments are reported: in the first one, the original EN-ES translation set was removed from the Apertium RDF Graph and a new EN-ES version was generated. The results were compared against the previously removed EN-ES data and against the Concise Oxford Spanish Dictionary. In the second experiment, a new, previously non-existent EN-FR translation set was generated. In this case the results were compared against a converted Wiktionary English-French file. The results obtained are very good, and hold even for the extreme case of correlated polysemy. This led us to consider using cycles and node degree to identify potential oddities in the source data. If cycle density proves efficient when considering potential targets, we can assume that in dense graphs nodes with low degree may indicate potential errors.
60,450
inproceedings
corcoglioniti-etal-2016-premon
{P}re{MO}n: a Lemon Extension for Exposing Predicate Models as Linked Data
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1141/
Corcoglioniti, Francesco and Rospocher, Marco and Aprosio, Alessio Palmero and Tonelli, Sara
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
877--884
We introduce PreMOn (predicate model for ontologies), a linguistic resource for exposing predicate models (PropBank, NomBank, VerbNet, and FrameNet) and mappings between them (e.g., SemLink) as Linked Open Data. It consists of two components: (i) the PreMOn Ontology, an extension of the lemon model by the W3C Ontology-Lexica Community Group that enables homogeneous representation of data from the various predicate models; and (ii) the PreMOn Dataset, a collection of RDF datasets integrating various versions of the aforementioned predicate models and mapping resources. PreMOn is freely available and accessible online in different ways, including through a dedicated SPARQL endpoint.
60,451
inproceedings
chalub-etal-2016-semantic
Semantic Links for {P}ortuguese
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1142/
Chalub, Fabricio and Real, Livy and Rademaker, Alexandre and de Paiva, Valeria
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
885--891
This paper describes work on incorporating Princeton`s WordNet morphosemantic links into the fabric of the Portuguese OpenWordNet-PT. Morphosemantic links are relations between verbs and derivationally related nouns that are semantically typed (such as for tune-tuner {\textemdash} in Portuguese {\textquotedblleft}afinar-afinador{\textquotedblright} {--} linked through an {\textquotedblleft}agent{\textquotedblright} link). Morphosemantic links have been discussed for Princeton`s WordNet for a while, but have not been added to the official database. These links are very useful and help us to improve our Portuguese WordNet. We therefore discuss the integration of these links into our base and the issues we encountered during the integration.
60,452
inproceedings
klimek-etal-2016-creating
Creating Linked Data Morphological Language Resources with {MM}o{O}n - The {H}ebrew Morpheme Inventory
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1143/
Klimek, Bettina and Arndt, Natanael and Krause, Sebastian and Arndt, Timotheus
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
892--899
The development of standard models for describing general lexical resources has led to the emergence of numerous lexical datasets of various languages in the Semantic Web. However, equivalent models covering the linguistic domain of morphology do not exist. As a result, there are hardly any language resources of morphemic data available in RDF to date. This paper presents the creation of the Hebrew Morpheme Inventory from a manually compiled tabular dataset comprising around 52.000 entries. It is an ongoing effort to represent the lexemes, word-forms and morphological patterns together with their underlying relations based on the newly created Multilingual Morpheme Ontology (MMoOn). It will be shown how segmented Hebrew language data can be granularly described in a Linked Data format, thus serving as an exemplary case for creating morpheme inventories of any inflectional language with MMoOn. The resulting dataset is described: a) according to the structure of the underlying data format, b) with respect to the Hebrew language characteristic of building word-forms directly from roots, c) by exemplifying how inflectional information is realized, and d) with regard to its enrichment with external links to sense resources.
60,453
inproceedings
prokopidis-etal-2016-parallel
Parallel {G}lobal {V}oices: a Collection of Multilingual Corpora with Citizen Media Stories
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1144/
Prokopidis, Prokopis and Papavassiliou, Vassilis and Piperidis, Stelios
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
900--905
We present a new collection of multilingual corpora automatically created from the content available in the Global Voices websites, where volunteers have been posting and translating citizen media stories since 2004. We describe how we crawled and processed this content to generate parallel resources comprising 302.6K document pairs and 8.36M segment alignments in 756 language pairs. For some language pairs, the segment alignments in this resource are the first open examples of their kind. In an initial use of this resource, we discuss how a set of document pair detection algorithms performs on the Greek-English corpus.
60,454
inproceedings
li-etal-2016-large
Large Multi-lingual, Multi-level and Multi-genre Annotation Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1145/
Li, Xuansong and Palmer, Martha and Xue, Nianwen and Ramshaw, Lance and Maamouri, Mohamed and Bies, Ann and Conger, Kathryn and Grimes, Stephen and Strassel, Stephanie
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
906--913
High accuracy for automated translation and information retrieval calls for linguistic annotations at various language levels. The plethora of informal internet content sparked the demand for porting state-of-the-art natural language processing (NLP) applications to new social media, as well as for diverse language adaptation. The effort launched by the BOLT (Broad Operational Language Translation) program at DARPA (Defense Advanced Research Projects Agency) successfully addressed internet content with enhanced NLP systems. BOLT aims for automated translation and linguistic analysis for informal genres of text and speech in online and in-person communication. As part of this program, the Linguistic Data Consortium (LDC) developed valuable linguistic resources in support of the training and evaluation of such new technologies. This paper focuses on the methodologies, infrastructure, and procedures for developing linguistic annotation at various language levels, including Treebank (TB), word alignment (WA), PropBank (PB), and co-reference (CoRef). Inspired by the OntoNotes approach, with adaptations reflecting the goals and scope of the BOLT project, this effort has introduced more annotation types for informal and free-style genres in English, Chinese and Egyptian Arabic. The corpus produced is by far the largest multi-lingual, multi-level and multi-genre annotation corpus of informal text and speech.
60,455
inproceedings
habernal-etal-2016-c4corpus
{C}4{C}orpus: Multilingual Web-size Corpus with Free License
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1146/
Habernal, Ivan and Zayed, Omnia and Gurevych, Iryna
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
914--922
Large Web corpora containing full documents with permissive licenses are crucial for many NLP tasks. In this article we present the construction of a 12-million-page Web corpus (over 10 billion tokens) licensed under the CreativeCommons license family in 50+ languages that has been extracted from CommonCrawl, the largest publicly available general Web crawl to date, with about 2 billion crawled URLs. Our highly scalable Hadoop-based framework is able to process the full CommonCrawl corpus on a 2000+ CPU cluster on the Amazon Elastic Map/Reduce infrastructure. The processing pipeline includes license identification, state-of-the-art boilerplate removal, exact duplicate and near-duplicate document removal, and language detection. The construction of the corpus is highly configurable and fully reproducible, and we provide both the framework (DKPro C4CorpusTools) and the resulting data (C4Corpus) to the research community.
60,456
inproceedings
lison-tiedemann-2016-opensubtitles2016
{O}pen{S}ubtitles2016: Extracting Large Parallel Corpora from Movie and {TV} Subtitles
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1147/
Lison, Pierre and Tiedemann, J{\"o}rg
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
923--929
We present a new major release of the OpenSubtitles collection of parallel corpora. The release is compiled from a large database of movie and TV subtitles and includes a total of 1689 bitexts spanning 2.6 billion sentences across 60 languages. The release also incorporates a number of enhancements in the preprocessing and alignment of the subtitles, such as the automatic correction of OCR errors and the use of meta-data to estimate the quality of each subtitle and score subtitle pairs.
60,457
inproceedings
rambelli-etal-2016-lexfr
{L}ex{F}r: Adapting the {L}ex{I}t Framework to Build a Corpus-based {F}rench Subcategorization Lexicon
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1148/
Rambelli, Giulia and Lebani, Gianluca and Pr{\'e}vot, Laurent and Lenci, Alessandro
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
930--937
This paper introduces LexFr, a corpus-based French lexical resource built by adapting the framework LexIt, originally developed to describe the combinatorial potential of Italian predicates. As in the original framework, the behavior of a group of target predicates is characterized by a series of syntactic (i.e., subcategorization frames) and semantic (i.e., selectional preferences) statistical information (a.k.a. distributional profiles) whose extraction process is mostly unsupervised. The first release of LexFr includes information for 2,493 verbs, 7,939 nouns and 2,628 adjectives. In these pages we describe the adaptation process and evaluate the final resource by comparing the information collected for 20 test verbs against the information available in a gold standard dictionary. In the best performing setting, we obtained 0.74 precision, 0.66 recall and 0.70 F-measure.
60,458
inproceedings
vicente-saralegi-2016-polarity
Polarity Lexicon Building: to what Extent Is the Manual Effort Worth?
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1149/
Vicente, I{\~n}aki San and Saralegi, Xabier
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
938--942
Polarity lexicons are a basic resource for analyzing the sentiments and opinions expressed in texts in an automated way. This paper explores three methods to construct polarity lexicons: translating existing lexicons from other languages, extracting polarity lexicons from corpora, and annotating sentiments in Lexical Knowledge Bases. Each of these methods requires a different degree of human effort. We evaluate how much manual effort is needed and to what extent that effort pays off in terms of performance improvement. The experimental setup includes generating lexicons for Basque and evaluating them against gold standard datasets in different domains. Results show that extracting polarity lexicons from corpora is the best solution for achieving a good performance with reasonable human effort.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,459
inproceedings
nahli-etal-2016-al
Al Qamus al Muhit, a Medieval {A}rabic Lexicon in {LMF}
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1150/
Nahli, Ouafae and Frontini, Francesca and Monachini, Monica and Khan, Fahad and Zarghili, Arsalan and Khalfi, Mustapha
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
943--950
This paper describes the conversion into LMF, a standard lexicographic digital format, of {\textquoteleft}al-q{\={a}}m{\={u}}s al-muḥ{\={i}}ṭ, a Medieval Arabic lexicon. The lexicon is first described, then all the steps required for the conversion are illustrated. The work will produce a useful lexicographic resource for Arabic NLP, but is also interesting per se, to study the implications of adapting the LMF model to the Arabic language. Some reflections are offered as to the status of roots with respect to previously suggested representations. In particular, roots should, in our opinion, not be treated as lexical entries, but modeled as lexical metadata for classifying and identifying lexical entries. In this manner, each root connects all entries that are derived from it.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,460
inproceedings
baeza-yates-etal-2016-cassaurus
{CASSA}urus: A Resource of Simpler {S}panish Synonyms
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1151/
Baeza-Yates, Ricardo and Rello, Luz and Dembowski, Julia
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
951--955
In this work we introduce and describe a language resource composed of lists of simpler synonyms for Spanish. The synonyms are divided into different senses taken from the Spanish OpenThesaurus, where context disambiguation was performed by using statistical information from the Web and Google Books Ngrams. This resource is freely available online and can be used for different NLP tasks such as lexical simplification. Indeed, it has already been integrated into four tools.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,461
inproceedings
kettunen-paakkonen-2016-measuring
Measuring Lexical Quality of a Historical {F}innish Newspaper Collection {\textemdash} Analysis of Garbled {OCR} Data with Basic Language Technology Tools and Means
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1152/
Kettunen, Kimmo and P{\"a}{\"a}kk{\"o}nen, Tuula
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
956--961
The National Library of Finland has digitized a large proportion of the historical newspapers published in Finland between 1771 and 1910 (Bremer-Laamanen 2001). This collection contains approximately 1.95 million pages in Finnish and Swedish. The Finnish part of the collection consists of about 2.39 billion words. The National Library`s Digital Collections are offered via the digi.kansalliskirjasto.fi web service, also known as Digi. Part of this material is also freely downloadable from The Language Bank of Finland provided by the FIN-CLARIN consortium. The collection can also be accessed through the Korp environment that has been developed by Spr{\aa}kbanken at the University of Gothenburg and extended by the FIN-CLARIN team at the University of Helsinki to provide concordances of text resources. A Cranfield-style information retrieval test collection has been produced out of a small part of the Digi newspaper material at the University of Tampere (J{\"a}rvelin et al., 2015). The quality of the OCRed collections is an important topic in digital humanities, as it affects the general usability and searchability of collections. There is no single available method to assess the quality of large collections, but different methods can be used to approximate it. This paper discusses different corpus-analysis-style methods to approximate the overall lexical quality of the Finnish part of the Digi collection.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,462
inproceedings
afli-etal-2016-using
Using {SMT} for {OCR} Error Correction of Historical Texts
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1153/
Afli, Haithem and Qiu, Zhengwei and Way, Andy and Sheridan, P{\'a}raic
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
962--966
A trend to digitize historical paper-based archives has emerged in recent years, with the advent of digital optical scanners. A lot of paper-based books, textbooks, magazines, articles, and documents are being transformed into electronic versions that can be manipulated by a computer. For this purpose, Optical Character Recognition (OCR) systems have been developed to transform scanned digital text into editable computer text. However, different kinds of errors can be found in the OCR system output text, and Automatic Error Correction tools can help improve the quality of electronic texts by cleaning them and removing noise. In this paper, we perform a qualitative and quantitative comparison of several error-correction techniques for historical French documents. Experimentation shows that our Machine Translation for Error Correction method is superior to other Language Modelling correction techniques, with nearly 13{\%} relative improvement compared to the initial baseline.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,463
inproceedings
reynaert-2016-ocr
{OCR} Post-Correction Evaluation of Early {D}utch Books Online - Revisited
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1154/
Reynaert, Martin
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
967--974
We present further work on evaluation of the fully automatic post-correction of Early Dutch Books Online, a collection of 10,333 18th century books. In prior work we evaluated the new implementation of Text-Induced Corpus Clean-up (TICCL) on the basis of a single book Gold Standard derived from this collection. In the current paper we revisit the same collection on the basis of a sizeable 1020 item random sample of OCR post-corrected strings from the full collection. Both evaluations have their own stories to tell and lessons to teach.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,464
inproceedings
clematide-etal-2016-crowdsourcing
Crowdsourcing an {OCR} Gold Standard for a {G}erman and {F}rench Heritage Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1155/
Clematide, Simon and Furrer, Lenz and Volk, Martin
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
975--982
Crowdsourcing approaches for post-correction of OCR output (Optical Character Recognition) have been successfully applied to several historic text collections. We report on our crowd-correction platform Kokos, which we built to improve the OCR quality of the digitized yearbooks of the Swiss Alpine Club (SAC) from the 19th century. This multilingual heritage corpus consists of Alpine texts mainly written in German and French, all typeset in Antiqua font. Finding and engaging volunteers for correcting large amounts of pages into high quality text requires a carefully designed user interface, an easy-to-use workflow, and continuous efforts for keeping the participants motivated. More than 180,000 characters on about 21,000 pages were corrected by volunteers in about 7 months, achieving an OCR gold standard with a systematically evaluated accuracy of 99.7{\%} on the word level. The crowdsourced OCR gold standard and the corresponding original OCR recognition results from Abby FineReader 7 for each page are available as a resource. Additionally, the scanned images (300dpi) of all pages are included in order to facilitate tests with other OCR software.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,465
inproceedings
saint-dizier-2016-argument
Argument Mining: the Bottleneck of Knowledge and Language Resources
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1156/
Saint-Dizier, Patrick
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
983--990
Given a controversial issue, argument mining from natural language texts (news papers, and any form of text on the Internet) is extremely challenging: domain knowledge is often required together with appropriate forms of inferences to identify arguments. This contribution explores the types of knowledge that are required and how they can be paired with reasoning schemes, language processing and language resources to accurately mine arguments. We show via corpus analysis that the Generative Lexicon, enhanced in different manners and viewed as both a lexicon and a domain knowledge representation, is a relevant approach. In this paper, corpus annotation for argument mining is first developed, then we show how the generative lexicon approach must be adapted and how it can be paired with language processing patterns to extract and specify the nature of arguments. Our approach to argument mining is thus knowledge driven.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,466
inproceedings
lapshinova-koltunski-etal-2016-interoperable
From Interoperable Annotations towards Interoperable Resources: A Multilingual Approach to the Analysis of Discourse
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1157/
Lapshinova-Koltunski, Ekaterina and Kunz, Kerstin Anna and Nedoluzhko, Anna
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
991--997
In the present paper, we analyse variation of discourse phenomena in two typologically different languages, i.e. in German and Czech. The novelty of our approach lies in the nature of the resources we are using. Advantage is taken of existing resources, which are, however, annotated on the basis of two different frameworks. We use an interoperable scheme unifying discourse phenomena in both frameworks into more abstract categories and considering only those phenomena that have a direct match in German and Czech. The discourse properties we focus on are relations of identity, semantic similarity, ellipsis and discourse relations. Our study shows that the application of interoperable schemes allows an exploitation of discourse-related phenomena analysed in different projects and on the basis of different frameworks. As corpus compilation and annotation is a time-consuming task, positive results of this experiment open up new paths for contrastive linguistics, translation studies and NLP, including machine translation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,467
inproceedings
van-den-heuvel-oostdijk-2016-falling
Falling silent, lost for words ... Tracing personal involvement in interviews with {D}utch war veterans
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1158/
van den Heuvel, Henk and Oostdijk, Nelleke
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
998--1001
In sources used in oral history research (such as interviews with eye witnesses), passages where the degree of personal emotional involvement is found to be high can be of particular interest, as these may give insight into how historical events were experienced, and what moral dilemmas and psychological or religious struggles were encountered. In a pilot study involving a large corpus of interview recordings with Dutch war veterans, we have investigated if it is possible to develop a method for automatically identifying those passages where the degree of personal emotional involvement is high. The method is based on the automatic detection of exceptionally large silences and filled pause segments (using Automatic Speech Recognition), and cues taken from specific n-grams. The first results appear to be encouraging enough for further elaboration of the method.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,468
inproceedings
liu-etal-2016-bilingual
A Bilingual Discourse Corpus and Its Applications
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1159/
Liu, Yang and Zhang, Jiajun and Zong, Chengqing and Yang, Yating and Zhou, Xi
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1002--1007
Existing discourse research focuses only on monolingual languages, and the inconsistency between languages limits the power of discourse theory in multilingual applications such as machine translation. To address this issue, we design and build a bilingual discourse corpus in which we are currently defining and annotating the bilingual elementary discourse units (BEDUs). The BEDUs are then organized into hierarchical structures. Using this discourse style, we have annotated nearly 20K LDC sentences. Finally, we design a bilingual discourse based method for machine translation evaluation and show the effectiveness of our bilingual discourse annotations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,469
inproceedings
scheffler-stede-2016-adding
Adding Semantic Relations to a Large-Coverage Connective Lexicon of {G}erman
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1160/
Scheffler, Tatjana and Stede, Manfred
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1008--1013
DiMLex is a lexicon of German connectives that can be used for various language understanding purposes. We enhanced the coverage to 275 connectives, which we regard as covering all known German discourse connectives in current use. In this paper, we consider the task of adding the semantic relations that can be expressed by each connective. After discussing different approaches to retrieving semantic information, we settle on annotating each connective with senses from the new PDTB 3.0 sense hierarchy. We describe our new implementation in the extended DiMLex, which will be available for research purposes.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,470
inproceedings
janier-reed-2016-corpus
Corpus Resources for Dispute Mediation Discourse
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1161/
Janier, Mathilde and Reed, Chris
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1014--1021
Dispute mediation is a growing activity in the resolution of conflicts, and more and more research emerge to enhance and better understand this (until recently) understudied practice. Corpus analyses are necessary to study discourse in this context; yet, little data is available, mainly because of its confidentiality principle. After proposing hints and avenues to acquire transcripts of mediation sessions, this paper presents the Dispute Mediation Corpus, which gathers annotated excerpts of mediation dialogues. Although developed as part of a project on argumentation, it is freely available and the text data can be used by anyone. This first-ever open corpus of mediation interactions can be of interest to scholars studying discourse, but also conflict resolution, argumentation, linguistics, communication, etc. We advocate for using and extending this resource that may be valuable to a large variety of domains of research, particularly those striving to enhance the study of the rapidly growing activity of dispute mediation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,471
inproceedings
valmaseda-etal-2016-tagged
A Tagged Corpus for Automatic Labeling of Disabilities in Medical Scientific Papers
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1162/
Valmaseda, Carlos and Martinez-Romo, Juan and Araujo, Lourdes
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1022--1025
This paper presents the creation of a corpus of labeled disabilities in scientific papers. The identification of medical concepts in documents and, especially, the identification of disabilities, is a complex task mainly due to the variety of expressions that can make reference to the same problem. Currently there is no set of documents manually annotated with disabilities with which to evaluate an automatic detection system for such concepts. This is the reason why this corpus arises, aiming to facilitate the evaluation of systems that implement an automatic annotation tool for extracting biomedical concepts such as disabilities. The result is a set of manually annotated scientific papers. To select these scientific papers, a search was conducted using a list of rare diseases, since these generally have several associated disabilities of different kinds.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,472
inproceedings
lukin-etal-2016-personabank
{P}ersona{B}ank: A Corpus of Personal Narratives and Their Story Intention Graphs
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1163/
Lukin, Stephanie and Bowden, Kevin and Barackman, Casey and Walker, Marilyn
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1026--1033
We present a new corpus, PersonaBank, consisting of 108 personal stories from weblogs that have been annotated with their Story Intention Graphs, a deep representation of the content of a story. We describe the topics of the stories and the basis of the Story Intention Graph representation, as well as the process of annotating the stories to produce the Story Intention Graphs and the challenges of adapting the tool to this new personal narrative domain. We also discuss how the corpus can be used in applications that retell the story using different styles of tellings, co-tellings, or as a content planner.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,473
inproceedings
chen-etal-2016-fine
Fine-Grained {C}hinese Discourse Relation Labelling
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1164/
Chen, Huan-Yuan and Liao, Wan-Shan and Huang, Hen-Hsen and Chen, Hsin-Hsi
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1034--1038
This paper explores several aspects together for a fine-grained Chinese discourse analysis. We deal with the issues of ambiguous discourse markers, ambiguous marker linkings, and more than one discourse marker. A universal feature representation is proposed. The pair-once postulation, cross-discourse-unit-first rule and word-pair-marker-first rule select a set of discourse markers from ambiguous linkings. Marker-Sum feature considers total contribution of markers and Marker-Preference feature captures the probability distribution of discourse functions of a representative marker by using preference rule. The HIT Chinese discourse relation treebank (HIT-CDTB) is used to evaluate the proposed models. The 25-way classifier achieves 0.57 micro-averaged F-score.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,474
inproceedings
rehbein-etal-2016-annotating
Annotating Discourse Relations in Spoken Language: A Comparison of the {PDTB} and {CCR} Frameworks
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1165/
Rehbein, Ines and Scholman, Merel and Demberg, Vera
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1039--1046
In discourse relation annotation, there is currently a variety of different frameworks being used, and most of them have been developed and employed mostly on written data. This raises a number of questions regarding interoperability of discourse relation annotation schemes, as well as regarding differences in discourse annotation for written vs. spoken domains. In this paper, we describe our experiences annotating two spoken domains from the SPICE Ireland corpus (telephone conversations and broadcast interviews) according to two different discourse annotation schemes, PDTB 3.0 and CCR. We show that annotations in the two schemes can largely be mapped onto one another, and discuss differences in operationalisations of discourse relation schemes which present a challenge to automatic mapping. We also observe systematic differences in the prevalence of implicit discourse relations in spoken data compared to written texts, and find that there are also differences in the types of causal relations between the domains. Finally, we find that PDTB 3.0 addresses many shortcomings of PDTB 2.0 wrt. the annotation of spoken discourse, and suggest further extensions. The new corpus has roughly the size of the CoNLL 2015 Shared Task test set, and we hence hope that it will be a valuable resource for the evaluation of automatic discourse relation labellers.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,475
inproceedings
lailler-etal-2016-enhancing
Enhancing The {RATP}-{DECODA} Corpus With Linguistic Annotations For Performing A Large Range Of {NLP} Tasks
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1166/
Lailler, Carole and Landeau, Ana{\"i}s and B{\'e}chet, Fr{\'e}d{\'e}ric and Est{\`e}ve, Yannick and Del{\'e}glise, Paul
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1047--1050
In this article, we present the RATP-DECODA Corpus, which is composed of 67 hours of speech from telephone conversations of a Customer Care Service (CCS). This corpus is already available online at \url{http://sldr.org/sldr000847/fr} in its first version. However, many enhancements have been made in order to allow the development of automatic techniques to transcribe conversations and to capture their meaning. These enhancements fall into two categories: firstly, we have increased the size of the corpus with manual transcriptions from a new operational day; secondly, we have added new linguistic annotations to the whole corpus (either manually or through automatic processing) in order to perform various linguistic tasks, from syntactic and semantic parsing to dialog act tagging and dialog summarization.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,476
inproceedings
stede-etal-2016-parallel
Parallel Discourse Annotations on a Corpus of Short Texts
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1167/
Stede, Manfred and Afantenos, Stergos and Peldszus, Andreas and Asher, Nicholas and Perret, J{\'e}r{\'e}my
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1051--1058
We present the first corpus of texts annotated with two alternative approaches to discourse structure, Rhetorical Structure Theory (Mann and Thompson, 1988) and Segmented Discourse Representation Theory (Asher and Lascarides, 2003). 112 short argumentative texts have been analyzed according to these two theories. Furthermore, in previous work, the same texts have already been annotated for their argumentation structure, according to the scheme of Peldszus and Stede (2013). This corpus therefore enables studies of correlations between the two accounts of discourse structure, and between discourse and argumentation. We converted the three annotation formats to a common dependency tree format that enables to compare the structures, and we describe some initial findings.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,477
inproceedings
lee-yeung-2016-annotated
An Annotated Corpus of Direct Speech
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1168/
Lee, John and Yeung, Chak Yan
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1059--1063
We propose a scheme for annotating direct speech in literary texts, based on the Text Encoding Initiative (TEI) and the coreference annotation guidelines from the Message Understanding Conference (MUC). The scheme encodes the speakers and listeners of utterances in a text, as well as the quotative verbs that report the utterances. We measure inter-annotator agreement on this annotation task. We then present statistics on a manually annotated corpus that consists of books from the New Testament. Finally, we visualize the corpus as a conversational network.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,478
inproceedings
etxeberria-etal-2016-evaluating
Evaluating the Noisy Channel Model for the Normalization of Historical Texts: {B}asque, {S}panish and {S}lovene
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1169/
Etxeberria, Izaskun and Alegria, I{\~n}aki and Uria, Larraitz and Hulden, Mans
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1064--1069
This paper presents a method for the normalization of historical texts using a combination of weighted finite-state transducers and language models. We have extended our previous work on the normalization of dialectal texts and tested the method against a 17th century literary work in Basque. This preprocessed corpus is made available in the LREC repository. The performance of this method for learning relations between historical and contemporary word forms is evaluated against resources in three languages. The method we present learns to map phonological changes using a noisy channel model. The model is based on techniques commonly used for phonological inference and producing Grapheme-to-Grapheme conversion systems encoded as weighted transducers and produces F-scores above 80{\%} in the task for Basque. A wider evaluation shows that the approach performs equally well with all the languages in our evaluation suite: Basque, Spanish and Slovene. A comparison against other methods that address the same task is also provided.
inproceedings
darwish-mubarak-2016-farasa
{F}arasa: A New Fast and Accurate {A}rabic Word Segmenter
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1170/
Darwish, Kareem and Mubarak, Hamdy
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1070--1074
In this paper, we present Farasa (meaning insight in Arabic), a fast and accurate Arabic segmenter. Segmentation involves breaking Arabic words into their constituent clitics. Our approach is based on SVMrank with linear kernels. The features we use account for: the likelihood of stems, prefixes, suffixes, and their combinations; presence in lexicons of valid stems and named entities; and underlying stem templates. Farasa matches or outperforms state-of-the-art Arabic segmenters, namely QATARA and MADAMIRA, while being nearly one order of magnitude faster than QATARA and two orders of magnitude faster than MADAMIRA. The segmenter should be able to process one billion words in less than 5 hours. Farasa is written entirely in native Java, has no external dependencies, and is open-source.
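The core of clitic segmentation can be illustrated as enumerating prefix/stem/suffix splits of a word and scoring them. The sketch below uses a toy stem-likelihood table and Buckwalter-style transliteration ("wktAbhm" ≈ "and their book"); the affix lists, scores, and the word itself are invented, and Farasa's actual ranker is a trained SVMrank model over many features, not a single stem lookup.

```python
# Hypothetical clitic inventories and stem log-likelihoods.
PREFIXES = {"", "w", "Al", "wAl"}
SUFFIXES = {"", "h", "hm"}
STEM_LOGPROB = {"ktAb": -2.0, "ktAbhm": -9.0, "wktAb": -9.5}

def candidates(word):
    """Yield every (prefix, stem, suffix) split licensed by the inventories."""
    for i in range(len(word) + 1):
        for j in range(i, len(word) + 1):
            pre, stem, suf = word[:i], word[i:j], word[j:]
            if pre in PREFIXES and suf in SUFFIXES and stem:
                yield pre, stem, suf

def segment(word):
    """Rank candidates by stem likelihood (a stand-in for the SVM score)."""
    return max(candidates(word), key=lambda c: STEM_LOGPROB.get(c[1], -20.0))

print(segment("wktAbhm"))  # -> ('w', 'ktAb', 'hm')
```

The frequent stem "ktAb" beats the unsegmented or partially segmented alternatives, yielding the conjunction + stem + pronominal-suffix analysis.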
inproceedings
bick-2016-morphological
A Morphological Lexicon of {E}speranto with Morpheme Frequencies
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1171/
Bick, Eckhard
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1075--1078
This paper discusses the internal structure of complex Esperanto words (CWs). Using a morphological analyzer, possible affixation and compounding are checked for over 50,000 Esperanto lexemes against a list of 17,000 root words. Morpheme boundaries in the resulting analyses were then checked manually, creating a CW dictionary of 28,000 words, representing 56.4{\%} of the lexicon, or 19.4{\%} of corpus tokens. The error percentage of the EspGram morphological analyzer for new corpus CWs was 4.3{\%} for types and 6.4{\%} for tokens, with a recall of almost 100{\%}; wrong or spurious boundaries were more common than missing ones. For pedagogical purposes, a morpheme frequency dictionary was constructed for a 16 million word corpus, confirming the importance of agglutinative derivational morphemes in the Esperanto lexicon. Finally, as a means to reduce the morphological ambiguity of CWs, we provide POS likelihoods for Esperanto suffixes.
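Because Esperanto morphology is agglutinative, the boundary-finding step the paper describes amounts to segmenting a word exhaustively against a morpheme list. A minimal recursive sketch, with a tiny invented morpheme set (not the 17,000-root lexicon or the EspGram analyzer):

```python
# Toy morpheme inventory: prefix mal-, roots san-/ul-/ej-, ending -o.
MORPHEMES = {"mal", "san", "ul", "ej", "o"}

def decompose(word, parts=()):
    """Yield every segmentation of word into known morphemes."""
    if not word:
        yield parts
    for i in range(1, len(word) + 1):
        if word[:i] in MORPHEMES:
            yield from decompose(word[i:], parts + (word[:i],))

# "malsanulejo" = mal+san+ul+ej+o ("hospital": place for un-healthy persons)
print(list(decompose("malsanulejo")))
```

With a realistically sized inventory many words get several segmentations, which is exactly why the paper's manual boundary check and suffix POS likelihoods are needed.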
inproceedings
liu-wang-2016-dictionary
How does Dictionary Size Influence Performance of {V}ietnamese Word Segmentation?
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1172/
Liu, Wuying and Wang, Lin
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1079--1083
Vietnamese word segmentation (VWS) is a challenging basic task for natural language processing. This paper addresses the question of how dictionary size influences VWS performance, proposes two novel measures, the square overlap ratio (SOR) and the relaxed square overlap ratio (RSOR), and validates their effectiveness. The SOR measure is the product of the dictionary overlap ratio and the corpus overlap ratio, and the RSOR measure is its relaxed version for the unsupervised setting. Both measures indicate how well a segmentation dictionary suits the corpus to be segmented. The experimental results show that a suitably sized dictionary, neither too small nor too large, is best for achieving state-of-the-art performance with dictionary-based Vietnamese word segmenters.
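The SOR definition quoted above (product of two overlap ratios) can be sketched directly. One plausible reading, assumed here, is that the two factors are the fraction of dictionary entries attested in the corpus vocabulary and the fraction of corpus vocabulary covered by the dictionary; the paper's exact definitions may differ, and the example words are invented.

```python
def sor(dictionary, corpus_vocab):
    """Square overlap ratio: dictionary overlap ratio x corpus overlap ratio
    (one interpretation of the measure; sets of word types assumed)."""
    shared = dictionary & corpus_vocab
    dict_overlap = len(shared) / len(dictionary)
    corpus_overlap = len(shared) / len(corpus_vocab)
    return dict_overlap * corpus_overlap

d = {"con", "meo", "con meo", "nha"}   # segmentation dictionary
v = {"con", "meo", "an", "ca"}         # vocabulary of the target corpus
print(round(sor(d, v), 3))             # -> 0.25
```

Enlarging the dictionary with entries absent from the corpus shrinks the first factor, while a too-small dictionary shrinks the second, which matches the paper's "neither too small nor too large" finding.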
inproceedings
hathout-namer-2016-giving
Giving Lexical Resources a Second Life: D{\'e}monette, a Multi-sourced Morpho-semantic Network for {F}rench
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1173/
Hathout, Nabil and Namer, Fiammetta
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1084--1091
D{\'e}monette is a derivational morphological network designed for the description of French. Its original architecture enables its use both as a formal framework for the description of morphological analyses and as a repository for existing lexicons. It is fed by a variety of resources, all of which have already been validated. Harmonizing their content into a unified format gives them a second life, in which they are enriched with new properties, provided these are deducible from their contents. D{\'e}monette is released under a Creative Commons license. It is usable for theoretical and descriptive research in morphology and as a source of experimental material for psycholinguistics, natural language processing (NLP), and information retrieval (IR), where it fills a gap, since French has lacked a large-coverage derivational resource. The article presents the integration of two existing lexicons into D{\'e}monette. The first is Verbaction, a lexicon of deverbal action nouns. The second is Lexeur, a database of agent nouns in -eur derived from verbs or from nouns.
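Such a network can be pictured as labeled derivational edges between lexemes, with each source lexicon contributing one relation type. The miniature below is purely illustrative: the French pairs and relation labels are assumptions, not Démonette's actual data model.

```python
# Hypothetical miniature of a derivational network: (base, derived, relation).
EDGES = [
    ("laver", "lavage", "action-noun"),  # a Verbaction-style verb/noun pair
    ("laver", "laveur", "agent-noun"),   # a Lexeur-style -eur agent noun
]

def derivatives(base):
    """All lexemes derived from base, with their relation labels."""
    return {(derived, rel) for b, derived, rel in EDGES if b == base}

print(sorted(derivatives("laver")))
```

Merging several lexicons into one edge set like this is what lets new properties be deduced, e.g. linking "lavage" and "laveur" as co-derivatives of the same base.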