Column              Type            Stats
------              ----            -----
entry_type          stringclasses   4 values
citation_key        stringlengths   10 to 110 chars
title               stringlengths   6 to 276 chars
editor              stringclasses   723 values
month               stringclasses   69 values
year                stringdate      1963-01-01 to 2022-01-01
address             stringclasses   202 values
publisher           stringclasses   41 values
url                 stringlengths   34 to 62 chars
author              stringlengths   6 to 2.07k chars
booktitle           stringclasses   861 values
pages               stringlengths   1 to 12 chars
abstract            stringlengths   302 to 2.4k chars
journal             stringclasses   5 values
volume              stringclasses   24 values
doi                 stringlengths   20 to 39 chars
n                   stringclasses   3 values
wer                 stringclasses   1 value
uas                 null
language            stringclasses   3 values
isbn                stringclasses   34 values
recall              null
number              stringclasses   8 values
a                   null
b                   null
c                   null
k                   null
f1                  stringclasses   4 values
r                   stringclasses   2 values
mci                 stringclasses   1 value
p                   stringclasses   2 values
sd                  stringclasses   1 value
female              stringclasses   0 values
m                   stringclasses   0 values
food                stringclasses   1 value
f                   stringclasses   1 value
note                stringclasses   20 values
__index_level_0__   int64           22k to 106k
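The rows below are flattened BibTeX records with the columns listed above, where empty columns are null. Assuming each row is available as a Python dict keyed by these column names, a minimal sketch of rendering a row back into a BibTeX entry (the `to_bibtex` helper, the field order, and the example record shape are illustrative, not part of the dataset):

```python
# Sketch: rebuild a BibTeX entry from one row of this dataset,
# assuming each row is a dict keyed by the column names above.
# Null (empty) columns are simply omitted from the output.

BIBTEX_FIELDS = [  # fields emitted in this order when present
    "title", "author", "editor", "booktitle", "journal", "volume",
    "number", "month", "year", "address", "publisher", "url",
    "doi", "isbn", "pages", "abstract", "note",
]

def to_bibtex(record: dict) -> str:
    """Render one row as a BibTeX entry, skipping null fields."""
    lines = [f"@{record['entry_type']}{{{record['citation_key']},"]
    for field in BIBTEX_FIELDS:
        value = record.get(field)
        if value is None:
            continue
        # `month` is conventionally an unquoted BibTeX macro (may, oct, ...)
        if field == "month":
            lines.append(f"    month = {value},")
        else:
            lines.append(f'    {field} = "{value}",')
    lines.append("}")
    return "\n".join(lines)

# Hypothetical row, shaped like the first record below:
example = {
    "entry_type": "inproceedings",
    "citation_key": "roesiger-2016-scicorp",
    "title": "{S}ci{C}orp: A Corpus of {E}nglish Scientific Articles "
             "Annotated for Information Status Analysis",
    "author": "Roesiger, Ina",
    "month": "may",
    "year": "2016",
    "journal": None,  # null columns are skipped
}

print(to_bibtex(example))
```

Columns such as `wer`, `uas`, `f1`, or `recall` have no standard BibTeX counterpart and would either be dropped or folded into a `note` field, depending on the use case.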
@inproceedings{roesiger-2016-scicorp,
    title = "{S}ci{C}orp: A Corpus of {E}nglish Scientific Articles Annotated for Information Status Analysis",
    author = "Roesiger, Ina",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1275/",
    pages = "1743--1749",
    abstract = "This paper presents SciCorp, a corpus of full-text English scientific papers of two disciplines, genetics and computational linguistics. The corpus comprises co-reference and bridging information as well as information status labels. Since SciCorp is annotated with both labels and the respective co-referent and bridging links, we believe it is a valuable resource for NLP researchers working on scientific articles or on applications such as co-reference resolution, bridging resolution or information status classification. The corpus has been reliably annotated by independent human coders with moderate inter-annotator agreement (average kappa = 0.71). In total, we have annotated 14 full papers containing 61,045 tokens and marked 8,708 definite noun phrases. The paper describes in detail the annotation scheme as well as the resulting corpus. The corpus is available for download in two different formats: in an offset-based format and for the co-reference annotations in the widely-used, tabular CoNLL-2012 format.",
}
% __index_level_0__: 60,585
@inproceedings{jain-etal-2016-using,
    title = "Using lexical and Dependency Features to Disambiguate Discourse Connectives in {H}indi",
    author = "Jain, Rohit and Sharma, Himanshu and Sharma, Dipti",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1276/",
    pages = "1750--1754",
    abstract = "Discourse parsing is a challenging task in NLP and plays a crucial role in discourse analysis. To enable discourse analysis for Hindi, Hindi Discourse Relations Bank was created on a subset of Hindi TreeBank. The benefits of a discourse analyzer in automated discourse analysis, question summarization and question answering domains has motivated us to begin work on a discourse analyzer for Hindi. In this paper, we focus on discourse connective identification for Hindi. We explore various available syntactic features for this task. We also explore the use of dependency tree parses present in the Hindi TreeBank and study the impact of the same on the performance of the system. We report that the novel dependency features introduced have a higher impact on precision, in comparison to the syntactic features previously used for this task. In addition, we report a high accuracy of 96{\%} for this task.",
}
% __index_level_0__: 60,586
@inproceedings{andersson-etal-2016-annotating,
    title = "Annotating Topic Development in Information Seeking Queries",
    author = "Andersson, Marta and {\"O}zt{\"u}rel, Adnan and Pareti, Silvia",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1277/",
    pages = "1755--1761",
    abstract = "This paper contributes to the limited body of empirical research in the domain of discourse structure of information seeking queries. We describe the development of an annotation schema for coding topic development in information seeking queries and the initial observations from a pilot sample of query sessions. The main idea that we explore is the relationship between constant and variable discourse entities and their role in tracking changes in the topic progression. We argue that the topicalized entities remain stable across development of the discourse and can be identified by a simple mechanism where anaphora resolution is a precursor. We also claim that a corpus annotated in this framework can be used as training data for dialogue management and computational semantics systems.",
}
% __index_level_0__: 60,587
@inproceedings{alharbi-hain-2016-opencourseware,
    title = "The {O}pen{C}ourse{W}are Metadiscourse ({OCWMD}) Corpus",
    author = "Alharbi, Ghada and Hain, Thomas",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1279/",
    pages = "1770--1776",
    abstract = "This study describes a new corpus of over 60,000 hand-annotated metadiscourse acts from 106 OpenCourseWare lectures, from two different disciplines: Physics and Economics. Metadiscourse is a set of linguistic expressions that signal different functions in the discourse. This type of language is hypothesised to be helpful in finding a structure in unstructured text, such as lectures discourse. A brief summary is provided about the annotation scheme and labelling procedures, inter-annotator reliability statistics, overall distributional statistics, a description of auxiliary data that will be distributed with the corpus, and information relating to how to obtain the data. The results provide a deeper understanding of lecture structure and confirm the reliable coding of metadiscursive acts in academic lectures across different disciplines. The next stage of our research will be to build a classification model to automate the tagging process, instead of manual annotation, which take time and efforts. This is in addition to the use of these tags as indicators of the higher level structure of lecture discourse.",
}
% __index_level_0__: 60,589
@inproceedings{hernandez-etal-2016-ubuntu,
    title = "{U}buntu-fr: A Large and Open Corpus for Multi-modal Analysis of Online Written Conversations",
    author = "Hernandez, Nicolas and Salim, Soufian and Clouet, Elizaveta Loginova",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1280/",
    pages = "1777--1783",
    abstract = "We present a large, free, French corpus of online written conversations extracted from the Ubuntu platform's forums, mailing lists and IRC channels. The corpus is meant to support multi-modality and diachronic studies of online written conversations. We choose to build the corpus around a robust metadata model based upon strong principles, such as the {\textquotedblleft}stand off{\textquotedblright} annotation principle. We detail the model, we explain how the data was collected and processed - in terms of meta-data, text and conversation - and we detail the corpus' contents through a series of meaningful statistics. A portion of the corpus - about 4,700 sentences from emails, forum posts and chat messages sent in November 2014 - is annotated in terms of dialogue acts and sentiment. We discuss how we adapted our dialogue act taxonomy from the DIT++ annotation scheme and how the data was annotated, before presenting our results as well as a brief qualitative analysis of the annotated data.",
}
% __index_level_0__: 60,590
@inproceedings{hough-etal-2016-duel,
    title = "{DUEL}: A Multi-lingual Multimodal Dialogue Corpus for Disfluency, Exclamations and Laughter",
    author = "Hough, Julian and Tian, Ye and de Ruiter, Laura and Betz, Simon and Kousidis, Spyros and Schlangen, David and Ginzburg, Jonathan",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1281/",
    pages = "1784--1788",
    abstract = "We present the DUEL corpus, consisting of 24 hours of natural, face-to-face, loosely task-directed dialogue in German, French and Mandarin Chinese. The corpus is uniquely positioned as a cross-linguistic, multimodal dialogue resource controlled for domain. DUEL includes audio, video and body tracking data and is transcribed and annotated for disfluency, laughter and exclamations.",
}
% __index_level_0__: 60,591
@inproceedings{barzdins-etal-2016-character,
    title = "Character-Level Neural Translation for Multilingual Media Monitoring in the {SUMMA} Project",
    author = "Barzdins, Guntis and Renals, Steve and Gosko, Didzis",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1282/",
    pages = "1789--1793",
    abstract = "The paper steps outside the comfort-zone of the traditional NLP tasks like automatic speech recognition (ASR) and machine translation (MT) to addresses two novel problems arising in the automated multilingual news monitoring: segmentation of the TV and radio program ASR transcripts into individual stories, and clustering of the individual stories coming from various sources and languages into storylines. Storyline clustering of stories covering the same events is an essential task for inquisitorial media monitoring. We address these two problems jointly by engaging the low-dimensional semantic representation capabilities of the sequence to sequence neural translation models. To enable joint multi-task learning for multilingual neural translation of morphologically rich languages we replace the attention mechanism with the sliding-window mechanism and operate the sequence to sequence neural translation model on the character-level rather than on the word-level. The story segmentation and storyline clustering problem is tackled by examining the low-dimensional vectors produced as a side-product of the neural translation process. The results of this paper describe a novel approach to the automatic story segmentation and storyline clustering problem.",
}
% __index_level_0__: 60,592
@inproceedings{van-hee-etal-2016-exploring,
    title = "Exploring the Realization of Irony in {T}witter Data",
    author = "Van Hee, Cynthia and Lefever, Els and Hoste, V{\'e}ronique",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1283/",
    pages = "1794--1799",
    abstract = "Handling figurative language like irony is currently a challenging task in natural language processing. Since irony is commonly used in user-generated content, its presence can significantly undermine accurate analysis of opinions and sentiment in such texts. Understanding irony is therefore important if we want to push the state-of-the-art in tasks such as sentiment analysis. In this research, we present the construction of a Twitter dataset for two languages, being English and Dutch, and the development of new guidelines for the annotation of verbal irony in social media texts. Furthermore, we present some statistics on the annotated corpora, from which we can conclude that the detection of contrasting evaluations might be a good indicator for recognizing irony.",
}
% __index_level_0__: 60,593
@inproceedings{goutte-etal-2016-discriminating,
    title = "Discriminating Similar Languages: Evaluations and Explorations",
    author = "Goutte, Cyril and L{\'e}ger, Serge and Malmasi, Shervin and Zampieri, Marcos",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1284/",
    pages = "1800--1807",
    abstract = "We present an analysis of the performance of machine learning classifiers on discriminating between similar languages and language varieties. We carried out a number of experiments using the results of the two editions of the Discriminating between Similar Languages (DSL) shared task. We investigate the progress made between the two tasks, estimate an upper bound on possible performance using ensemble and oracle combination, and provide learning curves to help us understand which languages are more challenging. A number of difficult sentences are identified and investigated further with human annotation",
}
% __index_level_0__: 60,594
@inproceedings{al-sulaiti-etal-2016-compilation,
    title = "Compilation of an {A}rabic Children's Corpus",
    author = "Al-Sulaiti, Latifa and Abbas, Noorhan and Brierley, Claire and Atwell, Eric and Alghamdi, Ayman",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1285/",
    pages = "1808--1812",
    abstract = "Inspired by the Oxford Children's Corpus, we have developed a prototype corpus of Arabic texts written and/or selected for children. Our Arabic Children's Corpus of 2950 documents and nearly 2 million words has been collected manually from the web during a 3-month project. It is of high quality, and contains a range of different children's genres based on sources located, including classic tales from The Arabian Nights, and popular fictional characters such as Goha. We anticipate that the current and subsequent versions of our corpus will lead to interesting studies in text classification, language use, and ideology in children's texts.",
}
% __index_level_0__: 60,595
@inproceedings{eriksson-2016-quality,
    title = "Quality Assessment of the {R}euters Vol. 2 Multilingual Corpus",
    author = "Eriksson, Robin",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1286/",
    pages = "1813--1819",
    abstract = "We introduce a framework for quality assurance of corpora, and apply it to the Reuters Multilingual Corpus (RCV2). The results of this quality assessment of this standard newsprint corpus reveal a significant duplication problem and, to a lesser extent, a problem with corrupted articles. From the raw collection of some 487,000 articles, almost one tenth are trivial duplicates. A smaller fraction of articles appear to be corrupted and should be excluded for that reason. The detailed results are being made available as on-line appendices to this article. This effort also demonstrates the beginnings of a constraint-based methodological framework for quality assessment and quality assurance for corpora. As a first implementation of this framework, we have investigated constraints to verify sample integrity, and to diagnose sample duplication, entropy aberrations, and tagging inconsistencies. To help identify near-duplicates in the corpus, we have employed both entropy measurements and a simple byte bigram incidence digest.",
}
% __index_level_0__: 60,596
@inproceedings{el-haj-etal-2016-learning,
    title = "Learning Tone and Attribution for Financial Text Mining",
    author = "El-Haj, Mahmoud and Rayson, Paul and Young, Steve and Moore, Andrew and Walker, Martin and Schleicher, Thomas and Athanasakou, Vasiliki",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1287/",
    pages = "1820--1825",
    abstract = "Attribution bias refers to the tendency of people to attribute successes to their own abilities but failures to external factors. In a business context an internal factor might be the restructuring of the firm and an external factor might be an unfavourable change in exchange or interest rates. In accounting research, the presence of an attribution bias has been demonstrated for the narrative sections of the annual financial reports. Previous studies have applied manual content analysis to this problem but in this paper we present novel work to automate the analysis of attribution bias through using machine learning algorithms. Previous studies have only applied manual content analysis on a small scale to reveal such a bias in the narrative section of annual financial reports. In our work a group of experts in accounting and finance labelled and annotated a list of 32,449 sentences from a random sample of UK Preliminary Earning Announcements (PEAs) to allow us to examine whether sentences in PEAs contain internal or external attribution and which kinds of attributions are linked to positive or negative performance. We wished to examine whether human annotators could agree on coding this difficult task and whether Machine Learning (ML) could be applied reliably to replicate the coding process on a much larger scale. Our best machine learning algorithm correctly classified performance sentences with 70{\%} accuracy and detected tone and attribution in financial PEAs with accuracy of 79{\%}.",
}
% __index_level_0__: 60,597
@inproceedings{sergienko-etal-2016-comparative,
    title = "A Comparative Study of Text Preprocessing Approaches for Topic Detection of User Utterances",
    author = "Sergienko, Roman and Shan, Muhammad and Minker, Wolfgang",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1288/",
    pages = "1826--1831",
    abstract = "The paper describes a comparative study of existing and novel text preprocessing and classification techniques for domain detection of user utterances. Two corpora are considered. The first one contains customer calls to a call centre for further call routing; the second one contains answers of call centre employees with different kinds of customer orientation behaviour. Seven different unsupervised and supervised term weighting methods were applied. The collective use of term weighting methods is proposed for classification effectiveness improvement. Four different dimensionality reduction methods were applied: stop-words filtering with stemming, feature selection based on term weights, feature transformation based on term clustering, and a novel feature transformation method based on terms belonging to classes. As classification algorithms we used k-NN and a SVM-based algorithm. The numerical experiments have shown that the simultaneous use of the novel proposed approaches (collectives of term weighting methods and the novel feature transformation method) allows reaching the high classification results with very small number of features.",
}
% __index_level_0__: 60,598
@inproceedings{sharjeel-etal-2016-uppc,
    title = "{UPPC} - {U}rdu Paraphrase Plagiarism Corpus",
    author = "Sharjeel, Muhammad and Rayson, Paul and Nawab, Rao Muhammad Adeel",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1289/",
    pages = "1832--1836",
    abstract = "Paraphrase plagiarism is a significant and widespread problem and research shows that it is hard to detect. Several methods and automatic systems have been proposed to deal with it. However, evaluation and comparison of such solutions is not possible because of the unavailability of benchmark corpora with manual examples of paraphrase plagiarism. To deal with this issue, we present the novel development of a paraphrase plagiarism corpus containing simulated (manually created) examples in the Urdu language - a language widely spoken around the world. This resource is the first of its kind developed for the Urdu language and we believe that it will be a valuable contribution to the evaluation of paraphrase plagiarism detection systems.",
}
% __index_level_0__: 60,599
@inproceedings{korkontzelos-etal-2016-identifying,
    title = "Identifying Content Types of Messages Related to Open Source Software Projects",
    author = "Korkontzelos, Yannis and Thompson, Paul and Ananiadou, Sophia",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1290/",
    pages = "1837--1844",
    abstract = "Assessing the suitability of an Open Source Software project for adoption requires not only an analysis of aspects related to the code, such as code quality, frequency of updates and new version releases, but also an evaluation of the quality of support offered in related online forums and issue trackers. Understanding the content types of forum messages and issue trackers can provide information about the extent to which requests are being addressed and issues are being resolved, the percentage of issues that are not being fixed, the cases where the user acknowledged that the issue was successfully resolved, etc. These indicators can provide potential adopters of the OSS with estimates about the level of available support. We present a detailed hierarchy of content types of online forum messages and issue tracker comments and a corpus of messages annotated accordingly. We discuss our experiments to classify forum messages and issue tracker comments into content-related classes, i.e.{\textasciitilde}to assign them to nodes of the hierarchy. The results are very encouraging.",
}
% __index_level_0__: 60,600
@inproceedings{li-etal-2016-emotion,
    title = "Emotion Corpus Construction Based on Selection from Hashtags",
    author = "Li, Minglei and Long, Yunfei and Qin, Lu and Li, Wenjie",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1291/",
    pages = "1845--1849",
    abstract = "The availability of labelled corpus is of great importance for supervised learning in emotion classification tasks. Because it is time-consuming to manually label text, hashtags have been used as naturally annotated labels to obtain a large amount of labelled training data from microblog. However, natural hashtags contain too much noise for it to be used directly in learning algorithms. In this paper, we design a three-stage semi-automatic method to construct an emotion corpus from microblogs. Firstly, a lexicon based voting approach is used to verify the hashtag automatically. Secondly, a SVM based classifier is used to select the data whose natural labels are consistent with the predicted labels. Finally, the remaining data will be manually examined to filter out the noisy data. Out of about 48K filtered Chinese microblogs, 39k microblogs are selected to form the final corpus with the Kappa value reaching over 0.92 for the automatic parts and over 0.81 for the manual part. The proportion of automatic selection reaches 54.1{\%}. Thus, the method can reduce about 44.5{\%} of manual workload for acquiring quality data. Experiment on a classifier trained on this corpus shows that it achieves comparable results compared to the manually annotated NLP{\&}CC2013 corpus.",
}
% __index_level_0__: 60,601
@inproceedings{gamback-das-2016-comparing,
    title = "Comparing the Level of Code-Switching in Corpora",
    author = "Gamb{\"a}ck, Bj{\"o}rn and Das, Amitava",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1292/",
    pages = "1850--1855",
    abstract = "Social media texts are often fairly informal and conversational, and when produced by bilinguals tend to be written in several different languages simultaneously, in the same way as conversational speech. The recent availability of large social media corpora has thus also made large-scale code-switched resources available for research. The paper addresses the issues of evaluation and comparison these new corpora entail, by defining an objective measure of corpus level complexity of code-switched texts. It is also shown how this formal measure can be used in practice, by applying it to several code-switched corpora.",
}
% __index_level_0__: 60,602
@inproceedings{muller-etal-2016-evaluation,
    title = "Evaluation of the {KIT} Lecture Translation System",
    author = "M{\"u}ller, Markus and F{\"u}nfer, Sarah and St{\"u}ker, Sebastian and Waibel, Alex",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1293/",
    pages = "1856--1861",
    abstract = "To attract foreign students is among the goals of the Karlsruhe Institute of Technology (KIT). One obstacle to achieving this goal is that lectures at KIT are usually held in German which many foreign students are not sufficiently proficient in, as, e.g., opposed to English. While the students from abroad are learning German during their stay at KIT, it is challenging to become proficient enough in it in order to follow a lecture. As a solution to this problem we offer our automatic simultaneous lecture translation. It translates German lectures into English in real time. While not as good as human interpreters, the system is available at a price that KIT can afford in order to offer it in potentially all lectures. In order to assess whether the quality of the system we have conducted a user study. In this paper we present this study, the way it was conducted and its results. The results indicate that the quality of the system has passed a threshold as to be able to support students in their studies. The study has helped to identify the most crucial weaknesses of the systems and has guided us which steps to take next.",
}
% __index_level_0__: 60,603
@inproceedings{qasemizadeh-schumann-2016-acl,
    title = "The {ACL} {RD}-{TEC} 2.0: A Language Resource for Evaluating Term Extraction and Entity Recognition Methods",
    author = "QasemiZadeh, Behrang and Schumann, Anne-Kathrin",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1294/",
    pages = "1862--1868",
    abstract = "This paper introduces the ACL Reference Dataset for Terminology Extraction and Classification, version 2.0 (ACL RD-TEC 2.0). The ACL RD-TEC 2.0 has been developed with the aim of providing a benchmark for the evaluation of term and entity recognition tasks based on specialised text from the computational linguistics domain. This release of the corpus consists of 300 abstracts from articles in the ACL Anthology Reference Corpus, published between 1978{--}2006. In these abstracts, terms (i.e., single or multi-word lexical units with a specialised meaning) are manually annotated. In addition to their boundaries in running text, annotated terms are classified into one of the seven categories method, tool, language resource (LR), LR product, model, measures and measurements, and other. To assess the quality of the annotations and to determine the difficulty of this annotation task, more than 171 of the abstracts are annotated twice, independently, by each of the two annotators. In total, 6,818 terms are identified and annotated in more than 1300 sentences, resulting in a specialised vocabulary made of 3,318 lexical forms, mapped to 3,471 concepts. We explain the development of the annotation guidelines and discuss some of the challenges we encountered in this annotation task.",
}
% __index_level_0__: 60,604
inproceedings
zaghouani-etal-2016-building
Building an {A}rabic Machine Translation Post-Edited Corpus: Guidelines and Annotation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1295/
Zaghouani, Wajdi and Habash, Nizar and Obeid, Ossama and Mohit, Behrang and Bouamor, Houda and Oflazer, Kemal
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1869--1876
We present our guidelines and annotation procedure to create a human corrected machine translated post-edited corpus for the Modern Standard Arabic. Our overarching goal is to use the annotated corpus to develop automatic machine translation post-editing systems for Arabic that can be used to help accelerate the human revision process of translated texts. The creation of any manually annotated corpus usually presents many challenges. In order to address these challenges, we created comprehensive and simplified annotation guidelines which were used by a team of five annotators and one lead annotator. In order to ensure a high annotation agreement between the annotators, multiple training sessions were held and regular inter-annotator agreement measures were performed to check the annotation quality. The created corpus of manual post-edited translations of English to Arabic articles is the largest to date for this language pair.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,605
inproceedings
aranberri-etal-2016-tools
Tools and Guidelines for Principled Machine Translation Development
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1296/
Aranberri, Nora and Avramidis, Eleftherios and Burchardt, Aljoscha and Klejch, Ond{\v{r}}ej and Popel, Martin and Popovi{\'c}, Maja
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1877--1882
This work addresses the need to aid Machine Translation (MT) development cycles with a complete workflow of MT evaluation methods. Our aim is to assess, compare and improve MT system variants. We hereby report on novel tools and practices that support various measures, developed in order to support a principled and informed approach of MT development. Our toolkit for automatic evaluation showcases quick and detailed comparison of MT system variants through automatic metrics and n-gram feedback, along with manual evaluation via edit-distance, error annotation and task-based feedback.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,606
inproceedings
galibert-etal-2016-generating
Generating Task-Pertinent sorted Error Lists for Speech Recognition
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1297/
Galibert, Olivier and Jannet, Mohamed Ameur Ben and Kahn, Juliette and Rosset, Sophie
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1883--1889
Automatic Speech Recognition (ASR) is one of the most widely used components in spoken language processing applications. ASR errors are of varying importance with respect to the application, making error analysis key to improving speech processing applications. Knowing the most serious errors for the applicative case is critical to building better systems. In the context of ASR used as a first step towards Named Entity Recognition (NER) in speech, error seriousness is usually determined by error frequency, due to the use of WER as the metric to evaluate the ASR output, despite the emergence of more relevant measures in the literature. We propose to use a different evaluation metric from the literature in order to classify ASR errors according to their seriousness for NER. Our results show that the importance of ASR errors is ranked differently depending on the evaluation metric used. A more detailed analysis shows that the estimation of error impact given by the ATENE metric is better adapted to the NER task than the estimation based only on the most widely used frequency metric, WER.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,607
inproceedings
francopoulo-etal-2016-study
A Study of Reuse and Plagiarism in {LREC} papers
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1298/
Francopoulo, Gil and Mariani, Joseph and Paroubek, Patrick
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1890--1897
The aim of this experiment is to present an easy way to compare fragments of texts in order to detect (supposed) results of copy {\&} paste operations between articles in the domain of Natural Language Processing (NLP). The search space of the comparisons is a corpus labeled as NLP4NLP gathering a large part of the NLP field. The study is centered on LREC papers in both directions, first with an LREC paper borrowing a fragment of text from the collection, and secondly in the reverse direction with fragments of LREC documents borrowed and inserted in the collection.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,608
inproceedings
ganguly-etal-2016-developing
Developing a Dataset for Evaluating Approaches for Document Expansion with Images
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1299/
Ganguly, Debasis and Calixto, Iacer and Jones, Gareth
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1898--1901
Motivated by the adage that a {\textquotedblleft}picture is worth a thousand words{\textquotedblright} it can be reasoned that automatically enriching the textual content of a document with relevant images can increase the readability of a document. Moreover, features extracted from the additional image data inserted into the textual content of a document may, in principle, also be used by a retrieval engine to better match the topic of a document with that of a given query. In this paper, we describe our approach of building a ground truth dataset to enable further research into automatic addition of relevant images to text documents. The dataset is comprised of the official ImageCLEF 2010 collection (a collection of images with textual metadata) to serve as the images available for automatic enrichment of text, a set of 25 benchmark documents that are to be enriched, which in this case are children`s short stories, and a set of manually judged relevant images for each query story obtained by the standard procedure of depth pooling. We use this benchmark dataset to evaluate the effectiveness of standard information retrieval methods as simple baselines for this task. The results indicate that using the whole story as a weighted query, where the weight of each query term is its tf-idf value, achieves a precision of 0.1714 within the top 5 retrieved images on average.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,609
inproceedings
ruiz-etal-2016-word
More than Word Cooccurrence: Exploring Support and Opposition in International Climate Negotiations with Semantic Parsing
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1300/
Ruiz Fabo, Pablo and Plancq, Cl{\'e}ment and Poibeau, Thierry
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1902--1907
Text analysis methods widely used in digital humanities often involve word co-occurrence, e.g. concept co-occurrence networks. These methods provide a useful corpus overview, but cannot determine the predicates that relate co-occurring concepts. Our goal was identifying propositions expressing the points supported or opposed by participants in international climate negotiations. Word co-occurrence methods were not sufficient, and an analysis based on open relation extraction had limited coverage for nominal predicates. We present a pipeline which identifies the points that different actors support and oppose, via a domain model with support/opposition predicates, and analysis rules that exploit the output of semantic role labelling, syntactic dependencies and anaphora resolution. Entity linking and keyphrase extraction are also performed on the propositions related to each actor. A user interface allows examining the main concepts in points supported or opposed by each participant, which participants agree or disagree with each other, and about which issues. The system is an example of tools that digital humanities scholars are asking for, to render rich textual information (beyond word co-occurrence) more amenable to quantitative treatment. An evaluation of the tool was satisfactory.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,610
inproceedings
collovini-etal-2016-sequence
A Sequence Model Approach to Relation Extraction in {P}ortuguese
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1301/
Collovini, Sandra and Machado, Gabriel and Vieira, Renata
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1908--1912
The task of Relation Extraction from texts is one of the main challenges in the area of Information Extraction, considering the required linguistic knowledge and the sophistication of the language processing techniques employed. This task aims at identifying and classifying semantic relations that occur between entities recognized in a given text. In this paper, we evaluated a Conditional Random Fields classifier for the extraction of any relation descriptor occurring between named entities (Organisation, Person and Place categories), as well as pre-defined relation types between these entities in Portuguese texts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,611
inproceedings
hladek-etal-2016-evaluation
Evaluation Set for {S}lovak News Information Retrieval
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1302/
Hl{\'a}dek, Daniel and Sta{\v{s}}, Jan and Juh{\'a}r, Jozef
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1913--1916
This work proposes an information retrieval evaluation set for the Slovak language. A set of 80 queries written in the natural language is given together with the set of relevant documents. The document set contains 3980 newspaper articles sorted into 6 categories. Each document in the result set is manually annotated for relevancy with its corresponding query. The evaluation set is mostly compatible with the Cranfield test collection using the same methodology for queries and annotation of relevancy. In addition to that it provides annotation for document title, author, publication date and category that can be used for evaluation of automatic document clustering and categorization.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,612
inproceedings
imada-etal-2016-analyzing
Analyzing Time Series Changes of Correlation between Market Share and Concerns on Companies measured through Search Engine Suggests
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1303/
Imada, Takakazu and Inoue, Yusuke and Chen, Lei and Doi, Syunya and Nie, Tian and Zhao, Chen and Utsuro, Takehito and Kawada, Yasuhide
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1917--1923
This paper proposes how to utilize a search engine in order to predict market shares. We propose to compare rates of concerns of those who search for Web pages among several companies which supply products, given a specific products domain. We measure concerns of those who search for Web pages through search engine suggests. Then, we analyze whether rates of concerns of those who search for Web pages have certain correlation with actual market share. We show that those statistics have certain correlations. We finally propose how to predict the market share of a specific product genre based on the rates of concerns of those who search for Web pages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,613
inproceedings
bougouin-etal-2016-termith
{T}erm{ITH}-Eval: a {F}rench Standard-Based Resource for Keyphrase Extraction Evaluation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1304/
Bougouin, Adrien and Barreaux, Sabine and Romary, Laurent and Boudin, Florian and Daille, B{\'e}atrice
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1924--1927
Keyphrase extraction is the task of finding phrases that represent the important content of a document. The main aim of keyphrase extraction is to propose textual units that represent the most important topics developed in a document. The output keyphrases of automatic keyphrase extraction methods for test documents are typically evaluated by comparing them to manually assigned reference keyphrases. Each output keyphrase is considered correct if it matches one of the reference keyphrases. However, the choice of the appropriate textual unit (keyphrase) for a topic is sometimes subjective and evaluating by exact matching underestimates the performance. This paper presents a dataset of evaluation scores assigned to automatically extracted keyphrases by human evaluators. Along with the reference keyphrases, the manual evaluations can be used to validate new evaluation measures. Indeed, an evaluation measure that is highly correlated to the manual evaluation is appropriate for the evaluation of automatic keyphrase extraction methods.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,614
inproceedings
kermes-etal-2016-royal
The Royal Society Corpus: From Uncharted Data to Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1305/
Kermes, Hannah and Degaetano-Ortlieb, Stefania and Khamis, Ashraf and Knappen, J{\"o}rg and Teich, Elke
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1928--1931
We present the Royal Society Corpus (RSC) built from the Philosophical Transactions and Proceedings of the Royal Society of London. At present, the corpus contains articles from the first two centuries of the journal (1665{\textemdash}1869) and amounts to around 35 million tokens. The motivation for building the RSC is to investigate the diachronic linguistic development of scientific English. Specifically, we assume that due to specialization, linguistic encodings become more compact over time (Halliday, 1988; Halliday and Martin, 1993), thus creating a specific discourse type characterized by high information density that is functional for expert communication. When building corpora from uncharted material, typically not all relevant meta-data (e.g. author, time, genre) or linguistic data (e.g. sentence/word boundaries, words, parts of speech) is readily available. We present an approach to obtain good quality meta-data and base text data adopting the concept of Agile Software Development.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,615
inproceedings
goeuriot-etal-2016-building
Building Evaluation Datasets for Consumer-Oriented Information Retrieval
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1306/
Goeuriot, Lorraine and Kelly, Liadh and Zuccon, Guido and Palotti, Joao
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1932--1938
Common people often experience difficulties in accessing relevant, correct, accurate and understandable health information online. Developing search techniques that aid these information needs is challenging. In this paper we present the datasets created by CLEF eHealth Lab from 2013-2015 for evaluation of search solutions to support common people finding health information online. Specifically, the CLEF eHealth information retrieval (IR) task of this Lab has provided the research community with benchmarks for evaluating consumer-centered health information retrieval, thus fostering research and development aimed to address this challenging problem. Given consumer queries, the goal of the task is to retrieve relevant documents from the provided collection of web pages. The shared datasets provide a large health web crawl, queries representing people`s real world information needs, and relevance assessment judgements for the queries.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,616
inproceedings
nguyen-etal-2016-dataset
A Dataset for Open Event Extraction in {E}nglish
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1307/
Nguyen, Kiem-Hieu and Tannier, Xavier and Ferret, Olivier and Besan{\c{c}}on, Romaric
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1939--1943
This article presents a corpus for development and testing of event schema induction systems in English. Schema induction is the task of learning templates with no supervision from unlabeled texts, and to group together entities corresponding to the same role in a template. Most of the previous work on this subject relies on the MUC-4 corpus. We describe the limits of using this corpus (size, non-representativeness, similarity of roles across templates) and propose a new, partially-annotated corpus in English which remedies some of these shortcomings. We make use of Wikinews to select the data inside the category Laws {\&} Justice, and query Google search engine to retrieve different documents on the same events. Only Wikinews documents are manually annotated and can be used for evaluation, while the others can be used for unsupervised learning. We detail the methodology used for building the corpus and evaluate some existing systems on this new data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,617
inproceedings
kocharov-2016-phoneme
Phoneme Alignment Using the Information on Phonological Processes in Continuous Speech
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1308/
Kocharov, Daniil
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1944--1948
The current study focuses on optimization of the Levenshtein algorithm for the purpose of computing the optimal alignment between two phoneme transcriptions of a spoken utterance, i.e., between sequences of phonetic symbols. The alignment is computed with the help of a confusion matrix in which costs for phonetic symbol deletion, insertion and substitution are defined taking into account various phonological processes that occur in fluent speech, such as anticipatory assimilation, phone elision and epenthesis. A corpus containing about 30 hours of Russian read speech was used to evaluate the presented algorithms. The experimental results have shown a significant reduction of the misalignment rate in comparison with the baseline Levenshtein algorithm: the number of errors has been reduced from 1.1 {\%} to 0.28 {\%}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,618
inproceedings
kachkovskaia-etal-2016-coruss
{C}o{R}u{SS} - a New Prosodically Annotated Corpus of {R}ussian Spontaneous Speech
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1309/
Kachkovskaia, Tatiana and Kocharov, Daniil and Skrelin, Pavel and Volskaya, Nina
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1949--1954
This paper describes the speech data recording, processing and annotation of a new speech corpus, CoRuSS (Corpus of Russian Spontaneous Speech), which is based on connected communicative speech recorded from 60 native Russian male and female speakers of different age groups (from 16 to 77). Some Russian speech corpora available at the moment contain plain orthographic texts and provide some kind of limited annotation, but there are no corpora providing detailed prosodic annotation of spontaneous conversational speech. This corpus contains 30 hours of high-quality recordings of spontaneous Russian speech, half of which has been transcribed and prosodically labeled. The recordings consist of dialogues between two speakers, monologues (speakers' self-presentations) and reading of a short phonetically balanced text. Since the corpus is labeled for a wide range of linguistic - phonetic and prosodic - information, it provides a basis for empirical studies of various spontaneous speech phenomena as well as for comparison with those we observe in prepared read speech. Since the corpus is designed as an open-access resource of speech data, it will also make it possible to advance corpus-based analysis of spontaneous speech data across languages as well as speech technology development.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,619
inproceedings
dediu-moisik-2016-defining
Defining and Counting Phonological Classes in Cross-linguistic Segment Databases
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1310/
Dediu, Dan and Moisik, Scott
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1955--1962
Recently, there has been an explosion in the availability of large, good-quality cross-linguistic databases such as WALS (Dryer {\&} Haspelmath, 2013), Glottolog (Hammarstrom et al., 2015) and Phoible (Moran {\&} McCloy, 2014). Databases such as Phoible contain the actual segments used by various languages as they are given in the primary language descriptions. However, this segment-level representation cannot be used directly for analyses that require generalizations over classes of segments that share theoretically interesting features. Here we present a method and the associated R (R Core Team, 2014) code that allows the flexible definition of such meaningful classes and that can identify the sets of segments falling into such a class for any language inventory. The method and its results are important for those interested in exploring cross-linguistic patterns of phonetic and phonological diversity and their relationship to extra-linguistic factors and processes such as climate, economics, history or human genetics.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,620
inproceedings
martinez-etal-2016-introducing
Introducing the {SEA}{\_}{AP}: an Enhanced Tool for Automatic Prosodic Analysis
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1311/
Mart{\'i}nez, Marta and Varela, Roc{\'i}o and Mateo, Carmen Garc{\'i}a and Rei, Elisa Fern{\'a}ndez and Calvo, Adela Mart{\'i}nez
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1963--1969
SEA{\_}AP (Segmentador e Etiquetador Autom{\'a}tico para An{\'a}lise Pros{\'o}dica, Automatic Segmentation and Labelling for Prosodic Analysis) toolkit is an application that performs audio segmentation and labelling to create a TextGrid file which will be used to launch a prosodic analysis using Praat. In this paper, we want to describe the improved functionality of the tool achieved by adding a dialectometric analysis module using R scripts. The dialectometric analysis includes computing correlations among F0 curves and it obtains prosodic distances among the different variables of interest (location, speaker, structure, etc.). The dialectometric analysis requires large databases in order to be adequately computed, and automatic segmentation and labelling can create them thanks to a procedure less costly than the manual alternative. Thus, the integration of these tools into the SEA{\_}AP allows to propose a distribution of geoprosodic areas by means of a quantitative method, which completes the traditional dialectological point of view. The current version of the SEA{\_}AP toolkit is capable of analysing Galician, Spanish and Brazilian Portuguese data, and hence the distances between several prosodic linguistic varieties can be measured at present.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,621
inproceedings
mostafa-etal-2016-machine
A Machine Learning based Music Retrieval and Recommendation System
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1312/
Mostafa, Naziba and Wan, Yan and Amitabh, Unnayan and Fung, Pascale
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1970--1977
In this paper, we present a music retrieval and recommendation system using machine learning techniques. We propose a query by humming system for music retrieval that uses deep neural networks for note transcription and a note-based retrieval system for retrieving the correct song from the database. We evaluate our query by humming system using the standard MIREX QBSH dataset. We also propose a similar artist recommendation system which recommends similar artists based on acoustic features of the artists' music, online text descriptions of the artists and social media data. We use supervised machine learning techniques over all our features and compare our recommendation results to those produced by a popular similar artist recommendation website.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,622
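The note-based retrieval step described in the query-by-humming abstract above can be sketched with a plain dynamic-programming edit distance over transcribed note sequences. This is an illustrative assumption only: the abstract does not specify the exact matching algorithm, and the `retrieve` helper and `database` layout are hypothetical names, not the authors' code.

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance between two
    # note sequences (e.g. lists of MIDI pitch numbers).
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[m][n]

def retrieve(query_notes, database):
    # Rank songs by edit distance between the hummed query's
    # transcribed notes and each song's reference note sequence.
    # `database` is a list of (song_id, note_sequence) pairs.
    return sorted(database, key=lambda item: edit_distance(query_notes, item[1]))
```

A usage example: `retrieve([60, 62, 64], [("a", [60, 62, 65]), ("b", [60, 62, 64])])` ranks song `"b"` first, since its note sequence matches the query exactly.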
inproceedings
aman-etal-2016-cirdox
{C}irdo{X}: an on/off-line multisource speech and sound analysis software
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1313/
Aman, Fr{\'e}d{\'e}ric and Vacher, Michel and Portet, Fran{\c{c}}ois and Duclot, William and Lecouteux, Benjamin
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1978--1985
Vocal User Interfaces in domestic environments have recently gained interest in the speech processing community. This interest is due to the opportunity of using them in the framework of Ambient Assisted Living, both for home automation (vocal commands) and for calls for help in distress situations, e.g. after a fall. CirdoX, a modular software package, is able to analyse the audio environment in a home on-line, to extract the uttered sentences and then to process them with an ASR module. Moreover, this system performs non-speech audio event classification; in this case, specific models must be trained. The software is designed to be modular and to process the multichannel audio stream on-line. Some examples of studies in which CirdoX was involved are described. They were operated in a real environment, namely a Living lab environment.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,623
inproceedings
sperber-etal-2016-optimizing
Optimizing Computer-Assisted Transcription Quality with Iterative User Interfaces
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1314/
Sperber, Matthias and Neubig, Graham and Nakamura, Satoshi and Waibel, Alex
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1986--1992
Computer-assisted transcription promises high-quality speech transcription at reduced costs. This is achieved by limiting human effort to transcribing parts for which automatic transcription quality is insufficient. Our goal is to improve the human transcription quality via appropriate user interface design. We focus on iterative interfaces that allow humans to solve tasks based on an initially given suggestion, in this case an automatic transcription. We conduct a user study that reveals considerable quality gains for three variations of iterative interfaces over a non-iterative from-scratch transcription interface. Our iterative interfaces included post-editing, confidence-enhanced post-editing, and a novel retyping interface. All three yielded similar quality on average, but we found that the proposed retyping interface was less sensitive to the difficulty of the segment, and superior when the automatic transcription of the segment contained relatively many errors. An analysis using mixed-effects models allows us to quantify these and other factors and draw conclusions over which interface design should be chosen in which circumstance.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,624
inproceedings
nicolao-etal-2016-framework
A Framework for Collecting Realistic Recordings of Dysarthric Speech - the home{S}ervice Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1315/
Nicolao, Mauro and Christensen, Heidi and Cunningham, Stuart and Green, Phil and Hain, Thomas
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1993--1997
This paper introduces a new British English speech database, named the homeService corpus, which has been gathered as part of the homeService project. This project aims to help users with speech and motor disabilities to operate their home appliances using voice commands. The audio recorded during such interactions consists of realistic data of speakers with severe dysarthria. The majority of the homeService corpus is recorded in real home environments where voice control is often the normal means by which users interact with their devices. The collection of the corpus is motivated by the shortage of realistic dysarthric speech corpora available to the scientific community. Along with the details on how the data is organised and how it can be accessed, a brief description of the framework used to make the recordings is provided. Finally, the performance of the homeService automatic recogniser for dysarthric speech trained with single-speaker data from the corpus is provided as an initial baseline. Access to the homeService corpus is provided through the dedicated web page at \url{http://mini.dcs.shef.ac.uk/resources/homeservice-corpus/}. This will also have the most updated description of the data. At the time of writing the collection process is still ongoing.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,625
inproceedings
laaridh-etal-2016-automatic
Automatic Anomaly Detection for Dysarthria across Two Speech Styles: Read vs Spontaneous Speech
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1316/
Laaridh, Imed and Fredouille, Corinne and Meunier, Christine
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
1998--2004
Perceptive evaluation of speech disorders is still the standard method in clinical practice for diagnosing and following the condition progression of patients. Such methods include different tasks such as read speech, spontaneous speech, isolated words, sustained vowels, etc. In this context, automatic speech processing tools have proven their pertinence for speech quality evaluation and assistive technology-based applications. However, very few studies have investigated the use of automatic tools on spontaneous speech. This paper investigates the behavior of an automatic phone-based anomaly detection system when applied to read and spontaneous French dysarthric speech. The behavior of the automatic tool reveals interesting inter-pathology differences across speech styles.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,626
inproceedings
gutkin-etal-2016-tts
{TTS} for Low Resource Languages: A {B}angla Synthesizer
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1317/
Gutkin, Alexander and Ha, Linne and Jansche, Martin and Pipatsrisawat, Knot and Sproat, Richard
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2005--2010
We present a text-to-speech (TTS) system designed for the dialect of Bengali spoken in Bangladesh. This work is part of an ongoing effort to address the needs of under-resourced languages. We propose a process for streamlining the bootstrapping of TTS systems for under-resourced languages. First, we use crowdsourcing to collect the data from multiple ordinary speakers, each speaker recording a small number of sentences. Second, we leverage an existing text normalization system for a related language (Hindi) to bootstrap a linguistic front-end for Bangla. Third, we employ statistical techniques to construct multi-speaker acoustic models using Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) and Hidden Markov Model (HMM) approaches. We then describe our experiments, which show that the resulting TTS voices score well in terms of their perceived quality as measured by Mean Opinion Score (MOS) evaluations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,627
inproceedings
vallet-etal-2016-speech
Speech Trax: A Bottom to the Top Approach for Speaker Tracking and Indexing in an Archiving Context
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1318/
Vallet, F{\'e}licien and Uro, Jim and Andriamakaoly, J{\'e}r{\'e}my and Nabi, Hakim and Derval, Mathieu and Carrive, Jean
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2011--2016
With the increasing amount of audiovisual and digital data deriving from televisual and radiophonic sources, professional archives such as INA, France's national audiovisual institute, acknowledge a growing need for efficient indexing tools. In this paper, we describe the Speech Trax system that aims at analyzing the audio content of TV and radio documents. In particular, we focus on the speaker tracking task that is very valuable for indexing purposes. First, we detail the overall architecture of the system and show the results obtained on a large-scale experiment, the largest to our knowledge for this type of content (about 1,300 speakers). Then, we present the Speech Trax demonstrator that gathers the results of various automatic speech processing techniques on top of our speaker tracking system (speaker diarization, speech transcription, etc.). Finally, we provide insight on the obtained performances and suggest hints for future improvements.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,628
inproceedings
damnati-etal-2016-web
Web Chat Conversations from Contact Centers: a Descriptive Study
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1319/
Damnati, G{\'e}raldine and Guerraz, Aleksandra and Charlet, Delphine
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2017--2021
In this article we present a descriptive study of a corpus of chat conversations from an assistance contact center. Conversations are described from several viewpoints, including interaction analysis, language deviation analysis and analysis of typographic expressivity marks. In particular, we provide a detailed analysis of the language deviations encountered in our corpus of 230 conversations, corresponding to 6,879 messages and 76,839 words. These deviations may be challenging for further syntactic and semantic parsing. The analysis is performed with a distinction between Customer messages and Agent messages. Overall, only 4{\%} of the observed words are misspelled, but 26{\%} of the messages contain at least one erroneous word (rising to 40{\%} when focusing on Customer messages). Transcriptions of telephone conversations from an assistance call center are also studied, allowing comparisons between these two interaction modes to be drawn. The study reveals significant differences in terms of conversation flow, with an increased efficiency for chat conversations in spite of a longer temporal span.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,629
inproceedings
morlane-hondere-etal-2016-identification
Identification of Drug-Related Medical Conditions in Social Media
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1320/
Morlane-Hond{\`e}re, Fran{\c{c}}ois and Grouin, Cyril and Zweigenbaum, Pierre
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2022--2028
Monitoring social media has been shown to be an interesting approach for the early detection of drug adverse effects. In this paper, we describe a system which extracts medical entities in French drug reviews written by users. We focus on the identification of medical conditions, which is based on the concept of post-coordination: we first extract minimal medical-related entities (pain, stomach) then we combine them to identify complex ones (It was the worst [pain I ever felt in my stomach]). These two steps are respectively performed by two classifiers, the first being based on Conditional Random Fields and the second one on Support Vector Machines. The overall results of the minimal entity classifier are the following: P=0.926; R=0.849; F1=0.886. A thorough analysis of the feature set shows that, when combined with word lemmas, clusters generated by word2vec are the most valuable features. When trained on the output of the first classifier, the second classifier's performances are the following: P=0.683; R=0.956; F1=0.797. The addition of post-processing rules did not yield any significant global improvement but was found to modify the precision/recall ratio.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,630
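The precision/recall/F1 figures reported in the abstract above are consistent with the standard harmonic-mean definition of F1, which can be checked directly. A minimal sketch; the `f1` helper is our own, not code from the paper.

```python
def f1(precision, recall):
    # F1 is the harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# Figures reported for the two classifiers:
print(round(f1(0.926, 0.849), 3))  # minimal-entity CRF -> 0.886
print(round(f1(0.683, 0.956), 3))  # complex-entity SVM -> 0.797
```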
inproceedings
burghardt-etal-2016-creating
Creating a Lexicon of {B}avarian Dialect by Means of {F}acebook Language Data and Crowdsourcing
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1321/
Burghardt, Manuel and Granvogl, Daniel and Wolff, Christian
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2029--2033
Data acquisition in dialectology is typically a tedious task, as dialect samples of spoken language have to be collected via questionnaires or interviews. In this article, we suggest to use the {\textquotedblleft}web as a corpus{\textquotedblright} approach for dialectology. We present a case study that demonstrates how authentic language data for the Bavarian dialect (ISO 639-3:bar) can be collected automatically from the social network Facebook. We also show that Facebook can be used effectively as a crowdsourcing platform, where users are willing to translate dialect words collaboratively in order to create a common lexicon of their Bavarian dialect. Key insights from the case study are summarized as {\textquotedblleft}lessons learned{\textquotedblright}, together with suggestions for future enhancements of the lexicon creation approach.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,631
inproceedings
prabhakaran-rambow-2016-corpus
A Corpus of {W}ikipedia Discussions: Over the Years, with Topic, Power and Gender Labels
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1322/
Prabhakaran, Vinodkumar and Rambow, Owen
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2034--2038
In order to gain a deep understanding of how social context manifests in interactions, we need data that represents interactions from a large community of people over a long period of time, capturing different aspects of social context. In this paper, we present a large corpus of Wikipedia Talk page discussions that are collected from a broad range of topics, containing discussions that happened over a period of 15 years. The dataset contains 166,322 discussion threads, across 1,236 articles/topics that span 15 different topic categories or domains. The dataset also captures whether the post is made by a registered user or not, and whether he/she was an administrator at the time of making the post. It also captures the Wikipedia age of editors in terms of number of months spent as an editor, as well as their gender. This corpus will be a valuable resource to investigate a variety of computational sociolinguistics research questions regarding online social interactions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,632
inproceedings
chamberlain-etal-2016-phrase
Phrase Detectives Corpus 1.0 Crowdsourced Anaphoric Coreference.
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1323/
Chamberlain, Jon and Poesio, Massimo and Kruschwitz, Udo
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2039--2046
Natural Language Engineering tasks require large and complex annotated datasets to build more advanced models of language. Corpora are typically annotated by several experts to create a gold standard; however, there are now compelling reasons to use a non-expert crowd to annotate text, driven by cost, speed and scalability. Phrase Detectives Corpus 1.0 is an anaphorically-annotated corpus of encyclopedic and narrative text that contains a gold standard created by multiple experts, as well as a set of annotations created by a large non-expert crowd. Analysis shows very good inter-expert agreement (kappa=.88-.93) but a more variable baseline crowd agreement (kappa=.52-.96). Encyclopedic texts show less agreement (and by implication are harder to annotate) than narrative texts. The release of this corpus is intended to encourage research into the use of crowds for text annotation and the development of more advanced, probabilistic language models, in particular for anaphoric coreference.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,633
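The inter-expert and crowd agreement figures in the Phrase Detectives abstract above are kappa scores; for two annotators, Cohen's kappa corrects raw agreement for chance. A minimal sketch (`cohens_kappa` is an illustrative helper, not the paper's evaluation code, which involves more than two annotators):

```python
def cohens_kappa(labels_a, labels_b):
    # kappa = (observed agreement - chance agreement) / (1 - chance agreement)
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Fraction of items on which the two annotators agree.
    observed = sum(x == y for x, y in zip(labels_a, labels_b)) / n
    # Chance agreement from each annotator's label distribution.
    categories = set(labels_a) | set(labels_b)
    expected = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)
```

Perfect agreement yields kappa = 1.0, while agreement at exactly the chance rate yields kappa = 0.0, which is why values near .9 indicate very reliable expert annotation.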
inproceedings
burga-etal-2016-towards
Towards Multiple Antecedent Coreference Resolution in Specialized Discourse
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1325/
Burga, Alicia and Cajal, Sergio and Codina-Filb{\`a}, Joan and Wanner, Leo
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2052--2057
Despite the popularity of coreference resolution as a research topic, the overwhelming majority of the work in this area has so far focused on single-antecedent coreference only. Multiple antecedent coreference (MAC) has been largely neglected. This can be explained by the scarcity of the phenomenon of MAC in generic discourse. However, in specialized discourse such as patents, MAC is very dominant. It thus seems unavoidable to address the problem of MAC resolution in the context of tasks related to automatic patent material processing, among them abstractive summarization, deep parsing of patents, construction of concept maps of the inventions, etc. We present the first version of an operational rule-based MAC resolution strategy for patent material that covers the three major types of MAC: (i) nominal MAC, (ii) MAC with personal / relative pronouns, and (iii) MAC with reflexive / reciprocal pronouns. The evaluation shows that our strategy performs well in terms of precision and recall.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,635
inproceedings
uryupina-etal-2016-arrau
{ARRAU}: Linguistically-Motivated Annotation of Anaphoric Descriptions
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1326/
Uryupina, Olga and Artstein, Ron and Bristot, Antonella and Cavicchio, Federica and Rodriguez, Kepa and Poesio, Massimo
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2058--2062
This paper presents a second release of the ARRAU dataset: a multi-domain corpus with thorough linguistically motivated annotation of anaphora and related phenomena. Building upon the first release almost a decade ago, considerable effort has been invested in improving the data both quantitatively and qualitatively. Thus, we have doubled the corpus size, expanded the selection of covered phenomena to include referentiality and genericity, and designed and implemented a methodology for enforcing the consistency of the manual annotation. We believe that the new release of ARRAU provides valuable material for ongoing research in complex cases of coreference as well as for a variety of related tasks. The corpus is publicly available through LDC.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,636
inproceedings
yeh-etal-2016-annotated
An Annotated Corpus and Method for Analysis of Ad-Hoc Structures Embedded in Text
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1327/
Yeh, Eric and Niekrasz, John and Freitag, Dayne and Rohwer, Richard
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2063--2070
We describe a method for identifying and performing functional analysis of structured regions that are embedded in natural language documents, such as tables or key-value lists. Such regions often encode information according to ad hoc schemas and avail themselves of visual cues in place of natural language grammar, presenting problems for standard information extraction algorithms. Unlike previous work in table extraction, which assumes a relatively noiseless two-dimensional layout, our aim is to accommodate a wide variety of naturally occurring structure types. Our approach has three main parts. First, we collect and annotate a diverse sample of {\textquotedblleft}naturally{\textquotedblright} occurring structures from several sources. Second, we use probabilistic text segmentation techniques, featurized by skip bigrams over spatial and token category cues, to automatically identify contiguous regions of structured text that share a common schema. Finally, we identify the records and fields within each structured region using a combination of distributional similarity and sequence alignment methods, guided by minimal supervision in the form of a single annotated record. We evaluate the last two components individually, and conclude with a discussion of further work.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,637
inproceedings
aga-etal-2016-learning
Learning Thesaurus Relations from Distributional Features
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1328/
Aga, Rosa Tsegaye and Wartena, Christian and Drumond, Lucas and Schmidt-Thieme, Lars
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2071--2075
In distributional semantics words are represented by aggregated context features. The similarity of words can be computed by comparing their feature vectors. Thus, we can predict whether two words are synonymous or similar with respect to some other semantic relation. We will show on six different datasets of pairs of similar and non-similar words that a supervised learning algorithm on feature vectors representing pairs of words outperforms cosine similarity between vectors representing single words. We compared different methods to construct a feature vector representing a pair of words. We show that simple methods like pairwise addition or multiplication give better results than a recently proposed method that combines different types of features. The semantic relation we consider is relatedness of terms in thesauri for intellectual document classification. Thus our findings can directly be applied for the maintenance and extension of such thesauri. To the best of our knowledge this relation was not considered before in the field of distributional semantics.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,638
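The pairwise addition and multiplication composition schemes mentioned in the distributional-features abstract above, and the cosine-similarity baseline they are compared against, can be sketched in a few lines. These are illustrative helpers under assumed names; the paper's actual feature pipeline and classifier are not shown.

```python
import math

def cosine(u, v):
    # Unsupervised baseline: cosine similarity between two word vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def pair_features(u, v, mode="add"):
    # Build a single feature vector for a word pair by elementwise
    # addition or multiplication of the two word vectors -- the simple
    # composition schemes the paper found to work best.
    if mode == "add":
        return [a + b for a, b in zip(u, v)]
    return [a * b for a, b in zip(u, v)]
```

The resulting pair vectors would then be fed to a supervised classifier that predicts whether the two terms stand in the thesaurus relation of interest.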
inproceedings
wonsever-etal-2016-factuality
Factuality Annotation and Learning in {S}panish Texts
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1329/
Wonsever, Dina and Ros{\'a}, Aiala and Malcuori, Marisa
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2076--2080
We present a proposal for the annotation of the factuality of event mentions in Spanish texts, together with a freely available annotated corpus. Our factuality model aims to capture a pragmatic notion of factuality, trying to reflect a casual reader's judgements about the realis / irrealis status of mentioned events. In addition, some learning experiments (SVM and CRF) have been carried out, showing encouraging results.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,639
inproceedings
caroli-etal-2016-nnblocks
{NNB}locks: A Deep Learning Framework for Computational Linguistics Neural Network Models
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1330/
Caroli, Frederico Tommasi and Freitas, Andr{\'e} and da Silva, Jo{\~a}o Carlos Pereira and Handschuh, Siegfried
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2081--2085
Lately, with the success of Deep Learning techniques in some computational linguistics tasks, many researchers want to explore new models for their linguistic applications. These models tend to be very different from what standard Neural Networks look like, limiting the possibility of using standard Neural Network frameworks. This work presents NNBlocks, a new framework written in Python to build and train Neural Networks that are not constrained by a specific kind of architecture, making it possible to use it in computational linguistics.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,640
inproceedings
beltrami-etal-2016-automatic
Automatic identification of Mild Cognitive Impairment through the analysis of {I}talian spontaneous speech productions
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1331/
Beltrami, Daniela and Calz{\`a}, Laura and Gagliardi, Gloria and Ghidoni, Enrico and Marcello, Norina and Favretti, Rema Rossini and Tamburini, Fabio
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2086--2093
This paper presents some preliminary results of the OPLON project, which aimed at identifying early linguistic symptoms of cognitive decline in the elderly. This pilot study was conducted on a corpus composed of spontaneous speech samples collected from 39 subjects, who underwent a neuropsychological screening for visuo-spatial abilities, memory, language, executive functions and attention. A rich set of linguistic features was extracted from the digitalised utterances (at the phonetic, suprasegmental, lexical, morphological and syntactic levels) and their statistical significance in pinpointing the pathological process was measured. Our results show remarkable trends concerning both the selection of linguistic traits and the building of automatic classifiers.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,641
inproceedings
corrales-astorgano-etal-2016-use
On the Use of a Serious Game for Recording a Speech Corpus of People with Intellectual Disabilities
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1332/
Corrales-Astorgano, Mario and Escudero-Mancebo, David and Guti{\'e}rrez-Gonz{\'a}lez, Yurena and Flores-Lucas, Valle and Gonz{\'a}lez-Ferreras, C{\'e}sar and Carde{\~n}oso-Payo, Valent{\'i}n
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2094--2099
This paper describes the recording of a speech corpus focused on the prosody of people with intellectual disabilities. To do this, a video game is used with the aim of improving the users' motivation. Moreover, the players' profiles and the sentences recorded during the game sessions are described. With the purpose of identifying the main prosodic troubles of people with intellectual disabilities, some prosodic features are extracted from the recordings, such as fundamental frequency, energy and pauses. After that, a comparison is made between the recordings of people with intellectual disabilities and people without intellectual disabilities. This comparison shows that pauses are the best discriminative feature between these groups. To check this, a study has been carried out using machine learning techniques, with a classification rate above 80{\%}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,642
inproceedings
parish-morris-etal-2016-building
Building Language Resources for Exploring Autism Spectrum Disorders
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1333/
Parish-Morris, Julia and Cieri, Christopher and Liberman, Mark and Bateman, Leila and Ferguson, Emily and Schultz, Robert T.
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2100--2107
Autism spectrum disorder (ASD) is a complex neurodevelopmental condition that would benefit from low-cost and reliable improvements to screening and diagnosis. Human language technologies (HLTs) provide one possible route to automating a series of subjective decisions that currently inform {\textquotedblleft}Gold Standard{\textquotedblright} diagnosis based on clinical judgment. In this paper, we describe a new resource to support this goal, comprised of 100 20-minute semi-structured English language samples labeled with child age, sex, IQ, autism symptom severity, and diagnostic classification. We assess the feasibility of digitizing and processing sensitive clinical samples for data sharing, and identify areas of difficulty. Using the methods described here, we propose to join forces with researchers and clinicians throughout the world to establish an international repository of annotated language samples from individuals with ASD and related disorders. This project has the potential to improve the lives of individuals with ASD and their families by identifying linguistic features that could improve remote screening, inform personalized intervention, and promote advancements in clinically-oriented HLTs.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,643
inproceedings
terbeh-zrigui-2016-vocal
Vocal Pathologies Detection and Mispronounced Phonemes Identification: Case of {A}rabic Continuous Speech
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1334/
Terbeh, Naim and Zrigui, Mounir
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2108--2113
We propose in this work a novel acoustic phonetic study for Arabic people suffering from language disabilities and non-native learners of the Arabic language, to classify Arabic continuous speech as pathological or healthy and to identify phonemes that pose pronunciation problems (in the case of pathological speech). The main idea can be summarized as a comparison between the reference phonetic model of spoken Arabic and the model proper to the speaker concerned. For this task, we use techniques of automatic speech processing such as forced alignment and artificial neural networks (ANN) (Basheer, 2000). Based on a test corpus containing 100 speech sequences, recorded by different speakers (healthy/pathological speech and native/foreign speakers), we attain a classification rate of 97{\%}. Algorithms used in identifying phonemes that pose pronunciation problems show high efficiency: we attain an identification rate of 100{\%}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,644
inproceedings
hoenen-2016-wikipedia
{W}ikipedia Titles As Noun Tag Predictors
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1335/
Hoenen, Armin
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2114--2118
In this paper, we investigate a covert labeling cue, namely the probability that a title (by example of the Wikipedia titles) is a noun. If this probability is very large, any list such as or comparable to the Wikipedia titles can be used as a reliable word-class (or part-of-speech tag) predictor or noun lexicon. This may be especially useful in the case of Low Resource Languages (LRL) where labeled data is lacking and putatively for Natural Language Processing (NLP) tasks such as Word Sense Disambiguation, Sentiment Analysis and Machine Translation. Profiting from the ease of digital publication on the web as opposed to print, LRL speaker communities produce resources such as Wikipedia and Wiktionary, which can be used for an assessment. We provide statistical evidence for a strong noun bias for the Wikipedia titles from 2 corpora (English, Persian) and a dictionary (Japanese) and for a typologically balanced set of 17 languages including LRLs. Additionally, we conduct a small experiment on predicting noun tags for out-of-vocabulary items in part-of-speech tagging for English.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,645
inproceedings
harashima-2016-japanese
{J}apanese {W}ord{\textemdash}{C}olor Associations with and without Contexts
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1336/
Harashima, Jun
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2119--2123
Although some words carry strong associations with specific colors (e.g., the word danger is associated with the color red), few studies have investigated these relationships. This may be due to the relative rarity of databases that contain large quantities of such information. Additionally, these resources are often limited to particular languages, such as English. Moreover, the existing resources often do not consider the possible contexts of words in assessing the associations between a word and a color. As a result, the influence of context on word{\textemdash}color associations is not fully understood. In this study, we constructed a novel language resource for word{\textemdash}color associations. The resource has two characteristics: First, our resource is the first to include Japanese word{\textemdash}color associations, which were collected via crowdsourcing. Second, the word{\textemdash}color associations in the resource are linked to contexts. We show that word{\textemdash}color associations depend on language and that associations with certain colors are affected by context information.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,646
inproceedings
van-miltenburg-etal-2016-vu
The {VU} Sound Corpus: Adding More Fine-grained Annotations to the Freesound Database
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1337/
van Miltenburg, Emiel and Timmermans, Benjamin and Aroyo, Lora
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2124--2130
This paper presents a collection of annotations (tags or keywords) for a set of 2,133 environmental sounds taken from the Freesound database (www.freesound.org). The annotations are acquired through an open-ended crowd-labeling task, in which participants were asked to provide keywords for each of three sounds. The main goal of this study is to find out (i) whether it is feasible to collect keywords for a large collection of sounds through crowdsourcing, and (ii) how people talk about sounds, and what information they can infer from hearing a sound in isolation. Our main finding is that it is not only feasible to perform crowd-labeling for a large collection of sounds, it is also very useful to highlight different aspects of the sounds that authors may fail to mention. Our data is freely available, and can be used to ground semantic models, improve search in audio databases, and to study the language of sound.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,647
inproceedings
sukhareva-etal-2016-crowdsourcing
Crowdsourcing a Large Dataset of Domain-Specific Context-Sensitive Semantic Verb Relations
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1338/
Sukhareva, Maria and Eckle-Kohler, Judith and Habernal, Ivan and Gurevych, Iryna
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2131--2137
We present a new large dataset of 12403 context-sensitive verb relations manually annotated via crowdsourcing. These relations capture fine-grained semantic information between verb-centric propositions, such as temporal or entailment relations. We propose a novel semantic verb relation scheme and design a multi-step annotation approach for scaling-up the annotations using crowdsourcing. We employ several quality measures and report on agreement scores. The resulting dataset is available under a permissive CreativeCommons license at www.ukp.tu-darmstadt.de/data/verb-relations/. It represents a valuable resource for various applications, such as automatic information consolidation or automatic summarization.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,648
inproceedings
feltracco-etal-2016-acquiring
Acquiring Opposition Relations among {I}talian Verb Senses using Crowdsourcing
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1339/
Feltracco, Anna and Magnolini, Simone and Jezek, Elisabetta and Magnini, Bernardo
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2138--2144
We describe an experiment for the acquisition of opposition relations among Italian verb senses, based on a crowdsourcing methodology. The goal of the experiment is to discuss whether the types of opposition we distinguish (i.e. complementarity, antonymy, converseness and reversiveness) are actually perceived by the crowd. In particular, we collect data for Italian by using the crowdsourcing platform CrowdFlower. We ask annotators to judge the type of opposition existing among pairs of sentences -previously judged as opposite- that differ only in a verb: the verb in the first sentence is the opposite of the verb in the second sentence. The data corroborate the hypothesis that some opposition relations exclude each other, while others interact, being recognized as compatible by the contributors.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,649
inproceedings
caines-etal-2016-crowdsourcing
Crowdsourcing a Multi-lingual Speech Corpus: Recording, Transcription and Annotation of the {C}rowd{IS} Corpora
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1340/
Caines, Andrew and Bentz, Christian and Graham, Calbert and Polzehl, Tim and Buttery, Paula
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2145--2152
We announce the release of the CROWDED CORPUS: a pair of speech corpora collected via crowdsourcing, containing a native speaker corpus of English (CROWDED{\_}ENGLISH), and a corpus of German/English bilinguals (CROWDED{\_}BILINGUAL). Release 1 of the CROWDED CORPUS contains 1000 recordings amounting to 33,400 tokens collected from 80 speakers and is freely available to other researchers. We recruited participants via the Crowdee application for Android. Recruits were prompted to respond to business-topic questions of the type found in language learning oral tests. We then used the CrowdFlower web application to pass these recordings to crowdworkers for transcription and annotation of errors and sentence boundaries. Finally, the sentences were tagged and parsed using standard natural language processing tools. We propose that crowdsourcing is a valid and economical method for corpus collection, and discuss the advantages and disadvantages of this approach.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,650
inproceedings
bartie-etal-2016-real
The {REAL} Corpus: A Crowd-Sourced Corpus of Human Generated and Evaluated Spatial References to Real-World Urban Scenes
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1341/
Bartie, Phil and Mackaness, William and Gkatzia, Dimitra and Rieser, Verena
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2153--2155
Our interest is in people`s capacity to efficiently and effectively describe geographic objects in urban scenes. The broader ambition is to develop spatial models capable of equivalent functionality able to construct such referring expressions. To that end we present a newly crowd-sourced data set of natural language references to objects anchored in complex urban scenes (In short: The REAL Corpus {\textemdash} Referring Expressions Anchored Language). The REAL corpus contains a collection of images of real-world urban scenes together with verbal descriptions of target objects generated by humans, paired with data on how successful other people were able to identify the same object based on these descriptions. In total, the corpus contains 32 images with on average 27 descriptions per image and 3 verifications for each description. In addition, the corpus is annotated with a variety of linguistically motivated features. The paper highlights issues posed by collecting data using crowd-sourcing with an unrestricted input format, as well as using real-world urban scenes.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,651
inproceedings
hantke-etal-2016-introducing
Introducing the Weighted Trustability Evaluator for Crowdsourcing Exemplified by Speaker Likability Classification
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1342/
Hantke, Simone and Marchi, Erik and Schuller, Bj{\"o}rn
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2156--2161
Crowdsourcing is an arising collaborative approach applicable among many other applications to the area of language and speech processing. In fact, the use of crowdsourcing was already applied in the field of speech processing with promising results. However, only few studies investigated the use of crowdsourcing in computational paralinguistics. In this contribution, we propose a novel evaluator for crowdsourced-based ratings termed Weighted Trustability Evaluator (WTE) which is computed from the rater-dependent consistency over the test questions. We further investigate the reliability of crowdsourced annotations as compared to the ones obtained with traditional labelling procedures, such as constrained listening experiments in laboratories or in controlled environments. This comparison includes an in-depth analysis of obtainable classification performances. The experiments were conducted on the Speaker Likability Database (SLD) already used in the INTERSPEECH Challenge 2012, and the results lend further weight to the assumption that crowdsourcing can be applied as a reliable annotation source for computational paralinguistics given a sufficient number of raters and suited measurements of their reliability.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,652
inproceedings
arimoto-okanoya-2016-comparison
Comparison of Emotional Understanding in Modality-Controlled Environments using Multimodal Online Emotional Communication Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1343/
Arimoto, Yoshiko and Okanoya, Kazuo
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2162--2167
In online computer-mediated communication, speakers were considered to have experienced difficulties in catching their partner`s emotions and in conveying their own emotions. To explain why online emotional communication is so difficult and to investigate how this problem should be solved, multimodal online emotional communication corpus was constructed by recording approximately 100 speakers' emotional expressions and reactions in a modality-controlled environment. Speakers communicated over the Internet using video chat, voice chat or text chat; their face-to-face conversations were used for comparison purposes. The corpora incorporated emotional labels by evaluating the speaker`s dynamic emotional states and the measurements of the speaker`s facial expression, vocal expression and autonomic nervous system activity. For the initial study of this project, which used a large-scale emotional communication corpus, the accuracy of online emotional understanding was assessed to demonstrate the emotional labels evaluated by the speakers and to summarize the speaker`s answers on the questionnaire regarding the difference between an online chat and face-to-face conversations in which they actually participated. The results revealed that speakers have difficulty communicating their emotions in online communication environments, regardless of the type of communication modality and that inaccurate emotional understanding occurs more frequently in online computer-mediated communication than in face-to-face communication.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,653
inproceedings
bigi-bertrand-2016-laughter
Laughter in {F}rench Spontaneous Conversational Dialogs
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1344/
Bigi, Brigitte and Bertrand, Roxane
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2168--2174
This paper presents a quantitative description of laughter in eight 1-hour French spontaneous conversations. The paper includes the raw figures for laughter as well as more details concerning inter-individual variability. It first describes to what extent the amount of laughter and its duration varies from speaker to speaker across all dialogs. In a second suite of analyses, this paper compares our corpus with previously analyzed corpora. In a final set of experiments, it presents some facts about overlapping laughs. This paper quantifies all these effects in free-style conversations for the first time.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,654
inproceedings
haddad-etal-2016-avab
{AVAB}-{DBS}: an Audio-Visual Affect Bursts Database for Synthesis
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1345/
Haddad, Kevin El and {\c{C}}akmak, H{\"u}seyin and Dupont, St{\'e}phane and Dutoit, Thierry
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2175--2179
It has been shown that adding expressivity and emotional expressions to an agent`s communication systems would improve the interaction quality between this agent and a human user. In this paper we present a multimodal database of affect bursts, which are very short non-verbal expressions with facial, vocal, and gestural components that are highly synchronized and triggered by an identifiable event. This database contains motion capture and audio data of affect bursts representing disgust, startle and surprise recorded at three different levels of arousal each. This database is to be used for synthesis purposes in order to generate affect bursts of these emotions on a continuous arousal level scale.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,655
inproceedings
lubis-etal-2016-construction
Construction of {J}apanese Audio-Visual Emotion Database and Its Application in Emotion Recognition
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1346/
Lubis, Nurul and Gomez, Randy and Sakti, Sakriani and Nakamura, Keisuke and Yoshino, Koichiro and Nakamura, Satoshi and Nakadai, Kazuhiro
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2180--2184
Emotional aspects play a vital role in making human communication a rich and dynamic experience. As we introduce more automated systems into our daily lives, it becomes increasingly important to incorporate emotion to provide as natural an interaction as possible. To achieve said incorporation, rich sets of labeled emotional data are a prerequisite. However, in Japanese, existing emotion databases are still limited to unimodal and bimodal corpora. Since emotion is not only expressed through speech, but also visually at the same time, it is essential to include multiple modalities in an observation. In this paper, we present the first audio-visual emotion corpus in Japanese, collected from 14 native speakers. The corpus contains 100 minutes of annotated and transcribed material. We performed preliminary emotion recognition experiments on the corpus and achieved an accuracy of 61.42{\%} for five classes of emotion.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,656
inproceedings
passaro-lenci-2016-evaluating
Evaluating Context Selection Strategies to Build Emotive Vector Space Models
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1347/
Passaro, Lucia C. and Lenci, Alessandro
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2185--2191
In this paper we compare different context selection approaches to improve the creation of Emotive Vector Space Models (VSMs). The system is based on the results of an existing approach that showed the possibility to create and update VSMs by exploiting crowdsourcing and human annotation. Here, we introduce a method to manipulate the contexts of the VSMs under the assumption that the emotive connotation of a target word is a function of both its syntagmatic and paradigmatic association with the various emotions. To study the differences among the proposed spaces and to confirm the reliability of the system, we report on two experiments: in the first one we validated the best candidates extracted from each model, and in the second one we compared the models' performance on a random sample of target words. Both experiments have been implemented as crowdsourcing tasks.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,657
inproceedings
bourlon-etal-2016-simultaneous
Simultaneous Sentence Boundary Detection and Alignment with Pivot-based Machine Translation Generated Lexicons
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1348/
Bourlon, Antoine and Chu, Chenhui and Nakazawa, Toshiaki and Kurohashi, Sadao
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2192--2198
Sentence alignment is a task that consists in aligning the parallel sentences in a translated article pair. This paper describes a method to perform sentence boundary detection and alignment simultaneously, which significantly improves the alignment accuracy on languages like Chinese with uncertain sentence boundaries. It relies on the definition of hard (certain) and soft (uncertain) punctuation delimiters, the latter being possibly ignored to optimize the alignment result. The alignment method is used in combination with lexicons automatically generated from the input article pairs using pivot-based MT, achieving better coverage of the input words with fewer entries than pre-existing dictionaries. Pivot-based MT makes it possible to build dictionaries for language pairs that have scarce parallel data. The alignment method is implemented in a tool that will be freely available in the near future.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,658
inproceedings
kanojia-etal-2016-thatll
That`ll Do Fine!: A Coarse Lexical Resource for {E}nglish-{H}indi {MT}, Using Polylingual Topic Models
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1349/
Kanojia, Diptesh and Joshi, Aditya and Bhattacharyya, Pushpak and Carman, Mark James
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2199--2203
Parallel corpora are often injected with bilingual lexical resources for improved Indian language machine translation (MT). In the absence of such lexical resources, multilingual topic models have been used to create coarse lexical resources in the past, using a Cartesian product approach. Our results show that for morphologically rich languages like Hindi, the Cartesian product approach is detrimental for MT. We then present a novel {\textquoteleft}sentential' approach to use this coarse lexical resource from a multilingual topic model. Our coarse lexical resource, when injected with a parallel corpus, outperforms a system trained using a parallel corpus and a good quality lexical resource. As demonstrated by the quality of our coarse lexical resource and its benefit to MT, we believe that our sentential approach to creating such a resource will help MT for resource-constrained languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,659
inproceedings
nakazawa-etal-2016-aspec
{ASPEC}: {A}sian Scientific Paper Excerpt Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1350/
Nakazawa, Toshiaki and Yaguchi, Manabu and Uchimoto, Kiyotaka and Utiyama, Masao and Sumita, Eiichiro and Kurohashi, Sadao and Isahara, Hitoshi
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2204--2208
In this paper, we describe the details of the ASPEC (Asian Scientific Paper Excerpt Corpus), which is the first large-size parallel corpus of scientific paper domain. ASPEC was constructed in the Japanese-Chinese machine translation project conducted between 2006 and 2010 using the Special Coordination Funds for Promoting Science and Technology. It consists of a Japanese-English scientific paper abstract corpus of approximately 3 million parallel sentences (ASPEC-JE) and a Chinese-Japanese scientific paper excerpt corpus of approximately 0.68 million parallel sentences (ASPEC-JC). ASPEC is used as the official dataset for the machine translation evaluation workshop WAT (Workshop on Asian Translation).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,660
inproceedings
labaka-etal-2016-domain
Domain Adaptation in {MT} Using Titles in {W}ikipedia as a Parallel Corpus: Resources and Evaluation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1351/
Labaka, Gorka and Alegria, I{\~n}aki and Sarasola, Kepa
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2209--2213
This paper presents how a state-of-the-art SMT system is enriched by using extra in-domain parallel corpora extracted from Wikipedia. We collect corpora from parallel titles and from parallel fragments in comparable articles from Wikipedia. We carried out an evaluation with a double objective: evaluating the quality of the extracted data and evaluating the improvement due to the domain adaptation. We think this can be very useful for languages with a limited amount of parallel corpora, where in-domain data is crucial to improve the performance of MT systems. The experiments on the Spanish-English language pair improve a baseline trained with the Europarl corpus by more than 2 points of BLEU when translating in the Computer Science domain.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,661
inproceedings
wu-etal-2016-prophetmt
{P}rophet{MT}: A Tree-based {SMT}-driven Controlled Language Authoring/Post-Editing Tool
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1352/
Wu, Xiaofeng and Du, Jinhua and Liu, Qun and Way, Andy
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2214--2221
This paper presents ProphetMT, a tree-based SMT-driven Controlled Language (CL) authoring and post-editing tool. ProphetMT employs the source-side rules in a translation model and provides them as auto-suggestions to users. Accordingly, one might say that users are writing in a Controlled Language that is understood by the computer. ProphetMT also allows users to easily attach structural information as they compose content. When a specific rule is selected, a partial translation is promptly generated on-the-fly with the help of the structural information. Our experiments conducted on English-to-Chinese show that our proposed ProphetMT system can not only better regularise an author`s writing behaviour, but also significantly improve translation fluency which is vital to reduce the post-editing time. Additionally, when the writing and translation process is over, ProphetMT can provide an effective colour scheme to further improve the productivity of post-editors by explicitly featuring the relations between the source and target rules.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,662
inproceedings
han-bel-2016-towards
Towards producing bilingual lexica from monolingual corpora
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1353/
Han, Jingyi and Bel, N{\'u}ria
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2222--2227
Bilingual lexica are the basis for many cross-lingual natural language processing tasks. Recent works have shown success in learning bilingual dictionaries by taking advantage of comparable corpora and a diverse set of signals derived from monolingual corpora. In the present work, we describe an approach to automatically learn bilingual lexica by training a supervised classifier using word embedding-based vectors of only a few hundred translation-equivalent word pairs. The word embedding representations of translation pairs were obtained from source and target monolingual corpora, which are not necessarily related. Our classifier is able to predict whether a new word pair is under a translation relation or not. We tested it on two quite distinct language pairs, Chinese-Spanish and English-Spanish. The classifiers achieved more than 0.90 precision and recall for both language pairs in different evaluation scenarios. These results show a high potential for this method to be used in bilingual lexica production for language pairs with a reduced amount of parallel or comparable corpora, in particular for phrase table expansion in Statistical Machine Translation systems.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,663
inproceedings
gomes-lopes-2016-first
First Steps Towards Coverage-Based Sentence Alignment
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1354/
Gomes, Lu{\'i}s and Lopes, Gabriel Pereira
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2228--2231
In this paper, we introduce a coverage-based scoring function that discriminates between parallel and non-parallel sentences. When plugged into Bleualign, a state-of-the-art sentence aligner, our function improves both precision and recall of alignments over the originally proposed BLEU score. Furthermore, since our scoring function uses Moses phrase tables directly we avoid the need to translate the texts to be aligned, which is time-consuming and a potential source of alignment errors.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,664
inproceedings
liyanapathirana-popescu-belis-2016-using
Using the {TED} Talks to Evaluate Spoken Post-editing of Machine Translation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1355/
Liyanapathirana, Jeevanthi and Popescu-Belis, Andrei
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2232--2239
This paper presents a solution to evaluate spoken post-editing of imperfect machine translation output by a human translator. We compare two approaches to the combination of machine translation (MT) and automatic speech recognition (ASR): a heuristic algorithm and a machine learning method. To obtain a data set with spoken post-editing information, we use the French version of TED talks as the source texts submitted to MT, and the spoken English counterparts as their corrections, which are submitted to an ASR system. We experiment with various levels of artificial ASR noise and also with a state-of-the-art ASR system. The results show that the combination of MT with ASR improves over both individual outputs of MT and ASR in terms of BLEU scores, especially when ASR performance is low.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,665
inproceedings
blain-etal-2016-phrase
Phrase Level Segmentation and Labelling of Machine Translation Errors
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1356/
Blain, Fr{\'e}d{\'e}ric and Logacheva, Varvara and Specia, Lucia
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2240--2245
This paper presents our work towards a novel approach for Quality Estimation (QE) of machine translation based on sequences of adjacent words, the so-called phrases. This new level of QE aims to provide a natural balance between QE at word and sentence level, which are either too fine-grained or too coarse for some applications. However, phrase-level QE implies an intrinsic challenge: how to segment a machine translation into sequences of words (contiguous or not) that represent an error. We discuss three possible segmentation strategies to automatically extract erroneous phrases. We evaluate these strategies against annotations at phrase level produced by humans, using a new dataset collected for this purpose.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,666
inproceedings
martinez-vela-2016-subco
{S}ub{C}o: A Learner Translation Corpus of Human and Machine Subtitles
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1357/
Mart{\'i}nez Mart{\'i}nez, Jos{\'e} Manuel and Vela, Mihaela
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2246--2254
In this paper, we present a freely available corpus of human and automatic translations of subtitles. The corpus comprises the original English subtitles (SRC), both human (HT) and machine translations (MT) into German, as well as post-editions (PE) of the MT output. HT and MT are annotated with errors. Moreover, human evaluation is included in HT, MT, and PE. Such a corpus is a valuable resource for both human and machine translation communities, enabling the direct comparison {--} in terms of errors and evaluation {--} between human and machine translations and post-edited machine translations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,667
inproceedings
bogantes-etal-2016-towards
Towards Lexical Encoding of Multi-Word Expressions in {S}panish Dialects
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1358/
Bogantes, Diana and Rodr{\'i}guez, Eric and Arauco, Alejandro and Rodr{\'i}guez, Alejandro and Savary, Agata
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2255--2261
This paper describes a pilot study in lexical encoding of multi-word expressions (MWEs) in 4 Latin American dialects of Spanish: Costa Rican, Colombian, Mexican and Peruvian. We describe the variability of MWE usage across dialects. We adapt an existing data model to a dialect-aware encoding, so as to represent dialect-related specificities, while avoiding redundancy of the data common to all dialects. A dozen linguistic properties of MWEs can be expressed in this model, both on the level of a whole MWE and of its individual components. We describe the resulting lexical resource containing several dozen MWEs in four dialects and we propose a method for constructing a web corpus as a support for crowdsourcing examples of MWE occurrences. The resource is available under an open license and paves the way towards large-scale dialect-aware language resource construction, which should prove useful in both traditional and novel NLP applications.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,668
inproceedings
zhang-etal-2016-jate
{JATE} 2.0: {J}ava Automatic Term Extraction with {A}pache {S}olr
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1359/
Zhang, Ziqi and Gao, Jie and Ciravegna, Fabio
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2262--2269
Automatic Term Extraction (ATE) or Recognition (ATR) is a fundamental processing step preceding many complex knowledge engineering tasks. However, few methods have been implemented as public tools and in particular, available as open-source freeware. Further, little effort is made to develop an adaptable and scalable framework that enables customization, development, and comparison of algorithms under a uniform environment. This paper introduces JATE 2.0, a complete remake of the free Java Automatic Term Extraction Toolkit (Zhang et al., 2008) delivering new features including: (1) highly modular, adaptable and scalable ATE thanks to integration with Apache Solr, the open source free-text indexing and search platform; (2) an extended collection of state-of-the-art algorithms. We carry out experiments on two well-known benchmarking datasets and compare the algorithms along the dimensions of effectiveness (precision) and efficiency (speed and memory consumption). To the best of our knowledge, this is by far the only free ATE library offering a flexible architecture and the most comprehensive collection of algorithms.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,669
inproceedings
lievers-huang-2016-lexicon
A lexicon of perception for the identification of synaesthetic metaphors in corpora
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1360/
Lievers, Francesca Strik and Huang, Chu-Ren
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2270--2277
Synaesthesia is a type of metaphor associating linguistic expressions that refer to two different sensory modalities. Previous studies, based on the analysis of poetic texts, have shown that synaesthetic transfers tend to go from the lower toward the higher senses (e.g., sweet music vs. musical sweetness). In non-literary language synaesthesia is rare, and finding a sufficient number of examples manually would be too time-consuming. In order to verify whether the directionality also holds for conventional synaesthesia found in non-literary texts, an automatic procedure for the identification of instances of synaesthesia is therefore highly desirable. In this paper, we first focus on the preliminary step of this procedure, that is, the creation of a controlled lexicon of perception. Next, we present the results of a small pilot study that applies the extraction procedure to English and Italian corpus data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,670
inproceedings
marciniak-etal-2016-termopl
{T}ermo{PL} - a Flexible Tool for Terminology Extraction
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1361/
Marciniak, Malgorzata and Mykowiecka, Agnieszka and Rychlik, Piotr
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2278--2284
The purpose of this paper is to introduce the TermoPL tool created to extract terminology from domain corpora in Polish. The program extracts noun phrases, i.e. term candidates, with the help of a simple grammar that can be adapted to the user`s needs. It applies the C-value method to rank term candidates, these being either the longest identified nominal phrases or their nested subphrases. The method operates on simplified base forms in order to unify morphological variants of terms and to recognize their contexts. We support the recognition of nested terms by word connection strength, which allows us to eliminate truncated phrases from the top part of the term list. The program has an option to convert simplified forms of phrases into correct phrases in the nominal case. TermoPL accepts as input morphologically annotated and disambiguated domain texts and creates a list of terms, the top part of which comprises domain terminology. It can also compare two candidate term lists using three different coefficients showing the asymmetry of term occurrences in this data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,671
inproceedings
schulte-im-walde-etal-2016-ghost
{G}ho{S}t-{NN}: A Representative Gold Standard of {G}erman Noun-Noun Compounds
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1362/
Schulte im Walde, Sabine and H{\"a}tty, Anna and Bott, Stefan and Khvtisavrishvili, Nana
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2285--2292
This paper presents a novel gold standard of German noun-noun compounds (Ghost-NN) including 868 compounds annotated with corpus frequencies of the compounds and their constituents, productivity and ambiguity of the constituents, semantic relations between the constituents, and compositionality ratings of compound-constituent pairs. Moreover, a subset of the compounds containing 180 compounds is balanced for the productivity of the modifiers (distinguishing low/mid/high productivity) and the ambiguity of the heads (distinguishing between heads with 1, 2 and {\ensuremath{>}}2 senses).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,672
inproceedings
ramisch-etal-2016-deque
{D}e{Q}ue: A Lexicon of Complex Prepositions and Conjunctions in {F}rench
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1363/
Ramisch, Carlos and Nasr, Alexis and Valli, Andr{\'e} and Deulofeu, Jos{\'e}
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2293--2298
We introduce DeQue, a lexicon covering French complex prepositions (CPRE) like {\textquotedblleft}{\`a} partir de{\textquotedblright} (from) and complex conjunctions (CCONJ) like {\textquotedblleft}bien que{\textquotedblright} (although). The lexicon includes fine-grained linguistic description based on empirical evidence. We describe the general characteristics of CPRE and CCONJ in French, with special focus on syntactic ambiguity. Then, we list the selection criteria used to build the lexicon and the corpus-based methodology employed to collect entries. Finally, we quantify the ambiguity of each construction by annotating around 100 sentences randomly taken from the FRWaC. In addition to its theoretical value, the resource has many potential practical applications. We intend to employ DeQue for treebank annotation and to train a dependency parser that can take complex constructions into account.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,673
inproceedings
losnegaard-etal-2016-parseme
{PARSEME} Survey on {MWE} Resources
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1364/
Losnegaard, Gyri Sm{\o}rdal and Sangati, Federico and Escart{\'i}n, Carla Parra and Savary, Agata and Bargmann, Sascha and Monti, Johanna
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2299--2306
This paper summarizes the preliminary results of an ongoing survey on multiword resources carried out within the IC1207 Cost Action PARSEME (PARSing and Multi-word Expressions). Despite the availability of language resource catalogs and the inventory of multiword datasets on the SIGLEX-MWE website, multiword resources are scattered and difficult to find. In many cases, language resources such as corpora, treebanks, or lexical databases include multiwords as part of their data or take them into account in their annotations. However, these resources need to be centralized to make them accessible. The aim of this survey is to create a portal where researchers can easily find multiword(-aware) language resources for their research. We report on the design of the survey and analyze the data gathered so far. We also discuss the problems we have detected upon examination of the data as well as possible ways of enhancing the survey.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,674
inproceedings
wilkens-etal-2016-multiword
Multiword Expressions in Child Language
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1365/
Wilkens, Rodrigo and Idiart, Marco and Villavicencio, Aline
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2307--2311
The goal of this work is to introduce CHILDES-MWE, which contains English CHILDES corpora automatically annotated with Multiword Expressions (MWEs) information. The result is a resource with almost 350,000 sentences annotated with more than 70,000 distinct MWEs of various types from both longitudinal and latitudinal corpora. This resource can be used for large scale language acquisition studies of how MWEs feature in child language. Focusing on compound nouns (CN), we then verify in a longitudinal study if there are differences in the distribution and compositionality of CNs in child-directed and child-produced sentences across ages. Moreover, using additional latitudinal data, we investigate if there are further differences in CN usage and in compositionality preferences. The results obtained for the child-produced sentences reflect CN distribution and compositionality in child-directed sentences.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,675
inproceedings
bouamor-etal-2016-transfer
Transfer-Based Learning-to-Rank Assessment of Medical Term Technicality
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1366/
Bouamor, Dhouha and Llanos, Leonardo Campillos and Ligozat, Anne-Laure and Rosset, Sophie and Zweigenbaum, Pierre
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2312--2316
While measuring the readability of texts has been a long-standing research topic, assessing the technicality of terms has only been addressed more recently and mostly for the English language. In this paper, we train a learning-to-rank model to determine a specialization degree for each term found in a given list. Since no training data for this task exist for French, we train our system with non-lexical features on English data, namely, the Consumer Health Vocabulary, then apply it to French. The features include the likelihood ratio of the term based on specialized and lay language models, and tests for containing morphologically complex words. The evaluation of this approach is conducted on 134 terms from the UMLS Metathesaurus and 868 terms from the Eugloss thesaurus. The Normalized Discounted Cumulative Gain obtained by our system is over 0.8 on both test sets. Besides, thanks to the learning-to-rank approach, adding morphological features to the language model features improves the results on the Eugloss thesaurus.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,676
inproceedings
rodriguez-fernandez-etal-2016-example
Example-based Acquisition of Fine-grained Collocation Resources
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1367/
Rodr{\'i}guez-Fern{\'a}ndez, Sara and Carlini, Roberto and Anke, Luis Espinosa and Wanner, Leo
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2317--2322
Collocations such as {\textquotedblleft}heavy rain{\textquotedblright} or {\textquotedblleft}make [a] decision{\textquotedblright}, are combinations of two elements where one (the base) is freely chosen, while the choice of the other (collocate) is restricted, depending on the base. Collocations present difficulties even to advanced language learners, who usually struggle to find the right collocate to express a particular meaning, e.g., both {\textquotedblleft}heavy{\textquotedblright} and {\textquotedblleft}strong{\textquotedblright} express the meaning {\textquoteleft}intense', but while {\textquotedblleft}rain{\textquotedblright} selects {\textquotedblleft}heavy{\textquotedblright}, {\textquotedblleft}wind{\textquotedblright} selects {\textquotedblleft}strong{\textquotedblright}. Lexical Functions (LFs) describe the meanings that hold between the elements of collocations, such as {\textquoteleft}intense', {\textquoteleft}perform', {\textquoteleft}create', {\textquoteleft}increase', etc. Language resources with semantically classified collocations would be of great help for students, however they are expensive to build, since they are manually constructed, and scarce. We present an unsupervised approach to the acquisition and semantic classification of collocations according to LFs, based on word embeddings in which, given an example of a collocation for each of the target LFs and a set of bases, the system retrieves a list of collocates for each base and LF.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,677
inproceedings
rosen-etal-2016-mwes
{MWE}s in Treebanks: From Survey to Guidelines
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1368/
Ros{\'e}n, Victoria and De Smedt, Koenraad and Losnegaard, Gyri Sm{\o}rdal and Bej{\v{c}}ek, Eduard and Savary, Agata and Osenova, Petya
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2323--2330
By means of an online survey, we have investigated ways in which various types of multiword expressions are annotated in existing treebanks. The results indicate that there is considerable variation in treatments across treebanks and thereby also, to some extent, across languages and across theoretical frameworks. The comparison is focused on the annotation of light verb constructions and verbal idioms. The survey shows that the light verb constructions either get special annotations as such, or are treated as ordinary verbs, while VP idioms are handled through different strategies. Based on insights from our investigation, we propose some general guidelines for annotating multiword expressions in treebanks. The recommendations address the following application-based needs: distinguishing MWEs from similar but compositional constructions; searching distinct types of MWEs in treebanks; awareness of literal and nonliteral meanings; and normalization of the MWE representation. The cross-lingually and cross-theoretically focused survey is intended as an aid to accessing treebanks and an aid for further work on treebank annotation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,678
inproceedings
singh-etal-2016-multiword
Multiword Expressions Dataset for {I}ndian Languages
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1369/
Singh, Dhirendra and Bhingardive, Sudha and Bhattacharyya, Pushpak
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2331--2335
Multiword Expressions (MWEs) are used frequently in natural languages, but understanding the diversity in MWEs is one of the open problems in the area of Natural Language Processing. In the context of Indian languages, MWEs play an important role. In this paper, we present an MWE annotation dataset created for two Indian languages, viz., Hindi and Marathi. We extract possible MWE candidates using two repositories: 1) the POS-tagged corpus and 2) the IndoWordNet synsets. Annotation is done for two types of MWEs: compound nouns and light verb constructions. In the process of annotation, human annotators tag valid MWEs from these candidates based on the standard guidelines provided to them. We obtained 3178 compound nouns and 2556 light verb constructions in Hindi and 1003 compound nouns and 2416 light verb constructions in Marathi using the two repositories mentioned before. This created resource is made available publicly and can be used as a gold standard for Hindi and Marathi MWE systems.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,679
inproceedings
blache-etal-2016-marsagram
{M}arsa{G}ram: an excursion in the forests of parsing trees
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1370/
Blache, Philippe and Rauzy, St{\'e}phane and Montcheuil, Gr{\'e}goire
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2336--2342
The question of how to compare languages, and more generally the domain of linguistic typology, relies on the study of different linguistic properties or phenomena. Classically, such a comparison is done semi-manually, for example by extracting information from databases such as the WALS. However, it remains difficult to identify precisely regular parameters, available for different languages, that can be used as a basis towards modeling. We propose in this paper, focusing on the question of syntactic typology, a method for automatically extracting such parameters from treebanks, bringing them into a typology perspective. We present the method and the tools for inferring such information and navigating through the treebanks. The approach has been applied to 10 languages of the Universal Dependencies Treebank. Our approach is evaluated by showing how automatic classification correlates with language families.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,680
inproceedings
little-tratz-2016-easytree
{E}asy{T}ree: A Graphical Tool for Dependency Tree Annotation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1371/
Little, Alexa and Tratz, Stephen
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2343--2347
This paper introduces EasyTree, a dynamic graphical tool for dependency tree annotation. Built in JavaScript using the popular D3 data visualization library, EasyTree allows annotators to construct and label trees entirely by manipulating graphics, and then export the corresponding data in JSON format. Human users are thus able to annotate in an intuitive way without compromising the machine-compatibility of the output. EasyTree has a number of features to assist annotators, including color-coded part-of-speech indicators and optional translation displays. It can also be customized to suit a wide range of projects; part-of-speech categories, edge labels, and many other settings can be edited from within the GUI. The system also utilizes UTF-8 encoding and properly handles both left-to-right and right-to-left scripts. By providing a user-friendly annotation tool, we aim to reduce time spent transforming data or learning to use the software, to improve the user experience for annotators, and to make annotation approachable even for inexperienced users. Unlike existing solutions, EasyTree is built entirely with standard web technologies{--}JavaScript, HTML, and CSS{--}making it ideal for web-based annotation efforts, including crowdsourcing efforts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,681
inproceedings
morales-etal-2016-hypergraph
Hypergraph Modelization of a Syntactically Annotated {E}nglish {W}ikipedia Dump
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1372/
Morales, Edmundo Pavel Soriano and Ah-Pine, Julien and Loudcher, Sabine
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2348--2353
Wikipedia, the well-known internet encyclopedia, is nowadays a widely used source of information. To leverage its rich information, already parsed versions of Wikipedia have been proposed. We present an annotated dump of the English Wikipedia. This dump draws upon previously released Wikipedia parsed dumps, but heads in a different direction. In this parse we focus more on the syntactic characteristics of words: aside from the classical Part-of-Speech (PoS) tags and dependency parsing relations, we provide the full constituent parse branch for each word in a succinct way. Additionally, we propose a hypergraph network representation of the extracted linguistic information. The proposed modelization aims to take advantage of the information stored within our parsed Wikipedia dump. We hope that by releasing these resources, researchers from the concerned communities will have a ready-to-experiment Wikipedia corpus to compare and distribute their work. We make public our parsed Wikipedia dump as well as the tool (and its source code) used to perform the parse. The hypergraph network and its related metadata are also distributed.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,682
inproceedings
versley-steen-2016-detecting
Detecting Annotation Scheme Variation in Out-of-Domain Treebanks
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1373/
Versley, Yannick and Steen, Julius
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2354--2360
To ensure portability of NLP systems across multiple domains, existing treebanks are often extended by adding trees from interesting domains that were not part of the initial annotation effort. In this paper, we will argue that it is both useful from an application viewpoint and enlightening from a linguistic viewpoint to detect and reduce divergence in annotation schemes between extant and new parts in a set of treebanks that is to be used in evaluation experiments. The results of our correction and harmonization efforts will be made available to the public as a test suite for the evaluation of constituent parsing.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,683
inproceedings
seraji-etal-2016-universal
{U}niversal {D}ependencies for {P}ersian
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1374/
Seraji, Mojgan and Ginter, Filip and Nivre, Joakim
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2361--2365
The Persian Universal Dependency Treebank (Persian UD) is a recent effort of treebanking Persian with Universal Dependencies (UD), an ongoing project that designs unified and cross-linguistically valid grammatical representations including part-of-speech tags, morphological features, and dependency relations. The Persian UD is the converted version of the Uppsala Persian Dependency Treebank (UPDT) to the universal dependencies framework and consists of nearly 6,000 sentences and 152,871 word tokens with an average sentence length of 25 words. In addition to the universal dependencies syntactic annotation guidelines, the two treebanks differ in tokenization. All words containing unsegmented clitics (pronominal and copula clitics) annotated with complex labels in the UPDT have been separated from the clitics and appear with distinct labels in the Persian UD. The original treebank has its own syntactic annotation scheme based on Stanford Typed Dependencies. In this paper, we present the approaches taken in the development of the Persian UD.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,684
inproceedings
seddah-candito-2016-hard
Hard Time Parsing Questions: Building a {Q}uestion{B}ank for {F}rench
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1375/
Seddah, Djam{\'e} and Candito, Marie
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2366--2370
We present the French Question Bank, a treebank of 2600 questions. We show that classical parsing model performance drops when facing out-of-domain data with strong structural divergences, while the inclusion of this dataset is highly beneficial without harming the parsing of non-question data. With two thirds of the questions being aligned with the QB (Judge et al., 2006) and being freely available, this treebank will prove useful to build robust NLP systems.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,685
inproceedings
schuster-manning-2016-enhanced
Enhanced {E}nglish {U}niversal {D}ependencies: An Improved Representation for Natural Language Understanding Tasks
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1376/
Schuster, Sebastian and Manning, Christopher D.
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2371--2378
Many shallow natural language understanding tasks use dependency trees to extract relations between content words. However, strict surface-structure dependency trees tend to follow the linguistic structure of sentences too closely and frequently fail to provide direct relations between content words. To mitigate this problem, the original Stanford Dependencies representation also defines two dependency graph representations which contain additional and augmented relations that explicitly capture otherwise implicit relations between content words. In this paper, we revisit and extend these dependency graph representations in light of the recent Universal Dependencies (UD) initiative and provide a detailed account of an enhanced and an enhanced++ English UD representation. We further present a converter from constituency to basic, i.e., strict surface structure, UD trees, and a converter from basic UD trees to enhanced and enhanced++ English UD graphs. We release both converters as part of Stanford CoreNLP and the Stanford Parser.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,686